For people born in the 1970s who want to learn AI, here are a few suggestions.
One point to keep in mind throughout: learning AI is a long-term process, and it calls for patience and perseverance.
After receiving a bachelor's degree in experimental psychology in 1970, Hinton, thoroughly disappointed with his undergraduate studies, left Cambridge and became a carpenter. He built bookshelves and doors while pondering how the human brain works, and for a while he considered this the life he wanted. After a little more than a year he had second thoughts: making a living as a carpenter was not easy, and it did nothing to advance his understanding of the brain. So he decided to return to academia and try a new direction: artificial intelligence. Joking about this odd, meandering path, Hinton said he suffered from a kind of "academic hyperactivity" that kept him from settling into any one field. In truth, the opposite was the case: he had been searching for his own direction all along. The first part of his story is a tale of repeatedly dropping out; the second part is a tale of unwavering persistence once the direction was found.

In 1972 he went to Edinburgh to begin a PhD at the University of Edinburgh. This time he seemed to have chosen well: a brilliant professor there, Christopher Longuet-Higgins (1923–2004), was working on neural networks, exactly the direction Hinton had long believed could let a machine reproduce the functions of the brain. But his luck once again failed him. Just as he set out toward this goal, Longuet-Higgins abandoned his original research agenda and "defected" to the symbolic-AI camp, dismissing connectionist neural networks as nonsense. This was plainly the influence of Marvin Minsky, the AI heavyweight at MIT: Minsky's 1969 book Perceptrons (written with Seymour Papert) had all but wrecked the neural-network field, making 1972 the lowest point in its history. Longuet-Higgins therefore tried to persuade Hinton to drop neural networks and work on symbolic AI instead. Hinton's answer was: give me another six months and I will prove that this works. Six months later he said exactly the same thing, and again six months after that. The two went back and forth like this for five years, and Hinton kept working on the then-unfashionable neural networks all the way through to his PhD.
From 2023 on, every educator and every parent should recognize that the world our children will face, and what it will demand of teaching and learning, has changed completely. The era of genuinely personalized learning has arrived. Recall the learning path of our own childhood: year after year in the same large class of 40 to 70 students, taught by the same teacher, using the same textbook, completing the same homework. By contrast, when I visited schools in the San Francisco Bay Area in 2016, the leadership of every school was discussing how to offer children personalized learning. Khan Lab School (the K-12 school run by Khan Academy), for example, lets a child apply to skip ahead whenever they feel ready, or stay in their current grade if they do not. Those school visits seven years ago were a real shock to my assumptions: I had long heard about project-based learning and personalized learning, but seeing every school actively putting them into practice was a different kind of impact altogether.

Seven years later, with the rise of generative AI, personalized learning will gradually reach every child. As education evolves, AI will become part of the educational ecosystem, working alongside human teachers to give children a completely different learning experience. We will likely see new forms of AI-assisted, blended teaching, such as round-table conversations between students and long-departed thinkers. Children will be able to set their own pace and negotiate customized learning paths and materials with an AI tutor according to their own interests and goals. They will also be able to apply the Feynman technique, teaching what they have learned back to the AI and receiving instant feedback from the AI tutor. Educators will no longer be mere transmitters of knowledge; they will become guides for learning and partners who support students growing up at the dawn of the AI era. We will have more time and energy to devote to children's whole-person development, such as nurturing creativity and social intelligence. The deeper we move into the AI era, the more precious these distinctly human qualities may become. Over the next three years, for anyone working in education, anything that raises the efficiency of human-AI collaboration holds enormous opportunity: AI homework grading, AI lesson preparation, AI-customized education plans, AI early-childhood learning, and more.

[Example: using GPT-4V to grade a reading-comprehension question]
of the special features of the game, but all those efforts proved irrelevant, or worse, once search was applied effectively at scale. Also important was the use of learning by self play to learn a value function (as it was in many other games and even in chess, although learning did not play a big role in the 1997 program that first beat a world champion). Learning by self play, and learning in general, is like search in that it enables massive computation to be brought to bear. Search and learning are the two most important classes of techniques for utilizing massive amounts of computation in AI research. In computer Go, as in computer chess, researchers' initial effort was directed towards utilizing human understanding (so that less search was needed) and only much later was much greater success had by embracing search and learning.

In speech recognition, there was an early competition, sponsored by DARPA, in the 1970s. Entrants included a host of special methods that took advantage of human knowledge---knowledge of words, of phonemes, of the human vocal tract, etc. On the other side were newer methods that were more statistical in nature and did much more computation, based on hidden Markov models (HMMs). Again, the statistical methods won out over the human-knowledge-based methods. This led to a major change in all of natural language processing, gradually over decades, where statistics and computation came to dominate the field. The recent rise of deep learning in speech recognition is the most recent step in this consistent direction. Deep learning methods rely even less on human knowledge, and use even more computation, together with learning on huge training sets, to produce dramatically better speech recognition systems. As in the games, researchers always tried to make systems that worked the way the researchers thought their own minds worked---they tried to put that knowledge in their systems---but it proved ultimately counterproductive, and a colossal waste of researcher's time, when, through Moore's law, massive computation became available and a means was found to put it to good use.

In computer vision, there has been a similar pattern. Early methods conceived of vision as searching for edges, or generalized cylinders, or