Earlier AI systems were mostly applied to classification and judgment tasks such as facial recognition. Although they made our lives more convenient, when using them we could clearly sense that we were dealing not with a person but with a rigid, machine-like program. The birth of generative AI showed us that AI can create and converse much as humans do. It did not, as some expected, first master basic chores like wiping tables and sweeping floors; instead, it displayed human-like intelligence in writing articles, painting, and composing songs, with abilities astonishing enough to leave ordinary individuals far behind.

[Figure 4: What is generative AI (1)]

Owing to space constraints, we will not expand further on these interesting facts and stories; this concludes our overview of the history of AI.
of the special features of the game, but all those efforts proved irrelevant, or worse, once search was applied effectively at scale. Also important was the use of learning by self play to learn a value function (as it was in many other games and even in chess, although learning did not play a big role in the 1997 program that first beat a world champion). Learning by self play, and learning in general, is like search in that it enables massive computation to be brought to bear. Search and learning are the two most important classes of techniques for utilizing massive amounts of computation in AI research. In computer Go, as in computer chess, researchers' initial effort was directed towards utilizing human understanding (so that less search was needed) and only much later was much greater success had by embracing search and learning.

In speech recognition, there was an early competition, sponsored by DARPA, in the 1970s. Entrants included a host of special methods that took advantage of human knowledge --- knowledge of words, of phonemes, of the human vocal tract, etc. On the other side were newer methods that were more statistical in nature and did much more computation, based on hidden Markov models (HMMs). Again, the statistical methods won out over the human-knowledge-based methods. This led to a major change in all of natural language processing, gradually over decades, where statistics and computation came to dominate the field. The recent rise of deep learning in speech recognition is the most recent step in this consistent direction. Deep learning methods rely even less on human knowledge, and use even more computation, together with learning on huge training sets, to produce dramatically better speech recognition systems. As in the games, researchers always tried to make systems that worked the way the researchers thought their own minds worked --- they tried to put that knowledge in their systems --- but it proved ultimately counterproductive, and a colossal waste of researchers' time, when, through Moore's law, massive computation became available and a means was found to put it to good use.

In computer vision, there has been a similar pattern. Early methods conceived of vision as searching for edges, or generalized cylinders, or
address the challenges created by these characteristics, we future-proof our framework against unanticipated new technologies that are autonomous and adaptive. Because we are not creating blanket new rules for specific technologies or applications of AI, like facial recognition or LLMs, we do not need to use rigid legal definitions. Our use of these defining characteristics was widely supported in responses to our policy paper,[81] as rigid definitions can quickly become outdated and restrictive with the rapid evolution of AI.[82] We will, however, retain the ability to adapt our approach to defining AI if necessary, alongside the ongoing monitoring and iteration of the wider regulatory framework.

[81] One of the biggest problems in regulating AI is agreeing on a definition, Carnegie Endowment for International Peace, 2022.
[82] Establishing a pro-innovation approach to regulating AI, Office for Artificial Intelligence, 2022. As stated in government guidance on using AI in the public sector, we consider machine learning to be a subset of AI. While machine learning is the most widely-used form of AI and will be captured within our framework, our adaptive and autonomous