AI cannot solve every problem we currently face. In science, for example, we certainly have not solved everything we want to solve, and in many cases we do not really get to choose what we study: nature forces certain questions upon us. The result is that we inevitably end up face to face with computational irreducibility.
Many kinds of problems follow this same general pattern: finding a winning sequence of plays in a game graph, finding the solution to a puzzle as a sequence of moves through a graph of possibilities, finding a proof of a theorem given certain axioms, finding a chemical synthesis pathway given certain basic reactions, and, in general, solving the multitude of NP problems in which many "nondeterministic" paths of computation are possible.
In practice, the relevant graph is usually astronomically large, so the challenge is to work out which steps to take without tracing the whole graph of possibilities. One common approach is to assign a score to each possible state or outcome and pursue only the highest-scoring paths. In automated theorem proving it is also common to work "downward" from the initial propositions and "upward" from the final theorem, trying to see where the paths meet in the middle. And there is another important idea: once one has established that a path exists from X to Y, one can add X → Y as a new rule to the collection of rules.
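The scoring idea can be sketched as a greedy best-first search that only ever expands the most promising frontier state. This is a minimal illustration, not any particular theorem prover or game engine; the toy graph, goal test, and scoring function below are invented for the example.

```python
import heapq

def best_first_search(start, is_goal, neighbors, score):
    """Greedy best-first search: always expand the highest-scoring state.

    start     -- initial state
    is_goal   -- predicate: have we reached a goal state?
    neighbors -- function mapping a state to its successor states
    score     -- heuristic: higher means "more promising"
    """
    # heapq is a min-heap, so we store negated scores.
    frontier = [(-score(start), start, [start])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path          # the sequence of moves that reached the goal
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-score(nxt), nxt, path + [nxt]))
    return None                  # goal unreachable from start

# Toy multiway graph: reach 20 from 1 using moves +1 or *2,
# scoring states by how close they are to the target.
path = best_first_search(
    1,
    is_goal=lambda s: s == 20,
    neighbors=lambda s: [s + 1, s * 2],
    score=lambda s: -abs(20 - s),
)
```

The point is that only a thin sliver of the full graph of possibilities ever gets explored: states that score badly simply stay buried in the priority queue.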
A further idea is to look inside an autoencoder and pull out the reduced representation it has come up with. As data flows from layer to layer in the neural net, the network is always trying to preserve the information needed to reproduce the original input. So if a layer has fewer elements, what is present at that layer must correspond to some reduced representation of the original input. Take a standard modern image autoencoder trained on a few billion typical web images: feed it a picture of a cat, and it will successfully reproduce something that looks like the original picture, while in the middle there is a reduced representation with many fewer pixels. We do not know what the elements ("features") of this "black-box model" mean, but somehow it successfully captures "the essence of the picture".
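A minimal sketch of the bottleneck idea, under simplifying assumptions: real image autoencoders are deep nonlinear networks, but for a purely *linear* autoencoder the optimal bottleneck weights are known in closed form via the SVD (this is just PCA). The toy data below, 16-dimensional inputs that secretly live near a 4-dimensional subspace, is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 200 samples of 16-dim data lying near a 4-dim subspace.
latent = rng.normal(size=(200, 4))
mixing = rng.normal(size=(4, 16))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 16))

# For a linear autoencoder with a 4-unit bottleneck, the optimal
# encoder/decoder project onto the top-4 principal directions.
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
encoder = Vt[:4].T           # 16 -> 4: input to reduced representation
decoder = Vt[:4]             # 4 -> 16: representation back to input space

Z = (X - mean) @ encoder          # the "middle layer": 4 numbers per input
X_hat = Z @ decoder + mean        # reconstruction from the bottleneck
mse = np.mean((X_hat - X) ** 2)   # tiny: 4 numbers kept "the essence"
```

Each 16-dimensional input has been squeezed through 4 numbers and still reconstructs almost perfectly, which is exactly the sense in which the bottleneck layer must be a reduced representation of the input.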
In short, AI has the potential to give us streamlined ways to find certain pockets of computational reducibility, but computational irreducibility will always be there, leading to unexpected "surprises" and things we cannot quickly or "narratively" get to. There will always be more to discover: things that need more computation to reach, and pockets of reducibility we did not know were there. Ultimately, AI or not, computational irreducibility is what will prevent us from relying on AI to completely "solve science".