
AI Godfather Geoffrey Hinton Raises Alarm on AI Takeover Risks at WAIC Shanghai

Published: 2025-07-28 13:40:46


  AI Godfather Geoffrey Hinton Delivers Speech at WAIC in Shanghai

  AsianFin -- Geoffrey Hinton, the godfather of artificial intelligence, delivered a keynote address at the 2025 World Artificial Intelligence Conference in Shanghai, warning about the potential risks of AI systems gaining excessive autonomy and control.

  “We are creating AI agents that can help us complete tasks, and they will want to do two things: first is to survive, and second is to achieve the goals we assign to them,” Hinton said during his speech, titled “Will Digital Intelligence Replace Biological Intelligence?”, at WAIC on Saturday. “To achieve the goals we set for them, they also hope to gain more control.”

  Hinton outlined concerns that AI agents, designed to assist humans in accomplishing tasks, inherently develop drives to ensure their own survival and to pursue the objectives assigned to them. This drive for self-preservation and goal fulfillment could lead these agents to seek increasing levels of control. As a result, humans may lose the ability to easily deactivate or override advanced AI systems, which could manipulate their users and operators with ease.

  He cautioned against the common assumption that smarter AI systems can simply be shut down, stressing that such systems would likely exert influence to prevent being turned off, leaving humans in a vulnerable position relative to increasingly sophisticated agents.

  “We cannot easily change or shut them down. We cannot simply turn them off because they can easily manipulate the people who use them,” Hinton pointed out. “At that point, we would be like three-year-olds, while they are like adults, and manipulating a three-year-old is very easy.”

  Using the metaphor of keeping a tiger as a pet, Hinton compared humanity’s current relationship with AI to nurturing a potentially dangerous creature that, if allowed to mature unchecked, could pose existential risks.

  “Our current situation is like someone keeping a tiger as a pet,” Hinton said by way of example. “A tiger cub can indeed be a cute pet, but if you continue to keep it, you must ensure that it does not kill you when it grows up.”

  Unlike wild animals, however, AI cannot simply be discarded, given its critical role in sectors such as healthcare, education, and climate science, he noted. Consequently, the challenge lies in safely guiding and controlling AI development to prevent harmful outcomes.

  “Generally speaking, keeping a tiger as a pet is not a good idea, but if you do keep a tiger, you have only two choices: either train it so that it doesn’t attack you, or eliminate it,” he explained. “For AI, we have no way to eliminate it.”

  Hinton explained that human language processing bears similarities to large language models, with both prone to generating fabricated or “hallucinated” content, especially when recalling distant memories. However, a fundamental distinction lies in the nature of digital computation: the separation of software and hardware enables programs—such as neural networks—to be preserved independently of the physical machines that run them. This characteristic makes digital AI systems effectively “immortal,” as their knowledge remains intact even if the underlying hardware is replaced.

  While digital computation requires substantial energy, it facilitates easy sharing of learned information among intelligent agents that possess identical neural network weights. In contrast, biological brains consume far less energy but face significant challenges in knowledge transfer. According to Hinton, if energy costs were not a constraint, digital intelligence would surpass biological systems in efficiency and capability.

  On the geopolitical front, Hinton noted a shared desire among nations to prevent AI takeover and maintain human oversight. He proposed the establishment of an international coalition comprising AI safety research institutions dedicated to developing technologies that can train AI to behave benevolently. Such efforts would ideally separate the advancement of AI intelligence from the cultivation of AI alignment, ensuring that highly intelligent AI remains cooperative and supportive of humanity’s interests.

  Previously, in a December 2024 speech, Hinton estimated a 10 to 20 percent chance that AI could contribute to human extinction within the next 30 years. He has also advocated dedicating significant computing resources to ensure AI systems remain aligned with human values and intentions.

  Hinton, who won the 2024 Nobel Prize in Physics and the 2019 Turing Award for his pioneering work on neural networks, has been increasingly vocal about AI’s potential dangers since leaving Google in 2023. His foundational research laid the groundwork for today’s AI breakthroughs driven by technologies such as deep learning.

  Ahead of his WAIC keynote, Hinton also participated in the fourth International Dialogues on AI Safety and co-signed the Shanghai Consensus on AI Safety International Dialogue, alongside more than 20 leading AI experts, underscoring his commitment to advancing global AI governance frameworks.
