Google DeepMind CEO: AI Could Result In “Catastrophic Outcomes”

By Tommy Grant | December 8, 2025 | 2 Min Read
Google DeepMind CEO Demis Hassabis has warned that artificial general intelligence, or AGI, could arrive within the next decade, and that it could bring "catastrophic outcomes."

Among the catastrophes Hassabis warned of are cyberattacks on energy or water infrastructure. Ruling classes could use AGI to destroy critical infrastructure, eliminating human beings without the need for war.

Speaking at the Axios AI+ Summit in San Francisco last week, Hassabis described AGI as a model that exhibits "all the cognitive capabilities" of humans, including inventive and creative abilities, according to a report by RT. He also argued that current large language models remain "jagged intelligences" with gaps in reasoning, long-term planning, and continual learning. Still, he suggested that AGI could soon become a reality with continued scaling and "one or two more big breakthroughs."

However, Hassabis did say that the period leading up to AGI is likely to include tangible risks and "catastrophic outcomes," such as cyberattacks on energy or water infrastructure. "That's probably almost already happening now… maybe not with very sophisticated AI yet," he said, calling this the "most obvious vulnerable vector." He added that bad actors, autonomous agents, and systems that "deviate" from intended goals all require serious mitigation. "It's non-zero," he said of the possibility that advanced systems could "jump the guardrail."

Hassabis' concerns echo broader warnings across the tech industry. An open letter published in October and signed by leading technologists and public figures claimed that "superintelligent" systems could threaten human freedom or even survival, urging a global prohibition on AI development until safety can be assured. Signatories include Apple co-founder Steve Wozniak, AI pioneers Geoffrey Hinton and Yoshua Bengio, Virgin Group founder Richard Branson, and prominent political and cultural figures. -RT

According to the mainstream media and the AI industry itself, the best way to survive any AI revolution is to embrace it instead of fighting against it. Sure, it threatens the very essence of what it means to be human, but that's fine. Just accept that it will dominate and rule our lives.

AI is not the future. It’s already here.
