Survival

Google DeepMind CEO: AI Could Result In “Catastrophic Outcomes”

By Tommy Grant · December 8, 2025 · 2 Mins Read

Google DeepMind CEO Demis Hassabis has warned that artificial general intelligence, or AGI, could arrive within the next decade, and that it could bring "catastrophic outcomes."

Among the catastrophes Hassabis warned of are cyberattacks on energy or water infrastructure. Ruling classes could also use AGI to destroy critical infrastructure, eliminating human beings without the need for war.

Speaking at the Axios AI+ Summit in San Francisco last week, Hassabis described AGI as a model that exhibits "all the cognitive capabilities" of humans, including inventive and creative abilities, according to a report by RT. He also argued that current large language models remain "jagged intelligences" with gaps in reasoning, long-term planning, and continual learning. However, he suggested that AGI could soon become a reality with continued scaling and "one or two more big breakthroughs."

However, Hassabis did say that the period leading up to AGI is likely to include tangible risks and “catastrophic outcomes,” such as cyberattacks on energy or water infrastructure. “That’s probably almost already happening now… maybe not with very sophisticated AI yet,” he said, calling this the “most obvious vulnerable vector.” He added that bad actors, autonomous agents, and systems that “deviate” from intended goals all require serious mitigation. “It’s non-zero,” he said of the possibility that advanced systems could “jump the guardrail.”

Hassabis’ concerns echo broader warnings across the tech industry. An open letter published in October and signed by leading technologists and public figures claimed that "superintelligent" systems could threaten human freedom or even survival, urging a global prohibition on AI development until safety can be assured. Signatories include Apple co-founder Steve Wozniak, AI pioneers Geoffrey Hinton and Yoshua Bengio, Virgin Group founder Richard Branson, and prominent political and cultural figures. – RT

According to the mainstream media (and to AI itself), the best way to survive any AI revolution is to embrace it rather than fight it. Sure, it threatens the very essence of what it means to be human, but that's fine. Just accept that it will dominate and rule our lives.

AI is not the future. It’s already here.

Read the full article here
