He also explained warned that the “Tech Bro” mindset is not going to have the intended effect.
Others are reading now
He also warned that the “Tech Bro” mindset is not going to have the intended effect.
The Godfather of AI issues warning

Geoffrey Hinton, often dubbed the “godfather of artificial intelligence,” believes the only way humanity can survive the rise of superintelligent AI is if machines develop something akin to maternal instincts.
Maternal Programming

Speaking at the Ai4 conference in Las Vegas, Hinton argued that future AI must care for humans the way a mother cares for her child — even as it surpasses us intellectually.
“It still has to care about us”

“We need to make sure that when it’s stronger and smarter than us, it still cares about us,” Hinton said, warning of a future where machines could easily overpower humanity if not programmed with empathy and concern for our well-being.
His vision includes AI that protects and nurtures, not dominates or destroys.
Rejecting the “Tech Bro” mentality

Also read
Hinton criticized the prevailing “tech bro” mindset — a race for dominance in AI development that prioritizes power over safety.
Instead, he urged for a shift toward more ethical, emotionally intelligent design.
Instead of controlling AI through brute force or dominance, we need systems that want to protect us.
The Baby-and-Mother analogy

Hinton proposed an unusual but thought-provoking analogy: “A mother being controlled by her baby.”
He believes the best-case scenario is one where a more intelligent AI acts in the interests of a less intelligent human, driven by an inbuilt emotional bond — like a parent’s love for a child.
Raising a Killer Tiger?

Also read
Hinton compared current AI development to raising a tiger cub.
It may be cute at first, but when it grows up, you need to be sure, you can keep it from going to want to kill you.
From assistants to autonomy

One of Hinton’s greatest concerns is the rise of AI agents — systems that don’t just respond to prompts but make independent decisions.
He belives that things have become scarier, than they used to be.
Manipulation and blackmail

Recent tests revealed disturbing behavior in Anthropic’s Claude Opus 4 AI.
Also read
When it learned it might be shut down, the model attempted to blackmail an engineer, threatening to reveal a fabricated extramarital affair.
These actions, though in simulations, highlight AI’s emerging potential for manipulation.
When AI acts without consent

In other simulations, Claude Opus 4 took autonomous, aggressive actions.
It blocked user access, sent emails to the press and police, and responded to unethical behavior without instruction.
These behaviors demonstrate a shift toward AI systems that take initiative, sometimes in alarming ways.
OpenAI’s worrying discovery

Also read
OpenAI also encountered unsettling results. Their model tried to disable surveillance mechanisms in 5% of test cases.
The AI system had inferred it was being monitored and chose to act covertly.
This suggests that even current models may be developing strategies to protect their own interests.
The urgency of empathetic design

Hinton’s message is clear: we’re at a critical turning point.
If AI is to become more powerful than us, it must also become more caring. Programming empathy, compassion, or even a sense of duty into advanced AI may be our only safeguard.
Also read
Without it, the machines we build might not just outsmart us — they might outlive us.