His warning arrives as global competition accelerates and policy conversations struggle to keep pace.
A leading voice in artificial intelligence safety has reignited debate over the long-term risks of advanced systems, arguing that society is drastically underestimating the scale of the threat.
Speaking on Lex Fridman’s podcast, University of Louisville computer scientist Roman Yampolskiy said he does not see a good long-term outcome for humanity if we develop general superintelligence.
Futurism and Express cite Yampolskiy as saying in the interview that there is a 99.9% risk of AI ending humanity within the next century.
The interview, released in the summer of 2024, featured his view that no existing model has demonstrated true safety and that future versions could inherit or magnify critical weaknesses.
Probing the unknown
Yampolskiy, known for work in AI safety and cybersecurity, is among a small but influential group of early AI pioneers urging governments to recognise potential catastrophic outcomes.
His concerns gained traction amid what he described as an intensifying race for technological dominance under President Donald Trump’s AI competition agenda.
According to Yampolskiy, the most troubling risk stems from a widening gap between AI performance and human oversight, leaving society exposed to technologies it cannot reliably govern.
Sharp disagreement
Not all researchers share his prognosis.
A study from the University of Oxford and the University of Bonn, based on responses from more than 2,700 AI specialists, estimated only a 5% likelihood of human extinction from AI.
Co-author Katja Grace noted that debate persists mainly over the scale of the danger. “The disagreement seems to be whether the risk is 1% or 20%,” she said.
Several high-profile technologists, including Google Brain co-founder Andrew Ng and AI trailblazer Yann LeCun, have dismissed extinction fears outright. LeCun has accused industry leaders such as OpenAI’s Sam Altman of advancing “alarmist” narratives for strategic reasons.
Sources: Lex Fridman Podcast, University of Louisville, University of Oxford, University of Bonn, Express, Futurism