Concerns over the reliability of generative AI have resurfaced after Alphabet chief executive Sundar Pichai urged the public to treat AI-generated information with caution. In an interview with the BBC, he said current systems remain prone to errors and should be used alongside other sources rather than treated as authoritative.
His remarks come as Google rolls out Gemini 3 and integrates more AI features into search, heightening scrutiny of how these tools handle news, science and other sensitive topics.
A call for caution
Pichai said people should “learn to use these tools for what they’re good at,” stressing that generative models remain fallible despite significant progress. AI can help with creative tasks, he noted, but users should not rely on it for factual accuracy without cross-checking.
He said this is why Google continues to emphasise traditional search and maintains an ecosystem of information sources rather than shifting entirely to AI-generated summaries. The company places disclaimers on its AI responses to signal that mistakes are possible, but those warnings have not shielded it from criticism.
Google previously faced mockery over inaccurate answers in its AI Overviews feature, an episode that fuelled wider concerns about systems that sometimes fabricate details to satisfy user prompts.
Accuracy under pressure
Researchers who track generative AI performance say the problem is structural. Chatbots are designed to produce fluent text, not to verify information, and often “make up answers to please us,” Gina Neff, a professor of responsible AI at Queen Mary University of London, told the BBC.
Neff argued that big tech firms cannot place the burden of fact-checking on users and must take more responsibility for reducing errors, especially in areas such as health, science and current events. She warned that relying on consumers to identify inaccuracies is inadequate for a technology being pushed into everyday life.
A BBC analysis earlier this year found that major AI assistants, including those from OpenAI, Google, Microsoft and Perplexity, inaccurately summarised news stories in nearly half of the cases tested.
The Gemini 3 rollout
Despite the concerns, Google is pressing ahead with its latest generation of consumer AI. Gemini 3, unveiled on Tuesday, promises stronger reasoning and improved performance across image, audio and video inputs. It is also more tightly integrated into search through the company's new "AI Mode," which aims to give users a conversational experience closer to interacting with an expert.
Pichai has described this integration as a "new phase" of the AI platform shift, part of Google's effort to hold its ground against rivals such as OpenAI's ChatGPT, which have challenged its dominance in online search.
He acknowledged the constant tension between rapid development and building enough safeguards to minimise harm, but said Alphabet aims to be “bold and responsible at the same time.” The company has increased investment in AI security and is open-sourcing tools to detect AI-generated images.
A wider debate
Pichai also addressed resurfaced comments from Elon Musk, who once warned OpenAI's founders that DeepMind could create an AI "dictatorship." Pichai said no single company should have sole control over such powerful technology, noting that the industry now includes many developers.
Although the landscape is competitive, the debate over accuracy, safety and public trust remains central to how AI evolves. For Google, the challenge now is to accelerate innovation while convincing users that the systems guiding their searches can be relied upon, at least most of the time.
Sources: BBC, Digi24