Google AI under fire for providing misleading information

Written by Jakob A. Overgaard

May 26, 2024, 9:38 AM CET

Technology
Photo: Shutterstock.com

Google’s new AI search feature, AI Overviews, has come under scrutiny for delivering misleading and sometimes bizarre responses to user queries. Examples include suggesting that users add glue to pizza sauce to keep cheese from sliding off and claiming that eating rocks can be beneficial to health.

As reported by several outlets, including GreekReporter, users and international news organizations have shared numerous instances of these strange and incorrect answers.

While many of these examples are humorous, they raise serious concerns about the reliability of AI-generated information.

A spokesperson for Google, Colette Garcia, stated, "The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce."

Garcia added that Google had conducted extensive testing before launching the tool and is committed to improving the system based on user feedback. "We’re taking swift action where appropriate under our content policies and using these examples to develop broader improvements to our systems, some of which have already started to roll out," she said.

Despite Google’s assurances, the issue of AI hallucinations (false or fabricated responses generated by predictive models) remains a significant concern. These errors occur because models like those used by Google and OpenAI generate text by predicting the most likely next word based on patterns in their training data, with no built-in check on whether the result is true, which can lead to fluent but incorrect or nonsensical outputs.
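To make that mechanism concrete, here is a minimal, hypothetical sketch of greedy next-word prediction. It is not Google's actual system; the bigram counts and the `most_likely_next` and `generate` functions are invented purely for illustration. The point is that a model choosing the statistically most likely continuation has no notion of truth, so confidently fluent output can still be wrong.

```python
# Toy illustration only: a language model picks the statistically most
# likely next word. If the training data associates words misleadingly,
# the "most likely" continuation is fluent but false -- a hallucination.

# Hypothetical bigram counts standing in for patterns learned from training data,
# e.g. satirical text the model ingested without any truth filter.
bigram_counts = {
    "eating": {"rocks": 3, "vegetables": 2},
    "rocks": {"is": 4, "are": 1},
    "is": {"healthy": 5, "dangerous": 1},
}

def most_likely_next(word: str) -> str | None:
    """Greedy decoding: return the highest-count continuation of `word`."""
    options = bigram_counts.get(word)
    if not options:
        return None
    return max(options, key=options.get)

def generate(start: str, max_words: int = 4) -> str:
    """Repeatedly append the most likely next word, with no fact check."""
    words = [start]
    while len(words) < max_words:
        nxt = most_likely_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("eating"))  # -> "eating rocks is healthy": plausible-sounding, false
```

Real systems use far larger neural models and more sophisticated decoding than raw bigram counts, but the failure mode is the same: likelihood under the training data is not the same thing as factual accuracy.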

The implications of such errors are far-reaching. Misinformation is already a widespread problem on the internet, and inaccurate responses from AI tools could exacerbate the situation. This is particularly troubling given the increasing reliance on AI for information and the potential impact on areas like politics and public health.

For instance, one erroneous response from Google’s AI claimed that the United States had had a Muslim president, falsely identifying Barack Obama as one.

Such misinformation could have serious consequences, undermining public trust in AI technologies and the information they provide.

Google’s efforts to address these issues include acknowledging that its generative AI is experimental and running adversarial tests that simulate potential bad actors. The aim is to prevent false or low-quality results from appearing in AI summaries.