
Can AI answer your health questions? Here’s what doctors say



More people are turning to AI chatbots for everyday questions, including health concerns.
Technology companies are now building tools specifically designed to answer medical queries.
These programs use large language models to analyze information and generate responses in plain language.
The goal is to help users better understand their health information.

New health-focused AI tools are emerging

In January, OpenAI introduced ChatGPT Health, a version of its chatbot built for health-related questions.
The company says the system can analyze medical records, wellness apps and wearable device data.
It aims to help users interpret health information and spot patterns in their data.
Access to the tool is limited for now, with a waiting list for early users.

Competitors are offering similar features

OpenAI is not alone in exploring AI-powered health tools.
Anthropic, another major AI company, has added comparable health features to some versions of its Claude chatbot.
These systems are designed to assist with understanding health information rather than replacing medical care.
Both companies stress that their tools should not diagnose diseases.

Chatbots are meant to assist, not replace doctors

AI companies say their chatbots should be used as informational tools.
They can summarize test results, explain medical terms or help users prepare questions for a doctor’s visit.
They can also highlight trends in medical records or fitness data.
However, they are not intended to provide medical diagnoses or treatment decisions.

They can be more personal than a Google search

Some doctors believe AI chatbots may improve how people look up health information online.
Unlike a general search engine, chatbots can tailor answers to the user’s situation.
They can incorporate details such as age, medications and health history.
This can produce responses that feel more relevant and specific.


Experts say responsible use can be helpful

Dr. Robert Wachter, a medical technology expert at the University of California, San Francisco, believes the tools have value when used carefully.
AI systems are not perfect and sometimes produce incorrect information.
But he says they can still offer guidance when people have no other immediate source of help.
“The alternative often is nothing, or the patient winging it,” Wachter said.
“And so I think that if you use these tools responsibly, I think you can get useful information.”

Providing more details improves AI responses

Experts recommend giving the chatbot as much context as possible.
Details about symptoms, medications and health history can help the system respond more accurately.
The more complete the information, the more tailored the answer tends to be.
Without enough context, the advice may be less useful or even misleading.

Some symptoms require immediate medical care

Doctors stress that certain symptoms should never be evaluated by a chatbot alone.
Shortness of breath, chest pain or a severe headache could signal a medical emergency.
In these cases, people should seek immediate medical attention instead of consulting AI tools.
A chatbot cannot replace urgent care.

Healthy skepticism is still important

Even in non-emergency situations, experts advise caution.
Dr. Lloyd Minor of Stanford University says AI responses should never be the only source of guidance.
Major health decisions require input from trained professionals.
“If you’re talking about a major medical decision, or even a smaller decision about your health, you should never be relying just on what you’re getting out of a large language model,” Minor said.

Sharing health data raises privacy questions

Many AI health tools work best when users upload personal medical information.
This may include medical charts, prescriptions or data from wearable devices.
But sharing this information with AI companies raises privacy concerns.
Users should understand how their data might be stored or used.


Health data shared with chatbots is not covered by HIPAA

In the United States, the HIPAA law protects medical records handled by doctors, hospitals and insurers.
However, that law does not apply to most technology companies that build AI chatbots.
This means the privacy rules for those platforms can be different.
“When someone is uploading their medical chart into a large language model, that is very different than handing it to a new doctor,” Minor said.

Companies say they apply extra safeguards

OpenAI and Anthropic say they add additional protections to users’ health data.
The companies say health information is stored separately from other data.
They also say this information is not used to train their AI models.
Users must opt in to share data and can disconnect their accounts at any time.

AI can still make mistakes

Research shows that AI chatbots can perform well on medical exams, but they sometimes struggle in real conversations with users.
A 1,300-person Oxford University study found that chatbot users did not make better health decisions than people using online searches.
The main issue was communication.
Users often failed to provide enough details, while chatbots mixed correct and incorrect information.
For now, experts suggest checking answers with multiple sources or even multiple AI tools.
