
Parents Turn to ChatGPT for Medical Advice Over Doctors, New Study Finds

A new study by researchers at the University of Kansas has revealed a startling trend: parents are increasingly turning to ChatGPT for advice about their children’s health rather than consulting healthcare professionals. The finding raises critical questions about the trustworthiness of AI-generated content in healthcare.

Lead author Calissa Leslie-Miller, a doctoral student, expressed concern about the trend. “When we began this research, it was right after ChatGPT first launched — we had concerns about how parents would use this new, easy method to gather health information for their children,” she noted. The study, published in the Journal of Pediatric Psychology, involved 116 parents aged 18 to 65.

Participants were shown health-related texts written either by healthcare professionals or by ChatGPT, without being told the source. They rated each text on five criteria: perceived morality, trustworthiness, expertise, accuracy, and how likely they would be to rely on the information. Surprisingly, many parents found it difficult to distinguish the AI-generated content from that written by human experts, and where ratings differed significantly, ChatGPT’s text was judged more trustworthy, accurate, and reliable.

“This outcome was surprising to us, especially since the study took place early in ChatGPT’s availability,” Leslie-Miller stated. “We’re starting to see that AI is being integrated in ways that may not be immediately obvious, and people may not even recognize when they’re reading AI-generated text versus expert content.”

Despite the impressive capabilities of ChatGPT, researchers highlighted the risks associated with using AI for medical advice. Leslie-Miller cautioned that during the study, some iterations of AI output contained incorrect information, raising concerns about the potential consequences of relying on AI for child health matters. “AI tools like ChatGPT are prone to ‘hallucinations’ — errors that occur when the system lacks sufficient context,” she explained.
