LOS ANGELES, Feb. 11 (PTI) – ChatGPT scores at or near the roughly 60 percent passing threshold for the United States Medical Licensing Examination (USMLE), with responses that are coherent, internally consistent and frequently insightful, according to a new study.
Tiffany Kung of AnsibleHealth in California, USA, and colleagues tested ChatGPT's performance on the USMLE, a highly standardized and regulated series of three exams (Step 1, Step 2CK and Step 3) required for US medical licensure.
Taken by medical students and trainees, the USMLE assesses knowledge covering most medical disciplines, from biochemistry to diagnostic reasoning to bioethics.
After screening to remove image-based questions from the USMLE, the authors tested the software on 350 of the 376 public questions issued by the USMLE in June 2022, the study said.
In the study, published in the journal PLOS Digital Health, the authors found that ChatGPT scored between 52.4 percent and 75 percent on the three USMLE exams after removing indeterminate responses.
The pass threshold for each year is approximately 60%.
ChatGPT is a new artificial intelligence (AI) system, known as a large language model (LLM), designed to generate human-like writing by predicting upcoming sequences of words.
Unlike most chatbots, ChatGPT cannot search the internet, the study said.
Instead, it generates text using word relationships predicted by its internal processes, the study said.
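The idea of generating text by predicting the next word from learned word relationships can be illustrated with a deliberately minimal sketch. The toy bigram model below is an assumption for illustration only: real LLMs such as ChatGPT use large neural networks over subword tokens, not raw word-pair counts, but the core loop of "predict the most likely next word, given what came before" is the same.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count how often each word
# follows each other word in a tiny corpus, then predict the most
# frequent follower. (Hypothetical example text; not from the study.)
corpus = "the patient has a fever the patient has a cough".split()

# Build bigram counts: following[prev][next] = occurrences of the pair.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # most frequent follower of "the"
print(predict_next("has"))  # most frequent follower of "has"
```

A model like this only ever chooses words it has seen in context before, which is one reason the study's authors emphasize that ChatGPT's fluency comes from predicted word relationships rather than any lookup or internet search.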
According to the study, ChatGPT also demonstrated 94.6% agreement across all responses and generated at least one significant insight—new, non-obvious, and clinically valid—in 88.9% of its responses.
ChatGPT also outperformed PubMedGPT, a counterpart model trained specifically on biomedical domain literature, which scored 50.8% on an older dataset of USMLE-style questions, the study said.
While the relatively small input size limited the depth and scope of the analysis, the authors note that their findings provide a glimpse into ChatGPT’s potential to enhance medical education and, ultimately, clinical practice.
For example, AnsibleHealth clinicians have used ChatGPT to rewrite jargon-heavy reports so that they are easier for patients to understand, they added.
“Achieving a passing score on this notoriously difficult expert exam, without any artificial reinforcement, marks a significant milestone in the maturation of clinical AI,” the authors said.
Kung added that the role of ChatGPT in this study was not just as a research object.
"[We used] ChatGPT for writing [our] manuscript… We engaged with ChatGPT as colleagues, asking it to synthesize, simplify and provide counterpoint to an ongoing draft… All co-authors valued ChatGPT's input."
(This is an unedited and auto-generated story from a Syndicated News feed, the content body may not have been modified or edited by LatestLY staff)