The transformative potential of Large Language Models (LLMs) for data mining of Electronic Health Records.

Abstract

Objectives To explore the potential of Large Language Models (LLMs) to extract and structure information from free-text clinical reports, with a specific focus on identifying and classifying patient comorbidities in the electronic health records of oncology patients treated at the Virgen Macarena Hospital in Seville. We specifically evaluate the gpt-3.5-turbo-1106 and gpt-4-1106-preview models in comparison with the capabilities of specialized human evaluators.

Methods We implemented a script using the OpenAI API to extract structured information in JSON format from comorbidities reported in 250 personal history reports. These reports were manually reviewed in batches of 50 by five specialists in radiation oncology. A detailed analysis of the discrepancies between the GPT models and the physicians allowed us to establish the ground truth. We compared the results using metrics such as sensitivity, specificity, precision, accuracy, F1 score, and the kappa index, together with the McNemar test, in addition to examining the common causes of errors in both humans and GPT models.
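The extraction step described above can be sketched as follows. This is a minimal illustration, not the authors' script: the model name comes from the abstract, but the prompt wording, the JSON schema (a `comorbidities` list with `name` and `explicit` fields), and the function names are assumptions introduced here for clarity.

```python
import json

# Hypothetical system prompt; the actual prompt engineering is not
# described in the abstract and would need careful design.
SYSTEM_PROMPT = (
    "You are a clinical data extractor. From the personal history report, "
    "return a JSON object with a 'comorbidities' list; each item has a "
    "'name' (string) and 'explicit' (true if stated verbatim in the text, "
    "false if inferred)."
)

def extract_comorbidities(report_text, client, model="gpt-4-1106-preview"):
    """Send one free-text report to the chat API and parse the JSON reply.

    `client` is expected to be an OpenAI-style client (e.g. openai.OpenAI());
    it is passed in rather than imported so the function stays testable.
    """
    response = client.chat.completions.create(
        model=model,
        # JSON mode constrains the model to emit syntactically valid JSON.
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": report_text},
        ],
    )
    return json.loads(response.choices[0].message.content)
```

In a batch run over the 250 reports, the returned dictionaries would be accumulated and later compared against the physicians' annotations.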

Results The GPT-3.5 model exhibited slightly lower performance compared to physicians across all metrics, though the differences were not statistically significant. GPT-4 demonstrated clear superiority in several key metrics. Notably, it achieved a sensitivity of 96.8%, compared to 88.2% for GPT-3.5 and 88.8% for physicians. However, physicians marginally outperformed GPT-4 in precision (97.7% vs. 96.8%). GPT-4 showed greater consistency, replicating exact results in 76% of the reports after 10 analyses, in contrast to 59% for GPT-3.5. Physicians were more likely to miss explicit comorbidities, possibly due to fatigue or distraction, while the GPT models more frequently inferred non-explicit comorbidities, sometimes correctly, though this also resulted in more false positives.
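The metrics reported above are standard functions of the per-report confusion counts. As a reference, a sketch of how they would be computed from true/false positives and negatives (the raw counts behind the paper's percentages are not given in the abstract, so the example below uses illustrative numbers only):

```python
def sensitivity(tp, fn):
    """Recall: fraction of true comorbidities that were detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of true negatives correctly left unflagged."""
    return tn / (tn + fp)

def precision(tp, fp):
    """Fraction of flagged comorbidities that were correct."""
    return tp / (tp + fp)

def accuracy(tp, tn, fp, fn):
    """Overall fraction of correct decisions."""
    return (tp + tn) / (tp + tn + fp + fn)

def f1(tp, fp, fn):
    """Harmonic mean of precision and sensitivity."""
    p = precision(tp, fp)
    r = sensitivity(tp, fn)
    return 2 * p * r / (p + r)
```

For example, with illustrative counts tp=97, fn=3, a sensitivity of 0.97 results, in the range reported for GPT-4; the McNemar test would then be applied to the paired disagreements between each model and the ground truth.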

Conclusion The studied LLMs, with carefully designed prompts, demonstrate competence comparable to that of medical specialists in interpreting clinical reports, even in complex and confusingly written texts. Considering also their superior efficiency in terms of time and costs, these models represent a preferable option over human analysis for data mining and structuring information in large collections of clinical reports.

Competing Interest Statement

The authors have declared no competing interest.

Funding Statement

This study did not receive any funding.

Author Declarations

I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.

Yes

The details of the IRB/oversight body that provided approval or exemption for the research described are given below:

On 18-1-2024, a favorable opinion was issued by the Research Ethics Committee of the Virgen Macarena and Virgen del Rocio University Hospitals. EC_IA_V1 (Version 1-Dec-2023).

I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.

Yes

I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).

Yes

I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.

Yes

Data Availability

All data produced in the present study are available upon reasonable request to the authors.
