Abstract
Background Generative pretrained transformer (GPT) models are among the latest large pretrained
natural language processing models. They enable model training with limited data and reduce
dependency on large annotated datasets, which are scarce and costly to establish and maintain.
There is growing interest in exploring the use of GPT models in health care.
Objective We investigate the performance of GPT-2 and GPT-Neo models for medical text prediction
using 374,787 free-text dental notes.
Methods We fine-tuned pretrained GPT-2 and GPT-Neo models for next word prediction on a dataset
of over 374,000 manually written sections of dental clinical notes. Each model was
trained on 80% of the dataset, validated on 10%, and tested on the remaining 10%.
We report model performance in terms of next word prediction accuracy and loss. Additionally,
we annotated each token in 100 randomly sampled notes by category (e.g., names, abbreviations,
clinical terms, and punctuation) and compared the performance of each model by token category.
For comparison, we also fine-tuned a non-GPT pretrained neural network model, XLNet (large),
for next word prediction.
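For readers who want a sense of how such fine-tuning is commonly set up, the sketch below shows causal language model fine-tuning with the Hugging Face transformers library; it is not the study's actual code, and the file names, sequence length, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (not the study's actual configuration) of fine-tuning a
# pretrained causal language model for next word prediction.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # or "EleutherAI/gpt-neo-1.3B" for GPT-Neo
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2/GPT-Neo define no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical plain-text files holding the train/validation splits,
# one note section per line.
data = load_dataset("text", data_files={"train": "notes_train.txt",
                                        "validation": "notes_val.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = data.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False yields the standard left-to-right (next word) language modeling loss.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-dental-notes",
                           per_device_train_batch_size=4,
                           num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=collator,
)
trainer.train()
```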
Results The models achieved acceptable accuracy scores (GPT-2: 76%; GPT-Neo: 53%), and the GPT-2
model also performed better in the manual evaluation, especially for names, abbreviations,
and punctuation. Both GPT models outperformed XLNet in terms of accuracy. We also share
the lessons learned, insights, and suggestions for future implementations.
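As a rough illustration of how top-1 next word prediction accuracy of this kind can be computed (assuming a PyTorch causal language model and tokenizer such as those above; this is not the study's exact evaluation code), one can compare the model's highest-probability token at each position with the token that actually follows:

```python
import torch

@torch.no_grad()
def next_token_accuracy(model, tokenizer, texts):
    # Fraction of positions where the model's top-ranked token matches
    # the token that was actually written next.
    correct, total = 0, 0
    model.eval()
    for text in texts:
        ids = tokenizer(text, return_tensors="pt").input_ids
        logits = model(input_ids=ids).logits   # (1, seq_len, vocab_size)
        preds = logits[:, :-1].argmax(dim=-1)  # predicted next token at each step
        targets = ids[:, 1:]                   # token that actually follows
        correct += (preds == targets).sum().item()
        total += targets.numel()
    return correct / total
```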
Conclusion The results suggest that pretrained models have the potential to assist medical charting
in the future. Our study presents one of the first implementations of a GPT model
applied to medical notes.
Keywords
natural language processing - generative pretrained transformer - text prediction
- electronic medical records