Even with advanced technologies such as voice recognition software assisting the transcription industry, manual editing and review by professional medical transcription services remain relevant, a recent JAMA study finds.
Speech recognition software such as Dragon NaturallySpeaking has emerged as a valuable tool for physicians and other clinicians, automating the transcription of medical reports and reducing the effort involved in recording and sending voice files for transcription. According to a recent market report by Technavio, the emergence of voice recognition technologies is one of the key trends in the global medical transcription market. Though such software is designed to convert audio files to text without human intervention, its accuracy remains a significant concern.
The latest study, published in JAMA Network Open, found that 7.4 percent of words were incorrectly transcribed by the automated software. To identify and analyze errors at each stage of the speech recognition (SR)-assisted dictation process, a team from Brigham and Women’s Hospital, Harvard Medical School, and other institutions collected a stratified random sample of 217 notes dictated by 144 physicians between January 1 and December 31, 2016, at two healthcare organizations using the same back-end SR system, Dragon Medical 360 | eScription (Nuance). The sample comprised 44 operative notes, 83 office notes, and 40 discharge summaries from Partners HealthCare, and 15 operative notes and 35 discharge summaries from UCHealth. The team reviewed each note at the main processing stages of dictation and analyzed the errors found at each stage.
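For readers curious what a stratified random sample looks like in practice, here is a minimal sketch in Python, assuming strata defined by organization and note type. The pool sizes and note IDs are hypothetical, and this is an illustration of the general technique, not the study's actual sampling procedure; only the per-stratum target counts are taken from the sample described above.

```python
import random

# Minimal sketch of stratified random sampling by organization and note type.
# Pools of note IDs are hypothetical stand-ins for all notes dictated in 2016.
note_pools = {
    ("Partners HealthCare", "operative note"): [f"PH-OP-{i}" for i in range(500)],
    ("Partners HealthCare", "office note"): [f"PH-OF-{i}" for i in range(900)],
    ("Partners HealthCare", "discharge summary"): [f"PH-DS-{i}" for i in range(400)],
    ("UCHealth", "operative note"): [f"UC-OP-{i}" for i in range(200)],
    ("UCHealth", "discharge summary"): [f"UC-DS-{i}" for i in range(350)],
}

# Number of notes drawn per stratum, matching the sample sizes above.
targets = {
    ("Partners HealthCare", "operative note"): 44,
    ("Partners HealthCare", "office note"): 83,
    ("Partners HealthCare", "discharge summary"): 40,
    ("UCHealth", "operative note"): 15,
    ("UCHealth", "discharge summary"): 35,
}

# Draw each stratum independently, without replacement.
sample = {
    stratum: random.sample(note_pools[stratum], k)
    for stratum, k in targets.items()
}

total = sum(len(notes) for notes in sample.values())
print(f"Total sampled notes: {total}")  # 44 + 83 + 40 + 15 + 35 = 217
```

Sampling each stratum separately guarantees that every note type from each organization is represented in the fixed proportions the researchers chose, rather than leaving representation to chance.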
A medical record review was also conducted to validate the content of the notes, for example, by referring to a patient’s structured medication list to verify a medication order that was partially inaudible in the original audio recording.
The error rate represents the number of mistakes per 100 words (a short calculation sketch follows the list below). Key findings of this cross-sectional study include:
- Overall, 96.3 percent of the 217 notes included at least one error directly after dictation and before review by human transcriptionists or physicians themselves.
- About seven in every 100 words in unedited clinical documents created with speech recognition technology contained errors, for an overall mean error rate of 7.4%; roughly 1 in 250 words contained a clinically significant error.
- The error rate decreased substantially after revision by medical transcriptionists (MTs), to 0.4%, and fell further in signed notes (SNs), which had an overall error rate of 0.3%.
- The proportion of errors that were clinically significant increased from 5.7% in the original SR transcriptions to 8.9% after MT editing, then decreased to 6.4% in SNs; these are shares of a shrinking pool of errors, not increases in absolute error counts.
- In the 33-note subset reviewed by two annotators, 329 errors were identified in total; for the 171 errors identified by both annotators, inter-annotator agreement was 71.9%.
- Across all the original SR transcriptions, discharge summaries had higher error rates than other note types, and operative notes had lower error rates.
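To make these figures concrete, here is a minimal sketch in Python of how a per-100-word error rate is computed. The word and error counts are illustrative assumptions chosen to mirror the rates reported above, not the study’s raw data, and the function name is hypothetical.

```python
# Minimal sketch: expressing error counts as mistakes per 100 words.
# All counts below are illustrative assumptions, not the study's raw data.

def error_rate_per_100_words(num_errors: int, num_words: int) -> float:
    """Error rate expressed as mistakes per 100 words (a percentage)."""
    return 100.0 * num_errors / num_words

words = 1000  # hypothetical note length

# Stage-by-stage error counts chosen to reproduce the reported rates:
# 7.4% in raw SR output, 0.4% after MT review, 0.3% in the signed note.
errors_by_stage = {
    "Original SR transcription": 74,
    "MT-reviewed version": 4,
    "Signed note (SN)": 3,
}

for stage, errors in errors_by_stage.items():
    rate = error_rate_per_100_words(errors, words)
    print(f"{stage}: {rate:.1f} errors per 100 words")

# "1 in 250 words" clinically significant in unedited SR output:
clinically_significant = 4  # 1000 words / 250 = 4 errors
rate = error_rate_per_100_words(clinically_significant, words)
print(f"Clinically significant: 1 in {words // clinically_significant} words ({rate:.1f}%)")
```

Normalizing by 100 words makes rates comparable across notes of very different lengths, which is why the study reports errors per 100 words rather than raw counts.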
This research, the first of its kind to analyze errors at the different processing stages of documents created with a back-end SR system, found a comparatively low error rate in signed notes, highlighting the crucial role of manual editing and review in the SR-assisted documentation process.
Certain documentation errors can put patients at significant risk. Even when using voice recognition software to streamline transcription, medical practices should incorporate manual review, quality assurance, and auditing at the final stage of the process to ensure that reports are accurate. It is ideal to outsource such tasks to an experienced HIPAA-compliant medical transcription company that can deliver the required level of accuracy with fast turnaround times.