Abstract:
The goal of our research is to design improved interfaces for medical expert systems.
Previously, we explored graphical techniques to improve clinicians' acceptance
of the user interface. Now that devices that accept spoken input are
available, we wish to design interfaces that take advantage of this potentially more
natural interaction modality. To understand how clinicians might want to speak
to a medical decision-support system, we carried out an experiment that simulated
the availability of a spoken interface to the ONCOCIN medical expert system. ONCOCIN
provides therapy advice for patients on complex cancer therapy protocols based on
a description of the patient’s current medical status and laboratory-test values.
In the experiment, we had oncologists present a clinical case while observing the
ONCOCIN flowsheet display. A project member listened to the presentation, filled
in values on the flowsheet, and introduced purposeful misunderstandings of
the input. The results suggest that each individual developed a stereotypical grammar
for communicating with the program. Our experience with the purposeful miscommunications
suggests particular ways to tailor requests for repetition based on the part of the
utterance that was not understood.
Key-Words:
Continuous-Speech Recognition - Human-Computer Interfaces - Decision-Support Systems
- Unseen-Operator Experiments - Oncology