DOI: 10.1055/s-0044-1788330
Patient–Clinician Diagnostic Concordance upon Hospital Admission
Funding This study was supported by the Agency for Healthcare Research and Quality.
Abstract
Objectives This study aimed to pilot an application-based patient diagnostic questionnaire (PDQ) and assess the concordance of the admission diagnosis reported by the patient and entered by the clinician.
Methods Eligible patients completed the PDQ, which assessed their understanding of and confidence in the diagnosis, 24 hours into hospitalization, either independently or with assistance. Demographic data, the hospital principal problem upon admission, and International Classification of Diseases 10th Revision (ICD-10) codes were retrieved from the electronic health record (EHR). Two physicians independently rated concordance between the patient-reported diagnosis and the clinician-entered principal problem as full, partial, or none; discrepancies were resolved by consensus. Descriptive statistics were used to report demographics for concordant (full) and nonconcordant (partial or no) outcome groups. Multivariable logistic regression models, with PDQ questions and a priori selected EHR data as independent variables, were used to predict nonconcordance.
Results Of 202 participants, 157 (77.7%) completed questionnaires; 77 (49.0%), 46 (29.3%), and 34 (21.7%) were rated fully concordant, partially concordant, and not concordant, respectively. The weighted Cohen's kappa for agreement on preconsensus ratings by independent reviewers was 0.81 (95% CI: 0.74, 0.88). In multivariable analyses, patient-reported lack of confidence and undifferentiated symptoms (ICD-10 "R-code") for the principal problem were significantly associated with nonconcordance (partial or no concordance ratings): odds ratio 3.43 (95% CI: 1.30, 10.39; p = 0.02) after adjusting for other PDQ questions, and 4.02 (95% CI: 1.80, 9.55; p < 0.01) in a model using a priori selected variables, respectively.
Conclusion About one-half of patient-reported diagnoses were concordant with the clinician-entered diagnosis on admission. An ICD-10 “R-code” entered as the principal problem and patient-reported lack of confidence may predict patient–clinician nonconcordance early during hospitalization via this approach.
Background and Significance
Diagnostic excellence, defined as an optimal, timely, cost-effective, convenient process to attain an accurate, precise, and understandable explanation about a patient's condition, is fundamental to reducing errors in the diagnostic process and improving safety.[1] The National Academy of Medicine (NAM) defines diagnostic errors as missed opportunities to make a timely diagnosis and communicate an accurate and timely explanation of the patients' health problems.[2] Over the past decade, much research has underscored the multifaceted collaboration required of patients, clinicians, and health systems to promote diagnostic excellence and prevent the harmful consequences of diagnostic errors.[1] [3] [4]
Emerging data suggest a rate of harmful diagnostic errors between 5 and 18% for hospitalized patients who transfer to intensive care, expire, or are readmitted.[5] [6] [7] [8] [9] In these studies, certain diagnostic processes, such as the initial assessment, history-taking, and diagnostic testing, contributed to these errors.[9] [10] While concerns about team communication and patient experience were less frequently observed, they may impede a timely and accurate explanation of the admission diagnosis for the patient.[11] More recently, advances in patient–clinician communication have emphasized the process by which patients and care partners assess the accuracy of the diagnostic explanation that they receive; the lack of explanation corresponds to patients' perception of their diagnoses as neither accurate nor inaccurate.[12] Emerging consensus underscores the importance of encouraging transparency in communication among patients and the care team to improve diagnostic safety.[13] [14] Acknowledging diagnostic uncertainty in this context—especially when there is a mismatch between patients' and clinicians' understanding of leading diagnoses—may lead to decreases in diagnostic errors experienced later during the hospital encounter.[15] [16]
The 21st Century Cures Act now requires that patients have easy access to their health information via online portals. Application-based questionnaires have the potential to serve as structured prompts for patients to review their health information and report concerns earlier during the hospital encounter as a means to engage more meaningfully in patient–clinician diagnostic communication.[17] Building upon recent work emphasizing suboptimal patient experiences as potential markers of diagnostic concerns in ambulatory settings,[17] [18] we developed and iteratively refined requirements for an inpatient-focused patient diagnostic questionnaire (PDQ) that assessed patients' understanding of their main reason for hospitalization, confidence in the admission diagnosis, and experiences with patient–clinician communication.[19] Given NAM's emphasis on communicating an accurate and timely explanation to the patient, a validated questionnaire might be useful for assessing shared understanding of the admission diagnosis and improving patient–clinician communication regarding the diagnostic process early during the hospital encounter. Such tools have implications for promoting an equitable learning health system that incorporates actionable insights from patient questionnaires into the care team decision-making process to achieve diagnostic excellence.[20] [21]
Objective
We piloted an application-based PDQ that assesses team communication and patient experiences about the diagnostic process and assessed the concordance of the admission diagnosis reported by the patient and entered by the clinician.
Materials and Methods
Overview and Prior Work
As part of an Agency for Healthcare Research and Quality Patient Safety Learning Lab, we employed a user-centered approach to develop, test, and refine requirements for a set of electronic health record (EHR)-integrated interventions that target key diagnostic process failures in hospitalized patients.[19] One intervention included the 10-item PDQ, which assessed patients' understanding of their admission diagnosis and confidence that it was correct; whether all symptoms were being addressed; satisfaction with care team communication about their diagnosis; and involvement in shared decision-making (see outcomes table below for previously published questions).[19] To gather patient concerns about the diagnostic process, the PDQ was incorporated into a web application optimized for mobile devices ([Fig. 1]).[22] [23] [24] During our user-centered design process, we observed that combining “Unsure” with “No” elicited a clear, dichotomous response from patients for each item in the PDQ.[19]


Setting and Participants
This study was approved by the Mass General Brigham (MGB) Human Research Committee and was conducted at Brigham and Women's Hospital, a 793-bed acute care hospital affiliated with MGB, from January 2021 to September 2021. Eligible participants were English-speaking patients aged 18 years or older who had been admitted with any diagnosis to general medicine services for at least 24 hours, identified using our EHR (Epic Systems, Inc.). All patients were required to have an admission diagnosis entered for the principal problem in the hospital problem list (HPL), which was facilitated by our general medicine admission order set. Questionnaire responses were accessible from an EHR-integrated dashboard, used by research staff to administer the PDQ and by clinicians to view responses within their workflow.[19] [25] [26] [27] [28] [29] We did not exclude patients on the basis of the admission diagnosis.
Remote or In-Person Administration of Patient Diagnostic Questionnaire
Amidst social distancing constraints during the coronavirus disease 2019 (COVID-19) pandemic, a trained research assistant (RA) approached eligible patients 24 to 48 hours after admission via a remote or in-person workflow ([Fig. 1]) to administer the PDQ. Eligible patients, including those on precautions, were first contacted by phone. Those who were not on precautions and could not be reached by phone were approached in person. Patients who were behaviorally or cognitively impaired (per the bedside nurse's clinical assessment) at the time of the in-person approach were excluded. Participants could choose to have the RA complete the PDQ on their behalf or receive a secure hyperlink to the PDQ web application by email, which could be accessed on a mobile device. In the latter case, the RA confirmed receipt of the PDQ upon enrollment. Clinicians could view patients' responses and comments via an EHR-integrated dashboard, a workflow established in prior implementation efforts.[26] [28]
Data Collection
Demographic data, the principal problem entered in the HPL by the care team upon admission, and respective International Classification of Diseases 10th Revision (ICD-10) codes for that principal problem were retrieved from the EHR via the enterprise data warehouse (EDW). Responses submitted by the patient or by RAs on behalf of the patient were exported from the application (CSV format) and linked to EHR demographic, clinical, and administrative data retrieved via the EDW.
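As an illustrative sketch only (not our production pipeline), this linkage step can be expressed in R, the language used for our analyses; the file and column names below (e.g., `patient_id`, `icd10_principal_problem`) are hypothetical:

```r
# A minimal sketch of linking the application's CSV export to EHR data
# retrieved via the enterprise data warehouse. All file and column names
# are hypothetical stand-ins, not the study's actual schema.
library(dplyr)

pdq <- read.csv("pdq_responses.csv", stringsAsFactors = FALSE)
ehr <- read.csv("edw_extract.csv", stringsAsFactors = FALSE)

linked <- pdq %>%
  inner_join(ehr, by = "patient_id") %>%
  mutate(
    # Flag undifferentiated symptom codes (ICD-10 "R-codes", R00-R99)
    r_code = grepl("^R", icd10_principal_problem)
  )
```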
Measurements and Outcomes Assessment
The main outcome was patient–clinician agreement on the admission diagnosis, measured as full, partial, or no concordance between the patient-reported diagnosis entered in the PDQ and the principal problem entered by clinicians in the HPL in the EHR. A three-category scoring system (full concordance [1], partial concordance [0.5], and no concordance [0]) was used to assess patient–clinician concordance based on methodology established in prior work.[30] Two hospital medicine physicians (A.K.D., J.L.S.) independently rated each patient-reported and clinician-entered diagnosis pair using this scoring system. If the patient-reported and clinician-entered diagnoses both reflected the same symptom, clinical sign, or pathophysiological disease process, the pair was scored "fully concordant." If the diagnoses were related but not identical, the pair was scored "partially concordant." For example, a partial score was entered if the diagnoses were from the same organ system (but one was less specific), reflected different but similar pathophysiological processes (e.g., infection vs. inflammation) in the same organ, or represented a secondary manifestation of the primary disease process (e.g., pleural effusion in the context of pneumonia). See [Supplementary Appendix A] (available in the online version). All discrepancies were resolved during a final consensus review.
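A toy sketch of this scoring and consensus step in R follows; the ratings are invented for illustration and the data structure is an assumption, not our adjudication database:

```r
# Three-category scoring (1 = full, 0.5 = partial, 0 = no concordance):
# each diagnosis pair receives two independent ratings, and disagreements
# are flagged for consensus review. Ratings shown are invented.
score_map <- c(no = 0, partial = 0.5, full = 1)

ratings <- data.frame(
  pair_id = 1:4,
  rater1  = c("full", "partial", "no", "full"),
  rater2  = c("full", "no",      "no", "full")
)

ratings$needs_consensus <- ratings$rater1 != ratings$rater2
ratings$score <- ifelse(ratings$needs_consensus,
                        NA,                        # assigned at consensus review
                        score_map[ratings$rater1])
```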
A priori, we defined our main outcome, patient–clinician diagnostic concordance, using "full" ratings and nonconcordance using "partial" and "no" ratings. We hypothesized that partial concordance on admission could indicate suboptimal patient–clinician diagnostic communication. For example, a patient who believes the correct diagnosis is a Crohn's flare may underappreciate the severity of the complication of Crohn's disease that actually led to the hospitalization (e.g., small bowel obstruction) unless the care team specifically communicates it.
Analysis
Descriptive statistics were used to report demographic characteristics of patient participants for whom the PDQ was submitted for concordant (full) and nonconcordant (partial, no) groups. We calculated Cohen's kappa (unweighted and weighted for ordinal ratings) to measure interrater reliability between preconsensus ratings (full, partial, no) by independent reviewers.
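For illustration, the interrater reliability calculation can be sketched in R using the irr package (one common choice; our exact code is not shown here), with invented ratings:

```r
# Unweighted and linearly weighted Cohen's kappa for two reviewers'
# preconsensus ratings on an ordinal scale (no < partial < full).
# The ratings below are invented for illustration.
library(irr)

r1 <- factor(c("full", "partial", "no", "full", "partial"),
             levels = c("no", "partial", "full"))
r2 <- factor(c("full", "no", "no", "full", "full"),
             levels = c("no", "partial", "full"))

kappa2(data.frame(r1, r2))                    # unweighted Cohen's kappa
kappa2(data.frame(r1, r2), weight = "equal")  # linearly weighted (ordinal)
```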
To analyze our primary concordance outcomes, we compared dichotomized responses (“no/unsure” vs. “yes”) for each PDQ item in concordant (full) versus nonconcordant (partial, no) diagnosis pairs using the Fisher's exact test for unadjusted analyses, and multivariable logistic regression for adjusted analyses with all other PDQ items as covariates. Odds ratios and 95% confidence intervals (CIs) were reported. For exploratory purposes, we conducted a similar analysis in which we alternatively defined concordance using “full” and “partial” ratings, and nonconcordance using “no” ratings.
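A sketch of these item-level analyses in R follows; variable names (e.g., `q2_no_unsure`) are assumptions, and simulated data stand in for the study dataset:

```r
# Each PDQ item is dichotomized ("no/unsure" = 1, "yes" = 0) and the
# outcome is nonconcordance (partial or no = 1, full = 0).
set.seed(1)
dat <- data.frame(
  nonconcordant = rbinom(157, 1, 0.5),
  q1_no_unsure  = rbinom(157, 1, 0.27),
  q2_no_unsure  = rbinom(157, 1, 0.16)
)

# Unadjusted comparison for one item: Fisher's exact test
fisher.test(table(dat$q2_no_unsure, dat$nonconcordant))

# Adjusted analysis: multivariable logistic regression with the other
# PDQ items as covariates; report odds ratios with 95% CIs
fit <- glm(nonconcordant ~ q1_no_unsure + q2_no_unsure,
           data = dat, family = binomial)
exp(cbind(OR = coef(fit), confint(fit)))
```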
For our final model of patient–clinician nonconcordance (defined using "partial" and "no" ratings), we used the dichotomized concordance score as the outcome (dependent variable) and the following candidate predictors (independent variables): baseline demographics; undifferentiated signs or symptoms for the admission diagnosis (ICD-10 "R-code"), an indicator of diagnostic uncertainty[19] [31] [32]; increasing risk of clinical deterioration within the first 24 hours of admission using Epic's deterioration index[19] [33] [34]; mode of questionnaire submission; and two PDQ items ("Has your care team told you the main reason you're in the hospital, in a way you understand?" [Q1]; "Are you confident that your diagnosis is correct?" [Q2]) considered most relevant based on our prior work.[19] [30] Multivariable logistic regression was used to model the association between patient–clinician nonconcordance and the candidate predictors. The c-statistic was reported. All analyses were performed in R version 4.3.2.
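The final model and its c-statistic can be sketched in R as follows; predictor names are assumptions, pROC is one of several ways to obtain a c-statistic, and the data are simulated for illustration only:

```r
# Final multivariable logistic model of nonconcordance with a priori
# selected predictors; c-statistic = AUC of fitted probabilities.
library(pROC)

set.seed(2)
dat <- data.frame(
  nonconcordant = rbinom(157, 1, 0.5),
  age           = rnorm(157, 57, 17),
  r_code        = rbinom(157, 1, 0.3),  # ICD-10 "R-code" principal problem
  deterioration = rbinom(157, 1, 0.2),  # rising deterioration index, first 24 h
  mode_assisted = rbinom(157, 1, 0.6),  # RA-facilitated vs. independent
  q1_no_unsure  = rbinom(157, 1, 0.27),
  q2_no_unsure  = rbinom(157, 1, 0.16)
)

final <- glm(nonconcordant ~ age + r_code + deterioration + mode_assisted +
               q1_no_unsure + q2_no_unsure,
             data = dat, family = binomial)

# c-statistic: AUC of the model's fitted probabilities vs. observed outcome
auc(roc(dat$nonconcordant, fitted(final)))
```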
Results
Of 1,158 patients ([Fig. 2]) who were screened, 580 were eligible and 202 agreed to participate. The demographic characteristics of eligible patients who agreed (n = 202) or did not agree (n = 378) to participate were similar ([Supplementary Appendix B], available in the online version). Overall, 84 (41.6%) requested to complete the PDQ independently on their mobile device (50.8 [17.6] years of age, 35.7% male, 75.0% Caucasian, 88.1% non-Hispanic); and 118 (58.4%) asked the RA to complete and submit the PDQ on their behalf either by phone or in-person (59.0 [17.7] years of age, 52.5% male, 66.9% Caucasian, 89.8% non-Hispanic).


Of the 202 patients agreeing to participate, a total of 157 questionnaires were submitted and available for analysis (77.7% response rate). All 45 participants without a submitted questionnaire requested to complete the PDQ independently on their mobile devices. Overall, independent reviewers agreed on patient–clinician diagnosis concordance ratings in 131 (83.4%) of the 157 cases. Based on final consensus reviews, the clinician adjudicators rated patient-reported and clinician-entered admission diagnoses as fully concordant, partially concordant, and not concordant in 77 (49.0%), 46 (29.3%), and 34 (21.7%) cases, respectively. The unweighted and weighted Cohen's kappa values (95% CI) for preconsensus ratings between independent reviewers were 0.75 (0.66, 0.83) and 0.81 (0.74, 0.88), respectively. Demographic and administrative characteristics ([Table 1]) for respondents were mostly similar for concordant (full) and nonconcordant (partial, no) outcome groups with one notable exception: “R-code” admission diagnoses were more frequently reported in the nonconcordant group, with the top 5 diagnoses including abdominal pain (R10.9), shortness of breath (R06.02), chest pain (R07.9), nausea and vomiting (R11.2), and fever and chills (R50.9).
Table 1. Characteristics of respondents by final concordance rating (not concordant, n = 80: no or partial; concordant, n = 77: full)

| Characteristic | No (n = 34) | Partial (n = 46) | Full (n = 77) | p |
|---|---|---|---|---|
| Age (y), mean (SD) | 54.3 (17.5) | 60.0 (17.1) | 58.2 (17.6) | 0.64 |
| Female sex, n (%) | 19 (55.9) | 20 (43.5) | 40 (51.9) | 0.51 |
| Race, n (%) | | | | 0.48 |
| – Caucasian | 22 (64.7) | 35 (76.1) | 53 (68.8) | |
| – Non-Caucasian | 11 (32.4) | 10 (21.7) | 24 (31.2) | |
| – Missing | 1 (2.9) | 1 (2.2) | – | |
| Ethnicity, n (%) | | | | 0.73 |
| – Non-Hispanic | 29 (85.3) | 40 (87.0) | 71 (92.2) | |
| – Hispanic | 4 (11.8) | 4 (8.7) | 5 (6.5) | |
| – Unavailable (declined or missing) | 1 (2.9) | 2 (4.3) | 1 (1.3) | |
| Primary language English, n (%) | 34 (100.0) | 45 (97.8) | 77 (100.0) | 0.30 |
| Socioeconomic status (median income by zip code), n (%) | | | | 0.07 |
| – Less than or equal to $47,000 | 3 (8.8) | 1 (2.2) | 1 (1.3) | |
| – $47,001–$63,000 | 4 (11.8) | 1 (2.2) | 11 (14.3) | |
| – Greater than $63,000 | 25 (73.5) | 43 (93.5) | 64 (83.1) | |
| – Missing | 2 (5.9) | 1 (2.2) | 1 (1.3) | |
| Insurance status, n (%) | | | | 0.56 |
| – Private | 16 (47.1) | 23 (50.0) | 44 (57.1) | |
| – Public/government (Medicaid, Medicare) | 18 (52.9) | 23 (50.0) | 33 (42.9) | |
| Network PCP, n (%) | 14 (41.2) | 23 (50.0) | 40 (51.9) | 0.57 |
| Van Walraven Elixhauser score, mean (SD) | 11.6 (10.0) | 8.8 (9.8) | 8.4 (9.4) | 0.21 |
| Admission diagnosis from HPL, n (%)[a] | | | | <0.01 |
| – R-code | 15 (44.1) | 21 (45.7) | 12 (15.6) | |
| – Non-R-code | 19 (55.9) | 25 (54.3) | 65 (84.4) | |
| Deterioration index on admission, n (%) | | | | 0.86 |
| – Less than 30 | 28 (82.4) | 36 (78.3) | 60 (77.9) | |
| – Greater than or equal to 30 | 6 (17.6) | 10 (21.7) | 17 (22.1) | |
| Change in deterioration index 24 h into admission, n (%) | | | | 0.40 |
| – Less than 0 | 20 (58.8) | 23 (50.0) | 48 (62.3) | |
| – Greater than 0 | 14 (41.2) | 23 (50.0) | 29 (37.7) | |
| Mode of PDQ submission, n (%)[b] | | | | 0.78 |
| – RA facilitated (verbal), n = 118 | 24 (70.6) | 35 (76.1) | 59 (76.6) | |
| – Independently submitted (mobile device), n = 39 | 10 (29.4) | 11 (23.9) | 18 (23.4) | |
Abbreviations: HPL, hospital problem list; PCP, primary care provider; PDQ, patient diagnostic questionnaire; RA, research assistant; SD, standard deviation.
a The most common diagnoses listed in the HPL were abdominal pain (n = 11), shortness of breath (n = 7), and chest pain (n = 6), among other symptom-oriented and nonsymptom-oriented diagnoses.
b All 118 patients who requested RA assistance completed their questionnaire. The 39 patients who independently submitted the PDQ via their mobile device had the following characteristics: 54.5 (17.6) years of age, 59% female, 84.6% Caucasian, 87.2% non-Hispanic, 59.0% in-network PCP.
Responses to individual PDQ items by concordance outcome are reported in [Table 2]. “No/unsure” responses were reported for all PDQ items and most frequently for Question 1 (42/157; 26.8%) and Question 2 (25/157; 15.9%). “No/unsure” responses for Question 2 were significantly more frequent in the nonconcordant (partial, no) compared with concordant (full) group in unadjusted analysis and when adjusted for all other PDQ items ([Table 2]). For our alternative definition of concordance (full, partial), “no/unsure” responses for both Questions 1 and 2 were significantly more frequent in the nonconcordant (no) compared with the concordant (full, partial) group in both unadjusted and adjusted analyses ([Supplementary Appendix C], available in the online version).
Table 2. "No/unsure" responses to individual PDQ items by concordance rating (not concordant[a]: no or partial; concordant[b]: full)

| PDQ item, n (%) "no/unsure" | No (n = 34) | Partial (n = 46) | Full (n = 77) | Unadjusted OR (95% CI) | p | Adjusted OR (95% CI)[c] | p |
|---|---|---|---|---|---|---|---|
| Q1: Has the care team told you the main reason you're in the hospital in a way you understand? | 16 (47.1%) | 10 (21.7%) | 16 (20.8%) | 1.83 (0.84, 4.07) | 0.11 | 1.84 (0.86, 4.08) | 0.12 |
| Q2: Are you confident that your diagnosis is correct? | 10 (29.4%) | 9 (19.6%) | 6 (7.8%) | 3.66 (1.30, 11.92) | <0.01 | 3.43 (1.30, 10.39) | 0.02 |
| Q3: Do you think your care team is treating your main medical problem appropriately? | 1 (2.9%) | 1 (2.2%) | 1 (1.3%) | 1.94 (0.10, 116.38) | 1.00 | – | – |
| Q4: Is your care team addressing all of your symptoms? | 1 (2.9%) | 2 (4.3%) | 8 (10.4%) | 0.34 (0.06, 1.48) | 0.13 | 0.40 (0.07, 1.94) | 0.27 |
| Q5: Have you had an opportunity to ask questions about your diagnosis? | 1 (2.9%) | 0 (0.0%) | 0 (0.0%) | – | 1.00 | – | – |
| Q6: Are you satisfied with how your care team has communicated to you about your diagnosis? | 2 (5.9%) | 0 (0.0%) | 4 (5.2%) | 0.47 (0.04, 3.39) | 0.44 | 1.10 (0.08, 16.6) | 0.94 |
| Q7: Are you comfortable with your current involvement in the decision-making process? | 1 (2.9%) | 0 (0.0%) | 5 (6.5%) | 0.18 (<0.01, 1.70) | 0.11 | 0.53 (0.01, 12.20) | 0.71 |
| Q8: Do you have enough information to be involved in making decisions about your care? N = 8 | 1 (2.9%) | 1 (2.2%) | 6 (7.8%) | 0.31 (0.03, 1.78) | 0.16 | 0.33 (0.01, 6.69) | 0.46 |
| Q9: Do you feel that your care team is telling you all the information you need to know about your care? | 2 (5.9%) | 4 (8.7%) | 6 (7.8%) | 0.96 (0.24, 3.77) | 1.00 | 1.50 (0.23, 12.79) | 0.68 |
| Q10: Does your care team always treat you with respect? | 0 (0.0%) | 2 (4.3%) | 2 (2.6%) | 0.96 (0.07, 13.58) | 1.00 | 1.59 (0.10, 53.9) | 0.76 |
Abbreviation: PDQ, patient diagnostic questionnaire.
a Not concordant defined as patient–clinician diagnoses pairs scored as 0 or 0.5 (no or partial concordance).
b Concordant defined as patient–clinician diagnoses pairs scored as 1 (full concordance).
c Adjusted using all PDQ items as independent variables, except for Q3 and Q5 (low frequency).
In our final multivariable model ([Table 3]), undifferentiated symptoms (ICD-10 “R-code”) for the admission diagnosis (4.02 [1.80, 9.55], p < 0.01) were significantly associated with patient–clinician nonconcordance (partial, no). A “no/unsure” response to Question 2 was of borderline significance (2.55 [0.92, 7.82]; p = 0.08). The c-statistic was 0.71.
Discussion
We piloted an application-based questionnaire designed to assess patient–clinician communication regarding the diagnostic process early during hospitalization, administered via independent and assisted workflows. While many participants requested to complete the PDQ independently on their mobile devices, fewer than one-half of them (39 of 84) did so. All 118 patients who requested assistance with the questionnaire had complete responses. Overall, about one-half of patient-reported diagnoses were fully concordant with the clinician-entered admission diagnosis identified from the HPL in the EHR. Patient-reported lack of confidence was significantly associated with nonconcordance when adjusting for all other PDQ items. In our final multivariable model, an ICD-10 "R-code" entered as the hospital principal problem was predictive of patient–clinician diagnostic nonconcordance upon admission.
We offer several explanations for these observations. First, many participants requested to complete the PDQ via a mobile device but did not do so. This may reflect ongoing concerns about digital divides among hospitalized patients, including barriers to recruiting patients remotely, which were exacerbated during the COVID-19 pandemic.[35] [36] [37] Indeed, in-person digital navigators are increasingly recognized as critical to improving equitable recruitment and participation, and may be particularly important for hospitalized patients who are older, require additional assistance with task completion, have fluctuating cognitive states, or are hindered by other factors, as observed in our cohort.[38] [39]
Second, because Questions 1 and 2 directly assessed patients' understanding of the admission diagnosis and confidence that it was correct, these questions were likely most useful for assessing concordance between patient-reported and clinician-entered diagnoses, as suggested by prior work and our PDQ analyses ([Table 2]),[19] [30] regardless of how concordance and nonconcordance were defined ([Supplementary Appendix C], available in the online version). Patients who were confident about their admission diagnosis (a "yes" response to Question 2) may have received more timely and higher quality communication from their clinicians. Conversely, "no/unsure" responses to Questions 1 and 2 may have reflected suboptimal communication with the care team upon admission. We note that the relatively low frequency of "no/unsure" responses to Questions 3 to 10 suggests that the therapeutic relationship between the patient and care team was probably not poor in our cohort.
Third, while suboptimal concordance between patients and clinicians regarding the reason for hospitalization has been previously reported,[30] diagnostic nonconcordance was particularly notable among cases with “R-code” admission diagnoses (i.e., nonspecific symptoms such as abdominal pain). Such symptom-oriented diagnoses entered in the EHR suggest diagnostic uncertainty early during hospitalization, which may not have been adequately acknowledged by clinicians and can lead patients down complex diagnostic paths.[13] [15] [40] [41]
In contrast to recent work that validated a structured assessment of visit notes in the patient portal to enhance detection of diagnostic concerns in the ambulatory setting,[17] we piloted a questionnaire designed to engage patients in the complex diagnostic process in acute care settings. The PDQ was specifically designed to assess understanding of the admission diagnosis and the quality of communication with clinicians early during the hospital encounter, when uncertainty is high and could lead to downstream adverse events. Thus, the pattern of questionnaire responses could be incorporated into interventions that stratify patients at risk for diagnostic error. For example, a red flag in cases where patients report lack of confidence in their admission diagnosis could trigger the hospital care team to pause and reassess the working diagnosis (i.e., by taking a "diagnostic time-out") to provide a timely and accurate explanation to the patient, as we recently described.[19] Such prompts may help the hospital care team identify patients with whom to have more in-depth conversations about their diagnosis using health literacy–friendly communication techniques.[42]
Our study has several limitations. First, the small sample size may have limited our ability to detect significant associations of PDQ items in the full multivariable model. Second, response bias may have been present, especially among the patients who declined to participate and those who requested to submit their questionnaires electronically but did not do so. Third, our study was conducted at one hospital and may have limited generalizability; in particular, other hospitals may not have reliable entry of admission diagnoses in the EHR. We also assumed that the admission diagnoses did not change during the first 48 hours prior to PDQ completion; patients could have received additional information during this time by viewing their portal or talking with their care team, which may have affected patient–clinician diagnostic concordance. Fourth, we excluded non-English-speaking patients, a population potentially vulnerable to the consequences of suboptimal diagnostic communication. Lastly, we did not conduct in-depth interviews with patient participants to specifically assess sociodemographic factors (such as income and insurance challenges), health literacy, or other potential barriers to completing the PDQ on mobile devices. The use of interpreters and digital health navigators early during hospitalization is likely crucial for overcoming language, sociodemographic, and health literacy barriers to achieve diagnostic excellence equitably.[35] [38] [43] [44]
In conclusion, we piloted an application-based questionnaire and determined that an ICD-10 "R-code" admission diagnosis, and likely patient-reported lack of confidence, predicted patient–clinician diagnostic nonconcordance. Patient-reported responses elicited from questionnaires, combined with EHR data, could be useful as part of a comprehensive strategy for improving diagnostic safety and patient–provider communication about diagnoses in the hospital.
Clinical Relevance Statement
Patient–clinician communication about the diagnostic process during the hospital encounter is often suboptimal. Patterns of responses from inpatient-focused diagnostic questionnaires and certain ICD-10 diagnosis codes that suggest diagnostic uncertainty may predict patient–clinician diagnostic nonconcordance upon admission, a potential target for interventions to improve diagnostic safety.
Multiple-Choice Questions
1. In hospitalized patients, which factors could be used to identify patients who might benefit from interventions that improve patient–clinician communication about the diagnostic process?

   a. ICD-10 R-code for the discharge diagnosis
   b. Patients who report lack of confidence in the diagnosis entered in the EHR on admission
   c. ICD-10 R-code for the admission diagnosis
   d. Options b and c
   e. None of the above

   Correct Answer: The correct answer is option d. In the acute care setting, an ICD-10 R-code (e.g., abdominal pain, shortness of breath, fever) entered as the admission diagnosis was strongly associated with patient–clinician diagnostic nonconcordance. Patient-reported lack of confidence in the admission diagnosis (e.g., responding "no" or "unsure" when asked "Are you confident that your diagnosis is correct?") may also predict diagnostic nonconcordance.

2. What types of cost-effective interventions might be considered to address digital divides equitably in hospitalized patients?

   a. Use of clinicians to facilitate submission of electronic questionnaires
   b. Use of nonclinically trained digital navigators to facilitate submission of electronic questionnaires
   c. Options a and b
   d. None of the above

   Correct Answer: The correct answer is option b. The use of digital navigators is increasingly recognized as important for achieving equity in digital health interventions. While clinicians can certainly be trained to facilitate submission of electronic questionnaires, this would add to their burden and would not be cost-effective compared with using nonclinically trained individuals.
Conflict of Interest
None declared.
Protection of Human and Animal Subjects
This study was reviewed and approved by the Mass General Brigham Human Research Committee.
Authors' Contributions
All authors have contributed sufficiently and meaningfully to the conception, design, and conduct of the study; data acquisition, analysis, and interpretation; and/or drafting, editing, and revising the manuscript.
References
- 1 Yang D, Fineberg HV, Cosby K. Diagnostic excellence. JAMA 2021; 326 (19) 1905-1906
- 2 Committee on Diagnostic Error in Health Care, Board on Health Care Services, Institute of Medicine, The National Academies of Sciences, Engineering, and Medicine. In: Balogh EP, Miller BT, Ball JR, eds. Improving Diagnosis in Health Care. Washington, DC: National Academies Press (US); 2015
- 3 Giardina TD, Shahid U, Mushtaq U, Upadhyay DK, Marinez A, Singh H. Creating a learning health system for improving diagnostic safety: pragmatic insights from US health care organizations. J Gen Intern Med 2022; 37 (15) 3965-3972
- 4 Shah NR, Gandhi TK, Bates DW. Diagnostic excellence and patient safety: strategies and opportunities. JAMA 2022; 327 (24) 2391-2392
- 5 Raffel KE, Kantor MA, Barish P. et al. Prevalence and characterisation of diagnostic error among 7-day all-cause hospital medicine readmissions: a retrospective cohort study. BMJ Qual Saf 2020; 29 (12) 971-979
- 6 Bergl PA, Taneja A, El-Kareh R, Singh H, Nanchal RS. Frequency, risk factors, causes, and consequences of diagnostic errors in critically ill medical patients: a retrospective cohort study. Crit Care Med 2019; 47 (11) e902-e910
- 7 Motta-Calderon D, Lam A, Kumiko S. et al. Preliminary prevalence estimate of diagnostic error in patients hospitalized on general medicine: analysis of a random stratified sample. J Hosp Med 2021; SHM Converge: Abstract 129
- 8 Auerbach AD, Astik GJ, O'Leary KJ. et al. Prevalence and causes of diagnostic errors in hospitalized patients under investigation for COVID-19. J Gen Intern Med 2023; 38 (08) 1902-1910
- 9 Auerbach AD, Lee TM, Hubbard CC. et al; UPSIDE Research Group. Diagnostic errors in hospitalized adults who died or were transferred to intensive care. JAMA Intern Med 2024; 184 (02) 164-173
- 10 Konieczny KLA, Carr K, Motta-Calderon D. et al. Diagnostic process failures associated with diagnostic error in the hospital. Society of Hospital Medicine Conference; 2022
- 11 Griffin JA, Carr K, Bersani K. et al. Analyzing diagnostic errors in the acute setting: a process-driven approach. Diagnosis (Berl) 2021; 9 (01) 77-88
- 12 Dukhanin V, McDonald KM, Gonzalez N, Gleason KT. Patient reasoning: patients' and care partners' perceptions of diagnostic accuracy in emergency care. Med Decis Making 2024; 44 (01) 102-111
- 13 Dahm MR, Crock C. Understanding and communicating uncertainty in achieving diagnostic excellence. JAMA 2022; 327 (12) 1127-1128
- 14 Santhosh L, Chou CL, Connor DM. Diagnostic uncertainty: from education to communication. Diagnosis (Berl) 2019; 6 (02) 121-126
- 15 Cifra CL, Custer JW, Smith CM. et al. Prevalence and characteristics of diagnostic error in pediatric critical care: a multicenter study. Crit Care Med 2023; 51 (11) 1492-1501
- 16 Schnock KO, Garber A, Fraser H. et al. Providers' and patients' perspectives on diagnostic errors in the acute care setting. Jt Comm J Qual Patient Saf 2023; 49 (02) 89-97
- 17 Giardina TD, Choi DT, Upadhyay DK. et al. Inviting patients to identify diagnostic concerns through structured evaluation of their online visit notes. J Am Med Inform Assoc 2022; 29 (06) 1091-1100
- 18 Giardina TD, Haskell H, Menon S. et al. Learning from patients' experiences related to diagnostic errors is essential for progress in patient safety. Health Aff (Millwood) 2018; 37 (11) 1821-1827
- 19 Garber A, Garabedian P, Wu L. et al. Developing, pilot testing, and refining requirements for 3 EHR-integrated interventions to improve diagnostic safety in acute care: a user-centered approach. JAMIA Open 2023; 6 (02) ooad031
- 20 Schnipper JL, Raffel KE, Keniston A. et al. Achieving diagnostic excellence through prevention and teamwork (ADEPT) study protocol: a multicenter, prospective quality and safety program to improve diagnostic processes in medical inpatients. J Hosp Med 2023; 18 (12) 1072-1081
- 21 Bourgeois FC, Hart NJ, Dong Z. et al. Partnering with patients and families to improve diagnostic safety through the OurDX tool: effects of race, ethnicity, and language preference. Appl Clin Inform 2023; 14 (05) 903-912
- 22 Malik MA, Motta-Calderon D, Piniella N. et al. A structured approach to EHR surveillance of diagnostic error in acute care: an exploratory analysis of two institutionally-defined case cohorts. Diagnosis (Berl) 2022; 9 (04) 446-457
- 23 Garber A, Malik M, Bersani K. et al. Improving patient-provider communication about diagnoses in the acute care setting: an EHR-integrated patient questionnaire. J Hosp Med 2020; SHM Converge: Abstract 387
- 24 Garber ACK, Malik M, Wu L. et al. Refining requirements for an electronic health record (EHR)-integrated intervention to improve diagnostic safety in acute care: a user-centered approach. JAMIA Open 2023; 6 (02) ooad031
- 25 Mlaver E, Schnipper JL, Boxer RB. et al. User-centered collaborative design and development of an inpatient safety dashboard. Jt Comm J Qual Patient Saf 2017; 43 (12) 676-685
- 26 Fuller TE, Pong DD, Piniella N. et al. Interactive digital health tools to engage patients and caregivers in discharge preparation: implementation study. J Med Internet Res 2020; 22 (04) e15573
- 27 Fuller TE, Garabedian PM, Lemonias DP. et al. Assessing the cognitive and work load of an inpatient safety dashboard in the context of opioid management. Appl Ergon 2020; 85: 103047
- 28 Dalal AK, Piniella N, Fuller TE. et al. Evaluation of electronic health record-integrated digital health tools to engage hospitalized patients in discharge preparation. J Am Med Inform Assoc 2021; 28 (04) 704-712
- 29 Bersani KFT, Garabedian P, Espares J. et al. Use, perceived usability, and barriers to implementation of a patient safety dashboard integrated within a vendor EHR. Appl Clin Inform 2020; 11 (01) 34-45
- 30 Dalal AK, Dykes P, Samal L. et al. Potential of an electronic health record-integrated patient portal for improving care plan concordance during acute care. Appl Clin Inform 2019; 10 (03) 358-366
- 31 Marshall TL, Hagedorn PA, Sump C. et al. Diagnosis code and health care utilization patterns associated with diagnostic uncertainty. Hosp Pediatr 2022; 12 (12) 1066-1072
- 32 Marshall TL, Nickels LC, Brady PW, Edgerton EJ, Lee JJ, Hagedorn PA. Developing a machine learning model to detect diagnostic uncertainty in clinical documentation. J Hosp Med 2023; 18 (05) 405-412
- 33 Bhise V, Sittig DF, Vaghani V, Wei L, Baldwin J, Singh H. An electronic trigger based on care escalation to identify preventable adverse events in hospitalised patients. BMJ Qual Saf 2018; 27 (03) 241-246
- 34 Steitz BD, McCoy AB, Reese TJ. et al. Development and validation of a machine learning algorithm using clinical pages to predict imminent clinical deterioration. J Gen Intern Med 2024; 39 (01) 27-35
- 35 Lyles CR, Wachter RM, Sarkar U. Focusing on digital health equity. JAMA 2021; 326 (18) 1795-1796
- 36 Adedinsewo D, Eberly L, Sokumbi O, Rodriguez JA, Patten CA, Brewer LC. Health disparities, clinical trials, and the digital divide. Mayo Clin Proc 2023; 98 (12) 1875-1887
- 37 Diamond JE, Kaltenbach LA, Granger BB. et al. Access to mobile health interventions among patients hospitalized with heart failure: insights into the digital divide from the CONNECT-HF mHealth substudy. Circ Heart Fail 2024; 17 (02) e011140
- 38 Rodriguez JA, Charles JP, Bates DW, Lyles C, Southworth B, Samal L. Digital healthcare equity in primary care: implementing an integrated digital health navigator. J Am Med Inform Assoc 2023; 30 (05) 965-970
- 39 Plombon S, Rudin R, Sulca Flores J. et al. Assessing equitable recruitment in a digital health trial for asthma. Appl Clin Inform 2023; 14 (04) 620-631
- 40 Meyer AND, Giardina TD, Khawaja L, Singh H. Patient and clinician experiences of uncertainty in the diagnostic process: current understanding and future directions. Patient Educ Couns 2021; 104 (11) 2606-2615
- 41 Rao G, Kirley K, Epner P. et al. Identifying, analyzing, and visualizing diagnostic paths for patients with nonspecific abdominal pain. Appl Clin Inform 2018; 9 (04) 905-913
- 42 Kripalani S, Bengtzen R, Henderson LE, Jacobson TA. Clinical research in low-literacy populations: using teach-back to assess comprehension of informed consent and privacy information. IRB 2008; 30 (02) 13-19
- 43 McDonald KM. Achieving equity in diagnostic excellence. JAMA 2022; 327 (20) 1955-1956
- 44 López L, Rodriguez F, Huerta D, Soukup J, Hicks L. Use of interpreters by physicians for hospitalized limited English proficient patients and its impact on patient outcomes. J Gen Intern Med 2015; 30 (06) 783-789
Publication History
Received: 15 March 2024
Accepted: 16 June 2024
Article published online:
18 September 2024
© 2024. Thieme. All rights reserved.
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany




