Appl Clin Inform 2024; 15(04): 733-742
DOI: 10.1055/s-0044-1788330
Research Article

Patient–Clinician Diagnostic Concordance upon Hospital Admission

Authors

  • Alyssa Lam

    1   Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States
  • Savanna Plombon

    1   Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States
  • Alison Garber

    1   Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States
  • Pamela Garabedian

    1   Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States
  • Ronen Rozenblum

    1   Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States
    2   Harvard Medical School, Boston, Massachusetts, United States
  • Jacqueline A. Griffin

    3   Department of Mechanical & Industrial Engineering, Northeastern University, Boston, Massachusetts, United States
  • Jeffrey L. Schnipper

    1   Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States
    2   Harvard Medical School, Boston, Massachusetts, United States
  • Stuart R. Lipsitz

    1   Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States
    2   Harvard Medical School, Boston, Massachusetts, United States
  • David W. Bates

    1   Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States
    2   Harvard Medical School, Boston, Massachusetts, United States
  • Anuj K. Dalal

    1   Division of General Internal Medicine, Brigham and Women's Hospital, Boston, Massachusetts, United States
    2   Harvard Medical School, Boston, Massachusetts, United States

Funding This study was supported by the Agency for Healthcare Research and Quality.
 

Abstract

Objectives This study aimed to pilot an application-based patient diagnostic questionnaire (PDQ) and assess concordance between the admission diagnosis reported by the patient and the diagnosis entered by the clinician.

Methods Eligible patients completed the PDQ, which assessed their understanding of and confidence in the admission diagnosis, 24 hours into hospitalization, either independently or with assistance. Demographic data, the hospital principal problem upon admission, and International Classification of Diseases 10th Revision (ICD-10) codes were retrieved from the electronic health record (EHR). Two physicians independently rated concordance between the patient-reported diagnosis and the clinician-entered principal problem as full, partial, or no concordance. Discrepancies were resolved by consensus. Descriptive statistics were used to report demographics for the concordant (full) and nonconcordant (partial or no) outcome groups. Multivariable logistic regression models with PDQ questions and a priori selected EHR data as independent variables were fit to predict nonconcordance.

Results Of 202 participants, 157 (77.7%) completed the questionnaire; 77 (49.0%), 46 (29.3%), and 34 (21.7%) were rated fully concordant, partially concordant, and not concordant, respectively. The weighted Cohen's kappa for agreement between independent reviewers' preconsensus ratings was 0.81 (0.74, 0.88). In multivariable analyses, patient-reported lack of confidence was significantly associated with nonconcordance (partial or no concordance) after adjusting for other PDQ questions (3.43 [1.30, 10.39], p = 0.02), as was an undifferentiated-symptom ICD-10 "R-code" for the principal problem in a model using a priori selected variables (4.02 [1.80, 9.55], p < 0.01).

Conclusion About one-half of patient-reported diagnoses were concordant with the clinician-entered diagnosis on admission. An ICD-10 “R-code” entered as the principal problem and patient-reported lack of confidence may predict patient–clinician nonconcordance early during hospitalization via this approach.


Background and Significance

Diagnostic excellence, defined as an optimal, timely, cost-effective, convenient process to attain an accurate, precise, and understandable explanation about a patient's condition, is fundamental to reducing errors in the diagnostic process and improving safety.[1] The National Academy of Medicine (NAM) defines diagnostic errors as missed opportunities to make a timely diagnosis and communicate an accurate and timely explanation of the patients' health problems.[2] Over the past decade, much research has underscored the multifaceted collaboration required of patients, clinicians, and health systems to promote diagnostic excellence and prevent the harmful consequences of diagnostic errors.[1] [3] [4]

Emerging data suggest a rate of harmful diagnostic errors between 5 and 18% for hospitalized patients who transfer to intensive care, expire, or are readmitted.[5] [6] [7] [8] [9] In these studies, certain diagnostic processes, such as the initial assessment, history-taking, and diagnostic testing, contributed to these errors.[9] [10] While concerns about team communication and patient experience were less frequently observed, they may impede a timely and accurate explanation of the admission diagnosis for the patient.[11] More recently, advances in patient–clinician communication have emphasized the process by which patients and care partners assess the accuracy of the diagnostic explanation that they receive; the lack of explanation corresponds to patients' perception of their diagnoses as neither accurate nor inaccurate.[12] Emerging consensus underscores the importance of encouraging transparency in communication among patients and the care team to improve diagnostic safety.[13] [14] Acknowledging diagnostic uncertainty in this context—especially when there is a mismatch between patients' and clinicians' understanding of leading diagnoses—may lead to decreases in diagnostic errors experienced later during the hospital encounter.[15] [16]

The 21st Century Cures Act now requires that patients have easy access to their health information via online portals. Application-based questionnaires have the potential to serve as structured prompts for patients to review their health information and report concerns earlier during the hospital encounter as a means to engage more meaningfully in patient–clinician diagnostic communication.[17] Building upon recent work emphasizing suboptimal patient experiences as potential markers of diagnostic concerns in ambulatory settings,[17] [18] we developed and iteratively refined requirements for an inpatient-focused patient diagnostic questionnaire (PDQ) that assessed patients' understanding of their main reason for hospitalization, confidence in the admission diagnosis, and experiences with patient–clinician communication.[19] Given NAM's emphasis on communicating an accurate and timely explanation to the patient, a validated questionnaire might be useful for assessing shared understanding of the admission diagnosis and improving patient–clinician communication regarding the diagnostic process early during the hospital encounter. Such tools have implications for promoting an equitable learning health system that incorporates actionable insights from patient questionnaires into the care team decision-making process to achieve diagnostic excellence.[20] [21]


Objective

We piloted an application-based PDQ assessing team communication and patient experiences of the diagnostic process, and we assessed concordance between the admission diagnosis reported by the patient and the diagnosis entered by the clinician.


Materials and Methods

Overview and Prior Work

As part of an Agency for Healthcare Research and Quality Patient Safety Learning Lab, we employed a user-centered approach to develop, test, and refine requirements for a set of electronic health record (EHR)-integrated interventions that target key diagnostic process failures in hospitalized patients.[19] One intervention included the 10-item PDQ, which assessed patients' understanding of their admission diagnosis and confidence that it was correct; whether all symptoms were being addressed; satisfaction with care team communication about their diagnosis; and involvement in shared decision-making (see outcomes table below for previously published questions).[19] To gather patient concerns about the diagnostic process, the PDQ was incorporated into a web application optimized for mobile devices ([Fig. 1]).[22] [23] [24] During our user-centered design process, we observed that combining “Unsure” with “No” elicited a clear, dichotomous response from patients for each item in the PDQ.[19]

Fig. 1 Hybrid independent (mobile) and assisted submission of an electronic patient diagnostic questionnaire upon hospital admission.

Setting and Participants

This study was approved by the Mass General Brigham (MGB) Human Research Committee and was conducted at Brigham and Women's Hospital, a 793-bed acute care hospital affiliated with MGB, from January 2021 to September 2021. Eligible participants included English-speaking patients aged 18 years or older who were admitted with any diagnosis to general medicine services for at least 24 hours, identified using our EHR (Epic Systems, Inc.). All patients were required to have an admission diagnosis entered as the principal problem in the hospital problem list (HPL), which was facilitated by our general medicine admission order set. Questionnaire responses were accessible from an EHR-integrated dashboard, which research staff used to administer the PDQ and clinicians used to view responses within their workflow.[19] [25] [26] [27] [28] [29] We did not exclude patients on the basis of the admission diagnosis.


Remote or In-Person Administration of Patient Diagnostic Questionnaire

Amidst social distancing constraints during the coronavirus disease 2019 (COVID-19) pandemic, a trained research assistant (RA) approached eligible patients 24 to 48 hours after admission via a hybrid remote and in-person workflow ([Fig. 1]) to administer the PDQ. Eligible patients, including those on precautions, were contacted by phone. Those who were not on precautions and could not be reached by phone were approached in person. Patients who were behaviorally or cognitively impaired (per the bedside nurse's clinical assessment) at the time of in-person approach were excluded. Participants could choose to have the RA complete the PDQ on their behalf or receive a secure hyperlink to the PDQ web application by email, which could be accessed on a mobile device. In the latter case, the RA confirmed receipt of the PDQ upon enrollment. Clinicians could view patients' responses and comments via an EHR-integrated dashboard, a workflow established in prior implementation efforts.[26] [28]


Data Collection

Demographic data, the principal problem entered in the HPL by the care team upon admission, and respective International Classification of Diseases 10th Revision (ICD-10) codes for that principal problem were retrieved from the EHR via the enterprise data warehouse (EDW). Responses submitted by the patient or by RAs on behalf of the patient were exported from the application (CSV format) and linked to EHR demographic, clinical, and administrative data retrieved via the EDW.
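To make the linkage step concrete, the following is a minimal R sketch (matching the study's R-based analysis environment); the file names, column names, and encounter-level linking key are hypothetical stand-ins for the actual application export and EDW schemas.

```r
library(readr)
library(dplyr)

# Application export: one row per submitted questionnaire (hypothetical file name).
pdq <- read_csv("pdq_responses.csv")

# EDW extract: demographics, principal problem, and ICD-10 codes (hypothetical file name).
edw <- read_csv("edw_extract.csv")

# Link questionnaire responses to EHR data; "encounter_id" is a hypothetical
# shared key standing in for whatever identifier the study actually used.
analytic <- pdq %>%
  left_join(edw, by = "encounter_id")
```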


Measurements and Outcomes Assessment

The main outcome was defined as patient–clinician agreement on the admission diagnosis, measured as full, partial, or no concordance between the patient-reported diagnosis entered in the PDQ and the principal problem entered by clinicians in the HPL in the EHR. A three-category scoring system (full concordance [1], partial concordance [0.5], and no concordance [0]) was used to assess patient–clinician concordance based on methodology established in prior work.[30] Two hospital medicine physicians (A.K.D., J.L.S.) independently rated each patient-reported and clinician-entered diagnosis pair using this scoring system. If the patient-reported and clinician-entered diagnoses both reflected the same symptom, clinical sign, or pathophysiological disease process, the diagnosis pair was scored "fully concordant." If the diagnoses were related but not identical, the pair was scored "partially concordant." For example, a partial score was entered if the diagnosis was from the same organ system (but not specific), reflected different but similar pathophysiological processes (e.g., infection vs. inflammation) in the same organ, or represented a secondary manifestation of the primary disease process (e.g., pleural effusion in the context of pneumonia). See [Supplementary Appendix A] (available in the online version). All discrepancies were resolved during a final consensus review.

A priori, we defined our main outcome, patient–clinician diagnostic concordance, using "full" ratings and nonconcordance using "partial" and "no" ratings. We hypothesized that partial concordance on admission could indicate suboptimal patient–clinician diagnostic communication. For example, a patient who thinks the correct diagnosis is a Crohn's flare may underappreciate the severity of the complication of Crohn's disease that led to the current hospitalization (small bowel obstruction) unless it is specifically communicated by the care team.
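As an illustration of how this outcome definition could be coded, a minimal R sketch follows, assuming a hypothetical consensus_rating column ("full", "partial", or "no") in the analytic data set described above.

```r
# Map consensus ratings to the published scores: full = 1, partial = 0.5, no = 0.
score_map <- c(full = 1, partial = 0.5, no = 0)
analytic$concordance_score <- score_map[analytic$consensus_rating]

# Main outcome: nonconcordance pools "partial" and "no" ratings (score < 1).
analytic$nonconcordant <- as.integer(analytic$concordance_score < 1)
```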


Analysis

Descriptive statistics were used to report demographic characteristics of patient participants for whom the PDQ was submitted, stratified by concordant (full) and nonconcordant (partial, no) groups. We calculated Cohen's kappa (unweighted and weighted for ordinal ratings) to measure interrater reliability between preconsensus ratings (full, partial, no) by independent reviewers.
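A minimal R sketch of this reliability calculation, assuming two hypothetical columns of preconsensus ratings coded as ordered categories (0 = no, 1 = partial, 2 = full); note that psych::cohen.kappa applies quadratic weights by default, and the study's exact weighting scheme is an assumption here.

```r
library(psych)

# Two reviewers' preconsensus ratings (hypothetical column names), one row per case.
ratings <- cbind(analytic$rating_reviewer1, analytic$rating_reviewer2)

# Reports both unweighted and weighted kappa with 95% confidence bounds.
cohen.kappa(ratings)
```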

To analyze our primary concordance outcomes, we compared dichotomized responses ("no/unsure" vs. "yes") for each PDQ item in concordant (full) versus nonconcordant (partial, no) diagnosis pairs using Fisher's exact test for unadjusted analyses and multivariable logistic regression for adjusted analyses with all other PDQ items as covariates. Odds ratios and 95% confidence intervals (CIs) were reported. For exploratory purposes, we conducted a similar analysis in which we alternatively defined concordance using "full" and "partial" ratings and nonconcordance using "no" ratings.
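A minimal R sketch of these per-item analyses, assuming each PDQ item is coded 1 = "no/unsure" and 0 = "yes" in hypothetical columns q1 through q10 of the analytic data set.

```r
# Unadjusted analysis for one item: Fisher's exact test of the 2 x 2 table of
# item response (no/unsure vs. yes) against nonconcordance.
fisher.test(table(analytic$q2, analytic$nonconcordant))

# Adjusted analysis: multivariable logistic regression with the other PDQ items
# as covariates (Q3 and Q5 were excluded from the published model for low counts).
fit <- glm(nonconcordant ~ q1 + q2 + q4 + q6 + q7 + q8 + q9 + q10,
           family = binomial, data = analytic)

# Odds ratios with 95% profile-likelihood confidence intervals.
exp(cbind(OR = coef(fit), confint(fit)))
```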

For our final model of patient–clinician nonconcordance (defined using "partial" and "no" ratings), we used the dichotomized concordance score as the outcome (dependent variable) and the following candidate predictors (independent variables): baseline demographics; undifferentiated signs or symptoms for the admission diagnosis (ICD-10 "R-code"), an indicator of diagnostic uncertainty[19] [31] [32]; increasing risk of clinical deterioration within the first 24 hours of admission using Epic's deterioration index[19] [33] [34]; mode of questionnaire submission; and two PDQ items ("Has your care team told you the main reason you're in the hospital, in a way you understand?" [Q1]; "Are you confident that your diagnosis is correct?" [Q2]) considered most relevant based on our prior work.[19] [30] Multivariable logistic regression was used to model the association of patient–clinician nonconcordance with candidate predictors. The c-statistic was reported. All analyses were performed in R version 4.3.2.
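A minimal R sketch of the final model, with hypothetical predictor columns corresponding to the candidate variables above; the c-statistic is computed as the area under the ROC curve of the model's fitted probabilities.

```r
library(pROC)

# Final multivariable logistic regression of nonconcordance on the candidate
# predictors (hypothetical column names mirroring Table 3).
final <- glm(nonconcordant ~ age_over_65 + male + hispanic + r_code +
               deterioration_rising + independent_submission + q1 + q2,
             family = binomial, data = analytic)

# Adjusted odds ratios with 95% confidence intervals.
exp(cbind(OR = coef(final), confint(final)))

# c-statistic (area under the ROC curve); the study reported 0.71.
auc(roc(analytic$nonconcordant, fitted(final)))
```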



Results

Of 1,158 patients ([Fig. 2]) who were screened, 580 were eligible and 202 agreed to participate. The demographic characteristics of eligible patients who agreed (n = 202) or did not agree (n = 378) to participate were similar ([Supplementary Appendix B], available in the online version). Overall, 84 (41.6%) requested to complete the PDQ independently on their mobile device (50.8 [17.6] years of age, 35.7% male, 75.0% Caucasian, 88.1% non-Hispanic); and 118 (58.4%) asked the RA to complete and submit the PDQ on their behalf either by phone or in-person (59.0 [17.7] years of age, 52.5% male, 66.9% Caucasian, 89.8% non-Hispanic).

Fig. 2 Study flow diagram.

Of the 202 patients agreeing to participate, a total of 157 questionnaires were submitted and available for analysis (77.7% response rate). All 45 participants without a submitted questionnaire had requested to complete the PDQ independently on their mobile devices. Overall, independent reviewers agreed on patient–clinician diagnosis concordance ratings in 131 (83.4%) of the 157 cases. Based on final consensus reviews, the clinician adjudicators rated patient-reported and clinician-entered admission diagnoses as fully concordant, partially concordant, and not concordant in 77 (49.0%), 46 (29.3%), and 34 (21.7%) cases, respectively. The unweighted and weighted Cohen's kappa values (95% CI) for preconsensus ratings between independent reviewers were 0.75 (0.66, 0.83) and 0.81 (0.74, 0.88), respectively. Demographic and administrative characteristics ([Table 1]) of respondents were mostly similar for the concordant (full) and nonconcordant (partial, no) outcome groups with one notable exception: "R-code" admission diagnoses were more frequent in the nonconcordant group, with the top 5 diagnoses including abdominal pain (R10.9), shortness of breath (R06.02), chest pain (R07.9), nausea and vomiting (R11.2), and fever and chills (R50.9).

Table 1

Characteristics of patient diagnostic questionnaire respondents, N = 157. "No" and "partial" final concordance ratings compose the nonconcordant group (n = 80); "full" ratings compose the concordant group (n = 77).

| Characteristic | No, n = 34 | Partial, n = 46 | Full, n = 77 | p |
|---|---|---|---|---|
| Age (y), mean (SD) | 54.3 (17.5) | 60.0 (17.1) | 58.2 (17.6) | 0.64 |
| Female sex, n (%) | 19 (55.9) | 20 (43.5) | 40 (51.9) | 0.51 |
| Race, n (%) | | | | 0.48 |
| – Caucasian | 22 (64.7) | 35 (76.1) | 53 (68.8) | |
| – Non-Caucasian | 11 (32.4) | 10 (21.7) | 24 (31.2) | |
| – Missing | 1 (2.9) | 1 (2.2) | | |
| Ethnicity, n (%) | | | | 0.73 |
| – Non-Hispanic | 29 (85.3) | 40 (87.0) | 71 (92.2) | |
| – Hispanic | 4 (11.8) | 4 (8.7) | 5 (6.5) | |
| – Unavailable (declined or missing) | 1 (2.9) | 2 (4.3) | 1 (1.3) | |
| Primary language English, n (%) | 34 (100.0) | 45 (97.8) | 77 (100.0) | 0.30 |
| Socioeconomic status (median income by zip code), n (%) | | | | 0.07 |
| – Less than or equal to $47,000 | 3 (8.8) | 1 (2.2) | 1 (1.3) | |
| – $47,001–$63,000 | 4 (11.8) | 1 (2.2) | 11 (14.3) | |
| – Greater than $63,000 | 25 (73.5) | 43 (93.5) | 64 (83.1) | |
| – Missing | 2 (5.9) | 1 (2.2) | 1 (1.3) | |
| Insurance status, n (%) | | | | 0.56 |
| – Private | 16 (47.1) | 23 (50.0) | 44 (57.1) | |
| – Public/government (Medicaid, Medicare) | 18 (52.9) | 23 (50.0) | 33 (42.9) | |
| Network PCP, n (%) | 14 (41.2) | 23 (50.0) | 40 (51.9) | 0.57 |
| Van Walraven Elixhauser score, mean (SD) | 11.6 (10.0) | 8.8 (9.8) | 8.4 (9.4) | 0.21 |
| Admission diagnosis from HPL[a], n (%) | | | | <0.01 |
| – R-code | 15 (44.1) | 21 (45.7) | 12 (15.6) | |
| – Non-R-code | 19 (55.9) | 25 (54.3) | 65 (84.4) | |
| Deterioration index on admission, n (%) | | | | 0.86 |
| – Less than 30 | 28 (82.4) | 36 (78.3) | 60 (77.9) | |
| – Greater than or equal to 30 | 6 (17.6) | 10 (21.7) | 17 (22.1) | |
| Change in deterioration index 24 h into admission, n (%) | | | | 0.40 |
| – Less than 0 | 20 (58.8) | 23 (50.0) | 48 (62.3) | |
| – Greater than 0 | 14 (41.2) | 23 (50.0) | 29 (37.7) | |
| Mode of PDQ submission[b], n (%) | | | | 0.78 |
| – RA facilitated (verbal), n = 118 | 24 (70.6) | 35 (76.1) | 59 (76.6) | |
| – Independently submitted (mobile device), n = 39 | 10 (29.4) | 11 (23.9) | 18 (23.4) | |

Abbreviations: HPL, hospital problem list; PCP, primary care provider; PDQ, patient diagnostic questionnaire; RA, research assistant; SD, standard deviation.


a The most common diagnoses listed in the HPL were abdominal pain (n = 11), shortness of breath (n = 7), and chest pain (n = 6), among other symptom-oriented and nonsymptom-oriented diagnoses.


b All 118 patients who requested RA assistance completed their questionnaire. The 39 patients who independently submitted the PDQ via their mobile device had the following characteristics: 54.5 (17.6) years of age, 59% female, 84.6% Caucasian, 87.2% non-Hispanic, 59.0% in-network PCP.


Responses to individual PDQ items by concordance outcome are reported in [Table 2]. “No/unsure” responses were reported for all PDQ items and most frequently for Question 1 (42/157; 26.8%) and Question 2 (25/157; 15.9%). “No/unsure” responses for Question 2 were significantly more frequent in the nonconcordant (partial, no) compared with concordant (full) group in unadjusted analysis and when adjusted for all other PDQ items ([Table 2]). For our alternative definition of concordance (full, partial), “no/unsure” responses for both Questions 1 and 2 were significantly more frequent in the nonconcordant (no) compared with the concordant (full, partial) group in both unadjusted and adjusted analyses ([Supplementary Appendix C], available in the online version).

Table 2

Patient–clinician admission diagnostic concordance by patient diagnostic questionnaire item, n = 157. Odds ratios compare the not concordant[a] (No, Partial) group with the concordant[b] (Full) group.

| PDQ item, n (%) "no/unsure" | No, n = 34 | Partial, n = 46 | Full, n = 77 | Unadjusted OR (95% CI) | p | Adjusted OR (95% CI)[c] | p |
|---|---|---|---|---|---|---|---|
| Q1: Has the care team told you the main reason you're in the hospital in a way you understand? | 16 (47.1%) | 10 (21.7%) | 16 (20.8%) | 1.83 (0.84, 4.07) | 0.11 | 1.84 (0.86, 4.08) | 0.12 |
| Q2: Are you confident that your diagnosis is correct? | 10 (29.4%) | 9 (19.6%) | 6 (7.8%) | 3.66 (1.30, 11.92) | <0.01 | 3.43 (1.30, 10.39) | 0.02 |
| Q3: Do you think your care team is treating your main medical problem appropriately? | 1 (2.9%) | 1 (2.2%) | 1 (1.3%) | 1.94 (0.10, 116.38) | 1.00 | | |
| Q4: Is your care team addressing all of your symptoms? | 1 (2.9%) | 2 (4.3%) | 8 (10.4%) | 0.34 (0.06, 1.48) | 0.13 | 0.40 (0.07, 1.94) | 0.27 |
| Q5: Have you had an opportunity to ask questions about your diagnosis? | 1 (2.9%) | 0 (0.0%) | 0 (0.0%) | | 1.00 | | |
| Q6: Are you satisfied with how your care team has communicated to you about your diagnosis? | 2 (5.9%) | 0 (0.0%) | 4 (5.2%) | 0.47 (0.04, 3.39) | 0.44 | 1.10 (0.08, 16.6) | 0.94 |
| Q7: Are you comfortable with your current involvement in the decision-making process? | 1 (2.9%) | 0 (0.0%) | 5 (6.5%) | 0.18 (<0.01, 1.70) | 0.11 | 0.53 (0.01, 12.20) | 0.71 |
| Q8: Do you have enough information to be involved in making decisions about your care? (N = 8) | 1 (2.9%) | 1 (2.2%) | 6 (7.8%) | 0.31 (0.03, 1.78) | 0.16 | 0.33 (0.01, 6.69) | 0.46 |
| Q9: Do you feel that your care team is telling you all the information you need to know about your care? | 2 (5.9%) | 4 (8.7%) | 6 (7.8%) | 0.96 (0.24, 3.77) | 1.00 | 1.50 (0.23, 12.79) | 0.68 |
| Q10: Does your care team always treat you with respect? | 0 (0.0%) | 2 (4.3%) | 2 (2.6%) | 0.96 (0.07, 13.58) | 1.00 | 1.59 (0.10, 53.9) | 0.76 |

Abbreviation: PDQ, patient diagnostic questionnaire.


a Not concordant defined as patient–clinician diagnoses pairs scored as 0 or 0.5 (no or partial concordance).


b Concordant defined as patient–clinician diagnoses pairs scored as 1 (full concordance).


c Adjusted using all PDQ items as independent variables, except for Q3 and Q5 (low frequency).


In our final multivariable model ([Table 3]), undifferentiated symptoms (ICD-10 “R-code”) for the admission diagnosis (4.02 [1.80, 9.55], p < 0.01) were significantly associated with patient–clinician nonconcordance (partial, no). A “no/unsure” response to Question 2 was of borderline significance (2.55 [0.92, 7.82]; p = 0.08). The c-statistic was 0.71.

Table 3

Multivariable analysis of patient–clinician admission diagnostic nonconcordance, n = 157

| Variable, n (%) | Not concordant, n = 80 | Concordant, n = 77 | Unadjusted OR (95% CI) | p | Adjusted OR (95% CI) | p |
|---|---|---|---|---|---|---|
| Age > 65 y | 32 (40.0%) | 30 (39.0%) | 1.04 (0.52, 2.08) | 1.00 | 1.15 (0.57, 2.74) | 0.70 |
| Gender (male) | 41 (51.3%) | 37 (48.1%) | 1.14 (0.58, 2.23) | 0.75 | 1.44 (0.71, 2.95) | 0.31 |
| Ethnicity (Hispanic) | 8 (10.0%) | 5 (6.5%) | 0.63 (0.15, 2.29) | 0.57 | 0.98 (0.28, 3.70) | 0.98 |
| Undifferentiated symptom (ICD-10 "R-code") for admission diagnosis | 36 (45.0%) | 12 (15.6%) | 4.39 (1.97, 10.35) | <0.01 | 4.02 (1.80, 9.55) | <0.01 |
| Increasing deterioration index 24 h after admission (change > 0) | 37 (46.3%) | 29 (37.7%) | 1.42 (0.72, 2.83) | 0.33 | 1.47 (0.73, 2.99) | 0.28 |
| Mode of PDQ submission (independently via app) | 21 (26.2%) | 18 (23.4%) | 1.17 (0.53, 2.58) | 0.71 | 1.23 (0.56, 2.74) | 0.61 |
| PDQ Q1: Has the care team told you the main reason you're in the hospital in a way you understand? (no/unsure) | 26 (32.5%) | 16 (20.8%) | 1.83 (0.84, 4.07) | 0.11 | 1.54 (0.69, 3.48) | 0.29 |
| PDQ Q2: Are you confident that your diagnosis is correct? (no/unsure) | 19 (23.8%) | 6 (7.8%) | 3.66 (1.30, 11.92) | 0.01 | 2.55 (0.92, 7.82) | 0.08 |

Abbreviations: CI, confidence interval; ICD, International Classification of Diseases; OR, odds ratio; PDQ, patient diagnostic questionnaire.


Note: The c-statistic was 0.71.



Discussion

We piloted an application-based questionnaire designed to assess patient–clinician communication regarding the diagnostic process early during hospitalization, administered via an independent and assisted workflow. While many participants requested to complete the PDQ on their mobile devices, fewer than one-half of them (39 of 84) did so. All 118 patients who requested assistance with the questionnaire had complete responses. Overall, about one-half of patient-reported diagnoses were fully concordant with the clinician-entered admission diagnosis identified from the HPL in the EHR. Patient-reported lack of confidence was significantly associated with nonconcordance when adjusting for all other PDQ items. In our final multivariable model, an ICD-10 "R-code" entered as the hospital principal problem was predictive of patient–clinician diagnostic nonconcordance upon admission.

We offer several explanations for these observations. First, many participants requested to complete the PDQ via a mobile device but failed to do so. This may reflect ongoing concerns about digital divides in hospitalized patients, including barriers to recruiting patients remotely, which were exacerbated during the COVID-19 pandemic.[35] [36] [37] Indeed, in-person digital navigators are increasingly recognized as critical to improving equitable recruitment and participation and may be particularly important for hospitalized patients who are older, require additional assistance with task completion, have fluctuating cognitive states, or are hindered by other factors, as observed in our cohort.[38] [39]

Second, because Questions 1 and 2 directly assessed patients' understanding of the admission diagnosis and confidence that it is correct, these questions were likely most useful for assessing concordance between patient-reported and clinician-entered diagnoses, as suggested by prior work and our PDQ analyses ([Table 2]),[19] [30] regardless of how concordance and nonconcordance were defined ([Supplementary Appendix C], available in the online version). Patients who were confident about their admission diagnosis (a "yes" response to Question 2) may have received more timely and higher quality communication from their clinicians. Conversely, "no/unsure" responses to Questions 1 and 2 may have reflected suboptimal communication with the care team upon admission. We note that the relatively low frequency of "no/unsure" responses to Questions 3 to 10 suggests that the therapeutic relationship between patients and the care team was probably not poor in our cohort.

Third, while suboptimal concordance between patients and clinicians regarding the reason for hospitalization has been previously reported,[30] diagnostic nonconcordance was particularly notable among cases with “R-code” admission diagnoses (i.e., nonspecific symptoms such as abdominal pain). Such symptom-oriented diagnoses entered in the EHR suggest diagnostic uncertainty early during hospitalization, which may not have been adequately acknowledged by clinicians and can lead patients down complex diagnostic paths.[13] [15] [40] [41]

In contrast to recent work that validated a structured assessment of visit notes in the patient portal to enhance detection of diagnostic concerns in the ambulatory setting,[17] we piloted a questionnaire designed to engage patients in the complex diagnostic process in acute care settings. The PDQ was specifically designed to assess understanding of the admission diagnosis and quality of communication with clinicians early during the hospital encounter, when uncertainty is high and could lead to downstream adverse events. Thus, the pattern of questionnaire responses could be incorporated into interventions that stratify patients at risk for diagnostic error. For example, a red flag in cases where patients report lack of confidence in their admission diagnosis could trigger the hospital care team to pause and reassess the working diagnosis (i.e., by taking a "diagnostic time-out") to provide a timely and accurate explanation to the patient, as we recently described.[19] Such prompts may aid the hospital care team in identifying patients with whom to have more in-depth conversations regarding their diagnosis using health literacy-friendly techniques.[42]

Our study has several limitations. First, the small sample size may have limited our ability to detect significant associations of PDQ items in the full multivariable model. Second, response bias may have been present, especially among patients who declined to participate and those who requested to submit their questionnaires electronically but did not do so. Third, our study was conducted at one hospital and may have limited generalizability. Specifically, other hospitals may not have reliable entry of admission diagnoses in the EHR. We also assumed that admission diagnoses did not change during the first 48 hours prior to PDQ completion. Patients could have received additional information during this time by looking at their portal or talking with their care team, which may have affected patient–clinician diagnostic concordance. Fourth, we excluded non-English speaking patients, a population potentially vulnerable to the consequences of suboptimal diagnostic communication. Lastly, we did not conduct in-depth interviews with patient participants to specifically assess sociodemographic factors (such as income and insurance challenges), health literacy, or other potential barriers to completing the PDQ on mobile devices. The use of interpreters and digital health navigators early during hospitalization is likely crucial for overcoming language, sociodemographic, and health literacy barriers to achieve diagnostic excellence equitably.[35] [38] [43] [44]

In conclusion, we piloted an application-based questionnaire and found that ICD-10 "R-codes," and likely patient-reported lack of confidence, predicted patient–clinician diagnostic nonconcordance. Patient-reported responses elicited from questionnaires, in combination with EHR data, could be useful as part of a comprehensive strategy for improving diagnostic safety and patient–provider communication about diagnoses in the hospital.


Clinical Relevance Statement

Patient–clinician communication about the diagnostic process during the hospital encounter is often suboptimal. Patterns of responses from inpatient-focused diagnostic questionnaires and certain ICD-10 diagnosis codes that suggest diagnostic uncertainty may predict patient–clinician diagnostic nonconcordance upon admission, a potential target for interventions to improve diagnostic safety.


Multiple-Choice Questions

  1. In hospitalized patients, which factors could be used to identify patients who might benefit from interventions that improve patient–clinician communication about the diagnostic process?

    • ICD-10 R-code for the discharge diagnosis

    • Patients who report lack of confidence in the diagnosis entered in the EHR on admission

    • ICD-10 R-code for the admission diagnosis

    • Options b and c

    • None of the above

    Correct Answer: The correct answer is option d. In the acute care setting, an ICD-10 R-code (e.g., abdominal pain, shortness of breath, fever) entered as the admission diagnosis was strongly associated with patient–clinician diagnostic nonconcordance. Patient-reported lack of confidence in the admission diagnosis (e.g., responding "no" or "unsure" when asked "Are you confident that your diagnosis is correct?") may also predict diagnostic nonconcordance.

  2. What types of cost-effective interventions might be considered to address digital divides equitably in hospitalized patients?

    • Use of clinicians to facilitate submission of electronic questionnaires

    • Use of nonclinically trained digital navigators to facilitate submission of electronic questionnaires

    • Options a and b

    • None of the above

    Correct Answer: The correct answer is option b. The use of digital navigators is increasingly recognized as important to achieving equity in digital health interventions. While clinicians can certainly be trained to facilitate submission of electronic questionnaires, this would place additional burden on clinicians and would not be a cost-effective approach compared with use of a nonclinically trained individual.



Conflict of Interest

None declared.

Protection of Human and Animal Subjects

This study was reviewed and approved by the Mass General Brigham Human Research Committee.


Authors' Contributions

All authors have contributed sufficiently and meaningfully to the conception, design, and conduct of the study; data acquisition, analysis, and interpretation; and/or drafting, editing, and revising the manuscript.



Address for correspondence

Anuj K. Dalal, MD
Division of General Internal Medicine, Brigham and Women's Hospital, Harvard Medical School
Brigham Circle, 1620 Tremont Street, Suite BC-3-002HH, Boston, MA 02120-1613
United States   

Publication History

Received: 15 March 2024

Accepted: 16 June 2024

Article published online:
18 September 2024

© 2024. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

