Keywords
Health equity - gender identity - informatics - health inequities - healthcare disparities
1 Introduction
Pervasive disparities in healthcare access and health outcomes between populations
reflect the ways in which socioeconomic power distribution shapes individual risks
and opportunities, including exposure to violence, discrimination, and environmental
burdens; access to stable and safe housing, food, and water; and access to appropriate
health services [[1]]. Health system biases further compound broader societal inequities; biases in health research and practice contribute to reduced trust and alienation, further reducing access even when services are theoretically available [[2], [3], [4]]. Addressing health disparities requires major shifts across all elements of health and healthcare. Medical informaticians have a crucial role to play in this larger effort, as digital health represents a critical point of intervention. If we do not effectively address bias and access disparities in digital health, health gaps will widen and become more difficult to ameliorate; by facing these challenges directly, informaticians are uniquely positioned to help close them.
The World Health Organization (WHO), in its Global Strategy on Digital Health 2020-2025,
defines digital health broadly, including virtual care, remote monitoring devices,
smart wearables, tools for data exchange and sharing, artificial intelligence, and
more. We will address examples of how informatics can improve or exacerbate health
disparities through digital health tools like patient portals, telehealth, and machine
learning algorithms, but it is crucial for all elements of digital health to proactively
address bias and inequity in design and utilization [[5]].
The COVID-19 pandemic played a major role in increasing usage and acceptance of digital
health in healthcare [[6], [7], [8]]. Investment in digital health companies reached approximately $5.4 billion in the first half of 2020 alone [[9]]. The WHO views digital health as having the potential to reach more people and
provide them with access to available health services [[10], [11]]. The U.S. National Science and Technology Council reported that digital health
can save both costs and time for patients, and increase their access to health services
[[12]]. The pandemic has also spotlighted health inequity and ongoing failures to address
equity in healthcare and public health [[13]].
2 Methods
This work uses a narrative review method to summarize issues surrounding inequities in digital health and to discuss future directions for researchers and clinicians. We searched the extant literature in PubMed and Google Scholar using combinations of relevant keywords (e.g., “digital health”, “health equity”, “bias”) derived from an outline developed by author consensus. The outline was established through two rounds of author consensus on the scope of digital health equity topics; the scope shifted during the review process, prompting the second round. In the first round of searches,
we focused on (1) access and barriers, (2) algorithmic bias, (3) digital and nondigital
health literacy, and (4) surveillance and safety. In the second round of searches,
we focused on (1) equity in digital health, (2) digital determinants of health and
digital health equity, (3) epistemic justice in digital health, (4) aggregate data,
(5) individual data, and (6) user (or non-user) experience. Searches were carried out in PubMed and Google Scholar in the following form, based on the consensus topics: “‘digital health equity’ OR ‘health surveillance’ OR ‘algorithmic bias’…”, and so on, until narrative and thematic saturation was reached.
For PubMed, the following approximate number of results were found in the first round
of searches, from 2019 to 2021: “access and barriers AND digital health” (n=224),
“algorithmic bias” (n=40), “digital health literacy OR nondigital health literacy”
(n=93), “surveillance AND safety AND digital health” (n=24).
For the second round, the following approximate number of results were found, from
2019 to 2022: “digital health equity” (n=14), “digital determinants of health” (n=2),
“epistemic justice” (n=30), “(aggregate data OR individual data) AND digital health”
(n=13), and “user experience OR non-user experience” (n=1,411). Additional sources suggested by reviewers and editors were also included. Because Google Scholar personalizes results, searches run by different authors likely returned different result sets, making it difficult to report exact or even approximate numbers of results.
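For reproducibility, the PubMed counts above can be approximated programmatically. The following minimal sketch uses Biopython's Entrez interface; the contact email, date ranges, and query strings are illustrative stand-ins rather than our exact search protocol, and live counts will drift as PubMed is updated.

```python
from Bio import Entrez  # Biopython

Entrez.email = "researcher@example.org"  # hypothetical; NCBI requires a contact address

# Illustrative queries mirroring the two consensus-derived search rounds
search_rounds = [
    ("2019/01/01", "2021/12/31", ["access and barriers AND digital health",
                                  "algorithmic bias"]),
    ("2019/01/01", "2022/12/31", ["digital health equity",
                                  "digital determinants of health"]),
]

for mindate, maxdate, queries in search_rounds:
    for query in queries:
        # esearch passes mindate/maxdate/datetype through to the E-utilities API
        handle = Entrez.esearch(db="pubmed", term=query,
                                mindate=mindate, maxdate=maxdate, datetype="pdat")
        record = Entrez.read(handle)
        handle.close()
        print(f"{query!r} ({mindate}-{maxdate}): n={record['Count']}")
```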
Articles passed initial screening if they were published in English and discussed the
topic of interest. Our first round of searches took place in October and November
of 2021, with the secondary round of searches taking place in February 2022. We prioritized
our selection toward peer-reviewed manuscripts; however, we also included select gray
literature such as white papers for the following reasons: (1) to include a larger
and more comprehensive diversity of perspectives in the piece; (2) to recognize the
power dynamics which allow for publication in peer-reviewed journals, and how that
privilege may miss crucial perspectives; and (3) to consider lived experience in relationship
to informatics. We also prioritized sources based on recency, preferring sources published
from 2020 onward; however, select sources published before that time were considered
if they included information not covered by more recent literature.
3 Equity in Digital Health
As evidence mounts regarding the role of structural power and oppression in shaping
individual and population health, the implications are clear for the ethical and practical
duty of all involved in health promotion and health care to centrally address equity
and community solidarity in the design and implementation of health policies and tools.
The Lancet and Financial Times Commission on governing health futures 2030 urged the adoption of a values-based
framework for governing health to ensure that digital technologies support universal
health benefits and positive transformations. Their framework focuses on addressing
power asymmetries, public trust, and universal public health through practices grounded
in the foundational values of democracy, inclusion, equity, human rights, and solidarity
[[14]].
Centering equity in digital health means balancing improved reach against increased risk; for example, people with highly stigmatized diagnoses may be more
able to access care in specialty clinics if they can do so through telehealth services,
but data breaches also pose greater risks for these patients [[15]]. It means ensuring that tools meant to expand access to care, like telehealth for
hard-to-reach populations, do not create a permanent barrier to screenings and services
that require in-person encounters and hands-on examination [[16]]. It means ensuring that digital tools collect and reference equitable data sources.
A recent Google app designed to assist dermatologists in diagnosing skin conditions quickly came under fire when users reported that it did not work on Black or Brown skin, maintaining or reinforcing existing disparities in the representation of skin conditions in dermatological teaching [[17], [18]]. Equity in digital health likewise demands electronic health record (EHR) systems that adequately reflect relevant categorizations of experience in a given community. When EHRs fail to reflect identities and experiences in a given socioeconomic context, they can present a technological barrier to safety, confidentiality, and appropriateness in healthcare [[19], [20]]. Equity in digital health means addressing structural power at-large as it shapes
individual health as well as data collection, use, management, storage, and sharing
in the health system.
3.1 Structural Power Determines Health Outcomes
The latter part of the 20th century and early 21st century saw a paradigm shift in individual and public health from a primary focus on individual behavior and genetic predetermination to an increasing focus on macro-level power dynamics [[21], [22], [23]]. This includes inequities in exposure to environmental risks such as pollutants;
access to material resources such as stable and appropriate housing, nutritious food,
and safe drinking water; exposure to war and other violence; and access to epistemic
resources such as formal education and the Internet. Health outcomes are mediated
both directly and indirectly by social standing; stigmatized groups carry both the
stress burden of stigma and structural harms associated with enacted stigma, violence,
and discrimination across all other areas of society. For this reason, global health
equity must consider the role of systems that structure stigma on a global level,
such as colonization and white supremacy [[24], [25], [26]]. These ideological and political systems have shaped the distribution of environmental
pollutants, of housing, of food, of violence, and of epistemic norms, including concepts
of demography, health and illness, and who has authority to participate in public
health strategy.
3.2 Digital Determinants of Health and Digital Health Equity
Digital health transforms the already-unequal landscape of health determinants by emphasizing access to technology and digital literacy. Access to Internet-capable mobile devices varies widely between and within countries. For example, the Pew Research Center reported 100% mobile phone ownership among adults in South Korea in 2018, versus 64% in India in the same year [[27]]. Across the global South, mobile users are more likely to use multiple SIM cards;
to pay for mobile usage via prepaid rather than monthly plans; to primarily use a
mobile device that is owned by the head of their household rather than a personal
mobile device; to use primarily browser-based rather than app-based mobile Internet;
and to employ browsers that reduce data consumption by hiding data-heavy elements
from websites [[28]].
Patterns of smartphone and mobile Internet adoption and use reflect individual and
regional socioeconomic power; digital health design should consider global and regional
patterns of access and use. Ensuring that health apps have a well-functioning and
data-frugal browser equivalent will expand usability for users in the global South,
users in rural areas, and lower-income users. Mobile health tools should account for
multiple users sharing a single device.
Within relatively wealthy countries, access and use patterns also reflect socioeconomic
disparities. For example, one study showed that Pokémon Go, an augmented reality-based mobile game, distributes the locations at which users can engage with the game unequally across neighborhoods. Predominantly Black and Hispanic neighborhoods
in major cities in the United States, such as Chicago and New York, had fewer spots
to play the game than predominantly white and Asian neighborhoods [[29]].
Digital health developers should also consider family dynamics in their designs. Such
dynamics can be affected by various factors including race or ethnicity, culture,
and social identity. For example, working women with children may not be able to adopt
advice from stress-relief applications suggesting spending time with family. These
unrealistic recommendations can cause guilt or increase stress levels due to a perceived
failure to follow recommendations. Social identities and roles should therefore be
reflected in digital health apparatuses [[7], [30]]. To reduce the gap, diverse stakeholders should be involved and compensated for their contributions from the beginning of the development phase so that their values and perspectives are reflected. For instance, researchers could recruit ethnic minority or stigmatized populations so that targeted interventions reflect those communities’ cultural values and perspectives [[20], [31], [32]]. A 2021 special issue of Global Policy addressing digital technology and health
equity spotlights the ways that financial and political power shape priorities in
health technology, including which technologies are viewed as important and how functionality
may influence power structures [[33]]. Authors of the issue highlighted the “gold rush” ethos of digital technology,
emphasizing the inherent tension in goals between profit motives and public health
priorities [[33]]. Other authors specifically named problems with “philanthrocapitalism” in digital
health, criticizing the reductionist and ineffective approach of education-based interventions
like the Motech Global Mobile Health Program [[34]]. They highlighted risks going far beyond individual and collective health, including
erosion of basic liberties, increase of social conflict, wasted public funding, and
long-term harm to economic systems [[33]]. Developers and institutional adopters of digital health tools should proactively
and transparently assess for these risks in collaboration with prospective user and/or
patient groups.
4 Epistemic Justice in Digital Health
Global health care and policy is organized around epistemic practices and norms that
are fundamentally entwined with the history of European global colonization and white
supremacy. The result is a system of knowledge production and sharing that habitually
enacts knowledge-based injustice, including unjustly discounting the credibility and interpretive frameworks of some knowledge and knowledge-producers according to structural prejudices in health knowledge production and use [[35], [36]]. The effects of this form of injustice include persistent assumptions, particularly among those situated within academic research organizations in the Global North, that marginalized communities lack the capacity to meaningfully participate in research or policy development; this renews the exclusion of their perspectives [[37]]. Rather than limiting participation to those presumed capable of it, participation should be viewed as a basic human right; where capacity to participate is compromised, that capacity should be actively facilitated [[35]].
A key element of the epistemology of health and health care is conceptualizing human groups, including determining which groups are medically relevant and naming and defining those groups. Demography is one epistemic framework for understanding health significance in human groups. Demographic information is typically defined as the statistical characteristics of human populations, including, but not limited to, age, gender identity, ethnicity, education, and employment status. Demographics
often form the basis of social determinants of health (SDoH) frameworks, which are
in turn intimately connected to health disparities research. However, it has been
noted that SDoH has “lost meaning within systems of care because of misuse and lack
of context, and large social gradients in health and clinical outcomes persist” [[38]]. For instance, race is oftentimes classified as an SDoH when the actual SDoH is
structural racism. As Crear-Perry et al. note, “[by] defining the root causes of health
inequities, we can move the focus of intervention away from individual blame and misguided
theories of the biological basis of race and ethnicity…It is an economic, social,
and moral imperative that we center the experience of the communities that are the
most impacted when we look for solutions” [[38]].
4.1 Reporting of Demographic Information
Demography has historical and ongoing entanglements with eugenics movements and eugenic ideologies, which require belief in inherent differences between groups in order to justify disproportionate benefit and harm to different groups [[39]]. One risk of demographic frameworks is that they tend to encourage naturalizing health differences rather than conceptualizing health differences between groups as reflective of structural power and oppression. Yearby reimagines an SDoH framework
which is multi-layered in approach, considering factors such as discrimination, civic
participation, incarceration, and law [[40]]. Informaticians must begin to grapple with these intertwined and complex systems
which are not fully represented in the health record, by becoming fluent in social
policy and public health, and examining structural discrimination and biases in all
involved systems [[40]].
Collecting demographic information means translating and flattening complex individual
identities and experiences into universalized categories, often for the convenience of the academic Global North. This privileges normative group categories and models
that are localized to racial, ethnic, gender, class, and religious perspectives. Demographic
tools in digital health must be designed to accommodate epistemic localization and
feedback responsiveness. Digital health developers should adopt a starting assumption
that demographic categories may need to be localized and re-localized to adapt to
dynamic developments of both social categories themselves and global understandings
of the health significance of different types of social categories.
Research connecting demographics to root causes of biases requires appropriate recording
and description. It requires trustworthiness in not only the patient-clinician relationship,
but also in the patient-informatician, clinician-informatician relationships, and
community-clinician relationships, all connections which lead to better health outcomes [[41], [42], [43]]. Designing the best questions and answers does not always mean collecting the best
data when training and education are not present and persistent in these relationships
[[20], [44]]. It is important to note significant differences in how those same relationships relate to mistrust of medical systems. Many communities have specific histories of abuse, neglect, and violence originating in medical systems [[45], [46], [47]]. Others now claim such histories with no such basis [[48], [49], [50]]. Treating both situations as anti-science and anti-medicine on equal footing is
a denial of systematic and structural abuse and a likening of that abuse to conspiracy theories. Modern medicine was built
on white supremacist frameworks and practices such as involuntary experimentation
on enslaved Black people, forced sterilizations of Black people and Indigenous peoples,
and relegation of structural racism to supposed “genetic” differences based on scientific
racism [[46], [51], [52], [53], [54], [55], [56], [57], [58], [59], [60]]. LGBTQIA+ patients are regularly turned away at the door and refused care, often legally, based on their sexual orientation or gender identity [[20], [61]]. With this reality in mind, why should marginalized patients report demographic
characteristics? Of course, as with all health surveillance, demographics can help
elucidate larger public crises: issues of racism, sexism, ableism, homophobia, transphobia,
and other forms of discrimination. But that is just one step. Determining how providers
need to act to counteract these large oppressive systems is crucial to the future
of healthcare.
However, even from the provider side, there are significant issues of marginalization
based on demographic characteristics. Marginalized providers face everything from
microaggressions to direct violence. Some examples include an elderly white woman
telling a Black doctor not to “waste [their] affirmative action” [[62]] or a patient’s parent reporting that she’s glad to have a “usual straight” doctor
instead of someone who is gay [[63]], or patients threatening to shoot and kill Asian nurses [[64]]. Racial discrimination against hospital employees in the wake of the COVID-19 pandemic
led to several high-profile, multi-million-dollar lawsuits [[65], [66], [67]]. On the other hand, research has shown that when marginalized patients are treated by people who look like them or share their experiences, patient outcomes are better [[68], [69], [70]]. But if providers cannot address the structural racism in their own ranks, how
can one even begin to tackle persistent patient-side structural racism? National Medical Association (NMA) President Leon McDougle noted in an interview just how racism continues to prevail in medical
communities: “The root cause is systemic racism dating back to chattel slavery… This
is a societal issue that will require cross-sector investment and collaboration to
remedy” [[71]].
In recognition of structural failures in trust and the continued vulnerability of
disclosures within health contexts, the collection of individual patient or user data
should be structurally collaborative. Disclosure should be prompted in a way that
makes clear why particular information is being collected and how it will be used;
disclosure should be optional as much as possible; when disclosure is a precondition
of service, this should be explained; and when data is used on the user or patient’s
behalf, this should be made transparent. A core demonstration of epistemic humility
in global health is trusting a patient or user to reasonably judge whether a particular
disclosure is safe and facilitating their decision-making process by indicating why
a particular element is relevant to their care.
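As one hypothetical illustration of these principles, a collection interface might attach purpose, optionality, and usage transparency to every requested field; the structure and names below are our own sketch, not an established standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisclosurePrompt:
    """Hypothetical metadata shown to a patient or user before a field is collected."""
    field_name: str       # what is being asked for
    why_collected: str    # plain-language reason shown to the user
    how_used: str         # transparency about downstream use
    optional: bool        # disclosure should be optional wherever possible
    precondition_note: Optional[str] = None  # explanation when disclosure is required for service

# Example: prompting for gender identity on a telehealth intake form
gender_prompt = DisclosurePrompt(
    field_name="gender identity",
    why_collected="Helps your care team address you correctly and offer relevant screenings.",
    how_used="Visible to your care team; not shared with insurers without separate consent.",
    optional=True,
)
```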
4.2 Replicating (In)Equity in Design
One key area in which digital health influences health outcomes via epistemic justice
or injustice is gender and sexuality. Digital health applications can allow LGBTQIA+ people to access care more privately and can act to reduce stigma [[72]]. However, digital health applications rarely consider gender equity in their design
[[30]]. Studies reported that most existing applications do not adopt standards including
gender identity, assigned gender at birth, or gender markers on health insurance documents,
and that they do not consider diversity in gender, sex, and sexual orientation (GSSO)
data [[73], [74]]. Work by Kronk et al. [[20]], McClure et al. [[75]], and Davison et al. [[76]] has surveyed the current landscape of GSSO data in EHRs and provided newer frameworks
to reassess data collection standards. Recommendations in this work included an overhaul
to the existing Health Level 7 (HL7) sex and gender model, as well as implementation
of a two-step process (of gender identity and assigned gender at birth [AGAB]) in
clinical contexts.
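As a rough sketch of that two-step model (the value sets, names, and free-text handling here are illustrative assumptions, not the HL7 specification), gender identity and AGAB would be captured and stored as independent fields:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class GenderIdentity(Enum):
    # Illustrative, non-exhaustive value set; real systems should support
    # localized, community-reviewed terminologies.
    WOMAN = "woman"
    MAN = "man"
    NONBINARY = "nonbinary"
    SELF_DESCRIBED = "self-described"
    DECLINED = "declined to answer"

class AssignedGenderAtBirth(Enum):
    FEMALE = "female"
    MALE = "male"
    UNKNOWN_OR_DECLINED = "unknown or declined"

@dataclass
class TwoStepGenderRecord:
    """Two-step capture: gender identity and AGAB stored independently."""
    gender_identity: GenderIdentity
    gender_identity_free_text: Optional[str]  # honors self-description
    assigned_gender_at_birth: AssignedGenderAtBirth

record = TwoStepGenderRecord(
    gender_identity=GenderIdentity.SELF_DESCRIBED,
    gender_identity_free_text="hijra",
    assigned_gender_at_birth=AssignedGenderAtBirth.MALE,
)
```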
Furthermore, GSSO data fields have been built on Eurocentric ideas of gender identity and sexual orientation, which may differ from the concepts used in countries beyond Eurocentric contexts such as the United States, Canada, Australia, Germany, and France
[[61]]. As Kronk and Dexheimer point out, “[a] small segment of non-Eurocentric identities were described [using Eurocentric terminology like] ‘transgender,’ ‘transsexual,’ or ‘transvestite’... such as hijra being described as ‘transsexuals’” [[61]]. In order to disambiguate GSSO terminology, Kronk created the GSSO ontology, containing
over 14,000 terms on those topics [[77]]. However, the terminology is currently only available in English, and it also possesses
a relatively Eurocentric lens by virtue of its authorship. Constructing more collaborative
datasets which consider multiple cultural as well as linguistic perspectives and translating
those affirming terminologies into clinical care through vocabulary standards are
essential for better care outcomes in trans and gender-marginalized populations.
Inequity of design has also produced digital health systems that do not reflect the circumstances of women, particularly women from racial or ethnic minority backgrounds and low-income women. National data show that Black adults have rates of internet access similar to white adults and the highest percentage of smartphone ownership among racial or ethnic groups [[78]]. However, usage of digital health is significantly lower among women of color. Black women showed lower enrollment rates in digital pregnancy services, physical activity applications, and digital health for sexually transmitted diseases than women from other racial or ethnic groups. Being excluded or not participating in digital health
can harm not only women’s health, but also that of their families because women are
often responsible for their care [[30]]. These gaps perpetuate existing sexual and gender inequities [[73]]. Digital health should include diverse groups from the beginning of the development
phase to reflect their values and perspectives.
4.3 Health Information Standards
It is imperative that we collect data, design algorithms, and evaluate applications equally and fairly by considering all possible sources of bias. However,
there is no clear definition or standard of “fairness” in machine learning algorithms
[[79]]; thus, it is difficult to measure the concept [[80]]. Furthermore, the disconnect between the public and private sectors in digital
health can also lead to racial bias in algorithms used in patient care. The U.S. Food
and Drug Administration has highlighted that privately funded machine learning algorithms
used in health care should have the same ethical standards as those developed by publicly
funded research (e.g., by the U.S. National Institutes of Health). Publicly funded research
is usually peer reviewed and evaluated by domain experts who can determine whether
the proposed algorithms contain biases. Also, studies are approved by their institutional
review boards (IRBs), which improves oversight of methods. However, the private sector can face conflicts between protecting intellectual property and being transparent about algorithmic design and inputs. Currently, there is no broadly agreed-upon standard
for evaluating algorithm-based systems, and there are no federal, state, or local
regulations governing the use of these algorithms [[80]]. Regulators must understand structural racism to evaluate commercialized algorithms
perpetuating racial bias and to oversee data flows in the algorithm loop [[80], [81]]. Concepts of fairness in health information must be developed through participatory
and equitable processes and not centered on the epistemic perspective of researchers
in the Global North.
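To illustrate why a single standard of “fairness” is elusive, the toy sketch below computes two candidate metrics, demographic parity difference and the true-positive-rate gap used in equalized odds, on invented data; the two metrics quantify different notions of fairness and need not agree, so optimizing one does not satisfy the other.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def tpr_gap(y_true, y_pred, group):
    """Equalized-odds component: gap in true-positive rates between groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tprs[0] - tprs[1])

# Invented toy data: two demographic groups, binary outcome and prediction
y_true = np.array([1, 1, 0, 0, 0, 1, 1, 1, 1, 0])
y_pred = np.array([1, 0, 0, 0, 0, 1, 1, 1, 0, 1])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print("demographic parity difference:", demographic_parity_diff(y_pred, group))  # 0.6
print("true-positive-rate gap:", tpr_gap(y_true, y_pred, group))                 # 0.25
```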
5 Aggregate Data
5.1 Algorithmic Bias
Currently, many health systems are adopting machine learning algorithms and software
to manage health using patient data such as clinical information, socio-demographic
information, laboratory values, or diagnostic images [[81], [82]]. Although machine learning algorithms hold great potential for reducing health
care cost and increasing the efficiency of workflow, these algorithms can exacerbate
existing disparities and introduce unexpected ones [[83]]. Biases can be reflected in various stages of algorithm development, from collecting
data to designing and implementing algorithms in clinical practice.
Vulnerable populations in health care, such as individuals marginalized due to sexual orientation or gender identity, Black and Latine populations, and those with low socioeconomic status, experience significant baseline health disparities. Those pre-existing
biases have the potential to be perpetuated by machine learning algorithms, reinforcing
deeply rooted stigma and discrimination [[86]].
Recently, Obermeyer et al. examined an algorithm used in the U.S. health system that
identified patients needing high-risk care management [[87]]. This study reported that the algorithm contained racial bias in cases in which
race was self-reported. Furthermore, such models can still perform poorly even when algorithms take racial and cultural factors into account. Coley et al. (2021)
reviewed two algorithms that predict suicide risk across racial and ethnic groups
[[88]]. These algorithms performed differently across racial and ethnic groups: they accurately predicted risk for white, Hispanic, and Asian patients but less accurately predicted risk for Black and American Indian/Alaskan Native patients, as well as for patients without race or ethnicity recorded [[88]]. Furthermore, genomic evidence shows that collected datasets have not represented diverse racial and ethnic groups [[89], [90]]; most genomic databases were collected from people of European ancestry.
Once researchers develop treatment strategies based on the biased data, excluded populations
such as Black and Indigenous people may not experience the same treatment efficacy,
which could lead to harmful outcomes. Thus, it is critical to improve accuracy and
performance of predictive models for disadvantaged populations by ensuring their inclusion
in such models. To bridge the gap, there is a need for collaboration via multidisciplinary
system development teams from diverse backgrounds [[81]]. Otherwise, health disparities will be perpetuated and further embedded within
society, leading to greater health inequities [[91]].
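One practical response to findings like these is auditing model performance separately for each subgroup before deployment. The sketch below shows what such an audit might look like using scikit-learn metrics; the data and group labels are invented placeholders, not results from any cited study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, recall_score

def subgroup_audit(y_true, y_score, groups, threshold=0.5):
    """Report AUC and sensitivity (recall) per demographic subgroup."""
    y_pred = (y_score >= threshold).astype(int)
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        report[g] = {
            "n": int(mask.sum()),
            "auc": round(roc_auc_score(y_true[mask], y_score[mask]), 3),
            "sensitivity": round(recall_score(y_true[mask], y_pred[mask]), 3),
        }
    return report

# Invented example data standing in for a risk model's outputs
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)
y_score = np.clip(0.4 * y_true + rng.normal(0.3, 0.25, 200), 0, 1)
groups = rng.choice(["group_a", "group_b"], 200)

for g, metrics in subgroup_audit(y_true, y_score, groups).items():
    print(g, metrics)  # flag deployment if metrics diverge materially between groups
```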
As stated previously, there are no broadly agreed upon standards for evaluating algorithm-based
systems [[80]]. Recently, researchers proposed MINIMAR (MINimum Information for Medical AI Reporting),
a new framework “describing the minimum information necessary to understand intended
predictions, target populations, and hidden biases, and the ability to generalize
these emerging technologies” [[92]]. This framework can make transparent how data and information were collected to train a model, helping to reduce bias and equity issues. Ideally, this new framework can be leveraged to improve equity in AI models.
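As a loose, simplified illustration (the authoritative item list belongs to the published MINIMAR framework [[92]]; the fields below are an assumed subset), a team might publish a machine-readable summary alongside each model:

```python
# Hypothetical, simplified reporting record inspired by MINIMAR; consult the
# published framework for the authoritative reporting items.
model_report = {
    "intended_prediction": "30-day risk of hospital readmission",
    "target_population": "adults admitted to general medicine services",
    "training_data_source": "single academic medical center EHR, 2015-2019",
    "demographics_reported": ["age", "sex", "race/ethnicity", "insurance status"],
    "known_biases": [
        "cost used as a proxy for need may understate risk for Black patients",
        "race/ethnicity missing for a subset of records",
    ],
    "external_validation": False,  # generalizability not yet established
}
```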
5.2 Surveillance and Safety
Mass health surveillance during the COVID-19 pandemic has proven indispensable, assisting
public health institutions and governments immensely with nearly real-time decision-making
capabilities. However, these systems led to nearly uncontrollable surveillance creep,
and have been used by various countries to invade privacy to extreme capacities, such
as using facial recognition to track infected persons [[93]], all for the “greater good” [[94]].
Meanwhile, security surrounding health data appears increasingly compromised. Over the last two years, millions of health-related documents containing such sensitive information as Social Security numbers, health conditions, and medication lists have been exposed. HIPAA Journal reports 642 data breaches in the United States involving at least 500 records in
2020 alone, theoretically leaking information equating to nearly 82% of the U.S. population
[[95]]. The sale of records on the dark web can net up to $1,000 USD per record, which
can then be used for purposes of extortion, coercion, and identity theft [[96]]. Choi, Johnson, and Lehmann showed in 2019 that these data breaches are associated
with deterioration in timeliness of care and patient outcomes [[97]]. But these breaches have gone even further in directly impacting outcomes: in 2021,
an infant allegedly died due to care issues related to a hospital ransomware attack
[[98]].
Social media platforms and mobile devices have only increased vulnerabilities and
highlighted myriad issues with digital systems. In 2021, a former Meta (previously
Facebook) employee leaked thousands of documents, showcasing how Meta amplified the
voices of the anti-vaccination movements and other medical misinformation. Imran Ahmed,
of the Center for Countering Digital Hate, noted that nothing was done because “engagement
is the only thing that matters… [it] drives attention and attention equals eyeballs
and eyeballs equal ad revenue” [[99]]. Additional documents clearly showed that Meta knew that Instagram use was strongly
associated with depression, anxiety, and eating disorders [[100]]. Informaticians in academia and industry need to be aware of these vulnerabilities,
advocate for more individual-level and system-wide protections, and work to educate
patients and providers on how their information will be used and to whom it is available [[101]], especially when considering vulnerable populations such as adolescents [[102]].
For providers, this gap can seem difficult to bridge. Over 4,000 anti-vaccination protesters
clashed with police in Athens in July 2021 [[103]]. Fake vaccinations and vaccination documentation run rampant [[104]]. “We must insist that trust hospitals… be held accountable for their actions”,
one waste pickers’ advocate noted [[105]]. The place of the medical provider amidst such chaos is right in the center of
it all. Providers cannot be apolitical actors, and paths need to be opened for more
equitable patient, and community, advocacy by providers [[106]].
6 Individual Data: Confidentiality, Stigma, and Criminalization
Balancing the importance of health surveillance with security is critical to maintaining
public health. Such surveillance is necessary to eliminate sources of health problems
larger than just one person, including pathogenic spread and behavior, workplace hazards,
housing components, and water and air quality, among others. For example, the water
crisis in Flint, Michigan was ignored and largely dismissed by authorities until engineer
Marc Edwards and pediatrician Mona Hanna-Attisha demonstrated elevated water lead levels and their effects on blood lead levels [[107]]. However, even though extensive work showed that lead levels in Flint had been lowered to levels safe for human consumption, public trust had been broken: “The anger, the lack of trust, it’s all justified,” Senator Jim Ananich reported [[107]]. The very next year would see one of the most infamous medical misinformation movements in world history, focused on vaccine resistance, one that would generate upwards of $1.1 billion in annual revenue for social media sites [[108]].
6.1 Provider-Side Reporting on Health-Related Statuses or Conditions
On 1 September 2021 in Texas, Senate Bill 8 (SB8) went into effect, banning abortion at around six weeks as part of a continued assault on reproductive health rights. Six months previously in Arkansas, House Bill 1570 (HB1570) effectively banned gender-affirming care for transgender youth, signaling a mass introduction of anti-transgender bills after the December 2020 Bell v Tavistock decision, which was only overturned in September 2021. Both acts effectively criminalized
every aspect of their respective areas of care: making it illegal to provide the care
itself, resources concerning the care, and any assistance related to administering
that care.
With that in mind, transgender patients may feel uncomfortable providing information
about gender-affirming medications, preferring to engage stealthily in medical encounters
and to seek grey or black-market alternatives. Individuals seeking abortion services
may have to cross state or national borders for care. In an environment where a person
can be prosecuted for manslaughter as the result of a miscarriage, handcuffed and
restrained while in labor, forced to undergo Caesarian section or blood transfusion,
or charged under drug trafficking statutes for “delivering drugs to an infant through
the umbilical cord,” discussing medically salient information, or even seeking out
prenatal care, becomes a severe safety issue [[109], [110], [111]]. How informaticians present this information in systems can exacerbate these problems.
Additionally, providers have been known to attempt to cover their mistakes and discriminatory
actions, and to help other providers do so as well. In 2020, a trans man in the United
Kingdom undergoing metoidioplasty had a vaginectomy performed without consent, and another
provider modified the consent form afterward in an attempt to avoid detection. The
damage done, a fundamental breach of provider-patient trust, resulted in mild penalties,
with one provider suspended for five months and the other for one year [[112]]. Cases of intersex genital mutilation (IGM) are not much better: although the WHO depathologized transness in 2018, intersex conditions had “no end in sight for pathologisation” [[113]]. One account describes “[t]he tendency of the medical profession to ‘cover its tracks’ through providing false information… The mingling of damage both to intersex people’s bodies, and to their core relationships through… professional betrayal” [[114]]. Informaticians become involved in these processes by codifying these issues, oftentimes
in clinical code sets such as SNOMED CT, and then those sets are used by researchers who assume pathology. For instance, until early 2022, SNOMED CT codified “sodomy” as a disorder. Today, SNOMED CT codes still pathologize transgender people under the
label of ‘gender identity disorder’ despite calls to remove such information, and
the term transgender still appears in problem lists [[20], [115]]. In general, while informaticians can create and enforce systems which are more
accountable, careful consideration should be made in deciding what should and should
not be recorded, and who that recording truly benefits. When it comes to patients
with disabilities, Dr. Lisa Iezzoni, a professor of medicine at Harvard Medical School,
reported in 2021 that 80% of physicians she surveyed “viewed quality of life of people
with disabilities [as] worse than that of other [nondisabled] people” and that only
around 41% of physicians felt confident in their ability to provide the same quality
of care to patients with disabilities as those without [[116]]. Integrating disability considerations into health care systems could potentially
help close this chasm. Mudrick et al. found that embedding disability accommodation
needs within the EHR was useful in visit planning, but that the structure needed to
be more flexible and more integrated with existing EHR infrastructure, such as with
scheduling [[117]]. However, there has been little, if any, research regarding how people with disabilities
feel about the current EHR landscape, what they would want or not want represented,
or the relationship between that representation and quality of care. As Turk and McDermott
noted in 2018, “[in] general, there are few articles that focus on” disabled populations
[[118]]. More research is needed in this domain, but it can certainly pull from the extensive
work of scholars in the fields of disability studies and crip theory [[119], [120], [121]].
6.2 Effects of Data Breaches on Patients
In areas where mental health-related stigma is high, leaks and breaches of sensitive
information can be extremely lucrative for those obtaining such information. Following
a data breach of Vastaamo in Finland, nearly 30,000 people were extorted, resulting
in 25,000 police reports [[15]]. Familial abuse, histories of rape, terminal conditions, suicidal thoughts and
more were released online for all to see [[15]]. Retraumatization due to data breaches has been linked to anxiety, depression,
suicidal thoughts, and even post-traumatic stress disorder (PTSD).
Release of information related to physical illness has inadvertently led to similarly
bleak outcomes. In 2020, Peruvian trans woman Alejandra Monocuco was left to die by
paramedics after they learned she was HIV-positive [[122]]. In 2018, a Honduran trans woman seeking asylum, Roxana Hernandez, was left to die in ICE (U.S. Immigration and Customs Enforcement) custody after suffering from AIDS-related illness and being refused treatment [[123]]. Suicidal ideation and depression have been tied to diagnoses of sexually transmitted infections (STIs) and stigma following infection [[124]]. Stigma and misinformation related to STIs run rampant, and disclosure of health
information without consent could lead to criminal prosecution in some cases.
Health information has also been used illegally in intelligence efforts, seeding public
mistrust of public health programs. For instance, a Pakistani physician allegedly
helped the CIA run a fraudulent hepatitis vaccine program in order to obtain DNA samples
of Osama bin Laden, leading to bin Laden’s execution by U.S. operatives. This event,
as described, violates medical neutrality as outlined in the Geneva Conventions, and exacerbated mistrust of medical systems in Pakistan [[125]]. The U.S. arm of Save the Children, which legitimately organized hepatitis B vaccinations in Pakistan, was forced to evacuate the country. Refusals of the polio vaccination spiked, and medical personnel became victims of violent attacks [[125]]. Fake videos spread like wildfire in 2019, claiming that polio vaccines cause
severe illness, leading to a mob of 500 setting fire to a health clinic in Peshawar
[[126]].
6.3 Reporting of Omics-Related Data
With the advent of newer technologies, like CRISPR-Cas9, concerns continue to mount.
In 2020, a Chinese court sentenced He Jiankui, a man who claimed to have created the
world’s first gene-edited babies using CRISPR, to several years in prison for “illegal
medical practice” and fined him 3 million yuan (US$430,000) [[127]]. Even James Wilson, the principal investigator implicated in the tragic death of Jesse Gelsinger, has come to warn against reenacting the “hyperaccelerated transition to the clinic” of the 1990s [[128]]. From an informatics standpoint, EHR infrastructure, interoperability, standardization,
quality assurance, and privacy and data-security considerations are necessary for
bridging the gap toward more ethical and equitable clinical trials research in the
wake of Gelsinger’s death [[129], [130]].
The increased practice of consumer DNA-analysis-related services over the past decade,
such as 23andMe and Ancestry.com, has led to numerous ethical and moral debacles.
In April 2018, law enforcement used “genetic genealogy” approaches to identify the
so-called ‘Golden State Killer’, who was last active in 1986 [[131]]. However, the legal process was shaky, as police avoided a requirement for a court order by uploading sequence data cobbled together from old crime scene samples. Earlier, the 2014 wrongful arrest of Michael Usry based on a partial match in a DNA database had already showcased significant privacy concerns [[131]]. Because the U.S. Genetic Information Nondiscrimination Act (GINA) of 2008 does not extend to long-term care insurance, life insurance, or disability insurance, released genetic information could be used to deny such coverage [[132], [133], [134]]. This means that individuals need to carefully weigh risks and benefits of genetic
testing, which includes direct-to-consumer sites like 23andMe and Ancestry.com, as
a test result may be required to be disclosed to insurers. For instance, in September 2015, a 36-year-old woman with no current medical issues was denied life insurance because of a positive BRCA1 test result [[133]]. This hurts patients twice over: denying necessary financial protections and making prospective care impossible. An individual whose sister learned she carried a BRCA mutation put it best: “This is not the calculation I want to be doing when it comes to my health” [[133]]. Recording genetic-related data in the EHR or in other medical systems may,
in these select circumstances, lead to worse health outcomes for patients.
The benefits of measuring genetic information are undeniable, yet without firm patient
protections it stands to be exploited by governments and corporations at the expense
of the health and well-being of the individual. Additionally, from an informatics perspective, the availability of genomic data has far outpaced the capacity to analyze it effectively; there is often a reluctance to share data because of its sensitive nature,
and EHRs have not implemented mechanisms to assist in data collection [[135], [136]]. Some groups have attempted to integrate genetic data into the EHR, while others
have characterized further issues with implementation, naming the current barriers
to implementation as lack of standards-compliant data structures, lack of means for
storage of such data, and representation of such data on a patient level [[137], [138]].
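As a rough, hypothetical sketch of what a standards-oriented, patient-level variant record might look like in a FHIR-style Observation shape (angle-bracketed values are placeholders; real implementations should follow HL7 FHIR genomics guidance rather than this simplification):

```python
# Hypothetical FHIR-style Observation shape for a patient-level variant record.
# Angle-bracketed values are placeholders, not real codes or identifiers.
variant_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "<variant-assessment-code>",
            "display": "Genetic variant assessment",
        }]
    },
    "subject": {"reference": "Patient/<patient-id>"},
    "valueCodeableConcept": {"text": "Present"},
    "component": [{
        "code": {"text": "DNA change (HGVS)"},
        "valueString": "<c.HGVS expression>",
    }],
}
```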
Bombard and Hayeems advanced the idea that digital decision support tools broaden
“the reach and efficiency of genome medicine by enabling easier access to testing
and counselling resources” while also noting the importance of “a human touch” [[139]]. This led them to suggest a hybrid digital model of human and computer interaction. Importantly, the pair note that “[t]he quality of care afforded by digital solutions is only as good as the data input into these systems… Existing biases may therefore be reinforced by digital solutions, disproportionately disadvantaging those already marginalized by genomic medicine” [[139]]. Landry et al. echo this statement, noting that “[the] lack of diversity in
genomic research can affect the understanding of the relationships between genes and
disease in unstudied populations, including erroneous rare variant-disease associations
in poorly studied populations, and insufficient evidence regarding the effect of variants
on disease in diverse populations” [[140]].
Often, informaticians, as end-users of data collected elsewhere, are stuck in a difficult
situation. We need to look further for equitable information, such as data from the Human Heredity and Health in Africa (H3Africa) consortium or from the gnomAD population database [[141]]. When no better data exist, we need to be clear about data biases in all of our work, so that tools do not overstep their limitations, and to make clear calls for continued equitable data collection. Finally, we need to consider the context of contemporary
and historical mistreatment in data collection, and to not discount the present reality
of people represented by data points.
7 User (or Non-User) Experience
7.1 The Digital Divide
Due to the pandemic, many health services and resources, such as telehealth, have
moved to the internet. Early studies in digital health equity have focused on the
“digital divide”, the inequities between those who have access and those who do not
have access to technologies [[142]]. Studies show that people facing disadvantaged circumstances, such as limited income
to afford high-speed internet and advanced mobile devices, are unlikely to have equal
access to digital health [[143], [144], [145]]. This unintentional exclusion can lead to further disadvantage, thus worsening health inequity [[145]]. Developing countries may face additional issues related to the digital divide when
health systems are under-resourced and beholden to unsustainable financing mechanisms.
Equity of access to digital health must be considered as part of a complex system
[[146], [147], [148]]. Even if people have access to technology, digital health equity cannot be reached
without the ability to use the technology and make sense of digital health applications
[[149]].
7.2 Usability & Accessibility
Digital health resources can help facilitate data-based decision making for patients
and providers. However, this requires the patient and provider, along with key others such as family members and interpreters, to be fully able to access and use these resources.
A patient portal, care platform, or other digital tool must be accessible to users
with intellectual and communication-related disabilities as well as their family members,
interpreters, or other key users who may have different access needs than the patient
[[150]]. Currently, patient portals are often inaccessible to users who rely on assistive technology and to users with communication-related and intellectual disabilities [[150]]. Patient portals also create access barriers for trans users [[20]]: many such patients have legal gender markers that are not represented in patient interfaces,
which can encourage stigmatizing treatment by providers, billing errors, inappropriate
forms of address in procedurally generated communications, and worse health outcomes
associated with loss of trust and avoidance of care.
7.3 Telemedicine and Remote-Presence Health Care
The use of telehealth has dramatically expanded during the COVID-19 pandemic to reduce
virus transmission and provide low-cost services. Telehealth could mean increased
accessibility to healthcare by reducing the time it takes to access care, the cost
of providing care, and the need for patients and providers to share a physical location.
However, there is also potential to reinforce health inequities by reducing access
for people with disabilities and those with less access to high-bandwidth technology
or digital literacy. A further risk is creating stable disparities in access to assessments
that are generally only available in-person; for example, it is generally not possible
to assess for pneumonia by listening to lungs, to measure blood pressure, or to assess
fetal heart rate in telehealth contexts [[16]]. If telehealth is a central strategy for reducing access barriers, this could mean
that already medically marginalized communities receive care that routinely misses
key assessments.
7.4 Digital Literacy
While telehealth can help reach patient populations who are currently underserved,
including incarcerated populations and rural populations, these groups often lack
access to high-speed internet, secure devices, and digital literacy [[16], [77], [78], [79]]. Other groups that currently face structural barriers to accessing high-quality
care, like older adults, marginalized ethnic and racial groups, patients with low
socioeconomic status relative to their home countries, and patients located in countries
that are low- and middle-income on a global scale, also face digital literacy and
access barriers [[16]]. Telehealth-based strategies must consider these co-existing barriers.
In the same way that a provider in an in-person appointment helps orient the patient
to the clinical environment by indicating where to sit, what to expect, etc., the
provider in a digital encounter must be prepared to assist the patient in adopting
the new format or system and address any apprehensiveness about the efficacy of telehealth
interventions [[150], [151]]. This could mean providing patients with the opportunity to make a test call in
advance of their first telehealth appointment to facilitate comfort with the platform
and process [[152]].
8 Potential Futures
It may be easy to look at the current health equity landscape as irreparable, having
been built on hundreds of years of oppression, marginalization, and discrimination.
In this work, we have emphasized collaboration with user and patient groups to define
priorities, ensure accessibility and localization, and consider risks in development
and utilization of digital health tools. Additionally, we encourage consideration
of potential pitfalls in adopting these diversity, equity, and inclusion (DEI)-related
strategies.
Creating a diverse, equitable, and inclusive informatics landscape does not simply mean forming a committee of marginalized persons who make recommendations to another, mostly indifferent entity. Several independent groups have already put
together such recommendations, which have been available for years. It is not about
only updating one’s language. It is about making a material difference. As Tatiana
McInnis phrased it: “These words [diversity, equity, and inclusion], and the intentions
they seek to express, are well and good, yet they fall flat as [DEI] offices fail
and refuse to address systemic white domination, anti-Blackness, misogyny or any group-specific
violence in their mission statements” [[153]].
One significant problem with DEI offices and organizations is that they expect this
work, which effectively retraumatizes marginalized persons every day, to be free.
DEI is often built on a voluntary model, a second career that marginalized people are expected to take on, with the unspoken threat that things will continue as they are without this uncompensated labor. In this sense, the lives and labor of marginalized
people are treated as commodities to add to the product environment of larger entities
[[154]].
In one author’s experience, she was told up-front that the DEI office was
not about creating long-lasting solutions. It was about “quick wins” that make administrators
look good against the political background. This conceptualization feels endemic to
DEI, especially at large organizations like Google, where attempts to hold individuals
and systems accountable led directly to severe retaliation, as was the case with Meredith
Whittaker and Timnit Gebru. But these individuals, as well as many others like them,
have not given up the fight for equity. In 2017, Whittaker founded the AI Now Institute
with Kate Crawford at NYU, and, in December 2021, Gebru launched the Distributed Artificial
Intelligence Research Institute (DAIR).
In these cases, and numerous others, it is made clear that these DEI entities, as
McInnis put it, “are spaces of impossibility; they cannot do the things they are tasked
with as they are not empowered to hold community members accountable when they fail
to uphold stated investments in equity… They exist not to create systematic change
but as evidence that the work has already been done” [[153]]. In fact, the dishonesty of many organizations claiming to promote DEI has been found to heighten concerns among marginalized peoples rather than mitigate them [[155]].
Further, in implementing DEI strategies within medical informatics, it is crucial
to be aware of these pitfalls to ensure that approaches are effective and change culture.
Interventions must not be centered around these “quick wins” that make for PR-friendly
headlines, but instead must confront power structures both within organizations and
in society at-large.
Transformative justice requires accountability on all levels. In the academic sphere,
it is fundamentally apparent that there is a lack of understanding, compassion, or
forethought from administrators. It is not uncommon to see a list of simple demands
for racial equity be pushed aside for a committee that can make recommendations but
has no real power. Oftentimes the only real change occurs after a breaking point has
been reached: graduate student unionization and striking in the United States has
proved as much. And if that’s the way it has to be, then it will continue to be so.
However, it should be made clear that equity in research is not the whole picture
of health equity. To quote one respondent cited in Everhart et al. 2021: “I’m not
interested in research; I’m interested in services” [[156]]. Researching inequity and showcasing its existence is only one piece of that puzzle.
For the most part, it is usually obvious that such inequities exist. It is the rare
minority of research which actually attempts to reduce or eliminate them.
Open-source research is a single step: making our knowledge, which is in the general interest, freely available. We, as scientists and researchers, need to be accountable
for how that research is used. Too often researchers will scoff at this idea. A few
years ago, a question to this end popped up on a well-known research website: “Does
the responsibility of researchers end with the scientific publication of their findings?”
The very idea that this question has to be asked is an abject failure of researcher
education.
The responsibility of researchers only begins with publication. The ethical duties
of research involve actively bettering the world around us, and so researchers should
keep in mind societal and policy implications of their work, both within the work
itself and with how that work is used afterward. Researchers need to be active collaborators
with implementers and policymakers. The success of research should not be judged by
its lead researcher’s h-index, but rather by its impact in society.