Q I've heard about implicit bias in healthcare and how it creates barriers for communities of color. How can we use technology to mitigate biases in nursing and patient care?
The American Nurses Association's recent foundational report on racism in nursing identified the persistence of racism through systemic programs and policies that favor one race over another. According to the report, "When we translate actions of social injustice and racism into the purview of nursing and nursing practice, we see the same inequities in the distribution of power, resources, and opportunities."1
Equitable healthcare policy can likewise make a difference in whether individuals achieve their full health potential despite the complex socioeconomic, political, environmental, and sociological factors that shape the social determinants of health (SDOH).2 SDOH converge with the digital determinants of health and interact with other intermediate health factors, such as psychosocial stressors, preexisting health conditions, health-related beliefs and behaviors, and the environment, as well as with a person's current health state and needs.2 For example, poverty, geographic location, and a lack of internet access or digital or computer literacy can hinder access to emerging technologies such as virtual care and artificial intelligence (AI).2
The American Medical Association has acknowledged that the practices, policies, and patterns arising from structural racism have downstream consequences. Structural racism refers to the totality of ways in which societies foster racial discrimination through mutually reinforcing systems of housing, education, employment, earnings, benefits, credit, media, healthcare, and criminal justice. Discriminatory beliefs, values, and distributions of resources can be reinforced at the individual level and carry systemic implications for AI and machine learning. For example, bias can be present in the databases that underpin digital technology.3 Although the literature doesn't specify whether these data come from healthcare workers' documentation in patients' charts or from patients' demographic data, it emphasizes that such bias can affect AI. When programs or systems use AI to create algorithms, bias within the data can carry over into the algorithm.3 AI's influence on human decision-making, labor, and predictive analytics needs immediate inquiry because these functions support nursing practice and its future.4
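To make this data-to-algorithm pathway concrete, the brief Python sketch below uses entirely synthetic records and hypothetical field names to show how a historical disparity in referral practice, once embedded in the data, is reproduced by a simple model that learns from that data. It's a conceptual illustration only, not a depiction of any specific clinical system.

```python
# Conceptual sketch: how bias in historical data can carry over into an
# algorithm trained on that data. All names and numbers are hypothetical.
import random

random.seed(0)

def make_history(n=1000):
    """Synthetic 'historical' records: identical clinical need in both groups,
    but group B was historically referred for follow-up care less often."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        needs_followup = random.random() < 0.5          # same need in both groups
        referral_rate = 0.9 if group == "A" else 0.6    # biased historical practice
        referred = needs_followup and random.random() < referral_rate
        records.append({"group": group,
                        "needs_followup": needs_followup,
                        "referred": referred})
    return records

def learn_referral_rates(records):
    """A naive 'algorithm' that simply learns the historical referral rate
    per group for patients in need and would reuse it for new patients."""
    rates = {}
    for g in ("A", "B"):
        in_need = [r for r in records if r["group"] == g and r["needs_followup"]]
        rates[g] = sum(r["referred"] for r in in_need) / len(in_need)
    return rates

history = make_history()
print("Learned referral rates for patients with identical need:",
      learn_referral_rates(history))
# The model reproduces the historical disparity (roughly 0.9 vs. 0.6),
# even though clinical need was the same in both groups.
```

The point of the sketch is simply that the algorithm never "decides" to discriminate; it inherits whatever patterns, equitable or not, exist in the records it learns from.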
As nurse leaders, we're faced with managing and addressing these racial and systemic disparities and the critical elements that generate and perpetuate them. Within the context of healthcare research and AI, certain alerts and clinical decision support tools are developed in a fair and unbiased manner, yet latent biases can arise after implementation because of individual biases.3 Thus, it's essential to build diversity and inclusivity into the design, implementation, and evaluation of clinical applications such as alerts, decision support, and algorithms.3,4
In a recent NYS HIMSS Advocacy and Diversity Committee Collaborative Webinar, Dr. Carolyn Sun offered strategies to address bias. She encouraged promoting a diverse technology workforce to enable design for inclusion and minimize biases. Bot-enabled technology should include varied voices that represent cultural and linguistic uniqueness. Clinical systems can be trained on more inclusive data. The FAT (fairness, accountability, and transparency) conceptual model examines data and AI in healthcare across dimensions such as equity, auditability, responsibility, impartiality, accuracy, observability, and clarity. The nursing profession must examine its role, processes, and knowledge against emerging ethical frameworks that explore the opportunities and risks of structural or implicit bias potentially introduced by AI and similar innovations.4 Nurse leaders can work with technology developers, providers, and patients to evaluate AI-enabled systems through an equity framework.
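As one illustration of the fairness and auditability dimensions of the FAT model, the sketch below (again in Python, with hypothetical field names and synthetic data) checks whether an AI-enabled deterioration alert catches truly deteriorating patients at similar rates across demographic groups after implementation. It's a simplified outline of the kind of post-implementation equity audit nurse leaders might request, not a prescribed method or the tool of any specific vendor.

```python
# Conceptual sketch of a post-implementation equity audit for an AI-enabled
# clinical alert: compare how often the alert fired for patients who actually
# deteriorated, broken out by demographic group. Field names and the review
# threshold are hypothetical.
from collections import defaultdict

# Hypothetical audit extract: one row per patient encounter.
encounters = [
    {"group": "A", "alert_fired": True,  "deteriorated": True},
    {"group": "A", "alert_fired": False, "deteriorated": False},
    {"group": "B", "alert_fired": False, "deteriorated": True},
    {"group": "B", "alert_fired": True,  "deteriorated": True},
    # ...in practice, thousands of encounters pulled from the audit log
]

def sensitivity_by_group(rows):
    """Share of truly deteriorating patients the alert caught, per group."""
    caught = defaultdict(int)
    total = defaultdict(int)
    for row in rows:
        if row["deteriorated"]:
            total[row["group"]] += 1
            if row["alert_fired"]:
                caught[row["group"]] += 1
    return {g: caught[g] / total[g] for g in total}

results = sensitivity_by_group(encounters)
print(results)  # with the toy data above: {'A': 1.0, 'B': 0.5}

# A large gap between groups (checked here against a hypothetical 10-point
# threshold) would trigger review of the model, its training data, and the
# surrounding workflow.
GAP_THRESHOLD = 0.10
gap = max(results.values()) - min(results.values())
if gap > GAP_THRESHOLD:
    print(f"Equity review needed: sensitivity gap of {gap:.0%} between groups")
```

Routine audits of this kind give nurse leaders, developers, and patients a shared, transparent measure for asking whether an AI-enabled system performs equitably once it's in use.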
REFERENCES