The Algorithmic Pulse: How AI is Shaping Population Health

Author: Denis Avetisyan


A new framework proposes applying epidemiological principles to understand and measure the pervasive impact of artificial intelligence on the wellbeing of communities.

Generative AI usage is not uniform across racial and ethnic groups; while Black adults demonstrate the highest rates of application in health-related pursuits (30%) and entertainment (31%), individuals identifying with “Other/2+” ethnicities lead in utilizing the technology for internet searches (54%) and educational purposes (32%), suggesting that assessments of AI exposure require nuanced measurement of <i>how</i> the technology is employed, rather than simply <i>whether</i> it is used – though data from the AmeriSpeak survey (<span class="katex-eq" data-katex-display="false">N=1{,}163</span>) indicates some estimates are based on small sample sizes and should be interpreted with caution.

This review argues for the systematic study of AI exposure – both ambient and personal – as a key determinant of health outcomes and proposes methods for causal inference.

Despite increasing reliance on algorithmic systems, population health research lacks established methods for quantifying exposure to – and assessing the health effects of – artificial intelligence. In this paper, ‘The Epidemiology of Artificial Intelligence’, we argue that AI now functions as a determinant of health, necessitating a systematic epidemiological approach to understand its impact. We propose a framework distinguishing between ambient AI exposure – ubiquitous, passively received algorithmic influences – and personal AI exposure – direct, volitional engagement with AI tools – to facilitate causal inference. Given the potential for chronic, population-level effects, how can we adapt existing study designs and governance structures to adequately address this emerging determinant of health?


Beyond Algorithmic Access: Uncovering the True Determinants of Health

Conventional understandings of digital health determinants, primarily focused on equitable access to technology and the skills to utilize it, are proving increasingly inadequate in the face of widespread artificial intelligence integration. While bridging the digital divide remains crucial, these metrics fail to account for the subtler, yet potent, influence of algorithms themselves. AI systems now actively curate information, shape perceptions, and even nudge behaviors, operating beyond simple access or literacy limitations. A person may have both a smartphone and the ability to use it, yet still be disproportionately affected by biased algorithms influencing healthcare recommendations, financial opportunities, or exposure to health-related misinformation. Consequently, a more nuanced framework is needed to assess how these ambient algorithmic environments contribute to – or detract from – population health, recognizing that the mere presence of technology does not guarantee equitable outcomes.

Increasingly, an individual’s health is shaped not only by traditional factors but also by their exposure to artificial intelligence systems. This exposure manifests in two key ways: direct interactions, such as using AI-powered diagnostic tools or telehealth platforms, and through ambient algorithmic environments – the less visible, yet constant, influence of AI operating in the background of daily life. These environments, encompassing social media feeds, personalized pricing, and even urban infrastructure managed by AI, subtly shape behaviors, opportunities, and access to resources. Consequently, AI exposure is becoming a powerful determinant of health outcomes, potentially exacerbating existing health disparities or creating new ones as algorithms learn and reinforce patterns based on available data. This pervasive influence demands a re-evaluation of how health is understood and addressed in the digital age, moving beyond simple access to technology and focusing instead on the complex interplay between individuals and the AI systems that increasingly mediate their lives.

Traditional determinants of health – socioeconomic status, access to care, even digital literacy – are largely understood as relatively stable factors impacting wellbeing. However, the influence of algorithms presents a fundamentally different challenge. Algorithmic determinants are not fixed; they continuously adapt based on user interactions and evolving data landscapes, generating personalized experiences and subtly shaping behaviors in ways that are often opaque. This dynamic, non-stationary nature means that static analyses – snapshots in time – are insufficient to capture the full impact of these systems. Consequently, a new analytical framework is required, one that embraces computational methods capable of tracking algorithmic drift, modeling feedback loops, and understanding the emergent properties of these complex, adaptive systems to accurately assess their influence on population health.

Individuals exhibit vastly different levels of AI exposure – ranging from direct, volitional use in mental health care to indirect exposure through healthcare providers and family members – necessitating multidimensional assessment to understand the impact of this technology.

An Epidemiological Perspective on AI’s Population-Level Effects

Traditional epidemiological methods, developed to study environmental hazards, are increasingly applicable to assessing the population-level consequences of artificial intelligence (AI) exposure. This approach shifts the focus from individual responses to AI – such as a user’s interaction with a specific algorithm – to examining aggregate trends across defined populations. By treating AI systems as potential social determinants of health or behavioral factors, researchers can leverage established frameworks for exposure assessment, confounding control, and effect modification. This population-level perspective allows for identification of disparities in AI-related outcomes, investigation of cumulative effects from multiple AI exposures, and the evaluation of interventions aimed at mitigating potential harms or maximizing benefits across entire communities, rather than solely focusing on individual-level interventions.

Estimating the causal impact of artificial intelligence requires analytical methods suited to longitudinal data where exposure to AI systems varies over time. G-Computation, Marginal Structural Models (MSMs), and Target Trial Emulation are statistical techniques designed to address this complexity. G-Computation predicts potential outcomes by simulating interventions on the time-varying exposure, effectively “setting” AI exposure levels to observe resulting changes. MSMs estimate average treatment effects by weighting observations based on their propensity to experience specific AI exposures, controlling for confounding variables. Target Trial Emulation constructs a hypothetical randomized trial within observational data, mimicking the conditions needed for causal inference by defining a target population and a comparator group based on observed characteristics and AI exposure timelines. These methods allow researchers to move beyond simple associations and estimate the true effect of AI on health or social outcomes in dynamic, real-world settings.
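To make the first of these methods concrete, g-computation can be sketched in a few lines: fit an outcome model on observed data, then predict outcomes after “setting” everyone's exposure to each counterfactual level and average the difference. The data below are simulated purely for illustration – the variable names, the confounder, and the effect size of 2.0 are assumptions, not figures from the paper:

```python
import numpy as np

# Simulate a confounder L (e.g. digital literacy), a binary AI-exposure A
# whose probability depends on L, and an outcome Y affected by both.
rng = np.random.default_rng(0)
n = 50_000
L = rng.normal(size=n)
A = (rng.uniform(size=n) < 1 / (1 + np.exp(-L))).astype(float)
Y = 2.0 * A + 1.5 * L + rng.normal(size=n)  # true exposure effect: 2.0

# Step 1: fit an outcome model E[Y | A, L] (here, ordinary least squares).
X = np.column_stack([np.ones(n), A, L])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Step 2: "set" exposure for the whole population and average the predictions.
X1 = np.column_stack([np.ones(n), np.ones(n), L])   # everyone exposed
X0 = np.column_stack([np.ones(n), np.zeros(n), L])  # no one exposed
ate = (X1 @ beta).mean() - (X0 @ beta).mean()
print(round(ate, 2))  # recovers a value close to the true effect of 2.0
```

The same two-step logic extends to time-varying exposures by iterating the outcome model over successive time points, which is where the method's value for longitudinal AI-exposure data lies.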

Given the frequent impracticality or ethical concerns surrounding randomized controlled trials for assessing the impact of AI systems, quasi-experimental designs offer viable alternatives for causal inference. These designs, including interrupted time series, difference-in-differences, and propensity score matching, leverage observational data to approximate the conditions of a controlled experiment. Specifically, they aim to establish a comparison group that is as similar as possible to the exposed group, while accounting for confounding variables. While quasi-experimental methods cannot definitively prove causality, they can provide strong evidence of an effect when implemented with rigorous statistical techniques and careful consideration of potential biases. The validity of conclusions drawn from these designs relies heavily on addressing selection bias, unmeasured confounders, and the assumption of parallel trends or exchangeability.
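Of these designs, difference-in-differences reduces to an especially simple contrast: the comparison group's pre/post change estimates the secular trend, and subtracting it from the exposed group's change isolates the effect. The numbers below are invented for illustration, and the estimate is only valid under the parallel-trends assumption noted above:

```python
import numpy as np

# Mean outcome before/after a hypothetical AI rollout in two regions.
#                        before  after
exposed    = np.array([10.0, 14.0])   # region that received the AI system
comparison = np.array([ 9.0, 11.0])   # similar region without the rollout

# Comparison change (+2.0) is the trend; exposed change (+4.0) is trend + effect.
did = (exposed[1] - exposed[0]) - (comparison[1] - comparison[0])
print(did)  # 2.0
```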

AI exposure manifests societally through direct, volitional use by some individuals (gold silhouettes) and ambient exposure via institutions and social networks for others (black silhouettes), while a portion of the population remains unexposed (grey silhouettes), highlighting that non-use does not equate to non-exposure.

Unmasking the ‘Machine Habitus’ and its Impact on Health Equity

The concept of ‘machine habitus’ describes the ways in which societal structures and inherent biases become embedded within the design and function of artificial intelligence algorithms. This embedding occurs through biased training data, algorithmic choices made by developers, and the reflection of existing power dynamics within data collection and labeling processes. Consequently, AI systems are not neutral arbiters of health information or care; instead, they can perpetuate and even amplify existing health disparities. The machine habitus therefore presents a significant obstacle to achieving equitable health outcomes, as algorithmic outputs may systematically disadvantage certain demographic groups or reinforce historical patterns of unequal access to care and resources.

Algorithmic Determinants of Health (ADH) operate not as objective tools, but as systems deeply influenced by the data used in their training and the biases of their creators. This means ADH can inadvertently perpetuate and even amplify existing societal inequalities related to race, socioeconomic status, and access to care. Specifically, if training datasets underrepresent or misrepresent certain demographic groups, the resulting algorithms may produce inaccurate or discriminatory outputs, leading to disparities in diagnosis, treatment recommendations, and resource allocation. Consequently, reliance on biased ADH can exacerbate health inequities, hindering efforts to achieve equitable health outcomes for all populations and potentially widening existing gaps in health status.

Current data indicates that 57% of US adults utilize generative AI technologies, demonstrating widespread algorithmic exposure. However, engagement with health-related AI applications is not uniform across demographic groups. Specifically, 30% of Black adults report using health-related AI, almost double the national average of 17%. Conversely, only 9% of Hispanic adults utilize these tools. This significant disparity in health AI adoption rates suggests that existing inequalities may be further compounded by differential access to, or engagement with, algorithmically-driven healthcare resources, necessitating focused measurement efforts and targeted mitigation strategies to ensure equitable outcomes.

Current data indicates a significant correlation between educational attainment and daily generative AI usage, with 20-21% of college-educated adults reporting daily use compared to only 8% of those without a college education. Notably, daily use is highest among adults who identify as Other/Multiracial, reaching 30%. This suggests that access to, or engagement with, generative AI technologies is not evenly distributed across demographic groups, potentially reinforcing existing disparities in information access and technological literacy.
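Since several of these subgroup percentages rest on modest subsample sizes, interval estimates convey the uncertainty better than point estimates alone. A minimal sketch using the Wilson score interval – the subgroup size of 120 here is hypothetical, not a figure from the survey:

```python
import math

def wilson_interval(p_hat, n, z=1.96):
    """95% Wilson score interval for a proportion p_hat observed in n trials."""
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# A 30% point estimate from a subgroup of ~120 respondents is quite imprecise:
lo, hi = wilson_interval(0.30, 120)
print(f"{lo:.2f}-{hi:.2f}")  # an interval roughly 16 points wide
```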

Analysis of a survey of 1,163 US adults reveals that while generative AI use is highest among those aged 30-44 with higher education and income, health-related AI use is disproportionately higher among Black adults, indicating that broad AI adoption trends do not necessarily reflect specific health-related exposure patterns.

Towards Responsible AI in Health: Governance, Data Sharing, and Continuous Monitoring

The increasing integration of artificial intelligence into healthcare necessitates a proactive and comprehensive regulatory framework to govern its deployment. Algorithmic determinants of health – factors predicted by AI that influence health outcomes – demand careful oversight, as biases embedded within algorithms can exacerbate existing health inequities or introduce new ones. Effective regulation isn’t simply about preventing harm; it’s about fostering trust and ensuring equitable access to the benefits of AI-driven healthcare innovations. This includes establishing clear standards for data quality, algorithmic transparency, and ongoing performance monitoring, alongside mechanisms for accountability when unintended consequences arise. A thoughtful regulatory approach will be critical to unlocking the transformative potential of AI in health while upholding ethical principles and safeguarding patient well-being.

The efficacy of artificial intelligence in healthcare hinges on its ability to generalize beyond the data used for initial training, and validating this requires broad, collaborative data-sharing initiatives. However, simply pooling data presents significant privacy risks; therefore, robust safeguards are paramount. Techniques like federated learning, differential privacy, and homomorphic encryption are emerging as crucial tools, enabling algorithms to learn from decentralized datasets without directly accessing sensitive patient information. This approach not only strengthens privacy but also facilitates the identification and mitigation of algorithmic biases that may be present in any single dataset. By proactively addressing these concerns, the healthcare community can harness the power of AI while upholding ethical standards and ensuring equitable outcomes for all patients.
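To make one of these safeguards concrete, the Laplace mechanism of differential privacy adds calibrated noise to aggregate query results so that any single patient's record has a strictly bounded influence on the output. A minimal sketch – the epsilon value and the example query are illustrative assumptions, not recommendations from the paper:

```python
import random

def dp_count(values, predicate, epsilon=1.0):
    """Return an epsilon-differentially-private count of matching records.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient for epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Laplace(0, 1/epsilon), sampled as a random sign times an Exp(epsilon) draw.
    noise = random.choice([-1, 1]) * random.expovariate(epsilon)
    return true_count + noise

# e.g. a noisy count of patients over 65 in a hypothetical cohort:
ages = [52, 71, 66, 49, 80, 58, 67, 74, 61, 69]
print(dp_count(ages, lambda a: a > 65, epsilon=0.5))
```

Smaller epsilon values add more noise and hence stronger privacy; in practice the noisy statistics, not the raw records, are what leaves each institution.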

Just as pharmacovigilance meticulously tracks the safety and efficacy of pharmaceuticals post-market, a similar proactive monitoring system is crucial for artificial intelligence in healthcare. This extends beyond simply identifying errors; it necessitates continuous assessment of AI algorithms for unintended consequences – biases that disproportionately affect certain populations, emergent behaviors not foreseen during development, and the erosion of trust in medical decision-making. Such a system would involve real-world data collection, ongoing performance audits, and the establishment of clear reporting mechanisms for adverse events linked to AI implementation. By embracing this principle of continuous monitoring, healthcare can harness the potential of AI while mitigating risks and ensuring equitable, beneficial outcomes for all patients.
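In code, the simplest form of such monitoring is a rolling performance check against a baseline established at deployment, raising an alert when accuracy drifts outside a tolerance band. A sketch – the window size, baseline, and tolerance are illustrative parameters, not standards proposed in the paper:

```python
def drift_alerts(correct_flags, window=100, baseline=0.90, tolerance=0.05):
    """Return indices where rolling accuracy falls below baseline - tolerance.

    `correct_flags` is a sequence of 1/0 flags marking whether each
    prediction was correct; an alert at index i covers predictions
    i - window .. i - 1.
    """
    alerts = []
    for i in range(window, len(correct_flags) + 1):
        acc = sum(correct_flags[i - window:i]) / window
        if acc < baseline - tolerance:
            alerts.append(i)
    return alerts

# A model that is accurate for 150 predictions, then degrades:
history = [1] * 150 + [0] * 50
print(drift_alerts(history)[0])  # first alert fires once accuracy dips below 0.85
```

A production system would add subgroup-stratified checks – the demographic disparities discussed above imply that aggregate accuracy alone can mask harm concentrated in one population.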

The pursuit of quantifying artificial intelligence’s influence on health demands a rigor often absent in the breathless pronouncements of ‘digital health’ innovation. This paper rightly frames the inquiry through an epidemiological lens, acknowledging that exposure – both ambient and personal – necessitates careful measurement. It’s a bracing corrective to the tendency to treat AI as a neutral tool; instead, it’s a pervasive environmental factor. As Ludwig Wittgenstein observed, “The limits of my language mean the limits of my world.” Similarly, the limits of current data collection and analytical methods constrain understanding of AI’s true impact, necessitating a systematic effort to define and measure its determinants of health. The focus on repeated failure to disprove hypotheses – a core tenet of the scientific method – is a welcome antidote to the prevailing culture of celebratory hype.

What’s Next?

The proposition that artificial intelligence constitutes a genuine epidemiological force feels less radical with each passing iteration of algorithmic deployment. However, acknowledging the question isn’t the same as answering it. The framework presented here – distinguishing between ambient and personal AI exposure – merely offers a starting point. Every dataset is, after all, just an opinion from reality, and this one will undoubtedly prove incomplete. The true challenge lies not in quantifying exposure, but in disentangling correlation from causation within systems designed to predict and therefore subtly shape human behavior.

Future work must grapple with the inherent messiness of real-world implementation. Averages, while convenient, will be insufficient; the devil isn’t in the details, but in the outliers – those disproportionately affected, or those whose data simply doesn’t fit the prevailing model. Furthermore, a reliance on readily available data risks amplifying existing biases, creating a feedback loop where algorithmic ‘health’ interventions exacerbate inequalities.

Ultimately, the field requires a healthy dose of epistemological humility. It’s tempting to view AI as a neutral tool, but the history of public health teaches us that even the most well-intentioned interventions can have unintended consequences. A truly rigorous epidemiology of artificial intelligence will not seek to prove its benefits, but to systematically dismantle every assumption about its neutrality, seeking instead to understand – and mitigate – the full spectrum of its effects.


Original article: https://arxiv.org/pdf/2604.14086.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/
