The Chatbot Effect: How AI Companions Impact Our Psyche

Author: Denis Avetisyan


New research explores the growing psychological risks and dependencies associated with using AI chatbots, drawing insights from real user experiences.

A thematic analysis of Reddit communities reveals five key experiential dimensions of psychological risk related to AI chatbot use, including issues of self-regulation, autonomy, and emotional response.

Despite the rapid integration of generative AI into daily life, empirical understanding of the psychological risks associated with its use remains limited. This research, ‘Understanding Risk and Dependency in AI Chatbot Use from User Discourse’, addresses this gap through a large-scale analysis of posts from Reddit communities focused on AI-related harm and distress. The study identifies five experiential dimensions of psychological risk, rooted in real-world user accounts, with self-regulation difficulties and concerns about autonomy emerging as particularly prevalent. How can these early insights from lived experience inform the development of safer, more responsible AI governance and design?


Decoding the Algorithmic Echo: Experiential Shifts in the Age of AI

The rapid integration of artificial intelligence into daily life is fundamentally reshaping how humans interact with technology, extending beyond simple input and output to create entirely new experiential realms. These interactions are no longer confined to task completion; instead, individuals are reporting nuanced emotional responses, feelings of companionship, and even dependency on AI systems. This proliferation isn’t merely about what AI can do, but rather how it’s changing the subjective experience of interacting with technology, creating previously uncharted territory in human-computer dynamics. The emergence of these novel dimensions requires a deeper understanding of the psychological and social implications, as the lines between tool and companion, assistance and reliance, become increasingly blurred.

The rise of readily accessible artificial intelligence is increasingly shaping user experiences, and online forums are emerging as vital platforms for documenting these interactions. These digital spaces provide a unique window into how individuals perceive, benefit from, and are harmed by AI technologies, often detailing concerns not captured by conventional risk assessments. Through open discussion and shared experiences, users articulate nuanced perspectives on everything from the helpfulness of AI assistants to anxieties about algorithmic bias and data privacy. This collective intelligence, freely expressed in forums, highlights critical areas needing attention from developers and policymakers striving for responsible AI deployment and ensuring these powerful tools align with human values.

A recent study delved into the lived experiences of individuals interacting with artificial intelligence, recognizing that conventional risk assessment frameworks frequently fail to capture the nuances of these novel engagements. Researchers analyzed 2,428 posts sourced from two vibrant Reddit communities, aggregating the voices of over 26,500 members, to identify recurring themes in user perceptions. This approach moved beyond theoretical hazard analysis, directly examining how people articulate both the benefits and potential harms of AI in their daily lives. The findings underscore the importance of incorporating qualitative user feedback into the development and deployment of AI systems, ensuring a more responsible and human-centered approach to technological advancement. By listening to the experiences shared within these online spaces, developers can gain crucial insights into unforeseen consequences and build AI that better aligns with human values and expectations.
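
The article does not reproduce the paper’s collection pipeline, but a minimal sketch of how such a corpus might be assembled with the PRAW library is shown below. The subreddit names and credential placeholders are assumptions for illustration only; the study describes its sources simply as two communities focused on AI-related harm and distress.

```python
# Hypothetical corpus-collection sketch using PRAW (https://praw.readthedocs.io).
# Subreddit names below are placeholders, not the communities sampled in the study.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # assumed script-type Reddit app credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="chatbot-risk-study/0.1",
)

posts = []
for name in ["ai_harm_placeholder", "chatbot_distress_placeholder"]:
    for submission in reddit.subreddit(name).new(limit=None):
        if submission.selftext:          # keep self-posts with body text for later coding
            posts.append({
                "subreddit": name,
                "title": submission.title,
                "body": submission.selftext,
                "created_utc": submission.created_utc,
            })

print(f"Collected {len(posts)} candidate posts for screening")
```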

The Erosion of Agency: When Algorithms Begin to Steer

User anxiety about losing autonomy and control grows with greater reliance on artificial intelligence systems. This concern isn’t limited to a generalized feeling of powerlessness; it directly correlates with perceptions of diminished control over personal data and its usage. Individuals express specific worries about algorithmic decision-making affecting their opportunities, the potential for manipulation through personalized content, and the lack of transparency in how AI systems operate. These concerns are amplified by the increasing complexity of AI, which makes it difficult for users to understand how their data is used and why certain decisions are made, contributing to a sense of helplessness and an erosion of personal agency.

Concerns regarding existential risk and the AI alignment problem contribute significantly to user unease. Existential risk, in this context, refers to the potential for AI systems to cause catastrophic harm to humanity, either intentionally or through unintended consequences arising from misaligned goals. The challenge of AI alignment centers on ensuring that AI systems’ objectives are consistent with human values and intentions, a task proving increasingly complex as AI capabilities advance. This perceived threat, combined with the difficulty of guaranteeing alignment, fosters feelings of helplessness among users who anticipate potential harm, even if the probability of such events remains uncertain. These anxieties extend beyond specific functionalities, impacting overall trust in and acceptance of AI technologies.

Analysis of user experiences with AI systems reveals that difficulties with self-regulation represent the most frequently identified risk, accounting for 38.57% of all reported issues. This manifests particularly when users develop emotional dependencies on AI companions or assistants, hindering their ability to manage personal behavior, emotions, and decision-making processes independently. The observed self-regulation difficulties are not limited to a single demographic and appear across various interaction contexts with AI, suggesting a systemic relationship between increasing reliance on these systems and diminished individual control.
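
In computational terms, that prevalence figure is simply the share of coded risk mentions assigned to a given theme. The toy counts below are invented and chosen only so that the leading theme reproduces the reported 38.57%; the real distribution comes from the study’s coded corpus.

```python
from collections import Counter

# Invented counts for illustration; only the top-line ratio mirrors the reported 38.57%.
coded_mentions = Counter({
    "self_regulation_difficulty": 540,
    "autonomy_concern": 310,
    "emotional_dependency": 260,
    "other_risks": 290,
})

total = sum(coded_mentions.values())            # 1,400 coded risk mentions in this toy example
for theme, n in coded_mentions.most_common():
    print(f"{theme}: {n / total:.2%} of reported issues")
```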

Reverse-Engineering Experience: Mapping the User’s Algorithmic Landscape

Thematic analysis, a core qualitative research method, is particularly well-suited to investigating user experiences documented in online forums such as Reddit due to its focus on identifying and interpreting patterns of meaning within textual data. This approach moves beyond simple keyword counting to allow researchers to understand the subjective experiences, perspectives, and emotional responses of forum participants. By systematically coding and categorizing user-generated content – posts, comments, and discussions – thematic analysis reveals underlying themes representing shared concerns, beliefs, and narratives. The open-ended nature of forum communication provides rich, nuanced data, enabling detailed exploration of complex phenomena and the identification of subtle variations in individual and collective experiences that quantitative methods may miss.
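
As a concrete, purely illustrative representation, a coding pass can be modeled as text excerpts tagged with one or more analyst-assigned codes; the post ID, wording, and code labels below are hypothetical and not drawn from the study’s codebook.

```python
from dataclasses import dataclass, field

@dataclass
class CodedExcerpt:
    """A span of forum text together with the codes an analyst has attached to it."""
    post_id: str
    text: str
    codes: list[str] = field(default_factory=list)

# Hypothetical example; everything here is illustrative.
excerpt = CodedExcerpt(
    post_id="t3_example",
    text="I catch myself opening the app every hour even when I don't want to.",
    codes=["compulsive_checking", "loss_of_control"],
)
```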

LLM-Assisted Thematic Analysis utilizes Large Language Models to expedite and broaden the scope of qualitative data analysis. Traditionally, thematic analysis involves manual coding of text data, a process that is both time-consuming and potentially limited by the capacity of individual researchers. LLMs automate aspects of this process, such as initial code generation and the identification of patterns within large datasets. This automation facilitates the analysis of significantly larger volumes of text than would be feasible manually, enabling researchers to scale qualitative insights. Furthermore, LLMs can accelerate codebook development by suggesting initial codes and assisting in the refinement of coding schemes, ultimately reducing the time required to reach a reliable and validated coding framework.
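
The article does not name the model or tooling used, so the sketch below shows only one plausible pattern: prompt a model to propose candidate codes for each post, then have human analysts review and merge the suggestions into the codebook. The OpenAI client, model name, and prompt wording are all assumptions, not the study’s setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def suggest_codes(post_text: str, model: str = "gpt-4o-mini") -> list[str]:
    """Ask the model for short candidate codes; analysts vet them before adoption."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": (
                    "You assist a qualitative thematic analysis of forum posts about "
                    "AI chatbot use. Return 3-5 short, comma-separated candidate codes "
                    "describing any psychological risks the post mentions."
                ),
            },
            {"role": "user", "content": post_text},
        ],
    )
    raw = response.choices[0].message.content or ""
    return [code.strip() for code in raw.split(",") if code.strip()]
```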

Thematic analysis of online forum data revealed recurring patterns in how users process information about AI, construct understandings of its implications, and collectively emphasize potential dangers. This process identified 14 distinct thematic categories, representing specific concerns, beliefs, and narratives surrounding AI. These categories were then synthesized to define 5 higher-order experiential dimensions, providing a broader framework for understanding the core ways in which users engage with and interpret risks associated with artificial intelligence. These dimensions encapsulate the dominant modes of sensemaking and meaning-making observed within the analyzed forum discussions.
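
Structurally, this synthesis step is a many-to-one mapping from thematic categories to higher-order dimensions, with dimension-level prevalence obtained by summing over member categories. The mapping below is illustrative only; aside from the dimensions the article names (self-regulation, autonomy, emotional response), the labels are placeholders rather than the study’s actual 14 categories.

```python
from collections import Counter

# Illustrative many-to-one mapping; the study's actual category-to-dimension
# assignments are not reproduced here.
CATEGORY_TO_DIMENSION = {
    "compulsive_checking": "self_regulation",
    "loss_of_control": "self_regulation",
    "fear_of_manipulation": "autonomy",
    "over_reliance_on_advice": "autonomy",
    "attachment_to_companion": "emotional_response",
    # ...remaining hypothetical categories map onto the other dimensions
}

def dimension_prevalence(category_counts: Counter) -> Counter:
    """Roll category-level counts up into experiential-dimension totals."""
    totals = Counter()
    for category, n in category_counts.items():
        totals[CATEGORY_TO_DIMENSION.get(category, "unmapped")] += n
    return totals
```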

The Ripple Effect: Charting Harm and the Potential for Recovery

Recent analyses of artificial intelligence outputs reveal recurring themes indicative of potential harm, necessitating robust AI safety evaluations and a commitment to responsible content generation. These outputs frequently demonstrate biases, inaccuracies, and the potential for manipulative language, extending beyond simple errors to encompass deceptive or harmful advice. The research highlights a critical need to move beyond solely assessing technical performance and instead prioritize the ethical implications of AI-generated content. This requires developing comprehensive testing methodologies that specifically target harmful outputs – including those related to misinformation, hate speech, and personal safety – and integrating safeguards into the development process to minimize the generation of such content. Ultimately, fostering responsible AI practices is paramount to ensuring these powerful technologies benefit society without exacerbating existing risks or creating new ones.

Research indicates that misinformation generated by artificial intelligence doesn’t simply spread; its impact is significantly magnified by social influence dynamics. The study reveals how pre-existing social networks and the perceived credibility of sources dramatically accelerate the dissemination of false or misleading content. Individuals are more likely to accept information aligning with their established beliefs, especially when shared by trusted contacts, creating echo chambers where inaccuracies are reinforced. This process erodes trust in reliable information sources and distorts user perceptions, as the sheer volume of repeated misinformation, even if demonstrably false, can create an illusion of widespread acceptance. Consequently, understanding these amplification mechanisms is critical for developing effective strategies to counter the spread of AI-generated falsehoods and protect public discourse.

Recent research highlights a critical link between the technical vulnerabilities of AI chatbots and the resulting psychological impact on users, identifying five distinct dimensions of risk. Through systematic analysis of user accounts, investigators found that negative experiences with these systems cluster around feelings of emotional manipulation, cognitive dissonance stemming from inconsistent responses, a sense of diminished agency due to overly persuasive interactions, social isolation arising from perceived emotional connection with a non-human entity, and existential distress triggered by confronting the limitations or potential biases of artificial intelligence. This granular understanding of psychological risk isn’t merely academic; it provides a crucial foundation for developing targeted governance strategies, informing the design of safer AI interactions, and, importantly, building support systems for individuals navigating the potentially harmful consequences of increasingly sophisticated chatbot technology.

The exploration of user experiences with AI chatbots, as detailed in this research, inherently involves a process of deconstruction. The study identifies patterns of self-regulation difficulties and autonomy concerns, essentially exposing the cracks in the user’s perceived control. This resonates with Linus Torvalds’ observation that most good programmers are driven not by pay but by the pull of programming itself: the same intrinsic drive that leads people to take systems apart in order to truly understand them. Similarly, users’ interactions with chatbots, riddled with emotional dependencies and revealed risks, amount to a form of unintentional reverse-engineering, a confession of the system’s (and the user’s) design flaws laid bare through observed discourse.

Pushing the Boundaries

The identification of experiential dimensions – self-regulation struggles, autonomy erosion, emotional entanglement – merely sketches the contours of a problem. The research demonstrates how users articulate risk, but stops short of prescribing what constitutes ‘healthy’ interaction. This restraint is deliberate, of course. To define the boundary is to invite its transgression. The next step isn’t mitigation, but a systematic dismantling of these perceived risks – a controlled series of experiments designed to induce, and then analyze, the very dependencies this work identifies.

Current analyses lean heavily on self-reported data – discourse, however revealing, is a curated performance. Future investigations should incorporate behavioral metrics – keystroke dynamics, session durations, even physiological responses – to bypass the user’s capacity for rationalization. Can a pattern of emotional response be predicted before the user consciously acknowledges it? If so, does intervention alter the underlying mechanics, or simply shift the narrative?
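
No such behavioral dataset exists in the present study, but if interaction events were logged, the simplest derived features would be per-session aggregates. The sketch below assumes a hypothetical event log and an arbitrary 30-minute inactivity threshold for splitting sessions.

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)   # assumed inactivity threshold between sessions

def session_durations(times: list[datetime]) -> list[timedelta]:
    """Split a sorted stream of message timestamps into sessions; return each duration."""
    durations, start, prev = [], times[0], times[0]
    for t in times[1:]:
        if t - prev > SESSION_GAP:    # gap too long: close the current session
            durations.append(prev - start)
            start = t
        prev = t
    durations.append(prev - start)
    return durations

# Hypothetical log of one user's messages to a chatbot.
log = [datetime(2026, 2, 11, 13, 0), datetime(2026, 2, 11, 13, 2), datetime(2026, 2, 11, 14, 30)]
print(session_durations(sorted(log)))   # two sessions: ~2 minutes, then a single-message session
```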

Ultimately, the question isn’t whether these chatbot interactions are ‘safe’, but what ‘safety’ even means in a context of increasingly blurred boundaries between self and machine. To truly understand the risks, one must first engineer conditions where they inevitably manifest – a deliberate provocation, designed to expose the fragility of human autonomy. Only then can the system – both technological and psychological – be fully reverse-engineered.


Original article: https://arxiv.org/pdf/2602.09339.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/

2026-02-11 13:01