The American Psychological Association has never moved this fast on emerging technology. Its advisory on social media’s effects on adolescents took nearly a decade of accumulating evidence. But in early 2026, the APA issued a formal warning about AI digital twins after barely a year of mainstream adoption. These are virtual replicas trained on personal data to think, speak, and decide like the real person they model. That urgency alone tells a story.
Throughout 2025, digital twin platforms exploded in popularity. Companies marketed them as productivity tools, grief companions, and even stand-ins for social obligations. By January 2026, tens of millions of users had created some version of a digital self. The APA’s warning lands at a moment when most people are still figuring out what these tools are, let alone what they’re doing to our cognitive and emotional architecture.
Why the APA Acted Now
The APA rarely names specific technologies in its public advisories. When it does, the signal is clear: the psychological risks have outpaced public awareness.
What prompted the urgency wasn’t a single study but a convergence of clinical observations. Therapists across the country began reporting a new cluster of patient concerns in late 2025. Patients described confusion about personal identity after prolonged digital twin use, difficulty making autonomous decisions, and emotional disturbances from interacting with AI replicas of deceased family members. These weren’t isolated anecdotes. They represented a behavioral pattern emerging simultaneously across demographics.
The warning specifically flagged three categories of concern:
- Identity fragmentation: Users struggling to maintain a coherent sense of self when an AI version of them operates independently
- Grief disruption: Digital twins of deceased loved ones interfering with healthy mourning and acceptance
- Autonomy erosion: Gradual loss of decision-making capacity as users defer increasingly to their AI replicas
Previous tech-related APA statements focused on populations like children, adolescents, and vulnerable adults. This one applies to everyone. That shift matters.
What Makes Digital Twins Different
A digital twin isn’t a chatbot with your name attached. It’s an AI system trained on your behavioral data: communication patterns, decision logs, voice recordings, emotional tendencies. The goal is to produce a replica that can act on your behalf in digital spaces [Ctindale].
The distinction is qualitative, not just technical. A virtual assistant follows instructions. A digital twin anticipates your preferences, mirrors your personality, and generates responses that sound like you because they were modeled on you. Some platforms analyze years of emails, texts, and social media posts to build what amounts to a psychological profile rendered in code.
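To make that concrete, here is a minimal, hypothetical sketch of what such a profile might look like as a data structure. The field names and the naive feature extraction below are assumptions for illustration, not any vendor's actual schema.

```python
# Hypothetical sketch: a behavioral profile derived from a message corpus.
# Field names and features are illustrative assumptions, not a real platform's schema.
from dataclasses import dataclass, field

@dataclass
class BehavioralProfile:
    avg_reply_length: float = 0.0          # mean words per message
    common_signoffs: list[str] = field(default_factory=list)  # habitual closings
    exclamation_rate: float = 0.0          # rough proxy for expressive tone

def build_profile(messages: list[str]) -> BehavioralProfile:
    """Naive feature extraction over past messages (a real system would go far deeper)."""
    profile = BehavioralProfile()
    if not messages:
        return profile
    lengths, exclamations = [], 0
    for msg in messages:
        words = msg.split()
        lengths.append(len(words))
        exclamations += msg.count("!")
        last = words[-1].strip(".,!") if words else ""
        if last in {"Thanks", "Best", "Cheers"}:
            profile.common_signoffs.append(last)
    profile.avg_reply_length = sum(lengths) / len(lengths)
    profile.exclamation_rate = exclamations / len(messages)
    return profile

# Example: two short messages already yield a (crude) behavioral fingerprint
print(build_profile(["Sounds good, see you then. Thanks", "Let's move the deadline! Best"]))
```

Even this toy version shows the direction of travel: the more history the system ingests, the more closely its outputs track the patterns of the person it models.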
Proposed use cases range from the mundane to the unsettling:
- Attending meetings and responding to colleagues in your communication style
- Managing routine social interactions while you’re unavailable
- Preserving a loved one’s “presence” after death
- Making low-stakes decisions based on your established patterns
One commentator captured the strange aspiration driving this technology:
“My digital twin is my art, and like any artist my greatest hope should be that my art exceeds me.” [AI Policy]
That framing reveals a deep cognitive tension. When your replica exceeds you, who is the original?
The Psychological Risks Most People Miss
Privacy advocates have dominated the digital twin conversation so far, and their concerns are valid. But the APA’s warning points to something privacy frameworks can’t address: the way these tools interact with core psychological processes.
Identity fragmentation is perhaps the most novel risk. Human identity depends on a sense of narrative continuity: the feeling that your past actions, present choices, and future intentions belong to a single, coherent self. When an AI version of you sends emails you didn’t write, makes decisions you didn’t consider, and maintains relationships you didn’t nurture, that continuity fractures. Clinicians describe patients expressing a disorienting sense that their “real” self and their digital self are diverging. One researcher called it a form of cognitive dissonance without resolution.
Grief disruption cuts even deeper. Bereaved individuals interacting with digital twins of lost loved ones report initial comfort followed by prolonged inability to accept the finality of death. The behavioral pattern mirrors what psychologists call complicated grief, a condition where mourning becomes chronic because the brain never fully processes the loss. A simulation that talks like your mother, remembers your childhood, and responds with her warmth doesn’t help you grieve. It gives your brain just enough signal to delay acceptance.
Then there’s the manipulation angle. Digital twins trained on your psychological patterns could theoretically be interrogated by bad actors to discover your vulnerabilities, fears, and persuasion triggers. Research already highlights how AI systems carry biased algorithms shaped by racial, gender, and socioeconomic prejudice [Researchprotocols], and those biases would be baked into any twin built from flawed data. The risks of unrecognized biases and hallucinations compound when the system is designed to represent a real person’s judgment [Frontiers].
As one research team posed the question:
“In the quest of humanizing digital twins, how do we make sure that we do not dehumanize ourselves?” [Frontiers]
The Deeper Issue Nobody Is Naming
Most criticism of digital twins focuses on what they do to us. The more uncomfortable question is what we stop doing for ourselves.
Psychological growth depends on friction. Encountering perspectives that challenge your assumptions, sitting with discomfort long enough to learn from it, making decisions under uncertainty: these are the cognitive processes that build resilience and adaptive capacity. A digital twin, by design, eliminates that friction. It reinforces your existing patterns, validates your current preferences, and handles the situations you’d rather avoid.
Research on synthetic emotional systems suggests the effects go beyond mere convenience. Synthetic emotions don’t simply replicate human feelings; they actively reshape them [Focus on]. When people regularly interact with an AI calibrated to their emotional baseline, their tolerance for genuine human unpredictability may decrease. The messiness of real relationships starts to feel like a bug rather than a feature.
This creates what might be called a narcissism of the digital self: not clinical narcissism, but a subtle gravitational pull toward the idealized, frictionless version of you that your twin represents. Why struggle through an awkward conversation when your twin handles it flawlessly? Why sit with ambiguity when your twin resolves it instantly?
Neuroscience offers a sobering principle here: cognitive processes we don’t regularly exercise become harder to access over time. Outsource your decision-making long enough, and the neural pathways supporting autonomous choice begin to weaken. The convenience isn’t free.
What Companies Are Actually Doing
Tech companies have responded to the APA warning with varying degrees of seriousness, though a pattern emerges: most safeguards address legal liability rather than psychological harm.
Current industry responses include:
- Consent disclosures informing users when they’re interacting with a digital twin rather than a real person
- Usage limitations restricting twin capabilities in sensitive contexts like mental health or bereavement
- Data transparency requirements showing users what information their twin was trained on
These measures address important concerns. They don’t address the APA’s core worry. Knowing you’re talking to an AI twin of your deceased partner doesn’t prevent the grief disruption. Consenting to have your behavioral data modeled doesn’t protect against identity fragmentation.
The structural gap is telling: most AI ethics teams at major companies include engineers, product managers, and lawyers. Licensed mental health professionals remain rare on these boards. The result is safeguards designed by people who understand systems, not the humans those systems are reshaping.
Navigating This Thoughtfully
The APA’s guidance isn’t calling for a ban. It’s calling for awareness, and awareness starts with understanding where the psychological boundaries actually are.
Research suggests several principles worth considering:
- Reserve meaningful choices for conscious attention. Letting a twin schedule your calendar is categorically different from letting it manage a friendship. The line between administrative delegation and identity delegation matters.
- Monitor your own responses. Feelings of unreality, confusion about whether you or your twin made a particular decision, or emotional numbness after interacting with a simulation are signals worth heeding.
- Maintain unmediated spaces. Regular experiences without AI assistance (making decisions, navigating social friction, sitting with uncertainty) preserve the cognitive flexibility that digital twins can quietly erode.
- Approach grief applications with extreme caution. If you’re considering a digital twin of someone you’ve lost, mental health professionals recommend structured, limited interactions with professional support rather than open-ended ongoing relationships with a simulation.
None of this requires rejecting the technology entirely. It requires treating it as what it is: a tool that interfaces directly with your perception of self, your behavioral patterns, and your emotional life. Few technologies have ever operated at that depth.
The APA’s warning marks a rare moment where psychological science is ahead of public understanding rather than playing catch-up. The risks it identifies (identity fragmentation, grief disruption, autonomy erosion, manipulation vulnerability) aren’t speculative fears. They’re extensions of cognitive and behavioral mechanisms that psychologists have studied for decades, now activated by technology that didn’t exist two years ago.
The question worth sitting with isn’t whether digital twins are good or bad. It’s whether the version of yourself that emerges from regular use is one you’d still recognize, and whether that matters enough to set boundaries before the defaults are set for you.
Sources
- Researchprotocols - AI biases in algorithms from racial, gender, socioeconomic prejudice
- AI Policy Perspectives - The Human Demotion: digital twins as art
- Frontiers in Virtual Reality - Safe use of data in humanizing digital twins
- Focus on Business - How AI synthetic emotions reshape human emotional life
- Ctindale - AI digital twins capturing unwritten expertise