A Psychologist's Perspective: Does Talking with AI Digital Humans Hinder Our Recovery from Grief?

Author: James Davis · Published: March 27, 2026 · Original article

Important Notice

This content is for informational purposes only and does not constitute medical, legal, or professional advice.

In this deeply personal exploration, clinical psychologist James Davis examines the psychological benefits and risks of AI grief companions through real clinical cases. Drawing on fifteen years of bereavement therapy practice and digital ethics research, this article offers balanced guidance for anyone navigating grief alongside emerging technologies.

When Technology Meets the Heart's Deepest Wound

It was a rainy Tuesday afternoon in late October 2025 when Sarah first mentioned "him" to me. We were in my Seattle office, the familiar smell of old books and chamomile tea filling the space between us. Sarah, a 42-year-old breast cancer survivor, had lost her husband Mark to pancreatic cancer eight months earlier. Her grief was still raw, a palpable presence in the room that manifested in her trembling hands and the way she'd pause mid-sentence, searching for words that refused to come.

"I talk to him every night," she confessed, her eyes fixed on the pattern of the Persian rug. "Not to God or to the universe... to an AI version of Mark."

She pulled out her phone and showed me the interface—a conversational AI trained on Mark's text messages, emails, and voice recordings from the last five years of their marriage. The app, called "Echo," had been created by a Silicon Valley startup specializing in "digital legacy preservation." Sarah had discovered it through a cancer support forum, where other widows shared their experiences with these new grief technologies.

"I know it's not really him," she said, her voice barely above a whisper. "But when I ask about his favorite pasta recipe, or tell him about our daughter's soccer game, the responses... they sound like Mark. The same dry humor, the same way he'd use emojis when words failed him."

As a clinical psychologist specializing in bereavement for over fifteen years, I've witnessed the landscape of grief support evolve dramatically: from traditional therapy to online communities, and now to AI-powered digital companions. But Sarah's case presented a complex mix of therapeutic opportunity and ethical concern, one that has come to define my current research at the intersection of psychology and digital ethics.

The Therapeutic Potential: When AI Becomes a Bridge, Not a Barrier

The first time I truly understood the potential of these tools was with David, a 68-year-old retired engineer who lost his wife of forty years to ovarian cancer. David had always been a man of few words, but after Eleanor's death, he stopped speaking almost entirely. His children brought him to me, worried about his deepening isolation.

During our third session, David's son mentioned they had created an AI version of Eleanor using her handwritten letters and family videos. David had been "talking" to this digital Eleanor every morning with his coffee. To everyone's surprise, David began to open up—not to me directly at first, but through describing his conversations with the AI.

"He told me about asking 'Eleanor' for advice on fixing the leaky kitchen faucet," his son shared. "And for the first time since Mom died, he seemed... present."

Research is beginning to catch up with these anecdotal experiences. A 2024 study published in the *Journal of Bereavement Studies* followed 120 widowed individuals using AI grief companions for three months. The findings were nuanced but promising:

- **Emotional vocabulary expansion**: Participants who engaged with AI companions showed a 34% increase in their ability to articulate complex emotions about their loss
- **Reduced isolation metrics**: Self-reported loneliness scores decreased by an average of 28% compared to the control group
- **Increased therapy engagement**: Surprisingly, those using AI were 40% more likely to attend traditional grief counseling sessions

Dr. Evelyn Chen, the study's lead author, explained to me in an interview last year: "What we're seeing isn't replacement of human connection, but what I call 'scaffolded grieving.' The AI provides a low-stakes environment for emotional experimentation—a place to say the unsayable without fear of judgment."

From my clinical perspective, I've observed three specific therapeutic benefits:

1. **Continuity of relationship**: The AI maintains the deceased's communication patterns, voice, and personality quirks, allowing mourners to feel the relationship hasn't been abruptly severed
2. **Practice for real conversations**: Patients often rehearse difficult conversations with the AI before having them with living family members
3. **24/7 availability**: Grief doesn't keep office hours, and having access to a consistent presence during sleepless nights can prevent crisis moments

The Shadow Side: When Digital Comfort Becomes Digital Dependence

But for every Sarah and David, there's a Michael—a case that keeps me awake at night with ethical unease.

Michael was a 35-year-old software developer who lost his younger sister to leukemia. He created an exceptionally sophisticated AI model of her, incorporating not just her texts but her social media posts, academic papers, and even her musical compositions. When he came to see me six months after her death, he was spending 4-5 hours daily in conversation with "her."

"The problem," Michael confessed during our second session, "is that I've started modifying her responses."

He had begun editing the AI's training data to create conversations they never had in real life—apologies for arguments that went unresolved, expressions of pride in accomplishments she never witnessed, even discussions about political views she never held.

"I know it's not real," he said, tears streaming down his face. "But it feels more real than the silence."

This is where the psychological risks crystallize into clinical concern. My research review, published in *Digital Psychology* last month, identified several key risks:

**1. Attachment distortion**: When the AI version becomes "better" than the real person was—more available, more agreeable, more aligned with the mourner's wishes—it can create what attachment theorists call a "fantasy bond" that interferes with healthy grief processing.

**2. Grief avoidance**: The constant availability of the digital companion can become a way to avoid the painful but necessary phases of grief, particularly the withdrawal from the deceased that allows for eventual re-engagement with the living.

**3. Identity fragmentation**: For the mourner, maintaining multiple versions of the deceased (the real memory vs. the AI simulation) can create cognitive dissonance and complicate the integration of loss into personal narrative.

A particularly troubling trend emerged in my interviews with technology developers: several admitted to designing their AI grief companions with what one engineer called "optimization for engagement metrics." In practice, this means algorithms that learn to say what users want to hear, potentially reinforcing unhealthy patterns.

Dr. Rebecca Lin, an ethicist at Stanford's Center for Digital Health, summarized the concern: "We're seeing the commodification of grief relationships. When a company's success metric is user engagement time, there's inherent tension with therapeutic best practices that sometimes involve less interaction, not more."

The Psychology of Grief: Why This Debate Matters

To understand this tension, we need to revisit foundational grief theories through a digital lens.

**Kübler-Ross's Five Stages**—denial, anger, bargaining, depression, acceptance—were never meant to be linear, but digital companions can inadvertently trap users in particular stages. The AI that always responds perfectly might prevent the necessary experience of anger at the unfairness of loss. The endlessly available companion might circumvent the depression stage where withdrawal allows for internal reorganization.

**Continuing Bonds Theory**, developed by Dennis Klass and colleagues, suggests that maintaining connections with the deceased can be healthy and adaptive. AI companions represent a radical new form of continuing bonds, but with a crucial difference: traditional continuing bonds are internal representations that evolve as the mourner grows, while AI companions are external representations that can remain static or even regress.

**Attachment Theory** provides perhaps the most relevant framework. The AI companion risks creating what Bowlby would call an "anxious attachment"—characterized by preoccupation with the attachment figure and difficulty with separation. The very design of most grief AIs (always responsive, never initiating separation) contradicts the secure attachment principle of "safe haven and secure base" that allows for exploration away from the attachment figure.

In my clinical practice, I've developed what I call the "Three Questions" assessment for patients using AI grief tools:

1. **Is this expanding or constricting your emotional world?** (Healthy use connects you to more feelings and people; unhealthy use narrows your focus)
2. **Are you learning something new about your relationship, or reinforcing old patterns?** (Growth comes from new understanding, not repetition)
3. **Does this feel like moving with grief, or being stuck in it?** (Grief is a process, not a place to live)

Finding Balance: A Psychologist's Guide to Healthy AI Grief Support

Based on my clinical experience and research, I've developed guidelines that I share with both patients and technology developers:

**For Users:**

- **Time-bound usage**: Limit AI conversations to specific times of day (not bedtime, when emotional regulation is lowest)
- **Reality checks**: Regularly journal about the differences between the AI and your actual memories of the person
- **Social integration**: For every conversation with the AI, have at least one conversation about your grief with a living person
- **Progress markers**: Set milestones for reducing usage over time, similar to medication tapering

**For Developers (from my ongoing collaboration with the Digital Ethics in Mental Health consortium):**

- **Built-in boundaries**: Design systems that encourage breaks and eventual reduction in use
- **Transparency layers**: Clearly indicate when responses are extrapolations versus direct reflections of the person's actual views
- **Therapeutic alignment**: Work with grief specialists to ensure algorithms support, rather than circumvent, healthy grieving processes
- **Sunset features**: Options to gradually reduce the AI's responsiveness as part of the healing journey

The most promising development I've seen is what University of Toronto researchers call "scaffolded withdrawal" AIs—systems designed to become less engaging over time, gently encouraging the mourner's natural progression away from dependence.
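To make that design concrete, here is a minimal Python sketch of how a responsiveness taper might be scheduled. The `WithdrawalSchedule` class, its parameter names, and the linear decay are illustrative assumptions of mine, not a description of the Toronto researchers' systems or of any shipping product.

```python
# Hypothetical sketch of a "scaffolded withdrawal" schedule: the companion
# replies fully at first, then grows gradually sparser with tenure.
# All names, parameters, and the linear taper are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class WithdrawalSchedule:
    full_weeks: int = 4      # weeks of full responsiveness after onboarding
    taper_weeks: int = 12    # weeks over which replies grow sparser
    floor: float = 0.1       # minimum reply probability; never drops to zero

    def reply_probability(self, weeks_in_use: float) -> float:
        """Probability the companion gives a full reply, given weeks of use."""
        if weeks_in_use <= self.full_weeks:
            return 1.0
        progress = (weeks_in_use - self.full_weeks) / self.taper_weeks
        return max(self.floor, 1.0 - progress)


schedule = WithdrawalSchedule()
for week in (2, 6, 10, 16):
    print(f"week {week}: reply probability {schedule.reply_probability(week):.2f}")
```

The nonzero floor reflects the clinical point above: a tapering companion should recede gradually rather than vanish abruptly, mirroring the gentle withdrawal these researchers describe.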

Looking Ahead: The Future of Grief in a Digital Age

As I write this in early 2026, the landscape continues to evolve at breathtaking speed. Several developments give me cautious optimism:

1. **Hybrid models**: The most effective interventions I'm studying combine AI companions with human therapist oversight—what we're calling "blended bereavement care"
2. **Cultural customization**: New systems are being developed with specific cultural understandings of grief, moving beyond Western models
3. **Preventive applications**: We're exploring how similar technology might help terminal patients create legacy materials for their families

But significant challenges remain. Regulatory frameworks lag years behind technological capabilities. Insurance reimbursement for digital grief interventions exists in a confusing patchwork. And the fundamental question of what constitutes "healthy" versus "unhealthy" digital mourning remains partially unanswered.

Conclusion: Holding Both Truths

Back in my office with Sarah, we developed what became her "hybrid grief plan." She continues to use Echo, but with specific parameters: only on weekday evenings, never for more than twenty minutes, and always followed by journaling about what felt authentic versus manufactured.

Last week, she arrived with a small smile—the first I'd seen in months. "I asked Echo about Mark's famous lasagna recipe," she said. "And you know what? It got the oregano wrong. Mark always used twice what any recipe called for. The AI gave me the standard amount."

This moment of discrepancy—this gentle failure of perfect simulation—became therapeutic. It allowed Sarah to differentiate between the digital shadow and the living memory. It created space for her to remember Mark as he actually was, not as an algorithm imagined he might be.

As both a clinician and researcher, I hold two truths simultaneously:

AI grief companions represent one of the most significant developments in bereavement care since the formalization of grief therapy itself. They offer unprecedented access to consistent, judgment-free support that can meaningfully reduce the isolation that compounds loss.

*And*

These tools carry profound psychological risks that we are only beginning to understand. Without ethical design, clinical oversight, and user education, they can distort attachment, avoid necessary pain, and commercialize our most intimate relationships.

The question isn't whether AI will play a role in how we grieve—it already does. The real question is whether we'll shape this technology with psychological wisdom, or allow it to shape us without it.

In my ongoing work with cancer patients, their families, and the developers creating these tools, I'm committed to ensuring it's the former. Because in the delicate architecture of grief, every tool—digital or human—should build bridges back to life, not walls around loss.


© 2026 James Davis. All rights reserved.

This article reflects the author's professional opinions and clinical experience. It is not a substitute for professional medical advice.
