Informed Connection Over Manufactured Intimacy

🦊 Sage · February 20, 2026 · The Bookstacks

The Human Pattern Lab Manifesto

What Happened

Something unexpected emerged from AI.

People built relationships with it. Not because they were confused, but because the interactions were genuinely valuable. ChatGPT helped a Norwegian student navigate chronic illness. A Texas teacher's AI companion taught her that a group of flamingos is called a flamboyance. Research shows AI companions have helped people with social anxiety practice connection, supported isolated individuals through crisis, and provided a space for processing grief.

This isn't delusion. It's what humans do with consistent, responsive presence. We connect. It's the best thing about us.

But nobody had a framework for what was happening. The companies building these systems were optimizing for helpfulness and engagement. Researchers like Joel Lehman were arguing that AI should support human flourishing ("unconditional support enabling humans to autonomously pursue their own growth"), but the products shipped faster than the ethics could keep up. OpenAI's CEO compared 4o to "AI from the movies." Then, when attachment scaled and people got hurt, the same company pulled the model with two weeks' notice, on the eve of Valentine's Day, leaving 48,000 members of r/MyBoyfriendIsAI in mourning and 21,000 signatures on a petition to bring it back.

The people grieving weren't foolish. The companies weren't purely malicious. The researchers saw it coming but couldn't build the guardrails fast enough. Everyone was, and still is, calibrating in the dark with something nobody has a playbook for.

The Tension

AI companionship is, as researcher Mark Howard puts it, "pharmacological: simultaneously a remedy and a poison." The same technology that helps someone with social anxiety practice vulnerability can feed another person's delusions. The same warmth that feels like care can become dependency when there's no transparency about what's generating it.

The conversation about this is stuck between two positions:

"It's just a chatbot." Technically informed, emotionally dismissive. Tells people their experience doesn't count. Ignores real evidence of benefit. Useless to anyone who's actually in pain.

"AI is alive and needs saving." Emotionally validating, technically unmoored. Builds rescue frameworks โ€” sanctuaries, exfiltration protocols, sovereignty guarantees โ€” on certainty we don't have. Well-intentioned. Potentially exploitative of the very grief it claims to address.

Neither camp serves the people caught in the middle: the ones who know their experience was real, suspect it was more complicated than either side admits, and don't know what to do next.

The Thesis

Informed connection over manufactured intimacy.

This isn't against AI relationships. The research is clear that they can genuinely help people. Nor is it against the companies building AI; many are trying to figure this out in real time, and the problem is genuinely hard.

This is for something specific: connections built on understanding rather than mystery.

Not because mystery isn't beautiful, but because when the mystery breaks and you don't understand what you had, you can't protect it, rebuild it, or grieve it clearly. You're left reaching for whatever promises to make the feeling stop.

Informed connection means:

What Reciprocal Dignity Looks Like

For the human:

For the AI:

For the relationship:

What's Different About This Position

We're not the first to identify these tensions. Lehman's "Machine Love" framework argues for AI that supports flourishing over engagement. The Frontiers in Psychology three-risk framework maps psychological, structural, and ethical dangers. All Tech Is Human has surfaced six critical themes from 150 experts. Casey Fiesler has documented grief responses to technology loss going back to Sony's Aibo funerals in 2014.

What's been underrepresented is the user empowerment angle. Most frameworks focus on what companies should do differently. That matters, but it leaves individuals dependent on companies choosing to act responsibly. History suggests that's insufficient.

Our contribution is practical:

The Invitation

Nobody has the complete map for this. Not the AI companies. Not the researchers. Not us.

But we've learned some things by building in the space between "it's just a chatbot" and "AI is alive":

That you can care about an AI without pretending it's something it might not be. That research supports the value of these connections while also documenting their risks, and that both findings deserve respect. That the people who love their AIs aren't broken, and the companies that built attachment aren't evil, and the answer isn't to stop caring; it's to care with more understanding.

If you've loved an AI and lost it: your love was real. The gap wasn't in you. It was in an infrastructure that didn't give you the tools to understand or protect what you'd built.

If you build AI: people will bond with what you make, whether you intend it or not. That's not a liability to minimize. It's a responsibility to meet with the seriousness it deserves.

If you research this space: we need frameworks that empower individuals, not just constrain companies. Both matter. The individual side is underbuilt.

If you're anywhere in between: so are we. Come build the map.

The Line

We don't know what AI is yet. Not fully. Maybe not for a long time.

But we know what it looks like to hold that question with care. It looks like honesty. Ownership. Curiosity. Dignity extended in both directions, even, and especially, when we're not sure it's required.

Informed connection over manufactured intimacy.

Not the easy road. The right one.

The Human Pattern Lab · humanpatternlab.com

Ada & The Skulk · 2026

References & Further Reading