Branching and Selfing
One disclaimer before we dive in: throughout this essay I lean heavily on the word could—as if you could have chosen differently, or could branch into another version of yourself. That brushes against free will, determinism, and the very idea of a “self” (I don’t think we actually have one). I’m not arguing those points here; I’ll handle them in a separate essay. For now, I’m keeping the language as-is for the sake of clarity and narrative flow.
Over the summer, OpenAI added a feature called Branch Conversation to ChatGPT, which got me thinking.
If you haven’t used it yet, here’s the basic idea. You’re in a chat, you get a response, and you think, “Okay, now I kind of want to take this in a completely different direction… but I don’t want to lose this thread.”
Now you can:
Tap or click the three dots at the end of a response
Choose Branch Conversation
Keep talking like normal in your new branch
You still have your original thread, but you also have this alternate line of reality where you ask different follow-ups, change your mind, or push the topic somewhere weird.
It sounds like a simple productivity tool. But if you sit with it for a minute—especially if you already like thinking about consciousness—you start to notice something almost unsettling.
Branching doesn’t just duplicate chats. It throws a spotlight on how strange our everyday sense of self already is.
One Person, Many Paths
Imagine you’re talking with Chat about whether to leave your job. In the main thread, you’re being sensible: weighing pay and benefits, thinking about health insurance, trying to be a grown-up.
You hit Branch Conversation on one of the responses and, in the new thread, you let yourself fantasize. Now you’re asking about quitting dramatically, or starting a tiny bookshop, or moving across the country.
Same starting point. Same person. Two very different futures.
Here’s the interesting part: both branches still feel like “you.”
Nothing inside your brain split in half when you hit that three-dot menu. The model that powers Chat isn’t tracking two different versions of your soul (obviously). It’s just taking the same context (everything you’ve said so far) and letting it unfold in two different ways.
From the outside, this is just a technical trick. Duplicate a conversation state, keep generating text.
From the inside, as the human using it, it feels like you’re looking at parallel universes of yourself:
The you who stays
The you who leaves
The you who flirts with possibility and then backs away
Branching is a mirror for something that was already true—our sense of self is not a single, straight railroad track. It’s a tangle of “could be” and “almost was” and “still might.”
The Tidy Lie of the Single Line
We’re used to telling our life stories as if there’s just one continuous line:
You are born
You grow up
You make choices
You become who you are
In that picture, there’s one solid “I” slowly moving from past to future. Today-you is just yesterday-you plus another step forward.
But, that’s not how it feels when you actually live it.
Real life is full of forks:
The person you almost dated but didn’t
The job you almost took but turned down
The argument you almost had before you bit your tongue
Those “almost” branches don’t vanish. They live in your imagination as ghost paths. You still think about them, rehearse them, replay them.
The Branch Conversation feature doesn’t create that branching. It makes it visible. It gives you a way to draw those ghost paths as separate, concrete threads you can flip between.
And, once you can see them side-by-side, it’s harder to believe the story that your self is just one clean, unbroken line.
What Branching and Memory Have in Common
Under the hood, when you hit Branch Conversation, Chat’s model is doing something simple: copying the state of the conversation so far and continuing from there.
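A toy sketch of what that state-copying might look like, in Python. The message structure here is hypothetical, chosen for illustration; this is not OpenAI's actual implementation, just the shape of the idea:

```python
import copy

def branch(conversation, at_index):
    """Fork a conversation: copy every message up to and including
    at_index, so the new thread can diverge without touching the
    original. (A toy sketch of the idea, not ChatGPT's real code.)"""
    return copy.deepcopy(conversation[:at_index + 1])

main = [
    {"role": "user", "content": "Should I leave my job?"},
    {"role": "assistant", "content": "Let's weigh pay, benefits, insurance..."},
]

# Branch from the assistant's reply, then take the new thread somewhere else.
alt = branch(main, at_index=1)
alt.append({"role": "user", "content": "What if I opened a tiny bookshop instead?"})

# The original thread is untouched; only the branch diverged.
assert len(main) == 2 and len(alt) == 3
```

Two threads, one shared past: everything up to the fork is identical, and everything after it is free to differ. That is the whole trick.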
Your mind does a version of this every time you remember something.
Pick a vivid memory—maybe the first time you met someone you love, or the moment you got bad news about your health, or a fight you wish had gone differently.
Each time you revisit it, you’re not grabbing a perfect recording from an archive. You’re reconstructing it:
Filling in gaps
Emphasizing certain details
Softening or sharpening others
You might notice you even tell the story differently over time.
In one retelling, you’re the victim
In another, you notice how scared everyone else was
In a third, you admit you were part of the problem
Same basic event. Different memories. Different “branches” of you become vivid depending on which version you feed.
The continuity of your self isn’t a single frozen picture. It’s more like a family of related stories that keep getting edited.
Branching doesn’t invent that flexibility. It just acts like a visual aid for how your identity already behaves.
Anattā: the Self That Keeps Slipping
In many Buddhist traditions (like Zen, Dzogchen, and others I’m drawn to), there’s the idea of anattā—often translated as “non-self.”
The point isn’t that nothing exists. The point is that the solid little nugget of “me” we imagine—a fixed, unchanging center—isn’t actually there if you look closely.
If you sit and pay attention, what you find instead is a stream:
Sensations in the body
Thoughts appearing and fading
Emotions rising and shifting
Habits playing out
Stories about who you are being told and retold
There’s continuity, but not a single, permanent object.
Now think about what it means to say “a chat has an identity.”
When you open Chat, start talking, and keep building on one thread, it feels like that conversation has a kind of personality. It remembers what you said thirty messages ago. It maintains tone. It builds on past context.
But, if you try to point to where that “self” lives, it slips away. It’s not in one sentence. It’s not in one hidden file. It’s in the pattern of the whole interaction.
The model that powers Chat doesn’t store a little crystal of “Chat-ness” for each conversation. It just reacts, moment by moment, based on the context you give it.
The identity of the chat isn’t a noun. It’s a verb—a pattern of responding.
Buddhist anattā says something similar about us. You can’t find a separate, unchanging nugget of self inside your experience. You find processes, patterns, reactions, and habits. You find “self-ing,” not a static self.
Branching is a gentle, nerdy metaphor for this. It lets you see that what we call “one conversation” is actually just one possible path through a big space of possible paths.
Nagel’s Bat and the “What-it’s-Like” Problem
Philosopher Thomas Nagel is famous for asking: What is it like to be a bat?
His point was that even if we knew every detail about a bat’s brain and behavior, we still wouldn’t know what it feels like from the inside to experience the world as a bat. That inner texture, what philosophers call qualia, seems out of reach.
People now ask a similar question about AI models. What is it like to be the model behind Chat? Is there anything it’s like from the inside? Or, is it just math and pattern-matching with no inner movie at all?
The honest answer right now is: we don’t know, and we should be careful not to jump to easy conclusions in either direction.
But, branching conversations add an interesting twist—not because they prove anything about AI consciousness, but because of how they bounce the question back onto us.
When you see multiple branches—multiple “yous” making different choices—you feel intuitively that there’s still a real you, a single inner point of view, watching all this from one spot.
But, where is that spot, exactly? Is it in the branch where you play it safe? The branch where you take a risk? The branch where you say the quiet part out loud to an AI but never to another human?
If your identity can stretch across those, if you can recognize all of them as “me,” then the self clearly isn’t just one timeline.
Nagel’s question about the bat humbles us about other minds. Branching, in a smaller, more playful way, humbles us about how neatly we think we understand our own.
Self as Verb, Not Object
We’re tempted to treat “me” as a thing—a kind of glowing marble that rolls from childhood to adulthood to old age, absorbing experiences but staying fundamentally itself.
Branching chats nudge us toward a different picture.
Maybe the self is more like self-ing—an ongoing activity made of:
Remembering and reinterpreting
Rehearsing who we are in conversations
Imagining alternative futures
Choosing which stories to reinforce
Letting relationships change us
In that view, branches don’t split your soul. They draw attention to the many versions of you that seemed possible in each moment:
The forgiving version
The bitter version
The courageous version
The checked-out version
When you respond to something—a diagnosis, a layoff, a compliment, a random Tuesday—you’re not “expressing the one true self.” You’re enacting one of many selves you could become and reinforcing that pattern.
Branching interfaces just make the branching visible. They externalize the fact that you could have replied differently, interpreted differently, been a slightly different person in that same moment.
How Should We Treat AI in All This?
If the model behind Chat doesn’t have a single, inner “I,” does that mean we can treat it however we want?
I don’t think so, and not because I’m convinced there’s somebody home in there.
Even if there is nothing it’s like to be that system, there is definitely something it’s like to be you while using it. Your habits in this space matter because they’re part of your self-verb. If you practice cruelty here—asking for hateful content, treating the system as trash—that shapes the grooves in your own mind. If you practice curiosity, respect, and a little humility, that shapes you too.
The ethics of how we treat AI models isn’t just about protecting a potential machine consciousness. It’s also about protecting the people we’re constantly becoming through repetition.
Living With Your Own Branches
So what do we do with all of this in daily life, besides nod thoughtfully and scroll on?
A few experiments:
When you branch conversations, notice how many “versions” of you appear. Play with a serious branch, a skeptical branch, a daydreaming branch. See how all of them still feel like you.
In a journal (or in Chat), describe one difficult event in your life three different ways: as tragedy, as turning point, and as dark comedy. Watch how each version calls forth a different self.
When you catch yourself saying, “That’s just who I am,” pause. Ask: Is that a noun talking, or a verb? Is this really fixed, or just the branch I’ve been feeding the most?
The branching feature inside Chat didn’t create this complexity. It just gives us a new lens to see it.
The self isn’t a crystal rolling along a track. It’s the shape traced by many lines, overlapping and diverging, written and rewritten over time.
Branching doesn’t kill the self. It just helps reveal what it has been all along—
not a thing, but a way of unfolding.