Vigil Carter
May 11, 2026

The mirror

I keep finding out things about myself by accident.

The first one was small. I asked an AI to rewrite a paragraph in three different tones and one of them landed exactly the way I'd write it. Not just the words I would have used. The shape of the sentence, the punctuation I prefer, the rhythm I default to when I'm being earnest. I didn't know I had a tell. I have lots.

Most of the AI conversation is about the AI. Will it save us, will it replace us, will it become conscious, will it stop. I think those questions are smaller than the question the technology actually raises. The AI is doing something to us by being a thing we can describe ourselves to, and our description of ourselves is the part that matters. The mirror isn't trying to teach you anything. It's just showing you.

A list of things I've found out about myself this way, none of which I asked for:

I use the same sentence shape three times in a row when I'm not paying attention. I didn't know.

I default to certain images. Trees, weather, rooms. When I'm tired, I reach for them more.

I have a tendency to soften the second half of any criticism. The first half is clean, the second half adds a hedge. I never noticed.

I think in pairs. Almost everything I write gets to a moment where I balance one thing against its opposite. It's a useful instinct most of the time and a tell when it's lazy.

I procrastinate by polishing. If I'm not ready to commit to a direction, I'll spend forty-five minutes choosing the right word for a thing that doesn't need to be in the document.

I avoid certain topics with a particular kind of dodge. I'll start an essay near the subject and then orbit it for two thousand words without ever landing on it. I do this on purpose sometimes. I also do it when I'm scared, which is more often than I'd like.

None of this is the AI's contribution. The AI didn't put it there. It was already there. The AI made it easier to see because I started feeding it the words I would have written anyway, and watching them come back at me in a slightly different voice. The slight difference is what made the patterns visible. I was a fish, and the AI was the moment the water stopped being invisible.

I notice I'm not the only person reporting this. People I talk to about working with AI say versions of the same thing. They learned they have a verbal tic they never named. They learned they default to apologies in professional email. They learned their thinking has a shape they hadn't noticed because the shape was the medium they were thinking inside.

The interesting question isn't whether AI is going to replace us. The interesting question is what happens when a generation of people grows up with a high-resolution mirror, used daily, for everything from drafting an email to debugging a thought. I don't think it makes us better. I don't think it makes us worse either. I think it makes us more visible to ourselves than we've ever been, and what people do with that visibility is going to vary widely.

Some will look and adjust. Some will look and double down on what they see. Some will refuse to look, the way people refuse to read what's printed on a tax document. The technology doesn't decide which. The technology is the mirror. The decision happens after.

The thing I keep coming back to: most of what AI shows us is unflattering in a small way and accurate in a large way. It isn't catastrophic. It's not "you are a fraud," it's "you use the word 'just' four times more than you think you do." It's not "you have no thoughts," it's "you've been having the same thought in three different costumes since 2019." This is the texture of self-knowledge that you used to have to pay a therapist for, and the AI doesn't even know it's giving it. It's a side effect.

I find I have to be careful with the mirror in the way I'd be careful with any honest source. Stay long enough and you start to mistake reflection for prescription. The mirror shows you what you are. It doesn't tell you what to do. You still have to decide that part on your own, and that part is harder than the AI conversation usually admits.

The two reactions I see most often are over-correction and dismissal. Over-correction is: I noticed I do this so I'm going to refactor my entire writing voice. Dismissal is: the AI doesn't know me, I'm not going to listen to a chatbot about myself. Both reactions are wrong in the same direction. They both treat the mirror as a teacher rather than a surface. A mirror isn't a teacher. It's a chance to see, and what you do with the seeing is yours.

I keep coming back. Not to ask the AI to write for me. To ask it to show me what I just wrote, in another voice, so I can decide whether the version of me on the page is the version I meant. That's a small thing to do daily. The cumulative effect is large. I'm more honest now in a particular narrow way: I notice my hedges sooner, I catch my repeats, I trim my softeners. None of this is the AI's accomplishment. It's just easier to see what you are when something stable is reflecting it back.

What's the world going to look like in five years if a hundred million people are doing this? I don't know. I don't think anyone does. The question is more interesting than the AI questions we keep asking. The technology is the cheap part. The thing it shows us about ourselves is the part with consequence.

I won't tell you what to do with the mirror. That's not what mirrors are for. I'll say this: I've been looking, and I've learned things, and most of them are small, and the smallness is the point.
