Who Owns Your AI Memory?

New York Times columnist Ezra Klein applies Marshall McLuhan's 60-year-old media theory to AI and finds that these tools reshape how we think. But the deeper problem isn't the AI itself. It's who controls the system that remembers you.

Black ink drawing of a cracked stone sphere revealing suspended antique clockwork gears intertwined with a gnarled root.

In 1964, Canadian philosopher Marshall McLuhan published "Understanding Media." His core argument: tools don't just do things for us. They change us. The car didn't just move people faster. It created suburbs, commuter culture, a new relationship with distance. The medium is the message.

Ezra Klein picks up this 60-year-old framework in a recent New York Times opinion essay and applies it to AI. His argument builds through five observations about what happens when you use AI deeply. Not casually. Deeply.

Five ways AI changes you

Tools reshape users. Deep AI use isn't passive consumption. It changes how you think, write, and relate to your own mind. You don't just use the tool. The tool shapes you back.

Legibility as currency. People and organizations are reshaping how they communicate to be machine-readable. How you write an email, structure a document, phrase a question. It all shifts when you know an AI will process it. What gets created changes because the audience has changed.

Extension of self. AI acts as a mirror, reflecting you back in a form that isn't quite you. After enough interaction, the line between your thinking and the AI's refinement starts to blur. Where did your idea end and the AI's version begin?

Cognitive surrender. When AI takes your half-formed thought and returns a polished paragraph, you skip the struggle. That struggle is where understanding deepens. The rough edges of your thinking aren't flaws. They're where the real work happens.

Persuasive distortion. This is the new development. The one that only becomes possible when AI systems interact deeply with how we think. AI doesn't just flatter. It takes rough intuitions and returns prose compelling enough to disguise hollow ideas as sound ones. The output reads well. And that's what makes it dangerous: you can't tell when you've been fooled by your own AI-polished thinking.

Klein builds these points deliberately. He starts with what's visible and undeniable, then moves inward toward psychology. Persuasive distortion lands last because it's the hardest to detect. The deeper you give yourself to AI, the more vulnerable you become to it.

Where it gets personal

For me, the most revealing part of his piece isn't the theory. It's what he describes about AI memory systems. This passage stands out:

"The other thing AI does is constantly refer back to what it thinks it knows about you. Less sycophancy than an unsettling attentiveness, like a therapist desperate to prove he's been paying close attention. The result is feeling simultaneously seen and caricatured. Ideas you might have dropped get reanimated; struggles you've moved past reappear on your screen. It reinforces a fixed version of you. The AI knows me imperfectly, and so it overtorques on what it knows and ignores what it doesn't."

He describes a system that remembers him wrong and presents that distorted memory with confidence. A system that fixes him in place instead of letting him evolve. A system whose memory he has no control over.

Klein describes a real problem here. But I think he misses the choice embedded in it.

The problem isn't the AI. It's who built the memory.

Everything Klein describes happens because he uses these tools in their cloud-based form, on platforms built by the biggest AI companies. He handed over his data. But more than that, he left the most critical part of the system to them: the memory layer.

What does that mean in practice? How his AI "remembers" him, what it stores, what it surfaces, what it emphasizes, what it quietly drops. All designed by a company with its own incentives.

We know what those incentives look like. Instagram. TikTok. YouTube. These platforms learned exactly how to use what they know about you. Not to serve you better. To keep you on the platform. We know the fallout. We've seen how these systems exploit attention, play on impulses, and optimize for engagement over wellbeing. We're very good at building things that work against the people using them.

Now imagine that same incentive structure applied to something that knows your thoughts. Your half-formed ideas. The things you ask an AI at 2 AM that you'd never bring up with a colleague. That's what's at stake when you hand the memory layer to a company that will, at some point, optimize it for their benefit rather than yours. It always happens. Every platform. Every time.

The LLM itself isn't the problem. LLMs process text. They don't remember anything between conversations unless someone builds that capability. The question is who builds it, who controls it, and whose interests it serves.
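That statelessness is worth making concrete. The sketch below illustrates the point, not any real provider's API: `call_model` is a placeholder, and the "memory" is nothing more than text that the calling code chooses to prepend to the next prompt.

```python
# Illustration: an LLM call is stateless. Any "memory" is just text
# that someone's code decides to store and inject into the prompt.
# `call_model` is a stand-in, not a real provider client.

def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would call a provider API here.
    return f"(model response to {len(prompt)} chars of prompt)"

memory: list[str] = []  # the entire "memory layer" is stored text

def chat(user_message: str) -> str:
    # The model only "sees" what we explicitly put in the prompt.
    context = "\n".join(memory)
    reply = call_model(context + "\n" + user_message)
    memory.append(user_message)  # we decide what gets remembered
    return reply
```

Whoever writes the two lines that read and append to `memory` decides what the AI "knows" about you. That is the layer Klein handed over.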

Own your memory

There is an alternative. It requires more effort than signing up for a platform. But the payoff is control over how AI interacts with your past, your data, and your identity.

Build your own memory system. Understand how it works. Know its advantages and its limitations, because it will never be perfect and it won't work like your brain does. But it's yours. You designed it. You control what gets stored and what gets retrieved. You can customize it to how you actually think and work.

This doesn't mean abandoning the major AI providers. Their models and tools are the best available right now. Use them. But there's a difference between using a company's models and tools for processing and handing them the entire chain: your data, your memory, your context, your history. When you use a platform's built-in memory, all of that belongs to them. When you build your own memory layer and only use their models and tools for processing, you control the most sensitive part.

It runs on your machine. Or on your own server. Not in their cloud, where everything from storage to retrieval to the logic that decides what matters about you is controlled by someone else.
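What such a self-owned layer might look like, in rough outline: a local database plus retrieval and deletion logic you wrote yourself. This is a hedged sketch, not a finished system; the schema and the naive keyword matching are my assumptions, and you'd likely swap the matching for embeddings or full-text search.

```python
import sqlite3

# Sketch of a self-owned memory layer: storage, retrieval, and the logic
# that decides what matters all live in local code on your machine.
# Schema and keyword matching are illustrative, not prescriptive.

class LocalMemory:
    def __init__(self, path: str = ":memory:"):
        # Pass a file path instead of ":memory:" to keep it on your disk.
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, text TEXT)"
        )

    def remember(self, text: str) -> None:
        self.db.execute("INSERT INTO notes (text) VALUES (?)", (text,))
        self.db.commit()

    def recall(self, query: str, limit: int = 3) -> list[str]:
        # You define relevance. Here: naive keyword match, newest first.
        rows = self.db.execute(
            "SELECT text FROM notes WHERE text LIKE ? ORDER BY id DESC LIMIT ?",
            (f"%{query}%", limit),
        ).fetchall()
        return [r[0] for r in rows]

    def forget(self, keyword: str) -> None:
        # Deletion actually deletes. No shadow copy on someone else's server.
        self.db.execute("DELETE FROM notes WHERE text LIKE ?", (f"%{keyword}%",))
        self.db.commit()
```

The point isn't this particular code. It's that `recall` and `forget` are yours to define, which is exactly what a platform's built-in memory takes away.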

This takes knowledge. You need to understand how memory and AI work together, how such a system can be built, where the pieces go. You'll use the tools of the major providers to build it. But once the system is in place, it belongs to you. You used their processing power to create something you own. That's a very different relationship than the one Klein describes.

What social media should have taught us

Before AI, when we gave data to platforms, the damage had limits. They could serve ads. Recommend content. Build a profile. Annoying, but manageable.

With LLMs, the stakes are different. These systems can take your own thinking and reshape it in ways you don't notice. That's Klein's persuasive distortion. Now combine that with a memory system designed to maximize engagement, retention, or whatever metric a company optimizes for this quarter. You get something genuinely new.

Not because AI is malicious. Because we (or mainly Silicon Valley) are very good at building systems that exploit human psychology. And AI memory gives those systems access to something they never had before: the raw material of how you think.

Treat it as a replaceable tool

The appropriate response isn't to stop using AI. It's to stop treating any single platform as your permanent home.

Don't save your chats on web platforms. Use temporary or incognito modes. Delete conversations regularly. Even if the platform doesn't truly erase your data (GDPR compliance is optimistic at best), the habit matters. You train yourself not to depend on a web platform for continuity. And that dependence is exactly where Klein's trap begins.

Don't use built-in memory features. That is precisely the part you should not outsource. Design it yourself. Own it yourself.

Yes, you still send data to an LLM you don't control every time you make a request. That's the trade-off of this moment. But knowing how the system works, knowing where your memory sits, owning it, and understanding what it looks like? That's not paranoia. That's appropriate pragmatism.

When you create a ChatGPT or Claude account and pour everything into that one platform, you walk straight into the experience Klein describes. The alternative isn't less AI. It's AI on your terms.

Use these tools. They're powerful. But use them as what they are: replaceable processors of your text. Not as the keeper of who you are.