Text Is All You Need

LLMs are language models. They process text. This simple fact has real consequences for how you store knowledge, create documentation, and choose file formats. Turns out, everyone who hated PowerPoint was right all along.

Some files just work. Others don't.

Start working with AI tools regularly and a pattern shows up fast. You drop a Markdown file into a conversation and the AI understands it completely. Structure, content, context. You drop in a PowerPoint and you get fragments and guesses.

The difference isn't the content. It's the file format. And it makes sense once you think about it: Large Language Models process language. Text is their native format. Not slides, not formatted documents, not pixel-based diagrams. Text.

The format hierarchy

The closer a format is to plain text, the better AI works with it.

Works perfectly: Markdown, plain text, CSV, code files. The model reads them directly with full understanding.

Works okay: JSON, YAML, PDF (if text-based, not scanned).

Works poorly: Word documents. Content wrapped in formatting markup that adds noise and breaks structure.

Works terribly: PowerPoint. Content fragmented across slides with no document flow. Visual layout means nothing to a text model.

Markdown hits the sweet spot. It adds just enough structure (headings, lists, emphasis) while staying completely readable as text. It's also what AI tools write back to you. When Claude responds with formatted text, it's writing Markdown. There's a reason for that.
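To see how little it takes to recover that structure, here's a minimal sketch in plain Python (stdlib only, with a made-up sample document): a single regular expression pulls the full heading outline out of a Markdown string. The structure isn't stored in some binary container next to the text; it is the text.

```python
import re

# A hypothetical status note, written as Markdown.
markdown = """\
# Project status

## Done
- Exported raw data to CSV
- Ran the analysis

## Next
- Turn insights into decisions
"""

# A heading is one or more '#' characters, a space, then the title.
outline = [
    (len(m.group(1)), m.group(2))
    for m in re.finditer(r"^(#+) (.+)$", markdown, re.MULTILINE)
]

for level, title in outline:
    print("  " * (level - 1) + title)
```

Running it prints an indented outline: "Project status" at the top level, "Done" and "Next" nested beneath it. Try doing that to a .pptx file with ten lines of code.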

Diagrams as text

This is where it gets interesting. Mermaid is a syntax for describing diagrams as plain text:

graph LR
    A[Raw Data] --> B[CSV Export]
    B --> C[AI Analysis]
    C --> D[Insights]
    C --> E[Visualizations]
    D --> F[Decisions]

An LLM reads this and understands every connection, every node, every direction. Compare that to a Visio diagram exported as PNG. To a human, the image might look nicer. To an AI, it's pixels. The relationships and structure are locked inside an image a language model can't reason about.

A text-based diagram is something AI can read, create, modify, and reason about. Need to document how system components connect? Ask AI to generate a Mermaid diagram. Want to update it? The AI edits a few lines of text instead of fighting with a visual editor. And it's efficient: 30 lines of Mermaid carry more usable information than a screenshot that consumes far more of the context window.
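To make "machine-readable" concrete: a short stdlib-Python sketch can extract every edge from the Mermaid graph above with one regular expression. (The regex below is deliberately minimal; it handles only the simple A[Label] --> B arrows used in this example, not the full Mermaid grammar.)

```python
import re

# The Mermaid source from above, as a plain string.
mermaid = """\
graph LR
    A[Raw Data] --> B[CSV Export]
    B --> C[AI Analysis]
    C --> D[Insights]
    C --> E[Visualizations]
    D --> F[Decisions]
"""

# Match "A[optional label] --> B" arrows: node id, optional [...] label,
# the arrow, then the target node id.
edge = re.compile(r"(\w+)(?:\[[^\]]*\])?\s*-->\s*(\w+)")

edges = [m.groups() for m in edge.finditer(mermaid)]
print(edges)  # every connection in the diagram, as data
```

Ten lines of code turn the diagram into a list of (source, target) pairs. The same PNG would need OCR, shape detection, and arrow tracing, and you still couldn't trust the result.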

The PowerPoint problem

PowerPoint was never meant to store knowledge. A slide deck is designed for a specific moment: someone standing in a room, talking through visuals. The actual knowledge lives in the speaker's head.

Six months later, someone finds the deck on a shared drive. Twelve slides of bullet fragments and charts without context. What was the decision? The slides don't say.

AI makes this problem measurable. Feed a slide deck into an LLM and ask for a summary. You'll get fragments. Feed it the meeting notes as a text file and you get something useful. Same meeting. Different format. Completely different result.

Why this matters beyond AI

How you store information determines how useful it can be later. A plain text file from 1990 opens fine today. Try that with a Lotus 1-2-3 spreadsheet. Text is searchable, versionable, composable. And now it's AI-readable too.

Sometimes the formatting is worth it. A spreadsheet needs to be a spreadsheet. But for documentation, notes, plans, decisions, project status? Text. Just text. It's what language models need because it's what language is.

Everyone who kept their notes in plain text while colleagues fought with SharePoint templates was accidentally preparing for the AI era. Sometimes the simple choice turns out to be the right one for reasons you didn't expect.