AI-Leverage

2026·04·02 · Essay

What changes when AI enters real work.

Most people I talk to have used AI a lot, in the abstract. They have asked it to explain things. They have had it write emails it felt silly to send. They have had it help them think through a decision. They have had it make a list of ten ideas for a business they have not started.

This is not nothing. It is how most people got their first real sense of what the models can do. But it is a specific mode of use, and it has the odd property of feeling like proficiency without being proficiency for the thing that actually matters. The difference is between AI in the abstract and AI inside real work, and it is worth being careful about what changes when you cross from one to the other.

AI in the abstract has a specific feel. You open a chat window and you think of something to ask. What you ask is a well-formed, polite request for information or a piece of writing. You get an answer. The answer is usually good. You thank the model for no particular reason and you close the window. Later that afternoon, you do your actual work, which is a different kind of thing — a proposal, a lease review, a diagnostic call, a budget, a sales follow-up, a code review, a schedule rebuild, a quarterly plan. The AI stays in the other window.

The reason the AI stays in the other window is that real work is almost never well-formed. Real work is half a thought and four open tabs and a memory of a conversation from last Tuesday and a constraint someone mentioned once that has to be honored. Real work assumes context that lives partly in your head and partly in three different documents you have not looked at since February. Real work is interrupted. Real work has politics. Real work has one thing that is supposed to happen next and three things that are about to happen next.

Bringing AI into that is not the same problem as asking it a clean question. It is the problem of how to give a model enough of the context that lives in you and in your systems so that it can help, without spending so much time building the context that you might as well have done the thing yourself.

This is the real question of AI in real work: not what can the model do, which is usually a lot, but what is the minimum viable setup that lets it help on this specific task, today, without breaking the rest of my day.

Once you see this, a few things become obvious.

The first is that the bottleneck is not model capability. It almost never is, for the kinds of tasks people bring to it. The model can already do the thing, usually. The bottleneck is the distance between the state you are in when you sit down to work and the state where the model has enough of a handhold to actually help. Closing that distance is a design problem — the design of your workflow, your files, your projects, your habit of keeping context where it can be reached. The model sits on the other side of that design problem, waiting.

The second is that working with AI in real work quickly becomes emotional. This sounds like a soft observation but it is a practical one. A workflow you have been holding together by willpower, with too much of it living in your head, is not a workflow that an AI can just slot into. It is a workflow that has to be redesigned first — which means letting go of the private mental architecture that made the old workflow possible, and trusting that the redesign will hold. That transition has a feel to it. It is uncomfortable. People resist it and cannot always tell why. The resistance, when you look at it honestly, is almost never about the AI. It is about the redesign, and what the redesign asks of them.

The third is that once the first real workflow has been redesigned with AI inside it — really inside it, not just summoned for occasional help — the whole relationship to the work shifts. Not because the AI is doing anything magical; it is doing exactly the things it could always do. What shifts is that the work no longer asks you to hold all of its context. The model holds some of it. Your files hold some of it. Your summaries and logs hold some of it. The part you have to hold is the part that actually needs your judgment, and nothing else.

The change, from the inside, feels like the day getting lighter. Not shorter; the hours are about the same. But the cognitive overhead drops, and the fraction of the day that is actually doing the real thing rises. This is the thing people mean when they say AI changed how they work. They do not mean their tools changed. They mean the overhead dropped enough that they got their actual work back.

The fourth thing that becomes obvious — and it is worth naming because nobody says it clearly in the marketing material — is that the first week of a redesigned workflow is usually worse than the old one. You are learning the new shape. Things you used to do fast are briefly slow. Things you used to hold in your head are now in a file you have to decide to trust. You wonder whether the old way was better. Around the fourth day, something clicks and the new shape starts to feel natural. By the second week, you would not go back. But the first week is a dip, and the dip is real, and anyone who tells you otherwise is selling you a course.

The last thing worth saying is that none of this is about AI specifically. What AI has done is expose, in an unusually stark way, a problem that was always there: that most people’s real work runs on more undocumented context than it should, and that the undocumented context is the reason nothing else can help them — not a new tool, not a new hire, not a new course. AI happens to be an entity that can actually do useful things with context, if you give it enough. Which puts the quality of your context on the table in a way it never had to be before. That is a good thing. It is also a change. And it is the change under the change that everyone is trying to talk about.