Why generic AI education keeps failing.
Nearly everyone I have worked with in the last two years has the same story about AI courses. They bought at least one. Usually three or four. The first one felt exciting. The second felt repetitive. By the third they were skimming. They can still describe what they learned in broad terms — something about prompts, something about role-setting, something about chain-of-thought — but a surprising amount of it has leaked away, and almost none of it is showing up in their actual work.
The common response to this is to assume the courses were bad, or that the person did not try hard enough, or that they should take one more course that is finally the right one. None of these diagnoses is correct. The courses were, in most cases, fine. The person tried. There is no "one more course" that will finally work.
What is going on is structural. Courses — all courses, not just AI courses — have a specific shape. They take a body of knowledge, strip it of the particulars of any one learner’s context, and deliver it in a sequence that is optimized for generality. The generality is the feature: it is what makes a course able to teach a thousand people instead of one. It is also what makes a course fail in this specific domain.
A course on a well-defined topic — basic Python syntax, say, or double-entry bookkeeping — can get away with generality because the application back in real life looks roughly the same for everyone. Everyone who needs to write a for-loop writes roughly the same for-loop. Everyone who needs to record a journal entry uses the same debits and credits. The particulars of the learner’s context do not interfere with the skill transfer. The generic version is very close to the specific one.
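To make "roughly the same" concrete, here is a trivial sketch with invented data: a bookkeeper's loop and an engineer's loop, identical in everything but vocabulary. The shape is the skill, and the shape survives the trip from the course back to the desk.

```python
# A trivial sketch with invented data: two domains, one loop shape.
# The names differ; the structure a generic course teaches does not.

invoices = [120.00, 54.50, 9.99]        # the bookkeeper's numbers
sensor_readings = [0.42, 0.39, 0.44]    # the engineer's numbers

invoice_total = 0.0
for amount in invoices:
    invoice_total += amount

reading_total = 0.0
for value in sensor_readings:
    reading_total += value

print(invoice_total, reading_total)
```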
AI does not work that way. The skills that matter in using AI well are not skills in the ordinary sense. They are judgment calls inside a specific situation: whether to split a task into two prompts or one, whether to give the model fifty pages of context or a hundred words, whether to trust a first draft or iterate, when to verify, when to re-prompt, when to give up and do the thing by hand. Every one of these is highly context-dependent. They depend on the exact task, the exact tools in use, the exact tolerance for errors in that domain, the exact way the person prefers to work, the exact shape of the rest of the day around the work. Take any of those out and the skill has nothing to attach to.
Which means a course can teach you the generic version of these judgment calls — and the generic version is essentially empty. You end up with something like: “give the model enough context.” Which is true, and completely useless without knowing what counts as enough, in your actual work, on your actual tasks, on the day you are trying to do them. The rule is fine. The calibration is where everything is. And calibration cannot be taught generically, because there is no generic case to calibrate to.
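To see how little the rule carries on its own, put it next to a calibrated version. Everything specific below is invented; the point is how many judgment calls the calibrated prompt encodes that the rule cannot make for you.

```python
# The rule a course can teach:
generic = "Give the model enough context."

# One hypothetical calibration of that rule, for one invented task.
# Every specific here is made up; each line is a judgment call that
# only someone inside this exact workflow could have made.
calibrated = """Review the attached draft release notes for patch v2.3.
Context: the full changelog (12 entries, pasted below), but not the
source diffs; on past attempts they added noise without adding accuracy.
Tone: terse, no marketing language. If an entry is ambiguous, flag it
instead of guessing; a wrong claim in release notes is expensive here.

<changelog pasted here>
"""

print(len(generic.split()), "words of rule,",
      len(calibrated.split()), "words of calibration")
```

The rule fits in one line. The calibration does not, and it changes with every task, every team, and every week.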
The unit of learning here is not the lesson. It is not the module. It is not the prompt template. The unit of learning is the person inside their real workflow.
This is not a philosophical complaint. It is a practical observation about where learning actually happens with AI, and where it does not. Learning happens when a real piece of your own work is in front of you, and the next move is unclear, and either you make the move and see what happens or someone helps you see what the right move would have been. That is the only place AI skill actually builds. Everything else feels like learning while you are doing it and evaporates when you sit back down at your own desk.
The evaporation is the tell. A generic course teaches you ten things and you remember two of them a month later. A real-work engagement teaches you two things and you still use both of them six months later. The ratio is inverted because the two things were encoded into your actual workflow, with your actual tools, on your actual tasks. They stuck because they had somewhere to stick to.
There is a further reason generic AI education keeps failing, which has less to do with learning theory and more to do with what AI is, technically. The capability of these systems right now is enormous but uneven. What a model can and cannot do changes from domain to domain, from task to task, sometimes from week to week. The person who knows what the model can actually do in your specific use case is the person who has tried the model on your specific use case. No one else. Not the course author, not the thread you read on a forum, not the friend who raved about what ChatGPT did for them. Until the model has been run on your real work, with your real files, under your real constraints, nobody knows the answer. Which means the useful knowledge does not exist yet at the moment a generic course is recorded.
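Acting on that does not require anything elaborate. Here is a minimal sketch; every name in it is invented, and run_model is a stand-in for whichever client or tool you actually use. The method is simply: take a few real tasks from your own queue, run them unmodified, and grade the outputs yourself against your own tolerance for errors.

```python
# Hypothetical capability probe. Every name is invented; replace the
# stub and the sample tasks with your real model call and real work.

def run_model(prompt: str) -> str:
    # Stand-in: swap this for a call to whatever model you actually use.
    return "(model output would appear here)"

# Invented placeholders; substitute real tickets, drafts, and files.
tasks = {
    "ticket summary": "Summarize this support ticket in 3 bullets: ...",
    "refund reply": "Draft a reply that follows our refund policy: ...",
}

usable = 0
for name, prompt in tasks.items():
    print(f"--- {name} ---")
    print(run_model(prompt))
    # The grading is the point: only you know what counts as usable here.
    if input("Usable as-is? [y/n] ").strip().lower() == "y":
        usable += 1

print(f"Usable without rework: {usable}/{len(tasks)}")
```

An hour of this on your own work produces knowledge that, as of that hour, exists nowhere else.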
This is why — to be blunt — a lot of AI courses quietly degrade the moment the market moves. The model updates, the tool adds a feature, the best practice shifts. A course recorded eight months ago is teaching you something that was roughly true in a model two versions back. Again, no fault of the course. It is a structural property of the medium.
The alternative is obvious once the diagnosis is right. Learning with AI has to happen inside real work. It has to start from a specific task you are doing, in the specific environment you are doing it in, with the specific constraints you are under. Someone competent has to look at what you are doing and help you see the next move — not a generic next move, but the one that fits your situation. That might involve teaching; it often does. But the teaching is a side-effect of solving the real thing. The real thing is the main event, and the learning is what sticks to it afterward.
This is not a more expensive version of a course. It is a different kind of thing entirely. A course scales by being the same for everyone; this kind of work does not scale that way, because the entire point is that it is not the same for everyone. What scales here is not the content. What scales is the pattern library of the person doing the work. Which is to say: this is not a product category where a course can ever be the right answer.
If you are on your fourth course and wondering why you still feel behind, the odds are extremely high that the issue is not you. It is that the problem cannot be solved by the instrument you keep picking up. The instrument was never going to reach.