AI-generated content often looks right, sometimes even flawless. The structure is there, the format is correct, the polish is undeniable. Yet it can feel hollow, disconnected from genuine understanding.
We act like this is new. As if AI invented the trick of producing work that meets expectations in form while missing the substance. But we’ve been doing it for decades.
Output Without Outcome
Watch a product team doing everything by the book. They’re running sprints, conducting user research, building roadmaps, measuring metrics. All the outputs are there: personas, journey maps, OKRs, retrospectives. Perfect artifacts.
And yet the outcome is off. The product doesn’t solve the problem. The features miss the mark. The strategy feels hollow.
Or look at design work that seems deeply researched. Stunning presentation, articulate rationale, sophisticated visual language. All the outputs of good design. But in practice, it doesn’t serve users well. The outcome reveals the disconnect.
These frameworks and practices were invented for good reasons. Sprints were meant to break down complexity and enable rapid feedback. User research was meant to surface real needs before building. OKRs were meant to align teams around meaningful outcomes. They all emerged to solve real problems, and they work brilliantly—if you understand what problem they were trying to address.
But somewhere along the way, the outputs became the goal. Following the process became the measure of success. Creating the right artifacts became what “good” looked like. The work became performance.
Organizational Imposters
The eerie part is how many people in organizations are unknowingly just pretending. They’re not malicious. They’re not consciously deceiving anyone. They’ve simply learned the patterns that signal competence and reproduce them faithfully. They know how to structure the presentation, format the document, run the ceremony. They’re fluent in the language of their discipline.
But they’ve never connected it back to the original problem these practices were meant to solve. They’re imposters who don’t know they’re imposters, because everyone around them is also checking whether the form is correct, not whether the substance is there.

The Mirror
This is what’s eerie about AI. It’s extraordinarily good at meeting human expectations in form, better at matching expected patterns than most people are. It produces the right structure, the right tone, the right surface markers of competence.
But it’s doing exactly what we’ve been doing: producing output that looks right without ensuring the outcome is right. The difference is that we’re more critical of AI. We scrutinize its work with skepticism we rarely apply to human output. But that’s already fading. AI’s sophistication makes it increasingly hard to tell its work from ours.
When humans do it, we’re often numbed to it. We’ve been swimming in it for so long that we’ve stopped noticing.
AI shows us what our own work looks like when you strip away intent and understanding but keep the form intact. It’s learned from everything we’ve produced, including all the work that looked right but wasn’t. It reproduces the patterns we rewarded, the structures we validated, the forms we approved.
And we seem to like what we see.
What This Means
I’ve been looking for the unintentional imposters for years. The ones who argue most forcefully for doing things right, for following process. They’re often the ones most disconnected from why those processes exist. They’ve mistaken the map for the territory.
And with AI, the lesson is clear: to get value out of it, you have to hold it accountable to purpose, not form. Don’t judge its output by whether it looks right. Judge it by whether it achieves the outcome you need. The same standard we should have been applying to human work all along.
The appearance of good work has always been the easiest thing to fake. Now we just have a mirror that makes it impossible to ignore.