The Articulation Advantage
Over the last few months, I (like everyone) have been hard at work: first creating innumerable tools, then consolidating them, then reimagining them, iterating over and over with the ebb and flow of progress that so powerfully permeates the world of AI today. This is true in both my work and personal life, and it has been transformational in both contexts, but it's the personal context I'm going to start with, mostly because that's where I can talk about tangible outcomes publicly.
I started off as many do, by building small tools to fulfil specific tasks. After all, this is what I'd have done without AI: identify a challenge I could solve with technology, then solve it with a bit of code, or by implementing someone's passion GitHub project, or whatever it might be. The first tool I built with Claude was something I'd wanted for years - a widget on my phone that could turn our RF-controlled gas fire on or off. I had all the pieces to enable this through my existing HomeAssistant setup, but the RF blaster I'd found that worked with this fire had no HA integration. So barring developing an integration myself, I was at the mercy of someone in the community having the same challenge as me, and sadly it was not to be. The problem wasn't hard; the activation energy to solve it just never quite made sense against everything else competing for my evenings.
It's not an exaggeration to say that within 30 minutes of kicking off this project I had a working prototype, and another half hour later I had a fully functional Android app with a prompt-free widget that would control the fire, at home or remotely. Over the next few months I built other tools to solve unique challenges of mine that weren't solved in the community, always focused on individual tools (and sometimes HA plugins) rather than a more consolidated approach. This was born of my experience not just building tools, but supporting a codebase into the long term. I didn't want to build anything unwieldy that I'd come back to in six months or a year and think 'what the hell was I thinking here?'
In January 2026, as we were approaching my wife's 40th birthday, I decided I'd take on a pretty mammoth coding challenge to gift her something unique. Zelda is her favourite game franchise, this year is also Zelda's 40th birthday, and the Ocarina of Time decomp project was starting to gain some traction, so I decided to change all the signs in Kokiri Forest into birthday messages from her extended friends and family. Despite extensive experience in C dating back to the '90s, I figured that was about as far as I'd get: I was unfamiliar with the codebase, and as a decomp it isn't well documented or signposted.
With Claude Code I had that piece knocked out in around 20 minutes - it took far longer to gather the messages than to implement them - and I suddenly found myself with loads of time to implement much more. So little by little, I started writing feature requests, refining them, and working with Claude to document and implement them. In the end there was a functionally new game: the child Zelda character model (did you know she has 16 points of articulation to Link's 18, and isn't in any way designed to be playable?) as the player model, a load of changed interactions, the Triforce added as a collectible item, and even a whole sub-quest to retrieve the Happy Birthday song and play it on the Ocarina of Time to enter the Chamber of Sages and claim a final reward. I even managed to get the whole thing loaded onto an N64 cartridge and running on the Analogue 3D, to provide that true N64 experience. It was awesome.
All of this took around two weeks of evenings and weekends. I won’t say it was flawless, there was a lot of manual effort I still had to do, but the speed at which I was able to get to grips with a large and unfamiliar codebase was insane, and the quality of documentation we generated as we went made the whole process significantly easier.
With all this said and done, I revisited a lot of my assumptions and my process for building my own tools - maybe maintaining and operating a large codebase over the long term as an individual hobbyist wouldn't be a bad idea any more. Little by little I took all of my tools and first integrated them into a bespoke Android app that did everything I wanted, attuned just to me. Then, over the last few weeks, I've been pulling everything together into a single coherent multi-agent system that I can actually grow and maintain over the long haul.
The pattern across all of this, looking back at it, isn’t really about the code. The code is the easy part now. The hard part, the bit that actually determined whether I got something useful or something I’d throw away in a week, was how clearly I could describe what I wanted before I asked for it. And the more I’ve worked across both this personal context and the day job, the more I’ve become convinced that this observation generalises far beyond hobby projects.
The Real Differentiator
There’s a quiet assumption running through most conversations about AI and the future of work. It goes something like this: the people who understand technology the best will benefit the most. Engineers, developers, data scientists. The deeply technical, the people closest to the metal.
I think that’s incomplete.
The real differentiator in AI-driven workflows isn’t technical depth. It’s the ability to clearly and succinctly articulate what you want to achieve. The people who get the best outcomes aren’t necessarily the ones who understand transformers or can hand-write CUDA kernels. They’re the ones who can collapse the ambiguity in their own thinking into a clear, constrained, well-structured request.
That’s a communication skill. And it changes everything about who gets to build things.
The Spec Is the Product
Consider two people asking an AI to build them a dashboard. Person A says “make me a dashboard.” Person B says “I need a single-page view showing weekly trend lines for three KPIs, with a date range toggle, aimed at a non-technical audience who will glance at this for thirty seconds in a Monday morning meeting.”
Person B gets something usable on the first pass. Person A burns five rounds of back-and-forth, five times the token spend, and arrives at roughly the same place. The gap between them isn’t technical knowledge, it’s precision of thought, expressed through language.
And that token spend matters more than people tend to acknowledge. Every round of “no, not like that, more like this” is real money, real compute, real latency. The person who nails the spec on the first pass isn’t just faster, they’re running at a fraction of the cost of the person who iterates their way to the same outcome. At individual scale, that’s the difference between a few pounds and a few tens of pounds. At organisational scale, across hundreds of engineers and thousands of requests a day, that compounds into something that shows up on the balance sheet.
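To make that compounding concrete, here's a toy cost model. Every number in it (per-token price, tokens per round, team size, request volume) is an illustrative assumption rather than real pricing; the point is the multiplication, not the figures.

```python
# Toy cost model for the spec-precision argument above.
# All numbers are illustrative assumptions, not real model pricing.

PRICE_PER_TOKEN_GBP = 0.00001  # assumed blended input+output price

def task_cost(rounds: int, tokens_per_round: int = 20_000) -> float:
    """Cost of one task that takes `rounds` of back-and-forth."""
    return rounds * tokens_per_round * PRICE_PER_TOKEN_GBP

precise = task_cost(rounds=1)   # spec nailed on the first pass
vague = task_cost(rounds=5)     # iterated toward the same outcome

# At organisational scale the gap compounds:
# assume 200 engineers, 10 AI-assisted tasks each per day.
daily_gap = 200 * 10 * (vague - precise)

print(f"per task: £{precise:.2f} vs £{vague:.2f}")
print(f"per day, org-wide: £{daily_gap:,.2f}")
```

Under these assumptions a single vague request costs five times its precise counterpart, and the organisation-wide gap runs to four figures per day. The exact numbers don't matter; the linear multiplier on every request does.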
This maps closely to a skill that has always mattered in engineering but never got enough credit: requirements gathering. The people who were good at writing specs, defining acceptance criteria, and decomposing vague goals into concrete deliverables were always disproportionately effective. AI just makes that leverage ratio visible, measurable, and immediate.
In a spec-driven development workflow, the spec isn’t a document that gets handed to someone else for interpretation. It IS the build instruction. The person who writes the spec is, in a very real sense, building the product. AI is just the translation layer between intent and implementation.
Phenomenal Cosmic Power, Limited Imagination
There's another side to the articulation coin, and it's an uncomfortable one: being able to clearly describe what you want is only half the battle. You also have to be able to imagine what's worth wanting in the first place.
When Hal Jordan first got a Green Lantern ring, an object able to create literally anything he could imagine and had the will to sustain, all he could conjure were the things he already knew, like cars. Phenomenal cosmic power is irrelevant without the imagination to use it. The ring will render whatever you can hold in your head, at full fidelity, instantly. But if your head is full of yesterday's solutions, that's all you'll ever build.
AI is the ring. It’ll happily build you exactly what you ask for. The constraint isn’t capability any more, it’s the size and shape of the space you’re willing to imagine within. The person who says “build me a better version of our ticketing system” gets a better ticketing system. The person who says “what if we didn’t need tickets at all, and the system just noticed and fixed these issues itself” gets something categorically different.
This is where the real asymmetry opens up. Articulate but unimaginative people will use AI to build faster versions of what already exists. Imaginative but inarticulate people will have vivid ideas that never quite materialise, because they can’t collapse them into a form the ring can render. The people who do both, who can imagine beyond the current shape of a thing AND describe that imagined thing with enough precision to build it, are the ones who’ll define what the next decade of software actually looks like.
The Technical PM as Product Builder
If articulation is the core skill, then technical product managers might be among the best-placed people to become product builders in this new era. The role sits at exactly the right intersection: they understand the solution space well enough to know what's possible and what's a hand-wave over enormous complexity, and they've spent years, sometimes decades, refining the skill of turning fuzzy intent into structured, actionable requirements.
What makes this work is that the traditional handoff starts to collapse. The old model was: PM writes spec, engineer interprets spec, engineer builds product, PM reviews product, refinement ensues, and everyone argues about what “done” means. When the spec becomes the direct input to AI-driven development, that entire interpretation gap disappears. Fewer misunderstandings, fewer “that’s not what I meant” cycles, faster iteration, and far less token burn along the way.
The “technical” part of technical PM still matters, though. You need enough engineering intuition to write specs that are implementable. A non-technical PM writing specs for AI would hit the same walls they hit with human engineers, just faster and with less patience from the other side.
The people who thrive will be the ones who combine domain knowledge with expressive precision.
What Happens to Engineering?
This is the uncomfortable question, and I think the honest answer is that a large portion of traditional engineering roles will shrink through natural attrition over time. Not a dramatic cliff, not mass layoffs, just a gradual shift where, as people leave or retire or move on, fewer roles get backfilled at the same level. The work doesn’t disappear, it gets absorbed differently.
What survives and even grows in value is the work at the extremes. At one end, deep systems engineering: the people who understand why your cluster is dropping packets, or how to squeeze latency out of a hot path. AI is not good at that kind of reasoning today. At the other end, architect-level thinkers who can hold an entire system in their heads and make the structural decisions that determine whether something scales or falls over at ten times the load.
The middle layer, the “take this well-defined ticket and implement it” work, is exactly where AI is strongest. And that’s where attrition bites hardest.
The Lifecycle Problem Nobody Is Talking About
This is where things get somewhat trickier, and where most of the current AI enthusiasm skips a step. You can define a spec today, point it at a model, and launch a working product. That’s real, that works. But what does the development and support lifecycle look like for the next five years?
The model you use to build the next feature request might be completely different from the one that built the original implementation. Different architecture, different training data, different opinions about error handling, state management, code structure. It’s like handing your codebase to a new engineer for every update, except worse, because a new human engineer at least has consistent training, shared conventions from their career, and the social pressure of code review to keep them aligned with what already exists.
A different model generation has none of that. It might satisfy your spec perfectly while producing an implementation that clashes with everything already in the codebase. Not bugs, exactly, but architectural drift. The kind of creeping inconsistency that makes a codebase increasingly expensive to work with over time. Technical debt by default rather than by neglect.
And here’s the compounding cost nobody’s pricing in yet: that drift makes every subsequent request more expensive. The model has to reconcile more conflicting patterns, the spec has to specify more to constrain it, and the refinement cycles grow longer as the codebase becomes harder to reason about. A product built without long-term coherence in mind doesn’t just cost more to maintain in human terms, it costs more to maintain in tokens, on every single interaction, for the rest of its life.
The Constitution as the Most Important Artefact
This is where the concept of a project constitution becomes not just useful but utterly vital.
In a spec-driven development workflow, you typically have two layers of instruction. The spec defines what you’re building right now. The constitution defines how everything should be built, always. Coding patterns, error handling conventions, state management approaches, naming standards. The architectural guardrails that keep a codebase feeling like it was written by one consistent team, even if ten different models touched it over five years.
Think about what the constitution is actually doing. It’s encoding the kind of architectural judgement that normally lives half in documentation and half in the accumulated instincts of the senior people on a team. “We always handle errors this way. We structure state like this. We never do X.” Good teams try to write this down already, but it tends to exist as fragments scattered across ADRs, wiki pages, and the comments on old pull requests. The constitution gathers it into one place and commits to keeping it current, because any model in any generation is going to consult it before touching the codebase.
Without a strong constitution, you get the patchwork problem. Module A uses one pattern, module B uses another, and the seams between them accumulate subtle inconsistencies that compound over time. With a strong constitution, you get continuity. The spec can change, the model can change, but the soul of the codebase stays consistent. And every interaction with that codebase, from the first feature to the thousandth, runs more cheaply than it would in a codebase without that scaffolding, because the model has less to reconcile and the human has less to correct.
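To make this tangible, here's a sketch of what a fragment of such a constitution might look like. The specific rules below are invented for illustration; a real constitution would encode your team's actual accumulated judgement, including the reasons behind each rule.

```markdown
## Error handling
- All fallible operations return a Result type; never throw across
  module boundaries.
- User-facing errors carry a stable error code; log lines carry the
  cause chain, never the user's data.

## State management
- UI state lives in a single store per screen; derived state is
  computed on demand, never duplicated.

## Things we never do
- No ad-hoc caching layers. Invalidation bugs from hand-rolled caches
  have historically outlived the features they served; use the shared
  cache module instead.
```

Note the last section: a good constitution records not just the rule but the rejected alternative and why it was rejected, so a model with no memory of the original debate still inherits its conclusion.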
The spec defines what you’re building now. The constitution defines how everything should be built, always.
The Constitution Author
All of which raises the obvious question: who actually writes this thing, and who maintains it over the long term as the organisation, the codebase, and the models themselves evolve?
This is, I think, the highest-leverage technical role into the near and foreseeable future (though admittedly I don’t see that far), and it’s a genuinely new one. It isn’t quite the traditional software architect, who makes structural decisions about a specific system. A constitution author works one level up from that, codifying the organisation’s principled commitments about how any system should be built, in a form durable against model churn. An architect designs the building. A constitution author writes the building code that every future architect has to design against.
It isn’t quite the deep systems expert either, though it’s adjacent. The systems person knows why your cluster is dropping packets, which is diagnostic and reactive, working from symptoms back to cause. The constitution author is prescriptive and prospective, working from principles forward to prevent whole classes of problems from ever entering the codebase in the first place.
What a good constitution captures isn’t the stuff that already gets written down. Any competent engineering team documents its APIs, its runbooks, its architectural decision records. Tribal knowledge is correctly treated as a liability and ruthlessly written out of existence wherever it’s found. The constitution targets something harder: the accumulated judgement about why the team does things the way it does, the trade-offs that were considered and rejected, the patterns that have proven robust across dozens of features. That judgement has historically been difficult to extract from the people who hold it, not because teams don’t try, but because it’s tacit, context-dependent, and most easily transmitted through code review and conversation. The constitution forces it into an explicit, durable form, because the “reviewer” is now a model with no memory of last quarter’s debates and no social context for “we tried that in 2023 and it didn’t work.”
The person who does this well needs three things that rarely combine in one human: deep technical judgement to know what the right patterns actually are, strong architectural instinct to know which decisions matter in the long term versus which are just preferences, and the articulation skill this whole piece is about, to encode all of it in a document precise enough to constrain model behaviour but flexible enough to live for a decade. That last part is what makes this a genuinely new role rather than a rebadged principal engineer. The skills are ancient, the job isn’t.
A New Shape for Software Teams
If you follow these threads to their logical conclusion, the shape of a software team starts to look very different. The highest-value roles aren’t writing code. They’re writing constitutions, defining architectural principles, curating codebases for coherence, and articulating product intent with enough precision that AI can execute on it reliably.
The compounding effects are what make this so stark. The articulate, technically grounded, product-centric person with a long-term view doesn’t just build a product that survives its passage between owners and models. They build one that costs less to operate, iteration after iteration, because every request lands cleaner, every refinement cycle is shorter, and every model handoff is smoother. The difference between them and someone iterating their way through ambiguity isn’t incremental. It compounds.
Software has been trending this way for a long time. The industry has spent the last decade or more taking soft skills seriously, elevating communication as a hiring criterion, investing in the craft of writing specs and RFCs and architectural decision records. What AI does is take that trend and sharpen it to a point. Articulation stops being a career enhancer and starts being the actual mechanism of production. And imagination stops being a nice-to-have attribute of creative people and becomes the ceiling on what any individual can build.
The people who will build the future aren’t the ones who can write the most code. They’re the ones who can imagine something worth building, succinctly articulate it, and in so doing, make it reality.
Push the boundaries of your imagination. Create something new and exciting, with as little back-and-forth as possible. Conserve those tokens, and build something awesome.