A transparent walkthrough of building a cookie consent plugin using Claude – including every dead end.
Every AI coding demo follows the same script. Someone describes a feature in plain English, the AI produces working code, and thirty seconds later it’s done. Standing ovation. Ship it.
I just spent weeks building a WordPress cookie consent plugin with Claude. It took three major attempts, dozens of course corrections, and more “that’s not quite right” messages than I can count. The plugin works. But the path from idea to working code looked nothing like the demos.
This is what the process actually looks like.
The single-prompt trap
My first attempt was the obvious one. I described the entire plugin in a detailed prompt – what it should do, how it should look, what WordPress hooks to use. Claude produced something that looked impressively complete. One long file with everything in it. Settings page, frontend banner, cookie logic, the works.
It passed a quick scan. The code was clean. The functions had sensible names. It even had inline comments explaining what each section did.
Then I tried to use it.
The settings weren’t saving correctly because the sanitization callbacks referenced functions that didn’t exist yet when WordPress loaded them. The banner appeared on every page load regardless of whether the user had already consented. And everything was hardcoded – changing the banner text meant editing the plugin file directly.
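To make the second bug concrete, here's a minimal sketch, in plain JavaScript with a hypothetical cookie name, of the consent check that first version was missing – the banner should only render when no decision has been recorded yet:

```javascript
// Hypothetical cookie name – the real plugin may use a different one.
const CONSENT_COOKIE = 'ccp_consent';

// Parse a raw document.cookie string ("a=1; b=2") into an object.
function parseCookies(cookieString) {
  const jar = {};
  for (const pair of cookieString.split(';')) {
    const [name, ...rest] = pair.trim().split('=');
    if (name) jar[name] = decodeURIComponent(rest.join('='));
  }
  return jar;
}

// The gate attempt one skipped: show the banner only when
// no consent decision has been recorded.
function shouldShowBanner(cookieString) {
  return !(CONSENT_COOKIE in parseCookies(cookieString));
}
```

Trivial in isolation, but the single-file version simply never made the check, so every visitor saw the banner on every page load.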
This is what happens when AI optimizes for completeness in a single response. It produces something that looks finished but isn’t architecturally sound. No separation of concerns. No modularity. No thought about how the pieces load in sequence. It wrote code that would impress in a screenshot but break in production.
I could have spent hours debugging that file. Instead, I scrapped it.
Breaking the build apart
Attempt two went differently. I broke the plugin into components and prompted each one separately. Admin settings page. Frontend banner. Cookie handling logic. Each piece worked in isolation.
But when I assembled them, the seams showed. The settings page saved data in one format, and the frontend expected another. The cookie logic used a different expiration approach than what the banner communicated to users. Small inconsistencies that added up to a plugin that technically ran but didn’t behave correctly.
The problem wasn’t the individual components. It was that Claude didn’t have context about the other pieces when generating each one. Every component was built in a vacuum.
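The kind of drift that results is easy to sketch. Suppose the canonical format is a comma-separated category list (names here are illustrative, not the plugin's actual API) – a shared serializer is the obvious remedy, but only if every component knows it exists:

```javascript
// Hypothetical shared module: one definition of how consent
// categories are serialized, so admin and frontend can't drift.
// Attempt two had no such shared definition – each component
// invented its own format in isolation.
function serializeConsent(categories) {
  // Canonical form: deduplicated, sorted, comma-separated,
  // e.g. "analytics,marketing".
  return [...new Set(categories)].sort().join(',');
}

function parseConsent(value) {
  return value ? value.split(',') : [];
}
```

Ten lines of code, but they only help when both sides of the seam are generated with this contract in view.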
This is where I changed my approach entirely.
The method that actually works
Attempt three started with no code at all. I created a Claude Project and documented the architectural decisions first. File structure. Data format for settings. How components would communicate. What WordPress hooks to use and why. Naming conventions. The decisions that shape everything downstream.
Then I built each component with that context included in the project. Claude wasn’t guessing about how the settings page stored data when generating the frontend – it knew, because the architecture document was right there.
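To give a flavor of what such a document contains – these entries are illustrative, not the plugin's actual names:

```text
Settings: one serialized option, `ccp_settings`; sanitized through a
  registered callback, never ad hoc.
Consent cookie: `ccp_consent`; value is "accepted", "rejected", or a
  comma-separated category list; fixed expiry communicated in the banner.
Frontend: reads the cookie and the settings passed to it only – it
  never queries the database directly.
Hooks: banner markup on `wp_footer`, assets on `wp_enqueue_scripts`,
  settings registration on `admin_init`.
```

None of these decisions are clever. Their value is that they exist before any code does, in a form Claude can see on every generation.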
The difference was immediate. Components fit together on the first try. When something didn’t work, the fix was usually small – a mismatched function name or a wrong hook priority – not a fundamental architecture problem.
But here’s what the demos don’t show: even with good context, I was still correcting roughly 30% of what Claude generated. Not because the code was bad, but because AI makes assumptions you wouldn’t make. It uses a deprecated function because the training data is six months old. It adds a feature you didn’t ask for because it “seemed relevant.” It structures a query in a way that works but creates a performance problem at scale.
The real work isn’t prompting. It’s reading. Every function, every hook, every database call – I had to evaluate whether it actually did what I needed, not just whether it looked like it did. That requires knowing enough about WordPress development to spot the difference between code that runs and code that’s correct.
A different kind of technical knowledge
This experience shifted how I think about what developers need to know in an AI-assisted workflow.
I didn’t write most of the code in this plugin. Claude did. But I made every architectural decision. I caught the deprecated function calls. I noticed when a database query was running inside a loop instead of being batched. I redirected when Claude’s approach would have created a maintenance headache six months from now.
The skill set hasn’t disappeared – it’s changed shape. Less “how do I write this function” and more “is this the right function to write.” Less syntax, more judgment. Less memorization, more pattern recognition.
If you can’t evaluate the output, you can’t use AI for development. You’ll ship code that looks right, passes your own review because you don’t know what to look for, and breaks in ways you don’t understand. AI-assisted coding doesn’t eliminate the need for technical knowledge. It changes what kind of knowledge matters most.
And that shift – from writing to evaluating – is something I haven’t seen the tutorials address.
What I’d do differently
Three builds taught me a few things the hard way.
Document architecture before writing a single line of code. Not a rough sketch – actual decisions about data structures, file organization, and component communication. This is the context that makes AI-generated code fit together.
Build in smaller increments than you think you need. I got the best results when I prompted one function at a time, tested it, then moved to the next. The urge to generate an entire feature in one prompt is strong. Resist it.
Treat every AI output as a first draft. Read it like you’d review a junior developer’s pull request – with attention and skepticism, not just a quick scan for obvious errors.
The plugin works. It handles consent correctly, respects user preferences, integrates with WordPress properly, and the code is maintainable. I’m using it in production.
But the process taught me more than the code did.
One system, not one prompt
The difference between AI-assisted development that works and AI-assisted development that produces impressive demos is the same thing that separates most things that work from things that merely look good: structure.
One detailed prompt produces a demo. A structured conversation – with documented context, incremental building, and critical review at every step – produces software.
The skill isn’t getting AI to write code. It’s knowing when the code it wrote is wrong.
If you’re building with AI and want to follow along as I document more of these builds transparently – including the dead ends – subscribe to Freymwork.