AI coding revisited again, this time with a greenfield project

posted by Jeff | Sunday, November 23, 2025, 1:53 PM | comments: 0

Hard to believe that it has been almost two years since I first tried using an AI agent to help write code. Looking back, I seemed really optimistic, but I recall giving it a ton of context, and the thing I was working on was specifically math. I tried again a few months ago to add a feature to POP Forums, and wasn't that excited about the results. Despite it being a mature code base, if not the most well-structured one, the agent just got a lot of stuff wrong. Maybe that was just the state of GitHub's Copilot, but it was not ideal.

A friend of mine kept raving about Claude Code, so I forked over $20 and gave it a shot on a new greenfield project that I started. I took the guidance to have it generate its own readme first, and it seemed to figure out a number of the conventions that I wanted, in terms of the project structure. It's a Blazor-based WASM app, meaning it runs in the browser, so I wasn't sure how much it would "get." But it actually did really well, provided I was giving it specific context.

So for example, I wanted to create a class that scraped a web page and pulled out the title and any social-protocol meta tags that would correspond to an image. I did this in two parts, knowing that's how I'd structure it. First I had it code a class to fetch the page, and then I had another class parse out the title and image location. This involves a bunch of regular expressions, the bane of my coding existence, so I was happy to let the machine figure it out. Finally, when the result came back, triggered by typing a URL in a text box, I wanted a link and the image to appear in a box that you could remove, and it did all of that, pretty much first try. There were some tweaks I asked it to do, like un-HTML-encoding the title, and some other minor things, but it worked.
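For the curious, the parsing half of that split might look something like the sketch below. This is my rough reconstruction of the technique, not the generated code itself; the class and member names are mine, and a robust version would need to handle more tag orderings and encodings than a couple of regexes cover.

```csharp
using System.Net;
using System.Text.RegularExpressions;

// Hypothetical sketch of the "parse" class from the two-part split:
// pull the <title> and an Open Graph image URL out of raw HTML.
public static class PageMetadataParser
{
    // <title>...</title>, tolerant of attributes and whitespace.
    private static readonly Regex TitleRegex = new(
        @"<title[^>]*>\s*(?<title>.*?)\s*</title>",
        RegexOptions.IgnoreCase | RegexOptions.Singleline);

    // <meta property="og:image" content="...">
    // Note: this only matches property-before-content ordering;
    // real-world pages sometimes reverse the attributes.
    private static readonly Regex OgImageRegex = new(
        @"<meta\s+[^>]*property\s*=\s*[""']og:image[""'][^>]*content\s*=\s*[""'](?<url>[^""']+)[""']",
        RegexOptions.IgnoreCase);

    public static (string? Title, string? ImageUrl) Parse(string html)
    {
        var titleMatch = TitleRegex.Match(html);
        // HtmlDecode handles the "un-HTML-encoding the title" tweak,
        // turning entities like &amp; back into plain characters.
        var title = titleMatch.Success
            ? WebUtility.HtmlDecode(titleMatch.Groups["title"].Value)
            : null;

        var imageMatch = OgImageRegex.Match(html);
        var imageUrl = imageMatch.Success
            ? imageMatch.Groups["url"].Value
            : null;

        return (title, imageUrl);
    }
}
```

Keeping this separate from the class that fetches the page is what makes the scope small enough for the agent to get right in one shot: each piece takes a string in and hands a simple result out.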

This was a much more positive experience than I had last time, but to be honest, a lot of that has to do with the context. I keep saying that over and over, that AI needs context to get stuff right, so I gave it a lot of context. Having something that is new and not infested with years of bad decisions also helps. I'm also limiting the scope of any given problem. I'm not telling it to do some end-to-end thing that involves many application layers.

And that's why, at this moment, I still see so much value in senior software developers. The analogy that I recently saw was to plumbers and plumbing. Sure, with PVC pipes and twist-on couplings, you don't need to solder pipes anymore. It's all much easier. But knowing how the system is supposed to work, and all of the associated nuance, is still something that requires plumber knowledge. Well, code is in many ways like plumbing, so that experience is important. Sure, you can find a bunch of YouTubers who are "vibe coding" until they have something that works, but that doesn't mean it can scale, or that it's secure and robust enough to survive real humans breaking it.

"Just you wait," say people selling AI stuff. But many predicted that it could be there by now, and clearly it is not. In fact, I'd say the last year has been kind of stagnant in terms of improvement (specifically in the code generation realm). I'm not saying that it will never get there, it's hard to know, but as I've said before, if AI eventually hits a point of having to train on its own work, it will break.
