In between countless posts about AI that could be summarized as, "You're doing it wrong" (boring!), there are folks making genuine efforts to find the best way to work with the tools. And the truth is that you've seen this before.
On one end, there are people building frameworks and tooling to give the AI what it needs to get the coding output right. These come in variations on specs, detailed requirements, and, most importantly, outcomes. They do their best to account for edge cases, too. At the other end, you've got the vibe-coding, surrender-to-the-robots-and-go-for-it crowd. They get something that kind of works pretty quickly, but it requires a lot of iterating to become production-ready.
You're probably already seeing where this is going. These are parallels to the waterfall and agile methodologies. It's not a perfect comparison, but there are similarities. Also not surprisingly, the former lends itself to product-leader thinking, while the latter is engineer-biased. This arrangement should raise an eyebrow about what AI is, and what it's truly capable of. Let me get back to that.
We already navigated this tension in the pre-AI world. Waterfall never really delivered on its intended quality outcomes, and it took too long. It was built on assumptions that didn't account for the way users actually interacted with the software. Agile is a good idea that has been hopelessly corrupted by associating it with tools and artifacts, ignoring what the Agile Manifesto said: "Individuals and interactions over processes and tools." Those of us who have enjoyed working on productive teams know that being iterative with the right amount of context is the sweet spot. Build as little as you can, challenge your assumptions, repeat, and you've arrived when the stakeholders are happy with the outcomes.
We're going through the same journey we did without AI, and applying it to the AI itself. We're so convinced of its human-like interaction that we just have to apply what we already know. Is that an accurate assessment? Remember that AI has neither judgment nor creativity, and it annoyingly doesn't ask questions very often. But these two approaches do produce similar outcomes, with the same pros and cons. The anthropomorphizing of the AI, in this case, seems to be appropriate.
It also shines a light on what I've been saying all along: AI-assisted coding is faster in the hands of an experienced software engineer, but coding was never the hard or time-consuming part of making software (just the most expensive part). It doesn't eliminate the need to decide what to build, and why, which is what you have undoubtedly spent many meetings discussing.
So save your anecdotes about the one guy who built a social network in a few weeks (that's me!), or the one who built a compiler surrounded by an enormous existing test suite. Those aren't typical. AI makes coding more fun and productive, sure, but that's the smallest part of the process.