I have to make a confession. The biggest reason that I love using AI agents to write code is that I just want to get to the outcomes quickly. Sure, I have to check the work of the robots, but it's faster than me writing it, even with robust auto-complete. I know a lot of engineers feel pride and excitement about pecking out solutions to stuff, but if I'm being honest with myself, I don't think I ever enjoyed it.
That said, if AI had hit at the start of my career, I wouldn't be where I am.
I fear that there's a structural problem with agentic coding. I'm not suggesting we shouldn't use it, far from it. But I am saying we could have a problem with the up-and-coming folks if they're uninterested in what's actually happening in the code. That's exactly how I was back when typing was the only way to write it.
I think I was a little slow to really understand object-oriented programming (or scripting, for that matter), because my entire incentive for learning in the first place was to get to the outcomes I wanted. I started as a media guy who just wanted to put stuff on the Internets, so programming was a means to an end. I just happened to turn it into a career. It took years before I felt like I was "good" at it, and even when I felt competent, my output was much lower than my peers' because of my then-undiagnosed ADHD. If I'd had AI back then, I doubt I would have gone very deep on what the machines wrote, as long as it worked. By extension, I would never have gained an appreciation and love for system design, the part I enjoy the most. In other words, I'd be a vibe coder who didn't know how my own stuff worked.
That puts us in a weird place. If we're not mentoring and coaching our junior devs, who will be the senior devs of tomorrow? LLMs are just sophisticated word guessers, and they get worse when they train on their own output. (There's a term for this: "model collapse," where models degrade without bona fide "good" human input.) The trust-but-verify approach is going to be with us for a long time, and when something breaks, good luck blaming the AI. Going back to my own focus on outcomes: nobody cares whether AI wrote the code. Customers, users, stakeholders... they just want a working system.