I Was Wrong About AI, or: How a VP Learned to Stop Debating and Start Building
It’s 11pm on a Tuesday. I’m a VP of Product Engineering…haven’t written production code in years…and I’m three hours deep into building something with an agentic coding tool. And the thing that’s running through my head isn’t “this is cool.” It’s “I was wrong.”
Let me back up.
The Hype Didn’t Convince Me
I’m a Gen X-er. A certain amount of cynicism is baked in. But this isn’t that…this is pattern recognition.
I’ve spent the last couple of years listening to tech companies tell me that some extraordinary percentage of their code is now written by AI. And every time I heard it, the same thought fired: these are the companies with billions invested in this technology. They need you to believe it works. They need the returns.
We’ve seen the hype cycles. Pets.com, anybody? We weren’t wrong that the internet would change the world. And we needed the Pets.coms of that era. The market had to figure out what was real and what wasn’t. I just don’t want to be one of them.
But perhaps more analogous than the dotcom bubble is the pre-DevOps era, when infrastructure solutions were sold to CIOs in boardrooms and then handed down to engineers with a mandate to adopt. I lived through that. I also lived through the DevOps revolution that followed: a grassroots movement where the people actually doing the work chose the tools that actually helped them. The value was real because adoption was earned, not imposed.
So when I saw the AI wave building, I didn’t doubt that it would change software development. I’m not that stubborn. But I had serious questions about when and how much. The claims felt ahead of the evidence. And I wasn’t going to repeat the pattern of adopting something because someone upstream told me it was the future.
I wanted proof, not pitches.
So I Did the Work Myself
Here’s the thing about being a leader who questions the hype: at some point, you have to do more than question it. You have to go find out.
So I did. Nights and weekends. Pick your agentic coding tool of choice (mine is Claude Code, but the argument I’m about to make likely holds for any tool worth its salt). I read the docs. I watched the videos. I built real projects. I deliberately tried to push the AI as far as it could go, as hands-off as possible. Not because I wanted to prove it worked, but because I wanted to find where it broke.
My goal was simple: figure out what’s real before I tell anyone else what to believe.
What Actually Changed My Mind
The early versions of these tools were, frankly, underwhelming. They felt cobbled together. They required the developer to build elaborate scaffolding around them to get anything useful. They worked well inside a narrow context, but the moment your ask got broader, the moment you needed the tool to hold a larger picture in its head, things fell apart.
So what changed?
The pace of innovation, for one. The speed at which these tools are evolving is unlike anything I’ve seen in my career. The ecosystem is maturing fast. Real patterns are emerging, getting adopted, and getting baked into the tools themselves. What used to require a developer to wire together manually is becoming native capability.
And the models themselves are getting meaningfully better. The best ones now genuinely feel like working with a mid-level developer: someone who can hold context, make reasonable decisions, and produce work you’d actually commit. Others still oscillate between impressive and maddening, especially when you push them past their context limits. But the trajectory is clear, and the gap between iterations is shrinking.
Is this all still playing out? Absolutely. But if you’re waiting for the dust to settle before you dig in, you’re making a mistake. The dust isn’t settling. It’s accelerating. And the leaders who understand what these tools can and can’t do will be the ones making better decisions about them.
Where I’ve Actually Landed
Do I believe we should replace engineers with AI? Not a chance.
AI is a force multiplier in the hands of a competent developer. But in the hands of the inexperienced, it’s more like a trickster: full of big promises up front, but capable of leading you into dead ends and hard lessons you won’t see coming until you’re deep in the hole. The skill of the person driving it still matters enormously. Maybe more than ever.
But here’s what I didn’t expect: as a former engineer who moved into management and then leadership years ago, agentic coding is pulling me back closer to the metal. I can leverage the knowledge I’ve accumulated over my career: the patterns, the architectural instincts, the understanding of what good software looks like, without having to fuss over specific frameworks or syntax. AI handles the implementation detail. I handle the thinking.
That shift has been profound. Building a personal tool or automation used to mean carving out significant time, ramping up on a framework, and accepting a bunch of overhead that made the juice not worth the squeeze. Now? I regularly have AI build programs, skills, and automations for me. The way I want them. Slack, Jira, Notion? These aren’t just apps anymore. They’re data sources I can wire together for my own purposes.
AI is becoming my operating system. And I don’t mean that in the hand-wavy “I use ChatGPT” sense. I mean I’m building distinct personas, purpose-built configurations with their own integrations, commands, rules, and skills, each designed for a specific part of how I work. It’s not a chatbot. It’s an environment. And it’s part of my daily workflow.
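To make that concrete, here is a rough sketch of what one such persona might look like on disk. The file names and layout below are illustrative only, loosely modeled on Claude Code's configuration conventions (a rules file, custom commands, skills, and MCP integrations), not a prescription for any particular tool:

```
# Hypothetical layout for a "product-review" persona.
# Names and structure are illustrative, not an exact Claude Code spec.
product-review/
├── CLAUDE.md          # rules: tone, scope, what this persona may and may not touch
├── commands/
│   └── triage.md      # a custom command: summarize and rank new Jira tickets
├── skills/
│   └── release-notes/ # a skill: draft release notes from recently merged work
└── mcp.json           # integrations: Slack, Jira, Notion wired in as data sources
```

Each persona gets its own rules and integrations, so switching contexts means switching environments, not re-explaining yourself to a blank chat window.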
The Payoff
Building products is fun again.
I don’t say that lightly. When you’ve been in leadership long enough, the distance between you and the work grows. You’re reviewing, prioritizing, unblocking: all necessary, but all one step removed from the thing that got you into this career in the first place. Agentic coding collapses that distance. I still spend my time thinking about what to build. That’s the job. But AI means I’m no longer blocked from participating in the building itself. The implementation barrier that kept me at arm’s length? It’s gone.
And there’s a payoff I didn’t anticipate: empathy. Real empathy. Not the kind you get from reading sprint retros or nodding along in standups. The kind you get from walking the same path. I’m in the code alongside my developers now. Not literally shipping features on their backlog, but working with the same tools, hitting the same walls, discovering the same possibilities. We discuss ideas. We share approaches. We evolve our thinking together. That’s not something I could have done from the sidelines.
Mark Twain once said, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” I was so sure that the hype was ahead of the reality. And I’m glad, genuinely glad, that I was wrong. My skepticism needed to be challenged, and I’m a better leader for having gone through it. We all need a swift kick now and then to keep us sharp. The most dangerous trap any of us can fall into is becoming a victim of our own certainty.
So yes, I’m a believer now. But here’s the question that keeps me up later than any coding session: faster toward what?
The tech leaders championing AI love to paint a picture of the upside. Productivity gains. Efficiency. A future where humans are freed up to do more meaningful work. What none of them seem eager to address is what happens when a huge portion of society is unemployed or underemployed and the gains pool at the top. They kick the can. They can afford to.
Yvon Chouinard couldn’t afford to, or rather, he chose not to. He built a three-billion-dollar company and then gave it away, because he believed the fight mattered more than the fortune. He proved that “enough” could be a business strategy, not just a platitude.
So where is the Chouinard of the AI era? The leader who says “this margin is adequate” and redirects the rest? Who turns productivity gains into a 20-hour work week and gives the other 20 back to the community? Maybe that sounds naive. But all great dreams carry a bit of naivety. That’s what makes them worth chasing.
But that’s a bigger story, one I’m still working out.
Until next time…