AI Adoption in Software Development

January 10th, 2026

"All of your dreams can really come true, all of your nightmares are waiting there too" — Motörhead

Last month, I learned that Michael Burry had shorted his AI stocks, including Palantir and Nvidia. For those unfamiliar, Burry is the investor who predicted the 2008 housing crash and profited from it—his story was immortalized in The Big Short. This news sparked something in me: a creeping unease about where we're headed with AI. And this concern isn't just for investors. It should matter to developers too.

I want to share my thoughts on the current state of AI, particularly in coding, programming, and product management. More importantly, I want to be honest about my own experience—how I've integrated AI into my workflows, how much time it saves me, and why that very dependency now keeps me up at night.

The Good: How AI Transformed My Daily Work

I work as a product manager at a Swiss NGO. It's a full-time role with responsibilities that span multiple domains:

  • Task management and sprint planning
  • Solving technical issues, fixing bugs, and implementing features across various web products
  • Conducting code reviews for other developers
  • Communicating with our CEO and clients to define requirements and scope new features

I've invited AI into every single one of these tasks. My criterion is simple: if it reduces the time I spend on a task, it earns a place at my desk. And if it takes a few hours to figure out how to integrate AI into a particular workflow, I'll invest that time gladly—because the payoff compounds.

Here's a concrete example. Our entire tech team adopted Claude Code in August 2025. In just five months, we increased our pull request volume by 45%. We shortened the time between opening a PR and completing code review by 75%. Our commit volume has grown enormously. We're simply moving faster.

How I Actually Use AI Day-to-Day

I've integrated Claude Code into virtually every dimension of my work. Let me walk you through the specifics.

Morning Briefing: The "Coffee" Command

I built a custom command called /coffee that runs through:

  • All Linear tasks from our tech team since yesterday—flagging what's done, what's ready for review, and what's been assigned
  • Our Sentry logs across all projects, surfacing critical bugs from the past 24 hours
  • (Coming soon) My Slack messages, to catch anything I missed or need to follow up on

The output is a single-page Markdown file: a clean, readable list of 2–3 tasks that genuinely need my attention today. Maybe it's a task that's been stuck "in progress" for too long and finally moved to review. Maybe it's a Sentry error that's been quietly accumulating. Either way, I start each day knowing exactly where to focus.
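In Claude Code, a slash command is just a Markdown prompt file under `.claude/commands/`. My actual `/coffee` definition isn't reproduced here, but a sketch along these lines illustrates the idea (the wording, and the assumption that Linear and Sentry are reachable from the session, e.g. via MCP servers, are mine):

```markdown
---
description: Morning briefing across Linear, Sentry, and (eventually) Slack
---

Build my morning briefing as a single-page Markdown file:

1. List all of the tech team's Linear tasks updated since yesterday,
   grouped into done, ready for review, and newly assigned.
2. Surface critical Sentry errors from the last 24 hours, across all projects.
3. Finish with the 2-3 items that most need my attention today, and why.

Keep it to one page.
```

Saving that file as `.claude/commands/coffee.md` is what makes `/coffee` available in the session.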

Parallel Agents for Development and Review

I've written a suite of custom agents, each designed to behave like a senior developer in a specific domain:

  • A Django/ORM optimizer
  • A database engineer
  • A UI/UX reviewer
  • A tester
  • A code reviewer

When I'm developing or reviewing code, I spin up several of these agents in parallel. They work remarkably well—provided you give them clear instructions, point them to the right context, and define your conventions explicitly. For instance, we have specific formatting rules for Python and Vue.js templates (we place the second attribute directly below the first, which I find far cleaner than Prettier's defaults). The agents respect these conventions because I've told them to.
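Claude Code subagents are defined as Markdown files with YAML frontmatter under `.claude/agents/`. My definitions are longer than this, but a sketch of the Django/ORM optimizer gives the flavor (the name, tool list, and checklist here are illustrative):

```markdown
---
name: django-orm-optimizer
description: Senior Django reviewer for ORM-heavy changes. Flags N+1 queries
  and inefficient querysets.
tools: Read, Grep, Glob
---

You are a senior Django engineer reviewing ORM code.

- Flag N+1 patterns; suggest select_related / prefetch_related where they apply.
- Check that querysets are filtered and sliced before evaluation.
- Respect our formatting convention: the second attribute of a tag or call
  goes directly below the first.
```

Because each subagent runs in its own context, several of them can review the same diff in parallel without polluting each other's reasoning.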

Critical Review of My Own Work

Before I submit anything, I ask Claude Code to critically review my task definitions and implementations. I want it to find bottlenecks, highlight weak spots, and suggest improvements—not just to the code, but to how I've scoped the work itself.
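My actual prompt varies by task, but a self-review request typically reads something like this (illustrative wording):

```text
Act as a skeptical senior reviewer. Here is the task definition and my
implementation. Before commenting on the code, challenge the scope: is
anything over- or under-specified? Then list bottlenecks, weak spots,
and concrete improvements, ordered by impact.
```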

The result? Claude Code handles a significant portion of the heavy lifting. It saves me hours every week and lets me focus on the work that actually requires human judgment.

The Bad: The Dependency Problem

Now here's what troubles me.

What happens if Anthropic stops providing their services tomorrow? What if some geopolitical decision in the US suddenly cuts off European access to these tools? It's not as far-fetched as it sounds—we've seen how quickly technology access can become politicized.

Many of us have already woven AI deeply into our workflows. We rely on it to save time, make better decisions, stay informed, and maintain control over increasingly complex systems. And therein lies the curse: we don't know what tomorrow brings.

We've built our productivity on infrastructure we don't control. We've optimized our processes around tools that could vanish. We've become faster, yes—but also more fragile.

I don't have a neat conclusion here. I'm not suggesting we abandon these tools; the benefits are too substantial. But I think we need to be honest about the trade-off we're making. Every hour saved is also a thread of dependency woven tighter.

Michael Burry might be wrong about AI stocks. But his instinct—to look for the hidden risk that everyone else is ignoring—feels uncomfortably relevant right now.