
AI Is an Amplifier, Not a Shortcut

AI amplifies your systems, for better or worse.

There’s a version of the AI story that engineering leaders want to hear. It goes like this: adopt AI coding tools, watch output multiply, ship faster, do more with less. Clean. Simple. Boardroom-ready.

The data tells a different story. Not a worse one. Just a more honest one.

We recently analyzed 2,172 developer-weeks of real coding activity across teams using GitHub Copilot, Cursor, and Claude Code. The headline numbers are striking: power users show 4-14x higher activity than non-users. More commits. More PRs. More test code. If you stopped there, you’d think AI had fundamentally changed what’s possible.

But here’s the thing. If AI tools actually delivered 10-14x productivity gains, every company on the planet would have already cut their engineering teams in half. That hasn’t happened. And it’s worth asking why.

The gap between activity and impact

Much of the variation in output we observed exists regardless of AI usage. The developers who generate the most with AI were already high-output before AI. They tend to be more senior. They work on different types of problems. They operate in teams with stronger review practices and clearer ownership.

When you control for those factors and compare developers against their own historical performance, the real AI-attributable productivity lift is closer to 25% year-over-year. That’s a meaningful number. It’s also a fundamentally different story than 10x.
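The within-developer comparison described above can be sketched in a few lines. This is an illustrative calculation, not the study's actual methodology: `ai_lift` is a hypothetical helper that compares a developer's output after AI adoption to their own pre-AI baseline, which is what separates a genuine lift from pre-existing differences between developers.

```python
def ai_lift(pre_weekly: list[float], post_weekly: list[float]) -> float:
    """Productivity lift vs. the developer's OWN baseline, not vs. peers.

    pre_weekly / post_weekly: per-week output (e.g. merged PRs) before
    and after AI adoption. Returns the fractional change.
    """
    pre = sum(pre_weekly) / len(pre_weekly)
    post = sum(post_weekly) / len(post_weekly)
    return post / pre - 1.0

# A developer averaging 8 PRs/week before and 10 after shows a 25% lift,
# even if a peer's raw output is 10x higher for unrelated reasons.
lift = ai_lift([8, 8, 8], [10, 10, 10])  # 0.25
```

Comparing each developer against their own history controls for seniority, problem type, and team practices by construction, which is why the number comes out far below the raw cross-developer multiples.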

This distinction matters because it changes what you should do about it. If AI creates 10x engineers, the play is simple: buy licenses and get out of the way. If AI amplifies what’s already there by 25%, the play is completely different. You need to invest in the things that make that amplification productive: review practices, testing culture, architecture decisions, and the ability to measure what’s actually happening.

The number nobody wants to talk about

The other finding from this research that should be on every engineering leader’s radar: code churn increases up to 9x among heavy AI users.

More code written. More code rewritten. That’s the tradeoff.

This isn’t inherently bad. Rapid iteration can be healthy. But it becomes a problem when you can’t see it. Most teams don’t track churn as a first-class metric. They see the output going up and assume everything is working. Meanwhile, rework is accumulating, review burden is growing, and the long-term maintainability of the codebase is quietly degrading.

If you’re not measuring churn alongside throughput, you’re seeing half the picture.
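Tracking churn as a first-class metric can be as simple as one ratio. A minimal sketch, assuming you have already aggregated per-week line counts (for example from `git log --numstat`); the `WeeklyStats` shape and the rewrite window are illustrative assumptions, not GitKraken's definition:

```python
from dataclasses import dataclass

@dataclass
class WeeklyStats:
    """Per-developer weekly totals, e.g. aggregated from git history."""
    lines_added: int
    lines_rewritten: int  # lines added, then changed again within the window

def churn_rate(weeks: list[WeeklyStats]) -> float:
    """Share of new lines rewritten shortly after landing.

    Rising churn alongside rising throughput is the signal that raw
    commit and PR counts hide: more output, but also more rework.
    """
    added = sum(w.lines_added for w in weeks)
    rewritten = sum(w.lines_rewritten for w in weeks)
    return rewritten / added if added else 0.0

# 450 of 1,500 new lines rewritten within the window → 30% churn.
rate = churn_rate([WeeklyStats(1000, 300), WeeklyStats(500, 150)])
```

Watching this ratio trend per team, rather than its absolute value, is usually more informative, since healthy iteration styles differ.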

What AI actually amplifies

The pattern in the data is consistent: AI magnifies whatever is already there.

Strong teams with good practices get faster. They use AI to eliminate boilerplate, accelerate test coverage, and reduce the friction of starting new work. The 25% lift compounds on top of an already healthy system.

Teams with fragmented tooling, unclear ownership, and weak review practices also get faster. They just get faster at generating technical debt. More code, more churn, more surface area that nobody fully understands. AI doesn’t fix broken processes. It scales them.

This is why treating AI adoption as a tooling decision misses the point. It’s a systems decision. The outcome depends less on which model you pick and more on the environment you allow that model to operate in.

What this means for how you measure

If AI is an amplifier, then the metrics you use to evaluate engineering performance need to account for amplification effects, not just activity.

That means measuring quality alongside speed. Tracking churn and rework as signals, not just throughput. Connecting engineering output to delivery outcomes like lead time, deployment frequency, and change failure rate, not just counting commits and PRs.

It also means asking your developers what’s actually happening. The data shows what changed. Developer sentiment tells you why. You need both.

This is the measurement framework we’ve been building at GitKraken. Insights combines DORA metrics, code quality signals, AI impact data, and now developer surveys into a single view. Not because any one of those dimensions tells the full story, but because together they get you closer to answering the question that actually matters: is your team shipping better software because of AI, or just more software?

The bottom line

AI is real. The productivity gains are real. But they’re gains on top of your existing system, not a replacement for it. The teams that will get the most out of AI in the next two years are the ones that invest in the fundamentals that make amplification productive, and build the measurement capability to tell the difference between motion and progress.

The challenge now isn’t adopting AI. It’s understanding its impact so you can grow it.

The full research, “The AI Multiplier Effect,” is available at gitkraken.com/reports/ai-multiplier-effect.
