AI has fundamentally changed how software gets built – 85% of software engineers now use AI coding tools at work. Yet 60% of engineering leaders cite lack of clear metrics as their biggest AI challenge.
The confusion is understandable. Developers are using everything from GitHub Copilot to Cursor to Claude, often without organizational oversight. And when examining organization-level metrics such as DORA, some analyses have found no significant correlation between AI adoption and improved outcomes (the so-called “AI Productivity Paradox”).
AI is here to stay, but how do you measure what actually matters in this new paradigm? In this session, we’ll explore complementary approaches to measuring and improving developer productivity in the age of AI:
Diagnostic Deep Dives – Understanding the “Why” Behind the Numbers: Numbers tell you what’s happening, but not why. We’ll explore how to identify where AI helps versus where it creates bottlenecks, and what distinguishes high-performing AI adopters from struggling teams.
Internal Benchmarking – Learning from Your Own Success Stories: How teams use AI generally matters more than which tools they use. Discover how to identify high-performing practices worth scaling, and how to make data-informed decisions about standardization versus team autonomy.
What You’ll Learn:
Who should attend: Engineering leaders who are…