Himanshu: Let’s start with introductions. Nathen, can you tell us about your background?
Nathen Harvey: I lead the DORA (DevOps Research and Assessment) team at Google Cloud. I’ve worked in startups and big companies, across roles like engineering, operations, finance, and even marketing. At DORA, we focus on helping tech teams improve through research-backed insights into software delivery and performance. Our aim is to help teams deliver more value—to their customers and their businesses.
Himanshu: Rishi, can you share your journey and how DevDynamics started?
Rishi: I’ve spent 18 years managing engineering teams at places like Walmart and Disney. A big challenge I faced was explaining to leadership why my teams were busy but not delivering visible results. The disconnect between workload and outcomes was always a tough conversation. That’s what inspired me to build DevDynamics—a tool that gives data-driven insights into how engineering teams work, so they can focus on the right things and improve delivery.
Himanshu: Nathen, what’s the philosophy behind DORA and its metrics?
Nathen: DORA is more than just metrics—it’s a research program. We’re known for our four key metrics (deployment frequency, lead time for changes, change failure rate, and time to restore service), but our scope is broader. We also study how technical practices, culture, and processes influence outcomes like team well-being and delivery performance. For example, this year’s report added metrics like rework rate and highlighted areas like developer experience and cognitive load.
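For readers who want to see those four key metrics concretely, here is a minimal Python sketch; the record shapes, field names, and three-day window are illustrative assumptions, not DORA’s official tooling:

```python
from datetime import datetime
from statistics import median

# Hypothetical records. Each deployment: (commit_time, deploy_time, failed);
# each incident: (start_time, restored_time). All values are made up.
deployments = [
    (datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 14, 0), False),
    (datetime(2024, 1, 2, 10, 0), datetime(2024, 1, 2, 11, 0), True),
    (datetime(2024, 1, 3, 8, 0), datetime(2024, 1, 3, 9, 0), False),
]
incidents = [(datetime(2024, 1, 2, 11, 0), datetime(2024, 1, 2, 13, 0))]
observation_days = 3  # assumed measurement window

# Deployment frequency: deployments per day over the window.
deployment_frequency = len(deployments) / observation_days

# Lead time for changes: median time from commit to running in production.
lead_time = median(deploy - commit for commit, deploy, _ in deployments)

# Change failure rate: share of deployments that degraded service.
change_failure_rate = sum(f for _, _, f in deployments) / len(deployments)

# Time to restore service: median time from incident start to recovery.
time_to_restore = median(restored - start for start, restored in incidents)

print(f"deploys/day: {deployment_frequency:.2f}, lead time: {lead_time}, "
      f"failure rate: {change_failure_rate:.0%}, restore time: {time_to_restore}")
```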
Himanshu: Rishi, measuring performance can be tricky. What challenges have you faced?
Rishi: Metrics like deployment frequency are easy to measure, but change failure rate is tougher. Teams don’t always have automated ways to track failures. For instance, a quick follow-up deployment might mean something broke, but confirming that needs context. Ultimately, every team has to figure out what works for them.
Himanshu: Nathen, any tips on measuring change failure rate effectively?
Nathen: One way is surveys—ask teams how often deployments lead to rollbacks or hotfixes. Another method is tracking deployment patterns. If a second deployment happens soon after the first, it might be fixing something that broke. Some teams even use APIs where engineers can mark deployments as “failed.” It’s about finding what fits your team’s workflow.
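To make that follow-up-deployment heuristic concrete, here is a small sketch; the one-hour window and the data shape are assumptions, and a flagged deployment is only a candidate failure until someone with context confirms it:

```python
from datetime import datetime, timedelta

# Hypothetical, time-ordered deployment timestamps; in practice these would
# come from your CI/CD system.
deploy_times = [
    datetime(2024, 3, 1, 10, 0),
    datetime(2024, 3, 1, 10, 40),  # 40 minutes after the previous deploy
    datetime(2024, 3, 2, 15, 0),
]

WINDOW = timedelta(hours=1)  # assumed threshold for "soon after the first"

# Pair each deployment with its predecessor and flag close follow-ups as
# *candidate* fixes, not confirmed failures.
candidate_fixes = [
    (prev, curr)
    for prev, curr in zip(deploy_times, deploy_times[1:])
    if curr - prev <= WINDOW
]

for prev, curr in candidate_fixes:
    print(f"{curr} closely followed {prev}; review as a possible hotfix or rollback.")
```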
Himanshu: How do you see AI changing documentation and engineering workflows?
Nathen: AI is helping a lot, especially with writing and summarizing information. It’s great for generating initial drafts of documentation, which engineers can then refine. Our research shows that as teams rely more on AI, documentation quality improves. That’s likely because AI makes existing documentation easier to use while also improving what gets written.
Himanshu: Rishi, you’ve worked with AI tools. How do you see their impact?
Rishi: AI tools are amazing assistants but not replacements for engineers. They’re great for tasks like code generation, but debugging AI-generated code can take longer. While AI helps individual throughput, its overall impact on delivery isn’t straightforward yet.
Himanshu: Nathen, your research shows AI reliance can lower delivery performance. Why is that?
Nathen: One reason could be that AI helps engineers write more code faster, leading to larger changes going through pipelines. Larger changes are harder to test and deploy, which can hurt performance. Another reason is that AI lacks the specific context of your business or team. Improving how AI incorporates this context will be key moving forward.
Himanshu: Does DORA work for startups, or is it just for big companies?
Nathen: DORA’s core idea, continuous improvement, works for any team, big or small. For startups, it’s less about dashboards and more about conversations. Ask, “Is our delivery getting better?” As you scale, tools and metrics can add structure. But at any size, the goal is the same: keep improving.
Himanshu: Can you share an example of DORA’s impact?
Nathen: Uber’s a great example. By using DORA, they not only improved deployment speed but also reduced their carbon footprint by optimizing their architecture. Vodafone’s ML team is another—they cut model deployment time from six months to two weeks, enabling faster experiments and innovation.
Himanshu: Any advice for teams starting with DORA?
Nathen: Focus on iterative improvement. Don’t get caught up in perfect metrics. Start conversations about trends and outcomes. It’s not about being an elite performer but becoming an elite improver. And if you’re curious, join the DORA community to learn and share.
This episode covered the evolving challenges in engineering management, from metrics to AI. The key takeaway? Continuous improvement is everything. Thanks to Nathen and Rishi for sharing their insights. Let us know your thoughts, and we’ll see you in the next episode!