
Implementing DORA Metrics: A Developer's Perspective

An overview of Google's 'DORA' DevOps metrics, plus real-world insights gained from their adoption at Jobber.

This entry is part 3 of 7 in the series DevOps Metrics for High Performance

Through six years of research, Google’s DevOps Research and Assessment (DORA) team has identified four key metrics that indicate the performance of a software development team: deployment frequency, lead time for changes, change failure rate, and time to restore service.

The metrics not only give insight into DevOps performance; the research indicates they are also predictive of improved business outcomes, including profitability, productivity, and customer satisfaction.

The featured video from Dina Graves Portman and Dave Stanke, Developer Relations Engineers at Google Cloud, covers Definitions and DORA Background, Four Keys Implementation, Infrastructure Review and Demo, and Using the Four Keys Dashboard.

Google Cloud Platform has created an open source project, Four Keys, to help automate the generation and collection of these metrics.
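
To make those definitions concrete, here is a minimal Python sketch deriving two of the four keys, deployment frequency and lead time for changes, from hand-rolled deploy and commit records. The record shapes and function names are illustrative assumptions, not the Four Keys project's actual schema; the project itself ingests delivery events and computes the metrics with SQL queries.

```python
from datetime import datetime

# Illustrative event records (assumed shapes, not Four Keys' real schema).
deployments = [
    {"sha": "a1b2c3", "deployed_at": datetime(2024, 5, 1, 10, 0)},
    {"sha": "d4e5f6", "deployed_at": datetime(2024, 5, 2, 16, 30)},
]
commit_times = {  # commit timestamps keyed by SHA
    "a1b2c3": datetime(2024, 4, 30, 9, 0),
    "d4e5f6": datetime(2024, 5, 2, 11, 15),
}

def deployment_frequency(deploys, days):
    """Deployment frequency: average production deploys per day."""
    return len(deploys) / days

def median_lead_time(deploys, commits):
    """Lead time for changes: median time from commit to production deploy."""
    lead_times = sorted(d["deployed_at"] - commits[d["sha"]] for d in deploys)
    return lead_times[len(lead_times) // 2]  # middle element as a simple median

print(deployment_frequency(deployments, days=2))    # 1.0 (deploys per day)
print(median_lead_time(deployments, commit_times))  # 1 day, 1:00:00
```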

DORA at Jobber

Writing for InfoQ, Ian Phillipchuck, Engineering Manager at Jobber, shares his practical experiences of implementing DORA metrics with their development team.

Ian’s principal observation is one you might expect: metrics alone aren’t sufficient; you also need to address the human and organizational dimensions. For example, speaking with developers surfaced issues like too many meetings interrupting their work and thus reducing productivity.

The metrics are an essential starting point, providing a frame of reference that guided how the team analyzed key processes and identified required improvements, such as:

  • Investing in build-on-demand CI/CD pipelines not only for their production environments but also for their developer environments, drastically improving their lead time for changes (LTC) by getting test builds out to internal stakeholders and testers minutes after an engineer has proposed a fix.
  • Streamlining their PR review process by cutting the steps required to push out hotfixes, new work, and major releases, significantly improving their deployment frequency (DF).
  • Rolling out on-demand Bitrise mobile builds for all new PRs, meaning it now takes only 30 minutes to deliver builds containing a change or revision to all interested parties.
  • Improving the process for handling incidents, and thus reducing time to resolution, by adding Allma to their Slack communications. (A rough sketch of measuring these incident-derived metrics follows this list.)
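
The other two keys fall out of incident data. Below is a minimal sketch of computing change failure rate (the share of deployments that triggered an incident) and time to restore service (median time from incident opened to resolved). The record shapes are assumptions for illustration; the article does not describe how Jobber actually derives these figures.

```python
from datetime import datetime
from statistics import median

# Hypothetical incident log; any record that links an incident to the deploy
# that caused it, with opened/resolved timestamps, would work.
incidents = [
    {"deploy_sha": "d4e5f6",
     "opened_at": datetime(2024, 5, 2, 17, 0),
     "resolved_at": datetime(2024, 5, 2, 17, 45)},
]
total_deploys = 2

# Change failure rate: deployments that caused an incident / all deployments.
failed_deploys = {i["deploy_sha"] for i in incidents}
change_failure_rate = len(failed_deploys) / total_deploys

# Time to restore service: median duration from opened to resolved.
time_to_restore = median(i["resolved_at"] - i["opened_at"] for i in incidents)

print(f"Change failure rate: {change_failure_rate:.0%}")  # 50%
print(f"Time to restore:     {time_to_restore}")          # 0:45:00
```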

They also measure other metrics as part of a general approach to continuously improving performance: not only the number of defects resolved, but how many are closed per week, how long a PR stays open before it’s closed, the number of PRs reviewed, and the number of comments on those PRs.
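
For PR-level measurements like time-to-close, the raw data is readily available from the hosting provider's API. The sketch below uses GitHub's REST API purely as an example; the repository name and token are placeholders, and the article does not say what tooling Jobber uses for this.

```python
from datetime import datetime
from statistics import median

import requests

REPO = "your-org/your-repo"                       # placeholder
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}  # placeholder

# Fetch the most recently closed pull requests.
resp = requests.get(
    f"https://api.github.com/repos/{REPO}/pulls",
    params={"state": "closed", "per_page": 100},
    headers=HEADERS,
)
resp.raise_for_status()

def parse(ts):
    """GitHub timestamps look like 2024-05-02T16:30:00Z."""
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

# How long each PR stayed open before it was closed.
open_durations = [
    parse(pr["closed_at"]) - parse(pr["created_at"])
    for pr in resp.json()
    if pr["closed_at"]  # skip PRs returned without a close timestamp
]

print(f"PRs sampled: {len(open_durations)}")
print(f"Median time to close: {median(open_durations)}")
```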

They also measure how often teams and departments update their internal documentation and wiki resources, and how often they reinvest in other developers or in documentation each week.

In short, they have adopted a culture of “if you don’t measure it, you can’t improve it.” This ensures they track their progress in a measurable way, helping their teams stay agile as they make changes and be data-driven in their execution.
