Transformation Practices

DevOps Flow: Value Stream Mapping and Integrated Toolchains

High-performance DevOps Flow is achieved by identifying and eliminating system constraints through toolchain integrations and process improvements.

As Wikipedia describes, Value Stream Mapping is a lean-management method for analyzing the current state and designing a future state for the series of events that take a product or service from the beginning of the specific process until it reaches the customer.

A value stream map is a visual tool that displays all critical steps in a specific process and easily quantifies the time and volume taken at each stage. Value stream maps show the flow of both materials and information as they progress through the process.

DevOps Value Streams

Value Stream Mapping is used to chart and understand the critical steps in a specific process, quantifying the time and volume taken at each stage and identifying key constraints such as slow handoff interactions.

The DevOps Institute explains why this is necessary, stating that it is essential to understand the flow of work across the value stream, from idea to software deployment, and to set up feedback loops until the customer or user realizes the business value.

As Dominica DeGrandis explains in this blog, the disconnects between systems and departments are typically the main source of errors, lost time and delays. These handoffs generate considerable waste of time and effort, slowing the DevOps Flow.

Writing for The New Stack, Jeff Keyes of Plutora argues that the future of DevOps lies in Value Stream Management, highlighting that DevOps often mistakenly focuses purely on tools and automation, and that VSM is key to unlocking high performance.

The Carnegie Mellon Software Engineering Institute blog Taking DevSecOps to the Next Level with Value Stream Mapping explores in detail the relationship between DevOps and the Lean principles popularized by Toyota, describing VSM as a Lean technique for visualizing, characterizing, and continuously improving the flow of value across this set of end-to-end activities by eliminating barriers, whether procedural, cultural, technological, or organizational.

Gene Kim of IT Revolution walks through Where to Start with DevOps Value Streams, suggesting a framework that considers greenfield vs brownfield scenarios, and how to navigate the types of organizational challenges that may arise as these lean principles are applied.

On the VMware blog, Mandy Storbakken provides an example of value stream mapping IT workflows, showing how this can illustrate the means for defining DevOps Flow metrics, via a server provisioning process:

  • 50% Complete & Accurate – Half the time, this stage cannot be completed without gathering more information or correcting something.
  • 1-hour Processing Time – This stage could be completed in one hour, if there were no delays.
  • 7 days Lead Time – This stage typically takes seven days to complete, which could be time in the queue, time awaiting additional information, or time waiting for the change review board.

She adds:

“The metrics across all stages, from ops intake to ops hand-off, show that while it typically takes around 15 days to complete the stages, the actual processing of work only accounts for four hours over that time (15 days LT, four hours PT). An Activity Ratio (AR) – which is the total process time divided by the total lead time, times 100 – can also be derived from these metrics.”

From this she concludes that VSM typically uses four core metrics to analyze the flow of work through each stage in a value stream, and that these metrics should be derived from actual timings for each stage (not based on SLAs):

  • Lead Time (LT) – Time to complete the stage, from intake to hand-off.
  • Process Time (PT) – Time it could take to complete the stage, if all information were complete and the process were uninterrupted.
  • Complete & Accurate (%C/A) – How often the stage can be completed without needing additional information or corrections.
  • Value Added (VA) – A ‘yes’ or ‘no’ indication of whether the stage adds direct customer value.
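
As an illustration of how these metrics combine, the following sketch models each stage with its Lead Time, Process Time, %C/A and Value Added flag, then derives the Activity Ratio exactly as quoted above: total process time divided by total lead time, times 100. The stage names and timings are hypothetical, loosely echoing the server provisioning figures (roughly 15 days of lead time against 4 hours of processing), and the calculation assumes lead time is counted in 24-hour calendar days, which yields an Activity Ratio of about 1%.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    """One stage in a value stream map (names and timings here are illustrative)."""
    name: str
    lead_time_hours: float        # LT: intake to hand-off, from actual timings (not SLAs)
    process_time_hours: float     # PT: touch time if uninterrupted and information complete
    pct_complete_accurate: float  # %C/A: how often the stage needs no rework or clarification
    value_added: bool             # VA: does the stage add direct customer value?

def activity_ratio(stages: list[Stage]) -> float:
    """Activity Ratio = total process time / total lead time * 100."""
    total_pt = sum(s.process_time_hours for s in stages)
    total_lt = sum(s.lead_time_hours for s in stages)
    return total_pt / total_lt * 100

# Hypothetical server-provisioning value stream: ~15 days lead time, ~4 hours processing.
stream = [
    Stage("Intake request",     lead_time_hours=7 * 24, process_time_hours=1.0, pct_complete_accurate=50, value_added=False),
    Stage("Change approval",    lead_time_hours=5 * 24, process_time_hours=0.5, pct_complete_accurate=80, value_added=False),
    Stage("Provision & config", lead_time_hours=3 * 24, process_time_hours=2.5, pct_complete_accurate=90, value_added=True),
]

print(f"Activity Ratio: {activity_ratio(stream):.1f}%")   # ~1.1% with 24-hour days
print("Lowest %C/A (likely rework hotspot):",
      min(stream, key=lambda s: s.pct_complete_accurate).name)
```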

Software Factories

They are called ‘Software Factories’ because the underlying management science is derived directly from the world of manufacturing, utilizing practices such as Lean, Six Sigma and, in particular, the Theory of Constraints (TOC), a process improvement methodology that emphasizes the importance of identifying the “system constraint”, or bottleneck.

Developed by Dr. Eliyahu Goldratt, TOC was conceived from the recognition that manufacturing production lines were often guilty of local optimizations (improving individual stages in isolation) and yet were still frustrated that their overall capacity wasn’t improving. Goldratt therefore devised a management practice that examines the total system to continually identify and remove constraints.

In software development terms, a local optimization could be one team adopting an automated process that speeds their individual work, yet, due to inefficiencies elsewhere in the overall life-cycle, the rate of deployments is not improved at all.
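
A minimal sketch of that effect, using made-up per-stage capacities: end-to-end throughput is governed by the slowest stage (the system constraint), so doubling the capacity of any other stage is a local optimization that leaves the deployment rate unchanged.

```python
# Hypothetical per-stage capacities (work items per week) for a delivery pipeline.
capacities = {
    "develop": 30,
    "test": 12,      # the system constraint: the lowest capacity governs the whole flow
    "deploy": 25,
}

def system_throughput(caps: dict[str, int]) -> int:
    """Theory of Constraints: the chain moves no faster than its weakest link."""
    return min(caps.values())

print(system_throughput(capacities))   # 12 items/week

# Local optimization: automating 'develop' doubles its capacity...
capacities["develop"] *= 2
print(system_throughput(capacities))   # ...still 12 items/week

# Only elevating the constraint itself improves overall flow.
capacities["test"] = 20
print(system_throughput(capacities))   # 20 items/week
```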

To understand how this science can be applied to software development, a highly recommended paper that explores it in detail is Productivity in Software Metrics by Derick Bailey, which describes the application of TOC to software development, such as treating ‘User Stories’ as the unit of work, and includes a framework for performance metrics based upon its principles:

  • Inventory (V) and Quantity: Unit of Production (Q) – How does the software team quantify what counts as ‘work in production’?
  • Optimising Lead Time (LT) vs. Production (PR) – Using Workload Management to schedule the most optimum flow of work.
  • Investment (I), Operating Expense (OE) and Throughput (T) – Maximising Net Profit (NP) and Return on Investment (ROI) and calculating Average Cost Per Function (ACPF).
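
The relationships behind that last bullet follow standard TOC throughput accounting, sketched below. Net Profit and ROI are the usual definitions (NP = T − OE, ROI = NP / I); the Average Cost Per Function line is an assumed reading (operating expense spread over the units of production delivered in the period), and all figures are purely illustrative.

```python
def net_profit(throughput: float, operating_expense: float) -> float:
    """TOC throughput accounting: NP = T - OE."""
    return throughput - operating_expense

def return_on_investment(throughput: float, operating_expense: float, investment: float) -> float:
    """ROI = (T - OE) / I."""
    return net_profit(throughput, operating_expense) / investment

def average_cost_per_function(operating_expense: float, units_delivered: int) -> float:
    """Assumed definition: OE spread across the units of production (e.g. user stories) delivered."""
    return operating_expense / units_delivered

# Illustrative quarter for a software team (all figures hypothetical):
T, OE, I, Q = 500_000, 350_000, 1_200_000, 60   # throughput, operating expense, investment, stories delivered
print(net_profit(T, OE))                         # 150000
print(round(return_on_investment(T, OE, I), 3))  # 0.125
print(round(average_cost_per_function(OE, Q)))   # 5833
```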

In a 2i blog, Principal Consultant Greg McKenna maps this science to software development team practices to optimize the DevOps Flow.

He explains how key TOC and Lean concepts can be applied to speed the deployment of new software releases, notably identifying and removing constraints, reducing batch sizes and eliminating waste. These are all steps taken within manufacturing to increase production throughput and are improvements that software teams can adopt to achieve equivalent benefits.

Integrated Toolchain

The DevOps Toolchain refers to the combination of tools and technologies used to progress code through the full life-cycle from development to production.

How effectively this interlinked chain is integrated is key to the velocity of throughput, as manual hand-overs between steps can introduce significant delays and potential for error.

In other words, it’s essential that testing tools don’t form yet another isolated silo, existing as standalone tools, but instead operate as component parts of an overall high-performance value stream.
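
As a purely illustrative sketch of what replacing a manual hand-over can look like (the event name, URL and payload fields below are hypothetical, not any real tool’s API), an integrated toolchain reacts to one stage completing by programmatically triggering the next, and records the timestamps from which value stream metrics can later be derived:

```python
import json
import time
import urllib.request

# Hypothetical endpoint of a test-management tool; a real integration would use
# that tool's documented webhook or REST API instead.
TEST_QUEUE_URL = "https://test-tool.example.com/api/test-requests"

def on_build_succeeded(build_id: str, commit: str) -> None:
    """Automated hand-off: when CI reports a successful build, open a test request
    immediately instead of waiting for someone to raise a ticket by hand."""
    payload = {
        "build_id": build_id,
        "commit": commit,
        "requested_at": time.time(),   # actual timing, feeding lead-time metrics later
    }
    req = urllib.request.Request(
        TEST_QUEUE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:   # hypothetical service; fails if unreachable
        print("Test request created:", resp.status)
```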

As Eric Billingsley writes on DevOps.com, the challenges developers face include poor interoperability between tools and manual handoffs that create DevOps automation silos, multiple test environments used throughout the process, the need for multi-cloud and hybrid deployments adding considerable unnecessary complexity, and increased policy and security requirements that simply cannot scale while they remain manual.

As one example among many, Applause announced a bi-directional integration so that application owners can use the Jira project management application to create a ticket requesting that a specific piece of code be tested, or have it sent to a tester who is a member of uTest, an online community of professional testers that Applause oversees.

This highlights the principal challenge in engineering high-performance DevOps Flow: the complexity of the enterprise IT landscape and the associated bureaucracy that manages it. One or many parts can act as system constraints.

DevOps Flow

Thus we can see that defining a DevOps Flow framework to achieve high-performance software development is a process of value stream mapping the end-to-end cycle that makes up the total throughput capacity, and identifying and removing the individual constraints that restrict that capacity.

Metrics and process maps will reveal where these constraints can be found, and a combination of technology automation and integration with organizational change can be applied to eliminate those bottlenecks.

