pmmagazine.net

Your monthly dose of insightful Project Management articles

Optimizing Value Delivery Through DevOps Transformation

In today's heavily digitally disrupted market, where the race to deliver more business value in less time, with ever-improving quality and richness, has become the new norm, traditionally structured pipelines have little room left to transform into more adaptive and optimized models before their window of opportunity closes for good.

Enterprises suffer from Organizational Debt: the aggregation of the technical, talent, process, data, architecture, security, and even social debt that has accumulated since their inception.

To navigate the turbulent market conditions, pulled and pushed by the forces of the pandemic, economic rivalry among the large players, technological disruption, and an accelerating toll of natural disasters as we step further into this decade, all organizations need to transform their value delivery pipelines to the next level of efficiency and performance.

DevOps came into existence to end the conflict between Dev and Ops teams and unite them into highly productive, self-sufficient engines sitting at the core of Value Streams. The traditional Dev and Ops structure had created two extreme poles within technology teams.

Dev wants to respond quickly to the market with innovative ideas, released rapidly to customers to engage them, generate revenue, and collect feedback to be incorporated into the next release.

Ops, on the other hand, was tasked with establishing and maintaining a stable, secure, and high-performing environment, and had no appetite for last-minute changes or experimental loads on the servers that could cause instability or breakdowns.

DevOps at enterprise level is composed of multiple dimensions, with the most prominent ones being:

  • Technology
  • People and culture
  • Process
  • Technology Ecosystem (Tech Vendors and own Deployment Structure).

Key Performance Indicators (KPIs) for DevOps are mainly focused on:

  • Development (cycle time: the time it takes for the product to complete coding and testing and exit the pipeline)
  • Deployment (cycle time for the product to be released from the pipeline into the outside world, plus the failure rate of those releases)
  • Recovery (the time it takes to recover from an issue in the production environment, e.g., Mean-Time-To-Recover [MTTR])
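To make these KPIs concrete, here is a minimal Python sketch of how such measures could be computed from deployment records; the record fields and figures are hypothetical and only meant to illustrate the calculations.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records; field names and values are illustrative only.
deployments = [
    {"deployed_at": datetime(2023, 5, 1, 10), "failed": False, "recovered_at": None},
    {"deployed_at": datetime(2023, 5, 3, 15), "failed": True,
     "recovered_at": datetime(2023, 5, 3, 16, 30)},
    {"deployed_at": datetime(2023, 5, 8, 9), "failed": False, "recovered_at": None},
]

window_days = 30  # reporting window for the scorecard

# Deployment frequency: releases per day over the reporting window.
deployment_frequency = len(deployments) / window_days

# Change failure rate: share of deployments that caused a production issue.
failures = [d for d in deployments if d["failed"]]
change_failure_rate = len(failures) / len(deployments)

# Mean-Time-To-Recover (MTTR): average time from failure to recovery.
recovery_times = [d["recovered_at"] - d["deployed_at"] for d in failures]
mttr = sum(recovery_times, timedelta()) / len(recovery_times) if recovery_times else timedelta()

print(f"Deployments/day: {deployment_frequency:.2f}")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"MTTR: {mttr}")
```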

DevOps is based on minimizing manual work, maximizing alignment with the Value Streams, and providing high Reliability and Consistency, which empowers capabilities such as:

  • Configuration and Environment Management:
    • Infrastructure-As-Code
    • Platform-As-Code
  • Development and Testing (Smart Code Development and Automated Testing)
  • Release Management
  • Operations Management
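As a small illustration of the "as-code" idea behind Infrastructure-As-Code and Platform-As-Code, the hedged sketch below keeps an environment definition as plain, version-controllable code and renders it into a machine-readable document that a provisioning step could consume; the class and field names are assumptions, not any specific tool's API.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class EnvironmentSpec:
    """Human-readable environment definition, kept in version control."""
    name: str
    region: str
    instance_count: int
    instance_size: str

def render_config(spec: EnvironmentSpec) -> str:
    # Render the spec to a machine-readable document that a provisioning
    # tool (or pipeline step) could consume to recreate the environment.
    return json.dumps(asdict(spec), indent=2)

staging = EnvironmentSpec(name="staging", region="eu-west-1",
                          instance_count=2, instance_size="small")
print(render_config(staging))
```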

Since no organization enjoys unlimited budget and time to complete its transformation at its convenience, success depends on choosing, for each incremental step, the pioneering areas believed to have the greatest positive impact and the highest probability of success.

Application Portfolio Analysis

One good approach to identifying the DevOps teams (and areas) for the first or next batch of transformation is to run the candidate areas through an Application Portfolio Analysis.

The following sequence can help the organization decide on the Minimum Viable Change Cluster (MVCC), which is the group of areas that should logically go through the transformation together to make a meaningful and significant impact on the organization (or their respective group):

  • Starting from the Current Investment Profile of each Product/Service, we can graph the Expected ROI against the Cost of Transformation to look for High Impact / Low Cost items.
  • The second filter should be the Product/Service's Frequency of Change (deployments),
  • Then its Criticality (the value this Product/Service has, or the loss expected from losing its functionality),
  • And last, but not least, the Technology Stack used for that Product/Service (to decide which candidates are less complex and share more platforms / infrastructure, and so can move together).

When in doubt, or when there are too many candidates on the list, apply the 80/20 Rule (getting 80% of the positive impact by transforming the top 20% of candidates).
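The filtering sequence above can be captured in a small scoring sketch. The candidates, weights, and the 20% cutoff below are purely illustrative assumptions and would need to be calibrated against the organization's own portfolio data.

```python
# Hypothetical Application Portfolio Analysis scoring; all figures are illustrative.
candidates = [
    {"name": "Payments API",    "roi": 9, "cost": 3, "changes_per_month": 12, "criticality": 5, "shared_stack": True},
    {"name": "Internal Wiki",   "roi": 2, "cost": 1, "changes_per_month": 1,  "criticality": 1, "shared_stack": False},
    {"name": "Customer Portal", "roi": 8, "cost": 5, "changes_per_month": 8,  "criticality": 4, "shared_stack": True},
    {"name": "Legacy Reports",  "roi": 3, "cost": 7, "changes_per_month": 2,  "criticality": 2, "shared_stack": False},
]

def score(c):
    # High impact / low cost, weighted by how often it changes and how critical it is;
    # a shared platform makes it easier to move candidates together.
    impact_per_cost = c["roi"] / max(c["cost"], 1)
    stack_bonus = 1.2 if c["shared_stack"] else 1.0
    return impact_per_cost * c["changes_per_month"] * c["criticality"] * stack_bonus

ranked = sorted(candidates, key=score, reverse=True)

# 80/20 rule: take (roughly) the top 20% of candidates as the first MVCC.
cutoff = max(1, round(len(ranked) * 0.2))
mvcc = ranked[:cutoff]
print("First Minimum Viable Change Cluster:", [c["name"] for c in mvcc])
```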

Value Stream Mapping

It is important to note that an organization’s ability to create Business Outcomes is usually capped by the Value Delivery Pipeline with the lowest maturity level (the weakest link in the Value Delivery Ecosystem).

Value Stream Mapping (VSM) allows the transformation leadership to identify the Value Stream(s) that are creating and maintaining the candidate Products/Services, along with their respective DevOps teams. It also helps visualize the end-to-end delivery process, assess cycle times, review failure rates, and identify the tooling used in each part of the pipeline.

Using scorecards, the team can record this information and use it as a comparison and tracking tool. Scorecards also allow baselines to be created at the onset of the transformation for each candidate Product/Service, which can later be used to compare and assess the improvement gained from the transformation.
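A scorecard does not have to be elaborate. The sketch below records cycle time, failure rate, and tooling per pipeline stage as a baseline and surfaces the weakest link; the stage names and figures are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class StageRecord:
    stage: str            # e.g., "Code", "Build", "Test", "Deploy"
    cycle_time_hours: float
    failure_rate: float   # fraction of runs that fail at this stage
    tooling: str

# Baseline captured at the onset of the transformation (illustrative values).
baseline = [
    StageRecord("Code",   cycle_time_hours=40, failure_rate=0.00, tooling="IDE + code review"),
    StageRecord("Build",  cycle_time_hours=6,  failure_rate=0.10, tooling="manual build scripts"),
    StageRecord("Test",   cycle_time_hours=72, failure_rate=0.25, tooling="manual regression"),
    StageRecord("Deploy", cycle_time_hours=24, failure_rate=0.15, tooling="ticket-driven release"),
]

end_to_end_cycle_time = sum(s.cycle_time_hours for s in baseline)
weakest_link = max(baseline, key=lambda s: s.cycle_time_hours)
print(f"End-to-end cycle time: {end_to_end_cycle_time} h; weakest link: {weakest_link.stage}")
```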

Experiment Definition (Hypothesis)

It is important to remember that each transformational change we are planning is essentially an experiment trying to prove our Hypothesis that "certain assumed benefits can be obtained should we successfully implement that incremental change."

As they say, “Design like you are Right, but Test like you are Wrong!”
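One lightweight way to make such a Hypothesis testable is to write it down with an explicit metric, baseline, and target before the change is made. The structure below is only an illustrative suggestion, not a prescribed template; the example values are invented.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str            # the incremental change we plan to implement
    expected_benefit: str
    metric: str            # how the benefit will be measured
    baseline: float
    target: float

    def evaluate(self, measured: float) -> bool:
        """True if the measured outcome meets or beats the target."""
        return measured <= self.target if self.target < self.baseline else measured >= self.target

h = Hypothesis(
    change="Automate regression testing for the Customer Portal",
    expected_benefit="Shorter test cycle time",
    metric="Test stage cycle time (hours)",
    baseline=72.0,
    target=24.0,
)
print("Hypothesis confirmed:", h.evaluate(measured=20.0))
```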

Tooling

DevOps uses a broad range of tools to cover collaborative coding, testing, continuous integration (CI), continuous deployment (CD), data administration, container management, security, and monitoring.

While there are numerous tools in the market covering each area, there are general best practices to consider:

  • A tool’s configuration should be stored as human-readable code, enabling us to re-create that tool in our environment whenever needed. This also allows its creation to be automated from scratch, which is a recommended practice to repeat at regular intervals.
  • A tool’s licensing should preferably be Open-Source or Enterprise to avoid escalating costs as the number of users goes up.
  • A tool should allow its data to be used by other tools in the pipeline (esp. Dashboards).
  • A tool should be able to use standard-formatted data from other tools in the pipeline (for automated chain integration).
  • A tool should fit the technical capabilities of our teams (matching their skill set or being intuitive enough for them to learn rather easily).
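The practices above can be condensed into a simple screening checklist, sketched below; the criteria wording and the example tool are illustrative assumptions.

```python
# Illustrative tool-screening checklist based on the practices above.
criteria = [
    "configuration stored as human-readable code",
    "open-source or enterprise (not per-seat) licensing",
    "exposes its data to other tools (e.g., dashboards)",
    "consumes standard-format data from other tools",
    "fits the team's current skill set or is easy to learn",
]

def screen_tool(name: str, answers: list) -> None:
    met = sum(answers)
    print(f"{name}: {met}/{len(criteria)} criteria met")
    for criterion, ok in zip(criteria, answers):
        print(f"  [{'x' if ok else ' '}] {criterion}")

# Hypothetical candidate tool evaluated against the checklist.
screen_tool("ExampleCI", [True, True, True, False, True])
```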

From time to time, we will receive requests from a team to be allowed to diverge from these rules. In these cases, the requesting team should be asked to:

  • Make sure that their chosen tool follows the base organizational standards for resilience, security, and maintainability.
  • Provide an experiment description (Hypothesis over the expected benefits and how to measure them) and report back.

Based on the outcome of the experiment we can decide whether the tool can stay and even be adopted into the tool set used by the organization.

Active management of DevOps tools is a crucial capability of an organization since even the best tool can become a liability if not managed properly.

Vendor Management

The traditional “Functional Outsourcing” model, where a part of the pipeline (e.g., development or testing) was contracted out to a vendor, had major flaws that historically created numerous issues for organizations.

Many vendors had their contracts designed in a way that allowed them to choose their own tools, which were not necessarily compatible with the organization’s tooling and needed many manual interventions to integrate their outputs into the enterprise pipelines. This became even more complex when multiple vendors were contracted to cover several parts of the pipeline and needed to integrate their work using incompatible tools and standards.

The new model calls for an “End to End Partnership”, with active, live integration of the vendor’s tools (and culture) with the organization’s pipelines, following the same standards and guidelines as the customer’s own teams.

Evaluation of the vendor’s quality of work and performance should concentrate on the main factors impacting our Value Streams, which can be measured in Delivery and Operational scorecards tracking timeliness, predictability on scope, quality of output, degree of automation, and cycle time at each step.

The partnership contracts should include clear engineering standards requested by (and adhering to) the organization’s mandates. It is important to put realistic and progressive measures in place, so the vendor has less need to ask for exemptions during tight delivery timelines or fast-tracked delivery exercises. These standards should cover key areas such as Configuration Management, Code Quality, Building and Deployment, Quality Assurance, Operationalization, and Metrics and KPIs.
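A minimal sketch of such a Delivery scorecard might look like the following; the vendors, thresholds, and figures are hypothetical and only illustrate how the tracking could be automated.

```python
# Illustrative vendor Delivery scorecard; vendors and figures are hypothetical.
vendor_scorecards = {
    "Vendor A": {"on_time_delivery": 0.92, "scope_predictability": 0.85,
                 "defect_escape_rate": 0.04, "automation_coverage": 0.70,
                 "avg_cycle_time_days": 9},
    "Vendor B": {"on_time_delivery": 0.78, "scope_predictability": 0.60,
                 "defect_escape_rate": 0.11, "automation_coverage": 0.35,
                 "avg_cycle_time_days": 21},
}

for vendor, card in vendor_scorecards.items():
    # Flag vendors falling below example thresholds agreed in the partnership contract.
    issues = []
    if card["on_time_delivery"] < 0.90:
        issues.append("timeliness")
    if card["automation_coverage"] < 0.50:
        issues.append("automation")
    status = "OK" if not issues else "review: " + ", ".join(issues)
    print(f"{vendor}: {status}")
```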

Deming’s Cycle

The famous Deming Cycle is the fundamental way all work is done (unless we want to skip one step and fail in our attempt!)

  • Plan: establish the Hypothesis and put in place the success measures and systems to track progress.
  • Do: put the Hypothesis into experimentation and run our measurements on its outcome to gauge and track its validity.
  • Check: analyze the measured outcomes to understand the success level of the experiment and decide whether to continue to the next deliverable or take corrective actions and re-run the previous experiment.
  • Act: based on the outcome of the Check step, either continue with the next item on the list or repeat the experiment with new parameters and conditions.
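Expressed as a control loop, the cycle could look like the deliberately simplified sketch below; the functions, the simulated measurement, and the three-attempt limit are illustrative assumptions standing in for real telemetry and real governance.

```python
import random

def plan() -> dict:
    # Plan: define the hypothesis, its success measure, and the target.
    return {"change": "automate deployment step", "metric": "deploy cycle time (h)",
            "baseline": 24.0, "target": 8.0}

def do(hypothesis: dict) -> float:
    # Do: run the experiment and measure the outcome.
    # Stand-in for real telemetry: a randomly simulated measurement.
    return random.uniform(4.0, 28.0)

def check(hypothesis: dict, measured: float) -> bool:
    # Check: compare the measured outcome against the target.
    return measured <= hypothesis["target"]

# Act: continue to the next item if the check passes, otherwise re-run
# the experiment with adjusted parameters.
hypothesis = plan()
for attempt in range(1, 4):
    measured = do(hypothesis)
    if check(hypothesis, measured):
        print(f"Attempt {attempt}: {measured:.1f} h meets target; move to next item.")
        break
    print(f"Attempt {attempt}: {measured:.1f} h misses target; adjust and repeat.")
```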

Systems Thinking

Looking at the entire organization as one interconnected system provides the right perspective on how to improve the whole entity and not just one area. While we follow an incremental path to make the evolutionary changes planned for our transformation, it is important to measure the positive impact at both the local and organizational levels and build those improvements toward the future state we have in mind for the enterprise.

Conclusion

Our organizational DevOps transformation should focus on raising the speed of decision making (through short feedback cycles), the happiness and engagement of the teams, and the resilience, consistency, and compliance of the work that is done.

While this recommendation may seem to miss the centricity of “Business Outcome” in the equation, history has shown that, with those fundamental improvements in place, the organization will be able to deliver more accurate, solid products and services that meet the needed service availability and quality, which in turn deliver strong value to customers and produce the “Business Outcomes” the organization is trying to achieve.

Exclusive pmmagazine.net 💬

Arman Kamran

About author

Enterprise Agile Transformation Coach, CIO and Chief Data Scientist

Arman Kamran is an internationally recognized executive leader and enterprise transition coach in Scaled Agile Delivery of Customer-Centric Digital Products with over 20 years of experience in leading teams in private (Fortune 500) and public sectors in delivery of over $1 billion worth of solutions, through cultivating, coaching and training their in-house expertise on Lean/Agile/DevOps practices, leading them through their enterprise transformation, and raising the quality and predictability of their Product Delivery Pipelines.

Arman also serves as the Chief Technology Officer of Prima Recon Machine Intelligence, a global AI solutions software powerhouse with operations in US (Palo Alto, Silicon Valley), Canada (Toronto) and UK (Glasgow).
