Over the past couple of decades, the mindset of “fake it till you make it” has become the motto of start-ups and pioneering teams, and very common advice from gurus to rookies in the industry.
We can thank the 1980s movement that energized individuals to break through their boundaries and push for the impossible. It seems to have worked, at least to a good extent, and led to the motto being passed on as advice to the next generation.
It kept bringing results, until it ran into this new thing called Agile.
Since then, trying to cook the metrics and fake performance in Agile has proved to be the worst anti-pattern you could bring upon an enterprise, with epic negative impacts on the teams and the organization’s reputation.
Not to unload all the blame on the Agile practitioners (Scrum Masters, Kanban Flow Masters, and so on): in most cases the reason behind such deception is a matter of survival under suffocating pressure from upper management on the teams to show steady growth in performance.
A Scrum Master (or Agile lead of any rank) under constant pressure from senior management to show an ever-rising Velocity, or a Kanban Flow Master being pressed for constant improvement in the team’s Delivery Rate (or a continued drop in Cycle Time), can be forced into that self-damaging, team-destroying practice out of despair.
Turning that illogical expectation into a personal performance metric, bashing those who cannot dance to that tune, and rewarding those who put up the best show, soon creates a destructive anti-pattern and a downward spiral in code quality, code delivery, and value creation for the customers.
This is a direct outcome of inadequate awareness throughout the executive ranks and missing corporate-wide training on how Agile values manifest in day-to-day activities, and on how to stay on the level with Transparency in combination with Inspection and Adaptation.
Training a Good Enterprise Cook (of Agile Metrics)
As per the well-known law attributed to Charles Goodhart, a British economist formerly of the Bank of England and the London School of Economics: “When a measure becomes a target, it ceases to be a good measure.”
What it means is that, for example, if you measure someone’s performance by the number of nails made in a day, you will push them to produce a very large number of tiny nails. If you measure their performance by the weight of the nails made in a day, you will be handed a few very large and heavy nails.
This law was originally formulated as “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.”
As clear as this was to statisticians and economists, it did not have the same zing in other fields, so it was rephrased into the popular form quoted above.
This refers to the fact that once an ever-improving metrics report becomes the goal according to senior management, and even develops into the do-or-die policy of the department (or worse, the organization), the teams start finding ways to game the report and look good for as long as the milking practice can continue, before reality catches up with the organization and lands it somewhere nobody likes to be.
Take Velocity, for example, which is expected to represent the amount of work (measured in Story Points) that a team (or a group of teams in a Scaled Agile model) is able to complete and deliver during a Sprint.
To be meaningful, Velocity is calculated as the average observed over the past few Sprints.
Naturally, as teams grow stronger in their skills, automate more of their work, turn their developed functions into reusable APIs and Services, and reuse much of their repetitive work, their performance improves until it finally reaches a plateau, hovering near the team’s highest achieved number and staying rather flat at that level.
Unless something major changes the team’s ability to deliver (a change in the number of highly skilled team members, a solution that expedites the team’s delivery capacity, new work that turns out to be impossibly hard, etc.), it stays pretty much the same.
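As a rough illustration (not a prescribed tool), the rolling-average calculation described above can be sketched in a few lines of Python; the function name and the three-Sprint window are illustrative choices, not rules from any Scrum guide:

```python
def rolling_velocity(completed_points, window=3):
    """Average Story Points completed over the last `window` Sprints.

    `completed_points` lists the points actually delivered per Sprint,
    oldest first. The three-Sprint window is an illustrative default.
    """
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

# A team whose output has grown and then plateaued:
history = [18, 24, 30, 31, 29, 30]
print(rolling_velocity(history))  # prints 30.0
```

Averaging over a window, rather than reading a single Sprint, is what smooths out one-off spikes; it also means a genuinely plateaued team will show a flat number, which is normal and healthy rather than a failure.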
Now, if Velocity turns into your teams’ lifeline and they feel threatened by its number at the end of each iteration (be it a Sprint or a Program Increment), they will start overestimating the work so that it appears they are doing more.
Once the work comes out, the business will scratch their heads in bewilderment at where all the constantly growing achievement (as per the ever-rising Velocity of the team) went without materializing in the actual Product being released.
This continues until it is escalated to management, whose in-depth investigation reveals the puffed-up estimates and under-delivery of work, and most likely ignites a finger-pointing battle that can lead to the destruction of the team (or even the dissolution of the entire department).
Of course, if this is not caught in time, it can result in market losses that may lead to the ruin of the organization.
The same problem applies to Kanban teams that game the WIP (Work-In-Progress) constraints of the columns (stages) by splitting Stories into half-stories that are then completed as two separate items, raising the team’s Delivery Rate.
Further manipulation can be done by breaking stages into many sub-stages to show high Throughput values for each. It also shows fewer issues on the Cumulative Flow Diagram, as it hides the widening or shrinking of a former column by spreading it across multiple half-stage ones.
Here are some of the common cases:
- Breaking down a small Story into several Stories, each holding only one piece of the puzzle, and estimating each one as a full Story.
- Allowing Stories that do not meet the “Definition of Done” to be marked as Complete, then creating new Stories, each with its own fresh estimate, to finish the work.
- Allowing Stories that have failed QA to be marked as Complete, then creating QA Stories, each estimated separately, to finish the work.
- Taking credit for Stories that are developed by other teams for a joint release.
- Adding fake Stories that need no Dev or QA, only to close them as Completed work.
- Allocating Capacity for BAU (Business-As-Usual) work, then estimating those items and adding them to the Sprint Backlog, consuming the non-BAU capacity that is supposed to be used for other purposes.
- Manipulating a Story’s estimate in the middle of a Sprint without adjusting the remaining Sprint Backlog in a meeting with the Product Owner.
- Over-estimating Stories by giving them ball-park (yet puffed-up) numbers.
- Using partial Story Points for incomplete Stories in one Sprint, then spilling them over into the next Sprint with fresh estimates.
- Creating Stories for meetings and ceremonies and estimating them as Stories.
- Using more efficient technology (faster infrastructure, automated testing, a Cloud setup, a purchased and modified 3rd-party solution, etc.) while keeping the estimates from the inefficient, manual workdays.
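Several of the gaming patterns above leave a common fingerprint: claimed Story Points keep climbing while the number of genuinely releasable Stories stays flat. A minimal sanity-check sketch, assuming hypothetical per-Sprint fields `points_claimed` and `stories_released` (both names are illustrative, not from any Agile tool):

```python
def estimate_inflation_ratio(sprints):
    """Compare growth in claimed Story Points to growth in released
    Stories between the first and last Sprint on record. A ratio well
    above 1.0 hints that estimates are inflating faster than real output.
    """
    first, last = sprints[0], sprints[-1]
    points_growth = last["points_claimed"] / first["points_claimed"]
    release_growth = last["stories_released"] / first["stories_released"]
    return points_growth / release_growth

# Claimed points doubled while releases barely moved:
history = [
    {"points_claimed": 30, "stories_released": 10},
    {"points_claimed": 45, "stories_released": 10},
    {"points_claimed": 60, "stories_released": 11},
]
print(round(estimate_inflation_ratio(history), 2))  # prints 1.82
```

No single number proves gaming, of course; a sustained divergence like this is simply a cue for management to inspect the backlog and talk to the team rather than demand better metrics.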
Are your Scrum Masters / Kanban Flow Masters part of the Solution or the Problem?
Since Scrum Masters / Kanban Flow Masters are responsible for promoting Agile values and ensuring that Agile Teams follow them as closely as possible, they are on the hook for gamed metrics, whether they participated, stayed quiet, or were tricked all along.
Your experienced Scrum Masters should be able to notice such problems during Backlog Refinement and Sprint Planning sessions.
Your savvy Kanban Flow Masters should be able to tell when, during the Replenishment feedback loops (similar to a combination of the Refinement and Planning ceremonies of Scrum), their teams and Product Owners are breaking Stories apart into too fine a granularity, leading to an artificially puffed-up number of Stories to finish.
Beyond that, the actual report built from the Metrics may also be gamed by the Scrum Masters (or Kanban Flow Masters) themselves, to give their own work a face-lift and balloon their Teams’ apparent performance.
In some cases, middle management may also participate in the scheme, either by promoting the cooking practice (ignoring the details behind the puffed-up numbers and enjoying the fake credit they produce) or by directly encouraging the work behind the misrepresentation.
Keeping Agile Teams honest is a cultural movement.
Since Agile Teams (at least at the time of this writing) are composed entirely of humans, with no robots or AI yet, they should be treated with the considerations Agile teams need, the most important of which is Self-Organization.
Once Agile teams are trained and brought to a functional level, they need to be allowed to practice self-organization and, through that, to own what they commit to deliver and then improve on it. They also need to be trusted in their commitment to continued improvement and learning.
If management is unsure whether a team is improving as it should, then instead of asking for better Metrics, they should walk a mile in the team’s shoes to see and feel how the day-to-day work is proceeding.
They should check whether the teams are dealing with high pressure and stress, and why they are experiencing it. Only then can management step in and help resolve the factors behind unsatisfactory performance improvement and the lack of acceleration in productivity.
Humans are smart and adaptable. If you push them through a period of impossible expectations, they will come up with creative ways to soften your pressurizing management, and none of those approaches will lead to a better “true” performance or productivity level.
We all do a lot better when we step in to assist and participate in improvement as an organization, and enjoy the shared victory across the teams: a real gain in customer satisfaction and, through that, an expansion into the market.
Arman Kamran is an internationally recognized executive leader and enterprise transition coach in Scaled Agile Delivery of Customer-Centric Digital Products with over 20 years of experience in leading teams in private (Fortune 500) and public sectors in delivery of over $1 billion worth of solutions, through cultivating, coaching and training their in-house expertise on Lean/Agile/DevOps practices, leading them through their enterprise transformation, and raising the quality and predictability of their Product Delivery Pipelines.
Arman also serves as the Chief Technology Officer of Prima Recon Machine Intelligence, a global AI solutions software powerhouse with operations in US (Palo Alto, Silicon Valley), Canada (Toronto) and UK (Glasgow).