When I first learnt about the practice of using velocity as a metric for measuring development progress, I was sold on the idea. Velocity seemed the perfect way to articulate the inherent challenges of IT development and implementation.
In velocity I recognised: a way to measure and communicate progress to stakeholders; a way to provide an abstraction from the underlying technical detail; and, finally, a means to estimate implementation in terms of relative complexity rather than man-hours, dates and tasks. Even better, it would be the team responsible for implementation who provided the relative complexity estimates. After all, who would be better qualified to give a responsible estimate of the work remaining than the technical team?
This had to work. No longer would we be measuring success as the ability to achieve a date-driven milestone; instead, we would iteratively deliver a set of requirements prioritised by value, and measure progress as our ability to ‘burn down’ complex technical problems.
A few years down the line and my optimism towards velocity has been tempered with the realism of experience. I would like to share some personal insights.
“A team can have a great velocity and deliver no value.”
It feels good to be part of a successful team, when iteration after iteration you are consistently achieving your previous velocity. Better still, if you are part of a co-located, cross-functional team conducting regular retrospectives, there's a good chance you are seeing an upward trend in your development velocity. Estimation is becoming faster and its results more consistent; defects reported in production are steadily falling.
All of these things are signs of a successful and motivated team, but none of them measures the value perceived by the customer. From a team perspective everything is going well, yet none of these things need involve customer validation or feedback. Take care if you are fortunate enough to find yourself part of a hyper-productive team. You may have achieved the pinnacle of agile development prowess, or you could be comparing yourself against a set of introspective and narcissistic standards. Velocity alone won't tell you which.
“Velocity is a poor metric by which to measure team productivity but is often abused for this purpose.”
One of the principles of velocity is that it should not be used to compare the performance of one team against another, but in practice I’ve seen this behaviour repeatedly. On projects where dates matter and multiple teams share a single backlog, velocity offers a seductive means by which to derive when work may be complete. As pressure increases it is such a small next step to compare one team’s velocity to another; after all, they are working against the same backlog.
This is seldom helpful, and it is often the first step towards optimising for individual teams rather than making the most efficient choices for the programme or project as a whole. What is more, the most energy-efficient way for any team to improve its velocity is not to work faster or smarter but to inflate the original estimates. As an industry we work with people who are more than capable of working out how to game this particular system, if only we provide an incentive for them to do so.
“Burn-down charts are a measure of assumed remaining activity, not progress.”
Ultimately, velocity does not measure progress; it is a relative measure of work completed compared against work outstanding. If velocity were calculated from the number of points released into production per iteration, it could arguably be called a measure of progress. But even then we would only be measuring the amount of work delivered into production, not the value that work delivers to the customer or end user. Surely customer-validated positive feedback and empirically measured value delivered into production would be a truer measure of progress?
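The point that a burn-down forecast is built on assumed remaining activity, not delivered value, can be made concrete with a small sketch. Everything below is hypothetical: the figures, function names and the naive forecasting rule are illustrative, not taken from any real project or tool.

```python
import math

# Hypothetical sketch of how a burn-down projection is typically derived
# from velocity. Nothing here captures customer-validated value.

def average_velocity(completed_points_per_iteration):
    """Mean story points 'completed' per iteration."""
    return sum(completed_points_per_iteration) / len(completed_points_per_iteration)

def iterations_remaining(backlog_points, velocity):
    """Naive forecast: assumed remaining work divided by velocity."""
    return math.ceil(backlog_points / velocity)

# A team 'burning down' roughly 20 points per iteration against an
# 85-point backlog looks five iterations from done -- whether or not
# any of those points have been validated by a customer.
velocity = average_velocity([18, 22, 20])   # 20.0
print(iterations_remaining(85, velocity))   # 5
```

Note that the forecast shifts just as readily when the backlog estimates change as when the team genuinely speeds up, which is precisely what makes it easy to game.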