The practice of story points, in combination with velocity and burn-down charts, can be a very powerful tool for making predictions for planning purposes. If a team knows its velocity – based on actual experience in the first sprints – and has estimated story points for all requested items, it can make quite useful predictions about the amount of time and therefore budget required. This simple practice actually works in reality as described, and it can be used for release planning and budgeting. The main difference from the traditional way of planning is not the relative-estimation aspect, but the continuous nature. Even the story points practice, which uses a baseline of previously gathered experience, will give an imprecise calculation for work that still needs to be clarified and that we therefore know little about. It is important to regularly re-estimate items on the product backlog – foremost for better understanding, but also for improved predictions and better planning.
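The arithmetic behind such a forecast is simple enough to sketch. All numbers below are hypothetical: remaining backlog size in points, observed average velocity, sprint length, and team cost per sprint are assumptions for illustration only.

```python
# Minimal release-planning sketch with hypothetical numbers:
# remaining backlog divided by observed velocity gives sprints left,
# which translates into a timeline and a budget.

remaining_points = 240        # estimated story points left on the backlog
velocity = 30                 # average points delivered per sprint so far
sprint_length_weeks = 2
cost_per_sprint = 20_000      # hypothetical team cost per sprint

sprints_left = remaining_points / velocity        # 8 sprints
weeks_left = sprints_left * sprint_length_weeks   # 16 weeks
budget_left = sprints_left * cost_per_sprint      # 160,000

print(sprints_left, weeks_left, budget_left)
```

The forecast is only as good as its inputs: it assumes the velocity stays roughly stable and the point scale keeps meaning the same thing, which is exactly what the rest of this article questions.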
In fact, you could also measure a team’s performance with this practice, as Jeff Sutherland has so often explained: “A good team should have a velocity increase of about 10% every sprint.” If a team’s velocity increases, it means the team has improved its productivity, right!?
Measuring productivity based on velocity is illogical at best. It would be interesting to see how one does release planning and budgeting based on a 10% velocity increase every sprint.
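To see why a literal 10% increase every sprint is absurd for planning, it helps to compound it. The starting velocity of 30 below is a hypothetical number; the point is the exponential growth itself.

```python
# What "10% velocity increase every sprint" implies if taken literally:
# a compounding 1.10 factor, i.e. exponential growth.

velocity = 30.0
for sprint in range(1, 25):      # 24 sprints, roughly a year of 2-week sprints
    velocity *= 1.10             # the claimed 10% improvement, compounded

print(round(velocity))           # roughly 295 points per sprint after 24 sprints
```

A team that really got ten times faster in a year would be remarkable; a point scale that silently shrank to a tenth of its value is far more plausible, and it makes any forecast based on those points useless.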
Using velocity to measure a team’s productivity is wrong. Actually, extremely wrong. Of all possible reasons for a velocity increase, an actual productivity increase is the least likely. Let me give you the actual reasons for a velocity increase:
Quantum measurement problem
You know what a team usually does when management uses velocity to measure productivity? What would you do? …Yes, “story points inflation”. You decrease the value of a point very gradually, not even really consciously. Every time you need to choose between the 8 and the 13 planning poker card, and both seem equally valid, which one will you usually choose? Bam! Your productivity has increased. Everyone is happy, since nobody really notices this – except the customer, who does not really see any difference in the outcome. The real problem is that any predictions for budgets and release planning have just become less useful.
So, the very act of measuring productivity influences the estimates provided by each team member.
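The inflation dynamic above can be sketched with hypothetical numbers: a team that finishes the same six features every sprint, while the points assigned per feature creep up by about 5% per sprint. Reported velocity climbs; real output does not.

```python
# Hypothetical sketch of "story points inflation": constant real output,
# gradually inflating estimates, rising reported velocity.

features_per_sprint = 6       # actual delivered features never changes
points_per_feature = 5.0      # initial (hypothetical) estimate per feature

for sprint in range(1, 11):
    velocity = features_per_sprint * points_per_feature
    print(f"sprint {sprint:2d}: velocity {velocity:5.1f}, "
          f"features delivered {features_per_sprint}")
    points_per_feature *= 1.05   # gradual, unconscious inflation

# By sprint 10 the reported velocity is roughly 46.5 instead of 30,
# yet the customer still receives exactly 6 features per sprint.
```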
Sustainable pace is a very important aspect of Scrum. It means that a team’s capacity to deliver a certain amount of work should stabilise and become more and more predictable. Making something predictable is very difficult with a 10% velocity increase every sprint. There are two common patterns when teams don’t have a sustainable pace.
The first one is chaotic and tends to fluctuate. A team delivers 30 points one sprint, is praised for the extraordinary performance, only to deliver 15 in the next one. The third sprint might be almost 30 again, and so on. The reasons for this are usually not very serious:
- Previously unknown technological challenges exposed
- Rushed product backlog grooming and sprint planning
- Team members are not yet T-shaped or generalising specialists. In other words, they are not yet capable of understanding all aspects of, and the effort required for, delivering something.
All of these causes usually surface in retrospective meetings, or gradually disappear with practice and better collaboration.
The second pattern is that velocity is relatively low in the first few sprints because the team needs to get used to each other and figure out all kinds of things (architecture, how Scrum works, continuous testing and integration, etc.). After a few sprints, it is happiness time. Velocity starts increasing in jumps of 10 or more per cent every sprint. The real value – the number of features delivered per sprint – is also increasing. The team seems to improve every sprint, in big steps. Amazing! Scrum is awesome! Then the first signs of a disaster start to show up. Velocity no longer increases by 10%, then gradually stops increasing and starts to slowly decrease. The reason: well, our architecture has become large and complex; it is more difficult to deliver new features with such a big system. Everyone will usually think: OK, I guess that is logical, no real problem here. Except that we cannot really say anymore whether the team is more or less productive. Management discovers that the whole exercise of measuring productivity with velocity is pointless, and it quietly fades away.
Unfortunately, this realisation comes way too late. The real reason for the continuously increasing velocity is shortcuts in quality and architecture. The team has created a monster of a system, under (in)direct pressure to increase productivity. The outcome: spending several sprints fixing problems while producing little or no value, and sometimes rewriting the whole thing. The latter can be a complete failure if your budget is gone.
Healthy velocity is a stable one
A healthy velocity is relatively stable. It should not fluctuate much and should be in line with a sustainable pace.
In the first sprints, teams do need to get used to each other, learn the practices, and introduce continuous delivery tooling. This may have a certain impact on velocity, but a healthy situation is one where these activities are spread over multiple sprints or introduced gradually, so the impact is rather limited. On the other hand, there may be only a limited amount of existing code (no legacy) in these first sprints, which enables the team to deliver with little effort. In other words, low complexity may cancel out the negative getting-used-to-everything effect compared to later sprints.
5 sprints later: the team knows how to use the practices effectively and becomes more efficient. At that point, a negative effect might be cumbersome product backlog refinement or a lack of experience with TDD, refactoring, BDD, etc.
20 and more sprints later: the team is in a real flow. Things have become predictable and meetings are efficient, but the overall complexity of the solution has risen. Once again, these factors affect velocity and tend to cancel each other out.