Agile Planning Part 3 — Monitoring and Optimizing

Bob Wilkinson
6 min read · Nov 30, 2021


Photo by Stephen Dawson on Unsplash

This is the final post in my series on Agile Planning. In post 1, I described a simple and scalable agile planning process. In post 2, I showed the process in action with an example planning period. In this post, I will show some of the neat insights that can be readily derived from a structured agile planning framework like this.

Plan Performance

In each planning period, the organization commits to a set of roadmap epics at the beginning of the period. At the end of the period, we know what was actually delivered. We can therefore look at the delivered epics versus the committed epics to track plan performance.

Before I get to the specific math involved here, I want to show a real-life result of one of our planning periods at Coalition. These are some of the same views I showed in the last post, but they get quite a bit more interesting when played out over a longer planning period with more teams.

This first graph shows what occurred at the epic level in the period.

The light blue line at the top is the total count of epics in the period. Note that it drops over the course of the period. This is quite common and typically happens as teams push epics out of the period due to underestimation or unplanned work. While not directly related to plan performance, the other interesting line to observe here is ‘Epics Left’. Even though we encourage teams to complete epics incrementally over the period, this line always tends to show a late burn down, indicating that epic completion is back-loaded in the period. As a final note, ‘Epics Left’ really should have converged to 0. We truncate these graphs a few weeks after the end of the period, so this indicates that a small portion of epics lingered open for some reason.
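To make the mechanics concrete, here is a minimal sketch of how an ‘Epics Left’ series could be derived from the planning data. The record shape and field names (completed_on, removed_on) are illustrative assumptions, not the actual schema from the earlier posts:

```python
from datetime import date, timedelta

# Illustrative epic records; the field names are assumptions, not the real schema.
# removed_on marks an epic pushed out of the period; completed_on marks delivery.
epics = [
    {"completed_on": date(2021, 8, 20), "removed_on": None},
    {"completed_on": None, "removed_on": date(2021, 9, 3)},  # pushed out of the period
    {"completed_on": date(2021, 9, 24), "removed_on": None},
]

def epics_left(as_of: date) -> int:
    """Count epics still in scope and not yet complete as of a given date."""
    return sum(
        1
        for e in epics
        if (e["removed_on"] is None or e["removed_on"] > as_of)
        and (e["completed_on"] is None or e["completed_on"] > as_of)
    )

# Sample weekly to produce the burn down series behind the graph.
start = date(2021, 7, 5)
for week in range(13):
    day = start + timedelta(weeks=week)
    print(day, epics_left(day))
```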

Next, here is the effort burn down for the period.

Here, note the trend of the total WAG estimate in the period. In this case, it actually increases early in the period, indicating that the organization took on additional scope for some reason. Towards the end, though, it trends back down to less than was originally planned. As with the epic counts above, total WAG drops as teams react to underestimated or unplanned work and have to adjust scope in the quarter.
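The total WAG line can be derived the same way, by summing estimates over whatever is in scope at each point in time. Again, a sketch with assumed field names and made-up numbers, chosen only to mirror the shape described above:

```python
from datetime import date

# Illustrative scope records; 'wag' is the effort estimate in ideal developer-months.
scope = [
    {"wag": 2.0, "added_on": date(2021, 7, 5), "removed_on": None},
    {"wag": 1.5, "added_on": date(2021, 8, 2), "removed_on": None},  # scope added mid-period
    {"wag": 3.0, "added_on": date(2021, 7, 5), "removed_on": date(2021, 9, 6)},  # pushed out late
]

def total_wag(as_of: date) -> float:
    """Sum the WAG of everything in scope on a given date."""
    return sum(
        e["wag"]
        for e in scope
        if e["added_on"] <= as_of
        and (e["removed_on"] is None or e["removed_on"] > as_of)
    )

print(total_wag(date(2021, 7, 15)))  # 5.0: the original plan
print(total_wag(date(2021, 8, 15)))  # 6.5: rises as scope is added early
print(total_wag(date(2021, 9, 15)))  # 3.5: ends below plan after descoping
```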

Now for a few definitions, both fairly simple. Epic delivery performance is the number of epics delivered in the period divided by the number of epics committed at the start of the period. WAG delivery performance is the total WAG delivered divided by the total WAG committed at the start of the period.

These give us two different ways to look at delivery performance. I track both to get a sense of how our performance trends over time. Here are the actual results from our organization at Coalition.
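In code, using the definitions above (the inputs here are hypothetical, picked only to mirror the averages discussed next):

```python
def epic_delivery_performance(delivered_epics: int, committed_epics: int) -> float:
    """Fraction of committed epics actually delivered in the period."""
    return delivered_epics / committed_epics

def wag_delivery_performance(delivered_wag: float, committed_wag: float) -> float:
    """Fraction of committed WAG effort actually delivered in the period."""
    return delivered_wag / committed_wag

# Hypothetical period: 100 epics / 160 WAG committed, 91 epics / 136 WAG delivered.
print(f"{epic_delivery_performance(91, 100):.0%}")      # 91%
print(f"{wag_delivery_performance(136.0, 160.0):.0%}")  # 85%
```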

From the graphs, epic delivery performance is averaging 91% and WAG delivery performance is averaging 85%. Since I previously threw out 70–80% as a healthy range for delivery performance, you would correctly infer that we are pretty pleased with how the organization is performing on these metrics!

Investment Allocation

Another benefit of the planning framework is ready access to different views of investment allocation. This can be useful both at planning time and historically, to review how spending has trended over time. If you recall the Epic schema we defined in the second post, there was a field for Investment Category. Here’s a view I produced recently which summarizes investment in the current planning period.

These views are a nice tool to track investment over time against an allocation framework that you strive to maintain. In the above, we like to see ‘RUN’ effort as low as possible because that tends to be basic operational work that we would rather automate over time. The other buckets are all “good” for different reasons and there is no specific target. As an InsurTech, we care an awful lot about our loss ratio, hence a large investment in that area should not be surprising!
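Mechanically, views like this are just a roll-up of epic WAG by the Investment Category field. A minimal sketch: ‘RUN’ and the loss ratio bucket come from the discussion above, while the other label and all numbers are placeholders:

```python
from collections import defaultdict

# Illustrative epics tagged with the Investment Category field from the epic schema.
# 'RUN' appears in the post; the other labels and numbers are placeholders.
epics = [
    {"wag": 2.0, "investment_category": "RUN"},
    {"wag": 5.0, "investment_category": "LOSS_RATIO"},
    {"wag": 3.0, "investment_category": "GROW"},
    {"wag": 4.0, "investment_category": "LOSS_RATIO"},
]

by_category = defaultdict(float)
for e in epics:
    by_category[e["investment_category"]] += e["wag"]

total = sum(by_category.values())
for category, wag in sorted(by_category.items(), key=lambda kv: -kv[1]):
    print(f"{category:<11} {wag:5.1f} WAG  {wag / total:5.1%}")
```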

Velocity

In my experience, managing velocity (both real and perceived) tends to be one of the more challenging aspects of being an engineering/product leader. I’m betting that many of you have been told something to the effect of “we just aren’t delivering enough features and everything takes too long”. It’s a tricky thing to track. By definition, you need some type of unit to measure by. Agile story points don’t work that well because their scale is most often unique to a team and unstable over time as teams and roles change. Other options like lines of code, code commits, deploys to production, etc. all create pretty obvious misalignment of incentives, given that none of them necessarily relates to delivered value.

While it wasn’t one of the objectives that I or the team had in mind when we built out the planning framework, it turns out that using the same measures of Epic and WAG delivery, you can track velocity metrics that are reasonably stable over time. The only assumption we need is that the average epic and average WAG are reasonably stable across the organization over time. Our unit of WAG is ideal developer-months, so it’s always anchored at some percentage of the true ideal capacity. For epics, the sample size is large, currently around 100 per quarter in our organization, and our guidelines constrain the size of epics. Thus, the average epic is actually quite stable over time too.

Here are the metrics from Coalition corresponding to the same time periods above.

I show the capacity metric first as it’s important framing for understanding velocity. Capacity is measured in terms of WAGs per sprint. The normalization to sprints is important since planning periods will have different numbers of sprints. In this case, we have grown capacity materially (from 10.8 WAGs/sprint up to 16.7 WAGs/sprint). There are two ways to influence this capacity: 1) hiring, and 2) increasing the effective rate in the team capacity calculation. In our case, our effective rate has fluctuated modestly as teams learned how much effort they needed to hold back to cover typical operational load, but the vast majority of the increase is due to hiring.
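As a worked example of the capacity arithmetic: assuming two-week sprints (roughly half an ideal developer-month), with hypothetical head counts and effective rates back-solved to match the figures above:

```python
def capacity_wags_per_sprint(developers: int, effective_rate: float,
                             sprint_months: float = 0.5) -> float:
    """Planning capacity in WAGs (ideal developer-months) per sprint.

    effective_rate discounts ideal capacity for meetings, operational
    load, etc.; sprint_months assumes two-week sprints (~0.5 month).
    All inputs here are illustrative assumptions, not Coalition's data.
    """
    return developers * sprint_months * effective_rate

print(f"{capacity_wags_per_sprint(36, 0.60):.1f}")  # 10.8 WAGs/sprint
print(f"{capacity_wags_per_sprint(54, 0.62):.1f}")  # 16.7 WAGs/sprint, mostly from hiring
```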

The second graph shows both Epic and WAG delivery velocity per the definitions above. The data shows that we are generally seeing an increase in both measures of velocity over time — this is good! Note that if desired, we could normalize the velocity metrics by capacity so that we can directly see whether the organization is getting more or less efficient as measured by the effective planning rate. I tend to stick with the metrics shown here as velocity increase is velocity increase, regardless of how we get there!
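Finally, here is a sketch of the velocity metrics themselves, including the optional normalization by capacity. All numbers are hypothetical:

```python
def epic_velocity(delivered_epics: int, sprints: int) -> float:
    """Delivered epics per sprint for a planning period."""
    return delivered_epics / sprints

def wag_velocity(delivered_wag: float, sprints: int) -> float:
    """Delivered WAG per sprint for a planning period."""
    return delivered_wag / sprints

def efficiency(delivered_wag: float, sprints: int, capacity_per_sprint: float) -> float:
    """Velocity normalized by capacity: the share of planned capacity delivered."""
    return wag_velocity(delivered_wag, sprints) / capacity_per_sprint

# A hypothetical 6-sprint quarter: 91 epics and 85 WAG delivered,
# at a capacity of 16.7 WAGs/sprint.
print(f"{epic_velocity(91, 6):.1f}")            # ~15.2 epics/sprint
print(f"{wag_velocity(85.0, 6):.1f}")           # ~14.2 WAGs/sprint
print(f"{efficiency(85.0, 6, 16.7):.0%}")       # ~85% of capacity delivered
```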

Conclusion

In this post, I showed how to use basic data from the agile planning framework to drive some powerful tools for monitoring and optimization of the process and organization. I hope that the series overall has inspired you to think about your own agile planning processes and how they can be improved! If you have questions or want to discuss any aspect, please don’t hesitate to reach out.


Written by Bob Wilkinson

Software engineer and builder of products and teams. Currently Head of Engineering at Coalition, Inc.