Lessons learned in turning existing specifications into a DEEP backlog

This week, I helped the team estimate a backlog that was not DEEP as Mike Cohn defines it. According to Mike Cohn, a good product backlog is Detailed appropriately, Emergent, Estimated, and Prioritised.

I fully agree with the DEEP principle. However, I ran into a situation where the backlog did not follow those principles. This happened because the project transitioned from waterfall to agile partway through, so an extensive amount of analysis work had already been done. This is where I learned a lot: how best to turn detailed specifications into user stories that follow the DEEP principles.

Ideally, the backlog looks like the one in the following figure.

The farther the horizon, the more coarse-grained the stories and the lower their priority; these are the Detailed Appropriately and Prioritised aspects. The backlog is built up over time, hence Emergent. Each time an item is added to the backlog, the team estimates it, so the backlog is Estimated at any given moment.

The problem is that we did not end up with such a backlog. So, what did we learn?

Detailed Appropriately

When you have a detailed analysis, it is tempting to map the detailed specifications directly to stories. That is what we did, and we ended up with far too many, far too small items. This leads to several problems:

  • The more small items there are, the more potential estimation errors
  • Too much detail creates unwanted dependencies that impede the estimation process; if priorities change, many stories may need to be re-estimated
  • A false sense of control and precision: because the backlog holds many stories, we believe everything is under control and the estimates are precise
  • The team drowns in the sheer number of stories to estimate and loses the overview
  • Feedback on delivered stories triggers rework on many stories instead of a few
  • The backlog becomes hardly manageable

The lesson learned is that, even though the analysis was detailed, we would have been better off with coarse-grained stories, so as to keep their number small and limit interdependencies.


Emergent

Our backlog was not emergent because it was imported from an existing analysis. Nevertheless, we could have kept it emergent by breaking down larger stories as they came closer to being implemented in the forthcoming sprints. This would have kept the backlog more manageable.

One should refrain from mapping detailed specifications to detailed items in the backlog. It takes a lot of discipline and work to turn fine-grained material back into coarse-grained stories for the sake of manageability and emergence.
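The idea of breaking stories down only at the last responsible moment can be sketched in a few lines. This is a minimal illustration with a hypothetical `Story` type and `groom` function, not a description of the team's actual tooling:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Story:
    title: str
    sprints_away: int  # rough distance to implementation
    children: List["Story"] = field(default_factory=list)

def groom(story, horizon=2):
    """Split a coarse story into finer ones only once it is close
    enough to implementation (the 'last responsible moment')."""
    if story.sprints_away <= horizon and not story.children:
        # Hypothetical breakdown; in practice the team does this together.
        story.children = [
            Story(f"{story.title} - part {i}", story.sprints_away)
            for i in (1, 2)
        ]
    return story

epic = Story("Reporting module", sprints_away=5)
groom(epic)                # far away: stays coarse
print(len(epic.children))  # 0

epic.sprints_away = 1
groom(epic)                # near-term: broken down
print(len(epic.children))  # 2
```

Stories beyond the horizon stay coarse no matter how detailed the underlying specification is; the detail is pulled in only when a story approaches the sprint boundary.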


Estimated

In order to estimate the backlog, we could not play planning poker. Instead, we played something along the lines of the Team Estimation Game or the Story Card Matrix (see the references section), since we needed a technique to reach consistent estimates quickly. The lesson learned here is that, although those techniques work great, you quickly drown in the swarm of stories laid out on the table; the table, however large, gets cluttered with stories.

Additionally, the more small items we estimate, the more accuracy we lose, because the individual errors add up. The estimates obtained in this case should not be taken for granted. Only time will tell, and it is important to keep steering rather than switch on the autopilot.
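This error accumulation can be illustrated with a small simulation. It assumes, purely for illustration, that every story carries a fixed per-item estimation error (for instance from rounding to the story-point scale), so that splitting the same work into more stories inflates the uncertainty of the total:

```python
import math
import random

random.seed(1)

def total_error_sd(n_items, per_item_sd=0.5, trials=2000):
    """Standard deviation of the total estimate's error when a fixed
    amount of work is split into n_items stories, each estimated with
    a fixed per-item error (assumed Gaussian, e.g. from rounding to
    the story-point scale)."""
    errors = []
    for _ in range(trials):
        total_error = sum(random.gauss(0, per_item_sd) for _ in range(n_items))
        errors.append(total_error)
    mean = sum(errors) / trials
    return math.sqrt(sum((e - mean) ** 2 for e in errors) / trials)

for n in (10, 50, 200):
    print(f"{n:4d} items -> total error sd ~ {total_error_sd(n):.1f} points")
```

Under this assumption the total error grows roughly with the square root of the number of items, so twenty tiny stories carry a noticeably fuzzier total than five coarse ones, even before systematic optimism is factored in.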


Prioritised

The stories are first prioritised using the MoSCoW method, which provides a first pass at prioritising items for the upcoming release. Afterwards, relative priorities still need to be defined and reviewed regularly to determine what goes into the upcoming sprint. The more items you have, the more the Product Owner has to prioritise, and she can easily get bogged down in the details. It is much harder to make decisions on small items than on large ones, especially because of the dependencies among them.
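The two-pass prioritisation can be sketched as follows; the `Story` type, category weights, and ranks are illustrative assumptions, not a prescribed tool:

```python
from dataclasses import dataclass

# MoSCoW categories, in descending priority.
MOSCOW_ORDER = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}

@dataclass
class Story:
    title: str
    moscow: str         # "Must", "Should", "Could" or "Won't"
    relative_rank: int  # second-pass ordering within a category

def prioritise(backlog):
    """First pass by MoSCoW category, second pass by relative rank."""
    return sorted(backlog, key=lambda s: (MOSCOW_ORDER[s.moscow], s.relative_rank))

backlog = [
    Story("Export report", "Could", 1),
    Story("User login", "Must", 1),
    Story("Audit trail", "Should", 2),
    Story("Password reset", "Must", 2),
    Story("Email notifications", "Should", 1),
]

for story in prioritise(backlog):
    print(story.moscow, "-", story.title)
```

The first pass is cheap, but note that the second-pass ranks are exactly what the Product Owner must keep revisiting; the more items in the backlog, the more of those pairwise decisions she owns.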


Starting a green field project with DEEP in mind is different from building up a backlog for a brown field project. In the latter case, it is even more important to bear the principles in mind, so as to avoid the pitfalls of false control and precision. At all times, the backlog should be kept manageable and details should be added at the last responsible moment. If we detail the backlog before the last responsible moment, during backlog grooming sessions, we introduce too much waste. This waste breeds confusion and considerable overhead that must be avoided at all costs.

Following lean principles helps you focus on keeping the backlog DEEP. If your backlog is too detailed, the introduced waste prevents you from clearly seeing the whole. It is thus necessary to detail the backlog incrementally, even though most of the analysis work was already done.

