Evolutionary requirements, incremental design, refactoring and rework


Yesterday, I had quite a lively discussion with a colleague of mine. He wants to do Scrum, but he still wants to spend a few weeks on upfront analysis and design. He does not believe in evolutionary requirements and design. One of his reasons for defending that position is that he considers that incremental means a lot of rework. He also conflates refactoring with rework. Consequently, he deduces that being evolutionary means reworking existing code every time we implement a new feature.

As you might have guessed, I argued the opposite. I’ll try and explain why in this post.

First, let’s start off by defining refactoring, rework and rewrite.

  • Refactoring: Change the code to improve its readability, maintainability, cleanliness, reuse, etc., while preserving functionality. If there are bugs, they remain in the code. When you refactor, you don’t change functionality.
  • Rework: Change the code to change its behaviour, for example to fix bugs or to improve performance and stability. Rework entails changing the functionality: you did the wrong thing and now want to do the right thing.
  • Rewrite: Scratch something and start afresh. You obviously did the wrong thing and did it wrong.
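To make the distinction concrete, here is a minimal sketch in Python (my own illustration; the invoice example and names are made up, not taken from any real code base). The refactored version changes the structure but deliberately keeps the existing behaviour, bug included; the reworked version changes the behaviour to fix that bug.

```python
# Original: computes an invoice total; it works, but it is hard to read and it
# applies VAT to every item, even VAT-exempt ones (a bug).
def total(items):
    t = 0
    for i in items:
        t += i["qty"] * i["price"] * 1.21
    return t

# Refactoring: same behaviour, clearer structure. The VAT bug is deliberately kept.
VAT_RATE = 1.21

def line_total(item):
    return item["qty"] * item["price"] * VAT_RATE

def total_refactored(items):
    return sum(line_total(item) for item in items)

# Rework: the behaviour changes -- VAT-exempt items are now billed without VAT.
def total_reworked(items):
    return sum(
        item["qty"] * item["price"] * (1.0 if item.get("vat_exempt") else VAT_RATE)
        for item in items
    )
```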

In Agile, you build the system iteratively and incrementally. You only write code for the scope of your iteration, no more, no less. This means that the functionality will change for sure. That’s not a problem because, thanks to the rapid feedback, you seldom need to change much of it completely.

By doing incremental design, you:

  • Code only to implement what’s in the scope of the sprint.
  • Design only to fulfill the needs of the scope of the sprint.
  • Plan stories earlier if you need to minimise risk and generalise some design early.
  • Triangulate to generalise behaviour instead of designing a so-called generalised behaviour up front that may go unused for a long time, and possibly forever.

By doing so, you eliminate waste:

  • By coding only for the functionality at hand, you don’t overdesign. The code you write is tested and accepted by the end of the sprint. If you wrote code for future needs, it would not be accepted because that functionality would not be tested.
  • You don’t code anything based on speculations. The code you write is used and exercised immediately.

Iteration after iteration, you write new code and change existing code. Usually, you do the following:

  • Refactor continuously so as to keep your code base clean and avoid incurring technical debt.
  • Rework periodically to fix bugs and misunderstandings. This usually happens during the sprint and concerns small portions of code. If defects make their way to production, the code is fixed during a future sprint.
  • Rewrite seldom, except when you have incurred a lot of debt or something was done completely wrong.

Continuous refactoring keeps the code clean and easier to evolve. Such evolutions usually take the form of new code driven by tests. Once the tests pass, you refactor the code to remove duplication and keep it clean.
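As a sketch of that cycle (my own Python example, not from the original post), the triangulation mentioned above could look like this: the first test is satisfied by a hard-coded return, a second concrete test then forces the real generalisation, and once both tests pass the code is refactored to stay clean.

```python
# Step 1 (first test): the simplest thing that passed was `return 50.0`, hard-coded.
# Step 2 (second test): a new concrete case forced the behaviour to be generalised.
# Step 3: with both tests green, the code was refactored (magic numbers named, etc.).

BULK_THRESHOLD = 10
BULK_DISCOUNT = 0.10

def discounted_price(unit_price, quantity):
    rate = BULK_DISCOUNT if quantity >= BULK_THRESHOLD else 0.0
    return unit_price * quantity * (1 - rate)

def test_no_discount_for_small_orders():
    assert discounted_price(10.0, 5) == 50.0

def test_ten_percent_off_for_bulk_orders():
    assert discounted_price(10.0, 10) == 90.0

if __name__ == "__main__":
    test_no_discount_for_small_orders()
    test_ten_percent_off_for_bulk_orders()
    print("all tests pass")
```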

In conclusion, you refactor a lot, rework a little and rewrite as little as possible.


Lessons learned in turning existing specifications into a DEEP backlog


This week, I helped the team estimate a backlog that was not DEEP as Mike Cohn defines it. Mike Cohn considers that a good product backlog is Detailed Appropriately, Estimated, Emergent and Prioritised.

I fully agree with the DEEP principle. However, I ran into a situation where the backlog did not follow those principles. This happened because the project transitioned from waterfall to agile after a while. As a result, an exhaustive amount of analysis work had already been done. This is where I learned a lot: how best to turn detailed specifications into user stories that follow the DEEP principles.

Ideally, the backlog looks like the following figure.

The farther the horizon, the more coarse-grained the stories and the lower the priority; these are the Detailed Appropriately and Prioritised aspects. The backlog is built up over time, hence Emergent. Each time you add an item to the backlog, the team estimates it so that the backlog is Estimated at any given moment.

The problem is that we did not end up with such a backlog. So, what did we learn?

Detailed Appropriately

If you have a detailed analysis, you have detailed specifications and you want to turn them into stories. The problem is that we turned the specifications into far too many, far too small items. This leads to several problems:

  • The more small items, the more potential estimation errors.
  • Too much detail implies unwanted dependencies that impede the estimation process. If priorities change, it is sometimes necessary to re-estimate many stories.
  • A false sense of control and precision. Because we have many stories in the backlog, we think we have everything under control and that the estimates are precise.
  • Too many stories to estimate: the team drowns in the huge amount of stories and loses the overview.
  • Useless rework spread across many stories instead of a few, once feedback arrives on the delivered stories.
  • A hardly manageable amount of stories and, with it, the loss of any overview.

The lesson learned is that, even though the analysis was detailed, we would have been better off with coarse-grained stories so as to keep their number small and limit interdependencies.

Emergent

The backlog was not emergent because it was imported from an existing analysis. Nevertheless, we could have kept it emergent by breaking down larger stories when they came closer to being implemented in the forthcoming sprints. This would have allowed us to keep the backlog more manageable.

One should refrain from trying to map detailed specifications to detailed items in the backlog. It requires a lot of discipline and work to transform fine-grained material back into coarse-grained stories for the sake of manageability and emergence.

Estimated

In order to estimate the backlog, we could not play planning poker. Instead, we played something along the lines of the Team Estimation Game or the Story Card Matrix (see the references section), since we needed a technique to reach consistent estimates quickly. The lesson learned here is that, although those techniques work great, you quickly drown in the swarm of stories laid out on the table. The table, however large, gets cluttered with stories.

Additionally, the more small items we estimate, the more accuracy we lose to the sum of the individual errors. The estimates obtained in this case should not be taken for granted. Only time will tell, so it is important to keep steering and not switch on the autopilot.
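To illustrate that “sum of individual errors” point, here is a toy simulation (my own sketch, not from the post). It assumes that every story estimate carries some random noise plus a small amount of overlooked per-story overhead; the noise largely cancels out across stories, but the overlooked overhead accumulates with every additional story, so slicing the same work into more, smaller stories drifts the total further from reality.

```python
import random
import statistics

def total_estimate_error(n_stories, true_total=400.0, overhead=0.5, noise=0.25):
    """Signed error (estimate minus actual) of the summed estimates for one backlog."""
    true_size = true_total / n_stories
    error = 0.0
    for _ in range(n_stories):
        estimate = true_size * (1 + random.gauss(0, noise))   # honest but noisy guess
        actual = true_size + overhead                          # plus overlooked per-story cost
        error += estimate - actual
    return error

random.seed(42)
for n in (10, 50, 200):
    errors = [total_estimate_error(n) for _ in range(2000)]
    print(f"{n:4d} stories: mean error {statistics.mean(errors):+7.1f} points, "
          f"spread ±{statistics.stdev(errors):.1f}")
```

Under these admittedly simplistic assumptions, the systematic part of the error grows with the number of stories: 200 tiny stories end up roughly a hundred points too optimistic in total, whereas 10 coarse stories drift only a few points.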

Prioritised

The stories are first prioritised using the MoSCoW method. This allows for a first pass at prioritising items for the upcoming release. Afterwards, relative priorities still have to be defined and reviewed regularly to determine what goes into the upcoming sprint. The more items you have, the more the Product Owner has to prioritise. She can easily get bogged down in the details. It is much harder to make decisions on smaller items than on larger ones, especially because of the dependencies among them.

Conclusions

Starting a greenfield project with DEEP in mind is different from building up a backlog for a brownfield project. In the latter case, it is even more important to bear the principles in mind so as to avoid falling into the pitfalls of false control and precision. At any time, it is important that the backlog be kept manageable and that details be added at the last responsible moment. If we fail to detail the backlog at the last responsible moment, during backlog grooming sessions, we introduce too much waste. This waste induces confusion and quite some overhead, which must be avoided at all costs.

Following lean principles helps us focus on keeping the backlog DEEP. If your backlog is too detailed, you can’t clearly see the whole because of the introduced waste. It is thus necessary to detail the backlog incrementally, even though most of the analysis work was already done.

References
