Agile is for cavemen

I am currently reading The Tipping Point, by Malcolm Gladwell. In the book, Gladwell quotes the anthropologist S.L. Washburn:

Most of human evolution took place before the advent of agriculture when men lived in small groups, on a face to face basis. As a result human biology has evolved as an adaptive mechanism to conditions that have largely ceased to exist. Man evolved to feel strongly about few people, short distances and relatively brief intervals of time; and these are still the dimensions of life that are important to him.

When you think about it, it seems obvious that humans are still wired the way they were when they dwelt in caves. It is therefore natural to feel comfortable in a work environment that reproduces that kind of setting.

Agile recommends small teams of five to nine people, co-located in a single room. These practices aim at maximising the efficiency of communication and teamwork while minimising ceremony and communication waste. They facilitate face-to-face communication as well as what Alistair Cockburn calls osmotic communication. Agile is therefore an approach to software development that befits our caveman brains.

Dispersed teams and informal communication channels are thus completely unnatural. Yet, even though our brains haven’t evolved much since the times of our hunter-gatherer ancestors, our societies put more and more distance between people while offering ever more means of communicating and sharing information.

Food for thought…

Evolutionary requirements, incremental design, refactoring and rework

Yesterday, I had quite a lively discussion with a colleague of mine. He wants to do Scrum, but he still wants to spend some weeks on upfront analysis and design. He does not believe in evolutionary requirements and design. One of his reasons is that he considers that incremental means a lot of rework. He also conflates refactoring with rework. Consequently, he deduces that being evolutionary means reworking existing code every time we implement a new feature.

As you might have guessed, I argued the opposite. I’ll try and explain why in this post.

First, let’s start off by defining refactoring, rework and rewrite.

  • Refactoring: Change the code to improve its readability, maintainability, cleanliness, reusability, etc., while preserving functionality. If there are bugs, they remain in the code. When you refactor, you don’t change functionality.
  • Rework: Change the code to change its behaviour, for example to fix bugs or improve performance and stability. Rework entails changing the functionality. You did the wrong thing and want to do the right thing.
  • Rewrite: Scratch something and start afresh. You obviously did the wrong thing and did it wrong.
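
To make the distinction concrete, here is a minimal sketch in Python (the function and its data are made up for illustration). The refactored version changes the structure of the code but preserves its behaviour, which is exactly what separates refactoring from rework:

```python
# Before: working but verbose code.
def invoice_total(items):
    total = 0
    for price, quantity in items:
        total += price * quantity
    return total

# After refactoring: same behaviour, clearer structure.
# A caller cannot tell the difference -- that is the point.
def invoice_total_refactored(items):
    return sum(price * quantity for price, quantity in items)
```

Both functions return the same result for any input. Rework, by contrast, would change what the function returns, for instance to fix a pricing bug.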

In Agile, you build the system iteratively and incrementally. You only write code for the scope of your iteration, no more, no less. This means that the functionality will change for sure. That’s not a problem: thanks to rapid feedback, you will seldom need to change much of it completely.

By doing incremental design, you:

  • Code only to implement what’s in the scope of the sprint.
  • Design only to fulfill the needs of the scope of the sprint.
  • Plan stories earlier if you need to minimise risk and generalise some design early.
  • Triangulate to generalise behaviour instead of designing up front a so-called generalised behaviour that might be used rarely, and possibly never.

By doing so, you eliminate waste:

  • By coding only for the functionality at hand, you don’t overdesign. The code you write is tested and accepted by the end of the sprint. If you wrote code for future needs, it would not be accepted because the functionality could not be tested.
  • You don’t code anything based on speculations. The code you write is used and exercised immediately.
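
Triangulation, mentioned above, can be sketched as follows (a hypothetical example; the function name is made up). Rather than designing a general solution up front, you let a second concrete test case force the generalisation:

```python
# Step 1 (first test): assert add(2, 3) == 5
#   The simplest passing implementation would be: return 5
# Step 2 (second test): a different example triangulates and
#   forces the code to become genuinely general.
def add(a, b):
    # Generalised only once two concrete examples demanded it.
    return a + b

# The tests that drove the generalisation:
assert add(2, 3) == 5
assert add(10, -4) == 6
```

The generalisation is paid for by real, exercised test cases instead of speculation about future needs.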

Iteration after iteration, you write new code and change existing code. Usually, you do the following:

  • Refactor continuously so as to keep your code base clean and avoid incurring technical debt.
  • Rework periodically to fix bugs and misunderstandings. This usually happens during the sprint and concerns small portions of code. If defects make their way to production, the code is fixed during a future sprint.
  • Rewrite seldom, only when you have incurred a lot of debt or something was done completely wrong.

Continuous refactoring keeps the code clean and easy to evolve. Such evolutions usually take the form of new code driven by tests. Once the tests pass, refactor the code to avoid duplication and keep it clean.
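
The test-driven cycle described here can be sketched like this (a hypothetical example; `greet` and its tests are made up). A failing test drives the new code, and refactoring follows once the test passes:

```python
# Red: the test is written first; it fails until greet exists.
def test_greets_by_name():
    assert greet("Ada") == "Hello, Ada!"

# Green: the simplest code that makes the test pass.
# Refactor: once a second test demanded a default greeting,
# a duplicated branch was collapsed into a default parameter.
def greet(name="World"):
    return f"Hello, {name}!"

test_greets_by_name()
assert greet() == "Hello, World!"
```

Each pass through red-green-refactor leaves the code both working and clean, which is what keeps the code base easy to evolve iteration after iteration.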

In conclusion, you refactor a lot, rework a little and rewrite as little as possible.


Learning and Growing with Agile

I work for an organisation that thinks that juniors can’t design. Even worse, it thinks that, because of this alleged lack of skill, they need to be fed designs. Consequently, projects need “designers” who just do big upfront design.

I say no! I even say, don’t hand them anything! Instead, coach them, give them opportunities to try, learn and propose.

I see several practices that help address this:

  1. Design sessions: During these short sessions, have more senior roles (seasoned developers and architects) work out a design together with the juniors. This is highly beneficial because the juniors are an active part of it. They better understand what to implement afterwards because they know the reasoning behind it. Most importantly, they learn how to tackle design.
  2. Pair Programming: Have a senior and a junior work together on a story. Alternate roles so that the junior can play both the driver and navigator roles.
  3. Test Driven Development: TDD is all about design: designing for testability, YAGNI, baby steps, incremental design, refactoring. Do ping-pong pair programming: one person writes a test, the team mate implements the code to make it pass, then they refactor and swap roles. Repeat.

The benefit of this approach is that the process is much leaner. Mapping the benefits to the seven lean principles:

  1. Eliminate waste: No time wasted on lengthy designs that will never be accurate. No overproduction of design.
  2. Build quality in: No need for any rework after implementation. Peer review is immediate thanks to pair programming.
  3. Create knowledge: Both juniors and seniors learn and grow their skills.
  4. Defer commitment: Design just in time.
  5. Deliver fast: Design just in time and enough leads to a shorter lead time.
  6. Respect people: Juniors are respected and considered fully skilled team members.
  7. Optimise the whole: Team improves, no constraint on expected design, rapid feedback on the design.


Agile and lean practices really help get juniors up to speed. They help juniors grow while optimising the process and respecting team members. A great benefit is also that it’s much more fun to work that way!


Pair Programming
Lean Principles of Software Development
Test Driven Development

Lessons learned in turning existing specifications into a DEEP backlog

This week, I helped the team estimate a backlog that was not DEEP as Mike Cohn defines it: a good product backlog is Detailed appropriately, Emergent, Estimated and Prioritised.

I fully agree with the DEEP principle. However, I ran into a situation where the backlog did not follow those principles. This happened because the project transitioned from waterfall to agile partway through. As a result, an extensive amount of analysis work had already been done. This is where I learned a lot: how best to turn detailed specifications into user stories that follow the DEEP principles.

Ideally, the backlog looks like the following figure.

The farther the horizon, the more coarse-grained the stories and the lower the priority; these are the Detailed appropriately and Prioritised aspects. The backlog is built over time, hence Emergent. Each time you add an item to the backlog, the team estimates it, so that it is Estimated at any given moment.

The problem is that we did not end up with such a backlog. So, what did we learn?

Detailed Appropriately

If you have a detailed analysis, you have detailed specifications and you want to turn them into stories. The problem is that we turned the specifications into way too many, too-small items. This led to several problems:

  • The more small items, the more potential estimation errors
  • Too much detail implies unwanted dependencies that impede the estimation process. If priorities change, it is sometimes necessary to re-estimate many stories
  • A false sense of control and precision. Because we have many stories in the backlog, we think we have everything under control and that the estimates are precise
  • Too many stories to estimate: the team drowns in the huge amount of stories and loses the overview
  • Useless rework spread across many stories instead of a few, once feedback on the delivered stories comes in
  • A hardly manageable amount of stories

The lesson learned is that, even though the analysis was detailed, we would have been better off with coarse-grained stories, so as to keep their number small and limit interdependencies.


Emergent

The backlog is not emergent because it is imported from an existing analysis. Nevertheless, we could have kept it emergent by breaking down larger stories when they came closer to being implemented in the forthcoming sprints. This would have allowed us to keep the backlog more manageable.

One should refrain from trying to map detailed specifications to detailed items in the backlog. It requires a lot of discipline and work to transform fine-grained material back into coarse-grained stories for the sake of manageability and emergence.


Estimated

In order to estimate the backlog, we could not play planning poker. Instead, we played something along the lines of the Team Estimation Game or Story Card Matrix (see the references section). Indeed, we needed a technique to get to consistent estimates quickly. The lesson learned here is that, although those techniques work great, you quickly drown in the swarm of stories laid down on the table. The table, however large, gets cluttered with stories.

Additionally, the more small items we estimate, the greater the loss of accuracy, because individual errors add up. One should not take the estimates obtained in this case for granted. Only time will tell, and it is important to keep steering rather than switch on the autopilot.


Prioritised

The stories are first prioritised using the MoSCoW method. This allows for a first pass at prioritising items for the upcoming release. Afterwards, relative priorities still have to be defined, and reviewed regularly to determine what goes into the upcoming sprint. The more items you have, the more the Product Owner has to prioritise. She can easily get bogged down in the details. It is much harder to make decisions on smaller items than on larger ones, especially because of the dependencies among them.
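
The two-pass ordering described here (a MoSCoW pass, then relative priority within each category) can be sketched as follows. This is a hypothetical illustration; the story names and fields are made up:

```python
# First pass: MoSCoW categories define the coarse ordering.
MOSCOW_ORDER = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}

backlog = [
    {"story": "Export report", "moscow": "Could", "priority": 2},
    {"story": "User login", "moscow": "Must", "priority": 1},
    {"story": "Audit log", "moscow": "Should", "priority": 1},
    {"story": "Password reset", "moscow": "Must", "priority": 2},
]

# Second pass: within each category, relative priority decides.
ordered = sorted(
    backlog,
    key=lambda s: (MOSCOW_ORDER[s["moscow"]], s["priority"]),
)
for item in ordered:
    print(item["moscow"], "-", item["story"])
```

With fewer, coarser-grained items, this second pass stays cheap for the Product Owner; with hundreds of tiny interdependent stories, it becomes the bottleneck described above.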


Starting a green-field project with DEEP in mind is different from building up a backlog for a brown-field project. In the latter, it is even more important to bear the principles in mind, so as to avoid falling into the pitfalls of false control and precision. At any time, it is important that the backlog be kept manageable and that details be added at the last responsible moment, during backlog grooming sessions. If we detail the backlog earlier than that, we introduce too much waste. This waste induces confusion and quite some overhead that must be avoided at all costs.

Following lean principles helps you focus on keeping the backlog DEEP. If your backlog is too detailed, you can’t clearly see the whole because of the waste introduced. It is thus necessary to detail the backlog incrementally, even when most of the analysis work has already been done.

