Learning and Growing with Agile


I work for an organisation that thinks that juniors can’t design. Even worse, they think that, because of this alleged lack of skill, juniors need to be fed design. Consequently, projects need “designers” who just do big upfront design.

I say no! I even say, don’t give them anything! Instead, coach them, give them opportunities to try and learn and propose.

I see several practices that help address the topic:

  1. Design sessions: During these short sessions, have more senior roles (seasoned developers and architects) work out a design together with the juniors. This is highly beneficial because the juniors are an active part of it. They better understand what to implement afterwards because they know the reasoning behind it. Most importantly, they learn how to tackle design.
  2. Pair Programming: Have a senior and a junior work together on a story. Alternate roles so that the junior can play both the driver and navigator roles.
  3. Test Driven Development: TDD is all about design: designing for testability, YAGNI, baby steps, incremental design, refactoring. Do ping-pong pair programming: one team mate writes a test, the other implements the code to make it pass, both refactor, and then the roles swap. Repeat (see the sketch after this list).
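To make the ping-pong rhythm concrete, here is a minimal sketch in Java with JUnit; the PriceCalculator class and its discount rule are hypothetical, invented purely for illustration.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Red: one team mate writes a failing test first.
public class PriceCalculatorTest {

    @Test
    public void appliesTenPercentDiscountFromOneHundredUpwards() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(180.0, calculator.priceAfterDiscount(200.0), 0.001);
    }

    @Test
    public void leavesSmallAmountsUntouched() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(50.0, calculator.priceAfterDiscount(50.0), 0.001);
    }
}

// Green: the other team mate writes the simplest code that passes.
// Both then refactor and swap roles for the next test.
class PriceCalculator {

    double priceAfterDiscount(double price) {
        return price >= 100.0 ? price * 0.9 : price;
    }
}
```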

The benefit of this approach is that the process is much leaner. If we map the benefits to the seven lean principles:

  1. Eliminate waste: No time wasted on lengthy designs that will never be accurate. No overproduction of design.
  2. Build quality in: No need for any rework after implementation. Peer review is immediate thanks to pair programming.
  3. Create knowledge: Both juniors and seniors learn and grow their skills.
  4. Defer commitment: Design just in time.
  5. Deliver fast: Design just in time and enough leads to a shorter lead time.
  6. Respect people: Juniors are respected and considered fully skilled team members.
  7. Optimise the whole: Team improves, no constraint on expected design, rapid feedback on the design.

Conclusions:

Agile and lean practices really help get juniors up to speed. They help juniors grow while optimising the process and being respectful of team members. A great benefit is also that it’s much more fun to work that way!

Resources:

Pair Programming
Lean Principles of Software Development
Test Driven Development

Unit tests versus integration tests and test smells


Today, I had an extremely lively discussion with the team I coach. The topic was the definition of what a unit test is and, foremost, what it is not.

It is so natural to me to make the difference between unit, integration, system and user acceptance tests that I easily get carried away on this matter. Now, I need to summarise the levels of testing and point to useful references to relieve myself 🙂 The next time I have to discuss the topic, I will simply give the URL to the people in front of me and discuss it again after they have read it!

The different levels of testing:

I always see four levels of testing. Each targets a specific level of granularity of the component(s) and considers them as either white or black boxes. There are further distinguishing characteristics: fast execution for rapid feedback vs. long-running tests, the stakeholders involved, and so on.

  • Unit tests: white box tests that aim at ensuring that a given unit of code, usually a class, behaves as desired. These tests must fly because, as we try to code test-driven, the feedback must be almost immediate. Developers should fully automate those tests;
  • Integration tests: white box tests that aim at ensuring that, when put together, classes or components work as desired. These tests usually take much longer to run because they involve setting up environmental components such as a database or a queuing system. Developers shall fully automate those tests;
  • System tests: black box tests that aim at ensuring that the system works as desired. Those tests usually take a substantial amount of time and also serve to regression-test the whole system. Testers usually either conduct or automate those tests. The preference is to have all those tests automated so that they can be executed during nightly builds for instance;
  • User acceptance tests: black box tests that are usually conducted by the business people and that aim at validating the behaviour of the system with respect to their expectations. They can be either automated or manual, depending on who is in charge of conducting them.

Today, I am only interested in unit and integration tests, to make the differences clear.

Test-driven development:

I always develop test-first. This means that, when I code a class, I start by writing the test, which forces me to think about what I actually need from the class I’m testing. Writing the test first makes you think of the design of the class and its collaborators, but it also forces you to make that class testable. Saying that might sound stupid, but it is a key advantage of TDD. I have very often seen people struggle with humongous classes embodying too many responsibilities, trying to unit test them afterwards, because those classes were not designed to be tested. I will get back to the topic of testability in the next section.

Even though the benefits of TDD are so compelling, most people haven’t adopted it yet. Many don’t even understand why they should start doing it. Never mind, I will try and persuade them 🙂

Whatever the level of testing, TDD always applies: you start at the unit level and work your way up to system tests and user acceptance tests. You will ask: how can you test-drive a black box? By preparing your scenarios without having the system in front of you, and by automating your tests as early as possible, so that you’re ready to test the bloody system when it shows up.

It is also important to make sure that the system is easily testable, even as a black box. Very often, it is not the case. When you think about testing early, you can make your system testable and add features that facilitate testing, such as backdoors for triggering batch jobs, or a console that allows cleaning the cache or changing some configuration parameters.
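As an illustration, such a backdoor can be as simple as a small class exposed only in test environments. This is a hedged sketch: TestBackdoor, BatchScheduler and CacheManager are hypothetical names, and how the backdoor is exposed (JMX, an admin servlet, etc.) is left open.

```java
// A minimal test backdoor, assumed to be wired up and exposed only in
// test environments. BatchScheduler and CacheManager are hypothetical
// collaborators standing in for the real scheduling and caching code.
public class TestBackdoor {

    private final BatchScheduler scheduler;
    private final CacheManager cacheManager;

    public TestBackdoor(BatchScheduler scheduler, CacheManager cacheManager) {
        this.scheduler = scheduler;
        this.cacheManager = cacheManager;
    }

    // Lets a system test trigger a batch job on demand instead of
    // waiting for its scheduled time.
    public void triggerBatchJob(String jobName) {
        scheduler.runNow(jobName);
    }

    // Lets a system test reset cached state between scenarios.
    public void clearCaches() {
        cacheManager.clearAll();
    }
}

interface BatchScheduler { void runNow(String jobName); }

interface CacheManager { void clearAll(); }
```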

I will now give you my two cents on the differences between unit and integration tests.

Unit tests:

A unit test aims at ensuring that a given single class, on its own, works as expected. As any class depends on other classes, you have to deal with those dependencies. This is where the difference with integration tests lies. You will mock, fake or stub some dependencies in order to really test the class in isolation (more on those topics in a future post). Note that I said some dependencies, not all of them. Usually, one does not mock or fake value objects or String objects, for example.

When you unit-test a class, you are interested in two things:

  • Whether the end result is what you expected. This can be the return value of the method for example;
  • Whether some “hidden” action was performed. By hidden, I mean any effect that is not a return value: storing information in a database, changing the state of an object, or sending an email, for example.

The first category is pretty easy to verify. The second can be much more complex to test if one is not careful and does not properly separate responsibilities between classes.
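To make the two categories concrete, here is a minimal sketch assuming JUnit and Mockito; Greeter and AuditLog are hypothetical names.

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class GreeterTest {

    @Test
    public void returnsGreetingAndRecordsHiddenAction() {
        AuditLog auditLog = mock(AuditLog.class);
        Greeter greeter = new Greeter(auditLog);

        String greeting = greeter.greet("Alice");

        // First category: check the visible end result.
        assertEquals("Hello, Alice", greeting);
        // Second category: check the "hidden" action on a collaborator.
        verify(auditLog).record("greeted Alice");
    }
}

interface AuditLog { void record(String event); }

class Greeter {
    private final AuditLog auditLog;

    Greeter(AuditLog auditLog) { this.auditLog = auditLog; }

    String greet(String name) {
        auditLog.record("greeted " + name);
        return "Hello, " + name;
    }
}
```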

For example, let’s say that I have a business service that must store several objects in a database, send email notifications and send a request to an external system via a Web service. Let’s also consider that the service must throw an exception if any step fails. If one coded all those actions in the business service itself, it would have way too many responsibilities. Therefore, because you respect clean separation of concerns and abstraction, you have this class control the flow of actions and delegate the various responsibilities to other classes. You will use data access objects, an email notification service and a service in charge of communicating with the external Web service.

Testing the business service then boils down to testing the multiple scenarios, including error ones. Because you have a clean separation between the service and the delegates, you merely have to ensure that the service uses the delegates adequately. If you want to simulate an error, you have a delegate throw one in the scenario. To achieve this, you mock up the delegates that the business service depends on and then assert that they were called as expected, or ask the mocks to throw an exception, as sketched below.
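Here is what such an error scenario could look like, again as a sketch with JUnit and Mockito; OrderService, OrderDao, MailNotifier and ServiceException are hypothetical stand-ins for the service and delegates described above.

```java
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;

import org.junit.Test;

public class OrderServiceTest {

    @Test(expected = ServiceException.class)
    public void translatesFailuresFromTheNotificationDelegate() {
        OrderDao orderDao = mock(OrderDao.class);
        MailNotifier mailNotifier = mock(MailNotifier.class);

        // Ask the mock to simulate a failing step.
        doThrow(new RuntimeException("SMTP down"))
                .when(mailNotifier).notifyOrderPlaced("order-1");

        new OrderService(orderDao, mailNotifier).placeOrder("order-1");
    }
}

// Hypothetical delegate interfaces, standing in for the DAO and the
// email notification service.
interface OrderDao { void save(String orderId); }

interface MailNotifier { void notifyOrderPlaced(String orderId); }

class ServiceException extends RuntimeException {
    ServiceException(String message, Throwable cause) { super(message, cause); }
}

// The service controls the flow of actions and delegates the real work.
class OrderService {
    private final OrderDao orderDao;
    private final MailNotifier mailNotifier;

    OrderService(OrderDao orderDao, MailNotifier mailNotifier) {
        this.orderDao = orderDao;
        this.mailNotifier = mailNotifier;
    }

    void placeOrder(String orderId) {
        try {
            orderDao.save(orderId);
            mailNotifier.notifyOrderPlaced(orderId);
        } catch (RuntimeException e) {
            throw new ServiceException("Could not place order " + orderId, e);
        }
    }
}
```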

If you want to have the opportunity to use fakes or mocks, it is important that you can “inject” them into the class under test. If the class instantiates any delegate using the new keyword, you’re stuck. This is where the Dependency Inversion Principle (DIP) kicks in. Dependency injection is one way to implement DIP.
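A minimal sketch of the difference, reusing the hypothetical MailNotifier from the previous example:

```java
interface MailNotifier { void notifyOrderPlaced(String orderId); }

class SmtpMailNotifier implements MailNotifier {
    public void notifyOrderPlaced(String orderId) { /* send via SMTP */ }
}

// Stuck: the delegate is hard-wired with new, so a test cannot
// substitute a mock or a fake for it.
class RigidOrderService {
    private final MailNotifier mailNotifier = new SmtpMailNotifier();
}

// Testable: the dependency is injected through the constructor (by hand
// in a test, or by a container such as Spring in production), so a test
// can pass in a mock instead.
class FlexibleOrderService {
    private final MailNotifier mailNotifier;

    FlexibleOrderService(MailNotifier mailNotifier) {
        this.mailNotifier = mailNotifier;
    }
}
```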

Integration tests:

Technically, integration tests can be the most expensive to implement. Here, you focus on testing the integration between components/classes, but also with external components such as databases or JMS queues. You also test that your transaction handling works fine. Personally, I tend to use the Spring Framework to implement integration tests because it lets you simulate a container outside of the container and deals with transactions. I tend to separate integration and unit tests within a project/module. The reason is that I only run the integration tests at the end, because they take much more time to execute and provide feedback too slowly. If you use Maven, you can easily separate them using the Surefire and Failsafe plugins.
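As a sketch of what such a Spring-driven integration test might look like, assuming JUnit 4 and the spring-test module; OrderDao, its methods and the context file name are hypothetical.

```java
import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

// Runs against a real database wired up by the Spring test context;
// @Transactional makes each test run in a transaction that is rolled
// back afterwards, so tests stay independent.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:integration-test-context.xml")
@Transactional
public class OrderDaoIT {

    @Autowired
    private OrderDao orderDao;

    @Test
    public void savedOrderCanBeReloaded() {
        orderDao.save("order-1");
        assertNotNull(orderDao.findById("order-1"));
    }
}

// Hypothetical DAO contract; the implementation comes from the context.
interface OrderDao {
    void save(String orderId);
    Object findById(String orderId);
}
```

As a side note, by default the Failsafe plugin picks up classes whose names end in IT, while Surefire runs the *Test classes, which gives you the separation almost for free.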

The main difference between unit and integration tests is thus: instead of mocking everything that surrounds the class under test, you strive to have the real pieces running, outside the container, to keep the feedback rapid. I find it a good practice to also run integration tests inside the container, if possible. The reason is that, very often, things change between the standalone version and the container, even though you use standard technologies or APIs. Long live standards!

Test smells and Data Access Objects

Almost every team I have worked with had the smelly habit of implementing every feature from the ground up. They start with the database layer and then build on top of it, up to the GUI, integration components or any other boundary classes. I find it very bad to start from the database, because you no longer think in terms of the domain but in terms of persistence! Bad, bad, bad. Focusing on technique instead of business really stinks. But this is only one facet of the problem.

When you write a DAO, you want to be sure that you have your queries or ORM set up right. Therefore, you immediately start writing integration tests instead of unit tests. Many think that they’re writing unit tests, but they’re not. Once you’ve got your DAO right, you start working on the layer above, and so on and so forth. Usually, when people start working on the layer on top of the DAO, they keep on using the real DAO. This means that they continue writing integration tests instead of coming back to unit tests. At each layer of the system, they test everything down to the database layer, because they keep on working as they started, integrating layer after layer. The tests of the layers on top of the persistence layer take more and more time to run, and the whole bunch of tests starts taking an awful amount of time to run! That’s a real test smell!

Once you’ve got your persistence layer right, you just need to be sure that its interface is right; then mock it up when writing the classes that use it, and get back to writing unit tests instead of integration tests all the way, as in the sketch below.
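For instance, a simple in-memory fake of a hypothetical CustomerDao interface lets the layer above be unit-tested without ever touching the database:

```java
import java.util.HashMap;
import java.util.Map;

// The contract the persistence layer has already proven correct through
// its own integration tests; the names here are hypothetical.
interface CustomerDao {
    void save(String id, String name);
    String findNameById(String id);
}

// An in-memory fake: good enough for unit-testing the layers above, and
// orders of magnitude faster than hitting a real database.
class InMemoryCustomerDao implements CustomerDao {
    private final Map<String, String> customers = new HashMap<String, String>();

    public void save(String id, String name) {
        customers.put(id, name);
    }

    public String findNameById(String id) {
        return customers.get(id);
    }
}
```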

I’m not saying that integration tests are useless. Not at all! They help ascertain that, when you put all the pieces together, the pieces do more than just respect a given interface. For example, if you want to test that error handling works fine from bottom to top, you write integration tests that trigger errors to ensure that the whole stack behaves as expected.

Conclusions:

I really needed to talk about testing 🙂 It is important to TDD, and not only for unit testing. I don’t know whether this post is readable, but I think it contains the gist of my views on testing. Maybe I shall also talk about performance testing and the testing of non-functional requirements.

Useful references:

Here are a few references; I will come up with more!