Testing is not easy; it is actually very hard. This article is about the difficulties and provides you with a couple of great resources to bring you closer to test heaven, or at least farther away from test hell.
The naive part
Time to market is important for our business. Therefore, we need to deliver more often than ever before. The only way to get there is to automate tests, so that we can quickly add features and verify that the new stuff works and the old stuff still works.
Did I hear some ‘business bla bla’? It sounds cool, but it is easier to want to be a dancing queen than to become one.
The hard part
I have struggled multiple times with the crazy effort of maintaining tests, and in the past I have seen projects drop unit tests halfway through.
I used to struggle a lot with questions like:
- Do I need unit tests, integration tests, acceptance tests?
- Where is the borderline between unit tests and integration tests?
- Should I write tests first, as in Test-Driven Development?
- What should I actually test?
- What kind of test coverage is most reasonable?
I won’t answer all of these questions in this article, but I would like to share a couple of insights, test smells, and some pointers to resources which I found amazingly helpful.
Without tests, bugs are like soap bubbles
Imagine your bugs are like soap bubbles: whenever you change code, you blow soap bubbles and try to burst them all. As anyone who has played with children knows, they catch some of the bubbles quickly, but first, it is amazing how many bubbles a single blow creates, and second, some are always carried far up into the sky.
Even an imperfect set of unit tests and acceptance tests will prevent a large number of bugs. It gives you much more confidence when refactoring code that impacts areas you have not coded this morning. Tests are a prerequisite for refactoring, and refactoring is a prerequisite for a maintainable code base. Without the ability to refactor continuously, you will have to compromise heavily on code quality.
Borders are not clearly defined
Unit tests cover a unit of code, which might include one or multiple classes in object-oriented languages, or functions in functional languages.
Integration tests are mostly defined as tests dealing with code you cannot change (frameworks and libraries) or with subsystems like databases and queues. Sometimes the term is used for tests which verify the integration of various software systems.
Acceptance tests verify functionality from a customer point of view. For example, it checks if a user story does work as expected.
What is a test of a method which queries the database? It is a …?
May I ask a different question instead: will the answer help you in testing, or only in a philosophical discussion?
Let’s focus on something else.
A unit test does not check whether all the units work together. Imagine you build a plane: all the parts look neat, but it does not fly.
An acceptance test checks whether the software works as expected. Imagine you build a plane: it flies, but you just hammered it together somehow.
Integration tests are somewhere in the middle.
Unit tests run in memory, are crazy fast, and you can have many of them with little effort. Acceptance tests require setting up all the data for a use case and are slow, as they involve databases, message queues, and whatever else you have built into your system. You can have only a few of them.
As a consequence, you need tests on different levels for different purposes. Do not test units of code with acceptance tests, and do not try acceptance testing with unit tests. Neither makes sense, and both can cause crazy effort to build the tests.
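To make the difference concrete, here is a minimal unit-test sketch in Python (the `ShoppingCart` example and all its names are invented for illustration): it exercises pure in-memory logic, so thousands of tests like it can run in seconds, while an acceptance test for the same feature would have to drive the whole checkout through the real system.

```python
import unittest

class ShoppingCart:
    """Tiny in-memory domain object: ideal territory for unit tests."""

    def __init__(self):
        self._items = []

    def add(self, name, price, quantity=1):
        self._items.append((name, price, quantity))

    def total(self):
        return sum(price * quantity for _, price, quantity in self._items)

class ShoppingCartTest(unittest.TestCase):
    # A unit test: no database, no network, purely in memory.
    def test_total_sums_price_times_quantity(self):
        cart = ShoppingCart()
        cart.add("apple", 0.50, 4)
        cart.add("bread", 2.00)
        self.assertEqual(cart.total(), 4.00)

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=2)
```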
Unit tests improve the code quality
If you do test-driven development, or at least write tests early, you have to build your code so that it is testable. A method with 600 lines of code, or a class with dependencies on 10 other classes, even if they are injected by your preferred dependency injection framework, is impossible to test.
In a large project I worked on, we found a strong correlation between untested code and code which required heavy refactoring.
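A sketch of what that looks like in practice (the order-validation example and all names here are invented): instead of testing a 600-line method through layers of setup, carve the decision logic out into a small pure function and test it directly.

```python
# Instead of one huge method that validates, transforms and saves an order,
# extract the validation rules into a small function with no dependencies:
def validate_order(items, credit_limit):
    """Return a list of problems; an empty list means the order is valid.

    `items` is a list of (price, quantity) pairs.
    """
    problems = []
    if not items:
        problems.append("order has no items")
    total = sum(price * quantity for price, quantity in items)
    if total > credit_limit:
        problems.append("credit limit exceeded")
    return problems

# With no collaborators to set up, each test is a one-liner:
assert validate_order([], credit_limit=100) == ["order has no items"]
assert validate_order([(10, 2)], credit_limit=100) == []
assert validate_order([(60, 2)], credit_limit=100) == ["credit limit exceeded"]
```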
Test coverage is one part, feedback is the other
“With our tests we cover 100 % of the code base, including database access; they only take two days to run.” A scenario like this is a nightmare for developers. It is like sending your code away to be tested offshore.
If possible, developers run all tests before they commit code, which is potentially a couple of times per day. So a test suite needs to be crazy fast. Keep IO access out of the tests and select some acceptance and integration tests as smoke tests. Try to run as many reasonably selected tests as possible. If the suite is still too slow or the coverage is not enough, dig deeper:
- Push variations of test scenarios from acceptance and integration tests down to unit tests
- Fight hard to avoid redundancy
- Fake IO access
- Execute test suites in parallel
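“Fake IO access” can be as simple as injecting a stand-in for the object that talks to the outside world. Here is a sketch using `unittest.mock` from Python’s standard library (the `WeatherReport` class and the `fetch_temperature` method are invented for the example):

```python
from unittest.mock import Mock

class WeatherReport:
    """Formats a report; the client doing the HTTP call is injected."""

    def __init__(self, client):
        self._client = client

    def summary(self, city):
        temperature = self._client.fetch_temperature(city)
        return "%s: %d degrees" % (city, temperature)

# A Mock stands in for the real HTTP client, so the test does no network IO
# and stays crazy fast.
fake_client = Mock()
fake_client.fetch_temperature.return_value = 21

report = WeatherReport(fake_client)
assert report.summary("Berlin") == "Berlin: 21 degrees"
fake_client.fetch_temperature.assert_called_once_with("Berlin")
```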
Quick feedback is essential for development; automated tests are not only for management reporting.
Tests require design
Tests are code, and code requires design. Bad design of the code, bad design of the tests, or bad design of both can sink at least the productivity of your project.
So you need to study test design.
A complex setup, especially for unit tests, can be caused by many things.
Have you ever heard of the single responsibility principle? Well, if you violate it in the code, then you will violate it in your tests as well.
Another reason could be too many side effects. Do you sprinkle IO access through your code like powdered sugar on waffles, a save to the database here and a save to the database there?
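One remedy, sketched below with invented names, is to keep the calculation pure and push the single save to the edge of the method; the pricing rule can then be unit tested without any database (prices are in cents to keep the arithmetic exact):

```python
def apply_discount(total_cents, customer_is_loyal):
    """Pure pricing rule: no IO, trivially unit-testable."""
    if customer_is_loyal and total_cents >= 10000:
        return total_cents * 90 // 100   # 10 % off for loyal big spenders
    return total_cents

def checkout(total_cents, customer_is_loyal, save):
    # The one side effect happens once, at the edge; `save` is injected.
    final_total = apply_discount(total_cents, customer_is_loyal)
    save(final_total)
    return final_total

# The rule is tested without touching a database:
assert apply_discount(20000, customer_is_loyal=True) == 18000
assert apply_discount(20000, customer_is_loyal=False) == 20000

# The shell is tested with a recording stub instead of a real save:
saved = []
checkout(20000, True, saved.append)
assert saved == [18000]
```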
Refactoring breaks a crazy amount of tests
This may indicate that a test does not exercise an interface but peeks behind it and makes assertions on the internals, or that your code does so. It can also be caused by many or complex dependencies between classes (dependency rot). Basically, it is a design problem.
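A sketch of the smell (the `RateLimiter` class is invented for illustration): the brittle test peeks at a private field, so renaming it or swapping the implementation breaks the test, while the robust test asserts only on behavior observable through the interface.

```python
class RateLimiter:
    """Allows a fixed number of calls; how it counts is an internal detail."""

    def __init__(self, limit):
        self._limit = limit
        self._calls = 0   # internal state -- tests should not peek at this

    def allow(self):
        if self._calls < self._limit:
            self._calls += 1
            return True
        return False

limiter = RateLimiter(limit=2)

# Brittle (don't do this): asserting on internals means a rename or a
# switch to a timestamp-based implementation breaks the test, although
# the observable behavior is unchanged.
#   assert limiter._calls == 0

# Robust: assert only on behavior visible through the public interface.
assert limiter.allow() is True
assert limiter.allow() is True
assert limiter.allow() is False
```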
Test driven development is so hard
Test-driven development requires you to write a simple failing test before coding. But what if you find that there is no simple useful test? Before you conclude that TDD only works for bowling games, Roman numeral converters, or whatever popular example is used in tutorials and books, please consider that many professionals use it in complex scenarios. Struggling to write simple tests most likely points to a design problem, such as too many responsibilities in a class or big and complex methods.
In my own experience, TDD requires a lot of practice and is not at all trivial. Do not get frustrated on the first day; keep building up experience.
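To make the rhythm concrete, here is a minimal red-green sketch based on a classic kata (converting numbers to Roman numerals); the implementation below is what such a function might look like after a few cycles of writing the smallest failing test and then the simplest code to pass it.

```python
# Red: start with the smallest failing test, e.g. `to_roman(1) == "I"`,
# write just enough code to pass it, then add the next test and grow.
def to_roman(number):
    """Convert a positive integer to a Roman numeral, grown test by test."""
    numerals = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    result = ""
    for value, symbol in numerals:
        while number >= value:
            result += symbol
            number -= value
    return result

# The tests accumulated along the way keep passing after every step:
assert to_roman(1) == "I"
assert to_roman(4) == "IV"
assert to_roman(7) == "VII"
assert to_roman(9) == "IX"
assert to_roman(10) == "X"
```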
Great reads and films
The good thing is that there are amazing resources to improve your testing skills.
Here are three puzzle pieces which might help you reach test heaven:
- Test design and what to test (a conference session): http://www.youtube.com/watch?v=qPfQM4w4I04 by Sandi Metz
- Granularity of tests (a book): http://www.growing-object-oriented-software.com by Steve Freeman and Nat Pryce
- Practice of test-driven development (a series of videos on TDD): http://cleancoders.com by Robert Martin
I hope you enjoyed reading the article.