Testing from a Developer’s point of view
Author: Bram Dekker | Category: IT development and operations
Testing is not always the most fun part of our job. Some even see it as a necessary evil. At any rate, I think we can all agree that testing saves us a lot of time, and thus the customer’s money.
If you start building an application and the team has a shared view of said application, you don’t always need tests to be sure that you are in control of the code and system behavior. However, when the team starts to change (which is not an if, but a when) and the system grows in scope and complexity, you can start losing full control over the quality of the system. Changing one part of the system might then break another part that you just didn’t think of. At this point, a test-suite that was enhanced over time during the development of the system can be a lifesaver.
Having to extend a legacy system with new functionality is also a good case for a test-suite. Given that we do not have full control over the system and can consider it a black box, we can start building a test-suite around it by checking the output values for given inputs. If we then extend the system, we can at least be (almost) sure that the old behavior will not break because of our changes.
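Such tests are often called characterization tests: they pin down the observed behavior rather than a specification. A minimal sketch, where `legacy_price` is a hypothetical stand-in for any black-box legacy function and the expected values are assumed to have been captured from the running system:

```python
def legacy_price(quantity, unit_price):
    # Placeholder for black-box legacy code; imagine we cannot read or change it.
    return round(quantity * unit_price * 1.21, 2)

def test_characterizes_current_behavior():
    # Pin down outputs observed from the running system for known inputs.
    # These values describe what the system *does*, not what it *should* do.
    assert legacy_price(1, 10.00) == 12.10
    assert legacy_price(3, 3.00) == 10.89
    assert legacy_price(0, 99.99) == 0.0
```

With this safety net in place, any later change that alters the old outputs will be flagged immediately.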
Cover all cases
Running a test-suite in these cases and seeing all tests turn green is of course a relief. However, you can never be sure that the tests that were written cover all cases. Below, I will point out a few ideas and common pitfalls of writing tests that might help you build a more stable and understandable set of tests: one that covers more system functionality and reduces the maintenance burden.
The ideas and tips below focus mainly on functional system tests, but some of these could be applied to other layers of the testing stack (integration, unit, ...). This is because testing, to me, is a multidisciplinary effort that should reflect business value and system behavior.
Shared Idea of System Quality
Most people within a project team agree that testing is important, but they might not agree on the level at which testing should happen (system, integration, unit) or on what it should encompass: only external dependencies, internal dependencies too, full system meltdown triggers? Before you start testing, it is best to build a shared understanding among all team members of what should be tested and why. When you are working on a project with dependencies on other systems or teams, it is wise to manage expectations between your team and the external dependency, and with that avoid finger-pointing later on. Preferably, this shared idea is written down and revisited regularly to avoid confusion.
Data generation using builders
Test-cases related to the same subject often use similar data that differs only slightly from case to case. For example, a User with or without an email address. To avoid writing a lot of boilerplate code, the builder pattern can help you generate slightly different versions of the same objects for testing purposes.
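A possible sketch of such a builder, using the User example above (the class and field names are illustrative, not from any specific codebase):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    name: str
    email: Optional[str]

class UserBuilder:
    def __init__(self):
        # Sensible defaults, so each test only states the variation it cares about.
        self._name = "Jane Doe"
        self._email = "jane@example.com"

    def with_name(self, name):
        self._name = name
        return self

    def without_email(self):
        self._email = None
        return self

    def build(self):
        return User(name=self._name, email=self._email)

# Two slightly different Users, with almost no boilerplate per test:
user_with_email = UserBuilder().build()
user_without_email = UserBuilder().without_email().build()
```

Because every builder method returns `self`, variations can be chained, and adding a new field later only touches the builder, not every test.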
Mob test definition sessions
Alone, we might be able to come up with the key examples of a feature we want to build. However, by discussing the feature with others, we can find the edge-cases. That is why I think defining tests should be a group effort. In an agile context, for example, the product owner and the tester come up with the basic definition, then enhance the definitions with the rest of the team during a refinement session. This multidisciplinary approach also increases robustness, because a multitude of perspectives is involved.
Measure test change
To see if your tests are written at the right level of abstraction, you can measure the rate of change within your test-suite. If your tests are version-controlled together with your code, this is easy to do. As your product moves towards a stable release, the number of test changes should decrease as well. If this is not the case, it would be good to take another look at the abstraction level of your tests.
Test coverage thresholds
In projects that enforce thresholds on test coverage, a false sense of quality commonly creeps in. Setting thresholds means that teams focus on getting above the threshold, instead of thinking about a test’s actual benefit. This can lead to tests that don’t have any benefit but do increase the maintenance burden of the test-suite. My advice would be to think about cost versus benefit: do a risk analysis and decide, as a team together with the business, which risks are acceptable.
Feel responsible for testing
In companies with dedicated teams, testing and defining test-cases is often delegated to specific developers or to a separate group (testers). In my view, this is a shared responsibility in which developers should also take part. Discussing the various test-cases and writing the tests makes us more aware of what is important within the system, and helps us explore the edge-cases better. It also leads to a common testing toolset (meaning everyone uses the same tools). A tester’s role, then, is not only to test the system, but also to improve the testing awareness of developers and to come up with better ways to test. Their job also includes coaching and guidance.
Conciseness, coherence, completeness
Tests are useful in many ways. They could be used as acceptance criteria for features, as a specification of the system, or as documentation for later reference. Keep in mind that these scenarios require different levels of detail. If you want to have useful tests, try to adhere to the following:
- Always add an introductory text to the tests
- Start off with simple cases, then write down the more complex examples that test the boundaries of the feature
- Group related examples together
- Highlight key examples of a feature
- Separate more comprehensive tests and technical tests, which are related to the feature at hand, to keep a clean overview [1]

[1] Fifty Quick Ideas To Improve Your Tests, p. 62, “Balance Three Competing Forces”
Test writing tips
Teams approach testing in very different ways. Some teams write no tests, some a few, and some too many. If you decide that you want to write tests, a good rule of thumb is to start with the key examples. Then see if you need to extend the test-suite by varying parameters and exploring boundaries.
In the ‘to improve’ part of the example above, it’s unclear what is tested and what influences the result. The title says ‘withdrawal from bank account’, but what variables make this a valid withdrawal (amount, origin, currency, private)? Try to group the variations (amount, origin, currency) and split them into separate tests. This is easier to read and invites the reader to explore edge cases.
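The original example is not reproduced here, but the split might look like this sketch, where `is_valid_withdrawal` and its toy rules are invented purely for illustration:

```python
def is_valid_withdrawal(amount, origin="domestic", currency="EUR", balance=100.0):
    # Toy validation rules, invented for illustration only.
    return 0 < amount <= balance and origin == "domestic" and currency == "EUR"

# One focused test per varying group, instead of one vague "withdrawal" test:

def test_amount_must_be_positive_and_covered_by_balance():
    assert is_valid_withdrawal(amount=50.0)
    assert not is_valid_withdrawal(amount=0)
    assert not is_valid_withdrawal(amount=150.0)

def test_only_domestic_origins_are_allowed():
    assert not is_valid_withdrawal(amount=50.0, origin="foreign")

def test_only_euro_withdrawals_are_allowed():
    assert not is_valid_withdrawal(amount=50.0, currency="USD")
```

Each test name now states which variable it explores, and each test body varies only that variable.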
Arrange, Act, Assert (AAA)
From TDD (Test-Driven Development) we know that we should structure our tests in three steps: Arrange, Act, Assert.
These steps separate the setup of parameters, the action under test, and the result check, which makes a test easier to read. Different frameworks express them in different ways, for example with the ‘Given’, ‘When’, ‘Then’ terminology.
The ‘to improve’ case has multiple Givens and Whens, which makes it hard to see what is actually being tested, which parameters are relevant (name, account number), and which action is tested (is selecting the account details even needed?). In the ‘good’ example, we see that it is actually the name on the account that causes the payment to fail (regardless of the account number). Try to structure your tests as follows:
- Set up your parameters (arrange)
- Do the action (act)
- Check if the action yields the appropriate result (assert).
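The three steps can be made explicit with comments, as in this sketch (`Account` is an invented example class, not taken from the article’s original examples):

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal_reduces_balance():
    # Arrange (Given): an account with a known balance
    account = Account(balance=100.0)

    # Act (When): exactly one action under test, nothing else
    account.withdraw(30.0)

    # Assert (Then): one clear check of the outcome
    assert account.balance == 70.0
```

A single Act per test keeps the cause of a failure obvious: if this test goes red, it can only be the withdrawal that broke.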
Describe what, not how
Tests that describe ‘how’ something is done are hard to write, subject to a lot of change, often timing-dependent, and do not test what you actually want to capture.
If we were to change the login button’s identifier or the layout of the page, we would have to rewrite the ‘to improve’ set. If there is a network delay and the button hasn’t rendered in time, that example will also fail. The ‘good’ set keeps working and has none of these constraints.
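The contrast can be sketched as follows; the `login` helper and the commented-out browser calls are hypothetical, invented only to show the difference in abstraction level:

```python
# 'To improve' -- describes HOW: brittle, tied to identifiers and timing.
#   browser.find_element("#login-btn-2").click()
#   browser.wait(2)
#   assert browser.find_element("#welcome-banner").is_displayed()

# 'Good' -- describes WHAT: the intent survives UI and timing changes.
def login(username, password):
    # Stand-in for the real login flow; any implementation works here,
    # as long as it reports whether the user ended up logged in.
    return username == "alice" and password == "s3cret"

def test_registered_user_can_log_in():
    assert login("alice", "s3cret")

def test_wrong_password_is_rejected():
    assert not login("alice", "wrong")
```

The ‘what’ tests read like requirements; the implementation details behind `login` can change freely without touching them.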
We as developers try to be as concrete as possible when writing our software, and this sometimes shows in our tests. Though it might be convenient for us, it doesn’t give the reader more information than looking at the code would. Defining tests is also about exploring boundaries. Using mathematical formulas abstracts those boundaries away and makes people think about them less.
If I were to write down the ‘to improve’ case during a discussion with a stakeholder, we could agree that it is a valid test case. But what about February, which has fewer than 30 days? That could lead to a different validity status. Writing all of this down explicitly makes a deeper discussion about validity constraints more likely.
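Spelling the boundaries out might look like this sketch, where `is_valid_date` is a hypothetical function under test and Python’s standard `calendar` module supplies the real month lengths:

```python
import calendar

def is_valid_date(year, month, day):
    # Hypothetical validation logic under test.
    if not 1 <= month <= 12:
        return False
    return 1 <= day <= calendar.monthrange(year, month)[1]

def test_month_length_boundaries():
    assert is_valid_date(2021, 1, 31)      # January has 31 days
    assert not is_valid_date(2021, 2, 30)  # February never has 30 days
    assert not is_valid_date(2021, 2, 29)  # 2021 is not a leap year
    assert is_valid_date(2020, 2, 29)      # 2020 is a leap year
    assert not is_valid_date(2021, 4, 31)  # April has only 30 days
```

Each explicit case here is an invitation to discuss a real validity constraint with the stakeholder, which a single formula would have hidden.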
Want to know more?
I hope these tips will help you with your testing endeavors, even if not all of the topics mentioned above apply to your project. If you want to know more about this topic, I recommend the book “Fifty Quick Ideas To Improve Your Tests” by Gojko Adzic, David Evans and Tom Roden. It’s a very interesting read (disclaimer: I am not affiliated with the book or its authors in any way).