In one of my Scrum projects there was an interesting conversation between my testing team and the development team:
- Tester A: “Look at that bug; it’s pretty straightforward that the functionality doesn’t match our test case. Why can’t somebody do a quick smoke test before checking in the code?”
- Developer A: “Well, yes, I agree that’s a bug. We just ran out of time. The schedule is tight for us, so we did the most important verifications before we checked the code in: we made sure all unit tests passed, we made sure the build was solid, and we wrote as many automated test scripts as possible, but we didn’t have enough time to cover that functionality. It’s great that the testing team found that bug; we can fix it later.”
- Tester B (Test Lead): “But that costs a lot. We spent a whole day manually executing all the functional test cases and found at least 5 obvious bugs; they could have been identified even without looking at the test cases. Now we need another day for regression testing after your team fixes them.”
- Developer B (Development Lead): “But that’s the reality, isn’t it? It’s normal to have bugs. We cannot avoid delivering bugs along with the code. That’s why we have a testing team.”
The most interesting part to me is that I keep hearing different people have different versions of this same conversation. But is it really normal or reasonable, as Developer B said, that developers cannot avoid delivering numerous bugs along with their code?
You may have realized that it’s kind of tricky if we look at a typical development lifecycle. Generally speaking, there are two development phases: phase #1, creating bugs; phase #2, finding and fixing bugs. Sometimes phase #2 requires even more effort than phase #1. Even in Scrum projects, it’s quite common for teams to define a couple of “stabilization sprints” to deal with bugs, even when they have set up clear quality criteria for each sprint and are using tools (like continuous integration and test automation) to help secure code quality.
Why can’t we do things right the first time? Why budget for correcting our own mistakes? Why don’t we check fewer bugs into our code repository?
In another team, we had a deep discussion around those questions before project kick-off: how could we make it possible for developers to deliver few bugs the first time they check code into the repository? Our first rough idea was that every developer should run a smoke test in their local environment before checking in code. The testing team also proposed that they could help execute the corresponding test cases locally before the CI build was generated. We came to an agreement as a whole team: all developers and testers would do a smoke test together after the daily build to verify the functionality developed that day, and resolve today’s issues today.
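One way to make the “smoke test before check-in” habit automatic is a version-control pre-commit hook. The original project doesn’t mention any specific tooling, so everything below is an illustrative assumption: the use of git hooks, pytest, and a “smoke” test marker are all invented for this sketch.

```python
#!/usr/bin/env python3
# Hypothetical git pre-commit hook (saved as .git/hooks/pre-commit and made
# executable). Nothing here comes from the original project: git, pytest,
# and the "smoke" marker are illustrative assumptions.
import subprocess
import sys


def smoke_passed(cmd):
    """Run the smoke-test command; True only if it exits with status 0."""
    return subprocess.run(cmd).returncode == 0


# As a hook, uncomment the lines below so a failing smoke suite blocks the
# check-in (git aborts the commit when the hook exits with non-zero status):
#
# suite = [sys.executable, "-m", "pytest", "-m", "smoke", "-q"]
# sys.exit(0 if smoke_passed(suite) else 1)
```

The point of keeping the suite small (a “smoke” subset rather than the full regression run) is that the hook must be fast enough that developers don’t feel tempted to bypass it.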
That approach worked well in that project. In our retrospectives, these “What we did well” items came up several times:
- Almost all obvious bugs were found before developers checked in the code.
- The quality of development builds was significantly improved: the number of identified bugs decreased by about 50% compared with our historical data.
- We had better cooperation between the testing and development teams, with less back-and-forth communication to describe and track bugs.
- We had better control of the schedule. Although we spent a bit more effort (on average about 30 minutes) every day, we saved at least one sprint of regression testing and bug fixing.
- We were more confident about our deliverables. We could now say that every sprint delivered a potentially shippable product.
- Although we still had bugs identified after code check-in, the time and effort we spent on testing and bug fixing was significantly reduced. We planned 12 sprints to finish all the development work, and of those 12, only the last one was dedicated to regression testing and bug fixing.
But we were not satisfied yet. In the retrospective of the last sprint, some team members raised things we still needed to improve:
- We still had a lot of arguments about whether something was a bug or not; there were still gaps in requirement understanding between the development and testing teams. The developers seldom read the test case documents carefully before starting to code.
- The testing team still spent a lot of effort correcting the development team’s “stupid” mistakes in their local environments. Our developers now had the skills to run smoke tests, but they were just not (mentally) ready to stop relying on the testing team. The testing team wanted to focus on more valuable tasks like performance testing and test automation.
In the following brainstorming session on how to keep improving, some creative ideas came up. One developer asked: since most of the developers were effectively doing testing work now, and since, apart from designing good test cases, most of the test execution could be handled by developers, could we train the developers in test case design so that the development team could take over functional testing entirely, freeing our testers to do performance and automated testing and produce higher value? And one of the testers suggested that, since our team already had good experience with Test-Driven Development (TDD), we could go one step further and try Test-Driven Requirements: throw away the boring requirement documents and use test cases as the single input for both teams, both to express the requirements and to remove misunderstandings.
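For readers less familiar with the TDD workflow the team refers to, here is a minimal red-green sketch. The `price_with_discount` function and its discount rule are invented purely for illustration; they are not from the project.

```python
import unittest


# Step 1 (red): write the test first, before any production code exists.
# Hypothetical requirement: orders of 100 or more get 10% off.
class TestDiscount(unittest.TestCase):
    def test_small_order_pays_full_price(self):
        self.assertEqual(price_with_discount(50), 50)

    def test_large_order_gets_ten_percent_off(self):
        self.assertEqual(price_with_discount(200), 180)


# Step 2 (green): write just enough production code to make the tests pass.
def price_with_discount(amount):
    return amount * 0.9 if amount >= 100 else amount
```

In the Test-Driven Requirements idea the tester proposed, test cases like these would be written jointly by testers and developers before the sprint starts, serving as the executable statement of the requirement itself.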
Later on, the same team got another opportunity to try these ideas. In that project we didn’t set up any dedicated tester role on our org chart; only one experienced tester supported the development team, while all the other testers shifted their focus from manual functional testing to automated and performance testing. Testers and developers worked together to develop the functional test cases used as the sprint input.
To me, those experiences were brand new and exciting. Next week I’ll share more about what we did and how we made these ideas work in that project.