Last week I was facilitating an interesting conversation between my development team and testing team. The highlight of that conversation was how the two teams broke down the silos between them to secure code quality together at an earlier stage. I related a story about how one such team made a significant difference in code quality. They set a goal that developers should deliver fewer bugs the first time they checked code into the repository, and they made it happen: they saved at least one Sprint that would otherwise have gone to “stabilization”, and compared with historical data the number of identified bugs decreased by about 50%. Below are several key things they did:
- The testing team worked together with the developers to conduct functional verification on local development workstations before code check-in.
- Every day the whole team ran a 30-minute smoke test after the daily build, verifying the new functionality integrated that day.
- The team committed to resolving today’s issues today.
By the end of that project, however, the team found they could do more. The testers’ workload was heavier than normal because, during smoke testing on the development workstations before code check-in, the developers relied too much on the testers even though they had the required testing skills, and arguments over how requirements should be understood, and therefore over what counted as a bug, were common. Fortunately, the same team later got another chance to push their code quality further on a brand-new project with lower technical risk, so they could put more focus on testing and engineering practices. This time the team decided to aim for even more aggressive targets:
- Continue improving our code quality – decrease the number of identified bugs by more than 50%.
- Free the testers from routine functional testing work, letting them take on more valuable work such as performance testing and test automation.
- Practice Test Case Driven Requirement, making it possible to throw away those obscure requirements documents and build a shared baseline of understanding across all teams.
When defining our team model and Sprint process, we realized that our developers already had enough manual functional testing experience from the prior projects to take over most of the testing work, but they were not experienced enough to design high-quality test cases. If the testing team could teach them how to design good test cases, the developers could handle functional testing entirely, at least from a technical-skills perspective. That left only one problem with developers completely taking over functional testing: it is hard for a developer to test his or her own code objectively; they consistently find fewer bugs when testing their own work.
But again, this creative team resolved that problem. They decided they could do cross testing among the developers, because it is always easier to find other people’s bugs. To secure the quality of the testing itself, they realized they still needed a dedicated functional tester role, whose responsibility would not be to do the actual testing work but to support the developers directly while they tested: providing on-the-job training on how to write professional test cases, how to design test data, how to decide the best timing for regression tests, and so on.
Since the whole team (including the testers) already had solid experience with TDD (Test-Driven Development), it was easy for them to accept the concept quickly and to design the activities of a Test Driven Requirement Analysis cycle. The testing team provided a standard format for functional test cases, and based on that the team came up with a hierarchical structure for decomposing high-level requirements into user stories and functional test suites, along with a mechanism for maintaining requirement traceability.
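The team’s actual test case format is not published here, so the following is only a minimal sketch, in Python with invented field names, of how such a requirement → user story → test case hierarchy with traceability IDs might be modeled:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structure for illustration only - the real team's test case
# template and ID scheme are not described in the post.

@dataclass
class TestCase:
    case_id: str            # e.g. "REQ-12/US-3/TC-1" keeps the traceability chain in the ID
    title: str
    steps: List[str]
    expected_result: str

@dataclass
class UserStory:
    story_id: str
    description: str
    test_cases: List[TestCase] = field(default_factory=list)

@dataclass
class Requirement:
    req_id: str
    summary: str
    stories: List[UserStory] = field(default_factory=list)

    def trace(self) -> List[str]:
        """Flatten the hierarchy into traceability rows: requirement -> story -> test case."""
        return [
            f"{self.req_id} -> {story.story_id} -> {case.case_id}"
            for story in self.stories
            for case in story.test_cases
        ]
```

The only point of the sketch is that every test case carries its parent story and requirement, which is what would make a traceability baseline like the one described above easy to maintain and report on.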
The team felt ready to go. They started development using the new approach, and they succeeded again. Code quality was even better than before: the number of identified bugs per KLOC was only one third of our organizational benchmark. More importantly, almost all of the testing work was performed by developers – only the test lead spent part of his time supporting the development team, while the rest of the testers did automated testing and performance testing. As time went by the team kept improving their approach iteratively, and by the time the project finished they had a formally defined team process for their day-to-day work.
Below is a brief introduction to their final development process, which can be summarized as a set of key development roles, activities, and principles/commitments.
The key development team roles and their responsibilities:
- Story Owner – each user story has an owner who is responsible for its final, high-quality delivery. A Story Owner should NOT do any development work inside that user story, although the person in that role may be developing another user story. Instead, the Story Owner takes care of test case development and makes sure all the testing effort needed to cover that feature actually happens.
- Quality Goalkeeper – the development team needs an experienced functional tester to provide ongoing support and to measure current quality status, e.g., quality statistics analysis, overall quality reporting, and technical support. The Quality Goalkeeper is the go/no-go decision maker for the Sprint from the quality perspective.
The key activities inside a development lifecycle:
- Develop functional test cases and use them as the single documents that express the high-level requirements.
- Test-Driven Development, with a specific target for unit test code coverage (a small illustrative sketch follows this list).
- Continuous Code Review and local functional verification on development workstations before code check-in.
- Continuous Integration and test automation. Test automation is a key reason we can inspect so frequently – it removes most of the manual regression effort that daily functional testing would otherwise require.
- Daily Functional Testing – verify the functionality newly integrated that same day. This is a whole-team activity that takes about 30 minutes per day.
- A Sprint regression test before the Sprint Demo, and a final quality inspection before the product is delivered.
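The post itself contains no code, so the following is only an illustrative sketch of the test-first rhythm named in the list above; the function name, business rule, and pytest-style assertions are all invented for the example. The failing tests are written first, then just enough production code is added to make them pass, after which a coverage tool can report progress against the team’s coverage target.

```python
# Hypothetical example only - the project's real code and test framework
# are not described in the post. This sketch follows pytest conventions.

# Step 1: write the tests first; they fail because apply_discount() does not exist yet.
def test_apply_discount_never_goes_below_zero():
    # Business rule assumed for illustration: a discount can never exceed the order total.
    assert apply_discount(80.0, discount=100.0) == 0.0

def test_apply_discount_subtracts_normally():
    assert apply_discount(100.0, discount=15.0) == 85.0

# Step 2: write just enough production code to make the tests pass.
def apply_discount(total: float, discount: float) -> float:
    """Return the order total after discount, never going below zero."""
    return max(total - discount, 0.0)
```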
Several principles and team commitments:
They defined three Quality Gates and made corresponding commitments:
- Local Testing before code check-in – developers will resolve today’s issues today.
- Daily Functional Testing after the daily build – the Story Owner will make sure all tests pass on the development server.
- Sprint Functional Testing before the Demo – the testing team will make sure no bug of minor or higher severity remains in that Sprint.
The following diagram illustrates this simple development approach:
By applying an empirical approach to improving our overall code quality, we have developed a more cross-functional team, in which most testing is performed by developers and the more specialized testing by dedicated testers, while code quality has improved significantly. We are also able to deliver a runnable product at the end of every business day, and a potentially shippable product at the end of every Sprint. We invite you to try these techniques for yourself, and we would like to hear about your results.
Thank you for sharing these experiences!
Very valuable and interesting.
Smart approach, I liked it.
But doesn’t it put more on the developer’s plate? Assume each developer has 100 hours available. If he does functional testing, he will not be able to use all 100 hours for development. I see that with fewer defects the developer will save time on bug fixing, but does that saving completely balance out the time put into functional testing?
Hi,
This was an interesting post that I really want to explore further, but I have some initial questions:
1. How many people were involved in the project? 2 teams, but how many developers and testers were in the teams respectively?
2. You talk about developers not testing their own code, and then you have some roles with responsibilities. Did you recognize roles that were possible for one and the same person to have, and any roles that are not possible to combine?
3. Did you have any project lead involved in some form? One for each team, or you were the one for both? Would you consider some other constellation or need for other leadership for this to get better? Or did the team organize these things completely by themselves?
Thank you for a good posting,
Sigge
Nice idea for catching the bugs earlier. I’ll try to apply this approach in my team’s next sprint and see the quality of the code delivered.
Hi Shrikant, thanks for your comment.
From my point of view (and from our experience), having developers involved in functional testing (at least on their local development workstations) does not cost them much effort – if there aren’t many bugs for the test to catch, it takes only 10-15 minutes to execute the test cases.
There are a couple of key points we want to make sure all the developers understand and agree to:
1. Bugs are not included in our definition of done for any piece of software we deliver. Eventually we will resolve every bug we find, but the later we do it, the more effort it takes.
2. The best way to deliver few bugs is, ideally, not to produce bugs in the first place. The effort developers put into test case development and earlier inspection actually helps them do things right the first time, and saves effort later.
Hope that answers your questions, thanks.
Hi Sigge, thanks for your valuable questions. I’ve tried to provide more information below; please see my answers, and I hope they help.
1. How many people were involved in the project? 2 teams, but how many developers and testers were in the teams respectively?
– That’s a really good question; we actually had only one team, although initially our plan was to have two separate teams. Later we decided to merge the testing and development teams, since we really saw the benefit of having a single self-organized team with no internal silos. That’s probably why I mentioned “2 teams” in my post – sorry for the confusion. I had 7 people in my team – the perfect size for a Scrum team. Only 1 of them was a tester; the other 6 were all developers. I didn’t count myself because I didn’t commit to delivering any code.
2. You talk about developers not testing their own code, and then you have some roles with responsibilities. Did you recognize roles that were possible for one and the same person to have, and any roles that are not possible to combine?
– We had only one fixed, recognized role – the “Quality Goalkeeper”. That role had the power to say no when deciding whether the sprint could be called successful (from the quality perspective). The other roles were virtual, tied to specific user stories, and the people who took them changed dynamically. For example, in one sprint Developer A was the “Story Owner” for User Story #1, but he didn’t necessarily code for that story; instead he was coding for User Story #2, which was owned by Developer B. I didn’t see any roles that could not be combined, not even the ScrumMaster role 🙂
3. Did you have any project lead involved in some form? One for each team, or you were the one for both? Would you consider some other constellation or need for other leadership for this to get better? Or did the team organize these things completely by themselves?
– No, I didn’t have any lead-level person involved, and my team members were not very senior. I offered suggestions to individuals on when they might do what. Fortunately they were all quite smart and learned new things very quickly. More importantly, we worked in one open space every day, which made it easier to communicate and build trust inside the team. The first 3 sprints were a mess and the team was under high pressure, but they figured out the best way for them to improve. My experience is that once a team realizes it is fully trusted and supported, and is encouraged to use its creativity to the fullest, you’ll see how amazing they are.
Quite a bit of time can be gained back by writing unit tests and automating them behind a build server. Our development process includes automated unit tests that are run locally by developers prior to check-in, then those same tests and functional tests are run by the automated build. The automated build is run on every check-in, with results sent to the complete engineering team.
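[As one illustration only – the commenter’s actual build server and tooling are not named – here is a minimal Python sketch of the kind of pre-check-in test run described above: developers run the unit tests locally, and the same command can be invoked by the build server on every check-in, with the result reported to the team.]

```python
#!/usr/bin/env python3
# Hypothetical helper script - the build system and notification channel are
# not specified in the comment above, so both are assumptions here.
import subprocess
import sys

def run_tests() -> int:
    """Run the unit test suite; return the test runner's exit code."""
    # Assumes the project's tests are discoverable by the standard unittest runner.
    return subprocess.call([sys.executable, "-m", "unittest", "discover", "-s", "tests"])

def main() -> None:
    exit_code = run_tests()
    status = "PASSED" if exit_code == 0 else "FAILED"
    # A real setup would mail or post this result to the whole engineering team;
    # printing stands in for that notification step here.
    print(f"Build verification {status}")
    sys.exit(exit_code)

if __name__ == "__main__":
    main()
```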