Ethan Huang (Hangzhou, China), Author at Perficient Blogs
https://blogs.perficient.com/author/ehuang/

Does Scrum waterfall work?
https://blogs.perficient.com/2011/02/23/does-scrum-waterfall-work/
Wed, 23 Feb 2011 13:28:10 +0000

I often talk with teams who claim that they’re running Scrum on their software development projects, or, at least they believe they are.

“We’re applying all the Scrum ceremonies, fixed duration Sprints, Sprint Planning, Daily Stand-up, Sprint reviews, and Retrospectives”.

“The team is using Burndown Charts to track the performance.”

“The only issue is that we didn’t see big improvements after the team started using Scrum.”

I was quite interested in why Scrum wasn’t helping in their projects, so I tried to dig into the details. I was curious how their teams run the development activities inside one Sprint.

“Well, the first thing we do before we start a Sprint is to have the team break the User Stories down into tasks. Based on the technical architecture, we’ll have three levels of development activities – design, coding and testing.”

“We use a 3-week timebox for each Sprint, so usually in the first couple of days the team works together to get the ‘JIT design’ done; then the team spends about 7–8 days completing the coding part; and in the last week we involve the testers to test the results while the developers fix any bugs the testing identifies.”

I was a bit surprised, since this sounded so much like a traditional waterfall approach even though it was happening inside one Sprint. I tried to understand the relationship between their tasks and User Stories. To my great surprise I was told:

“There is no direct relationship between User Stories and tasks, we mix all the feature-related User Stories up and have the team focus on the tasks – anyway we need to get them done by the end of the sprint, so the entire team will work step-by-step to get the feature done.”

That smelled bad to me. This is probably the real reason their “Scrum” doesn’t help – they are just applying the term Scrum to a small, time-boxed waterfall. Some of the reasons I believe this doesn’t work include:

  • Not working according to the User Stories indicates that they’re not delivering small pieces of potentially shippable products – this doesn’t align with the Agile spirit of “working software”.
  • Having a traditional step-by-step approach to work on design, coding and testing results in the team not having a clear definition of done for each User Story – late testing is a Scrum anti-pattern.
  • The team loses the opportunity to frequently and repeatedly inspect and adapt, a practice which helps assure all the work will be done by the end of each Sprint. Skipping it increases the chance of a last-minute surprise.

In a real Scrum environment, the Scrum Team should always be focused on a prioritized list of User Stories. The User Stories should be the single, simple unit of work for everybody – user-valued features. This is the first challenge a team faces if it doesn’t want to run Scrum-waterfall: it’s not easy to break epic User Stories down into implementable User Stories, especially if the team is unwilling to give up breaking them down into technical layers, the practice it is familiar and comfortable with.

I like the approach my team was using: for one big epic User Story that focuses on business value, they decompose it into several independent User Stories according to the user interaction. Sometimes even a series of page controls makes up one User Story, as long as the end user can complete a meaningful interaction with the system. And on the back of the story card, the team writes the functional test cases down as the definition of done. This gives the team a list of simple but clear User Stories that align 100% with the business value and at the same time are testable.

This is the foundation of how the team executes development in the Sprint. Unlike the teams I was talking with, I believe the team should not mix all the User Stories up and then break down tasks according to the technical architecture. The team should get the User Stories really done, one by one – complete the JIT design, the development (which should be driven by tests), and the final testing for one single User Story, and then start the next one.

Sometimes, if the team has the capacity and the User Stories are really granular, I think it’s also fine for the team to start a couple of User Stories in parallel – but each User Story still gets its own standalone development cycle. You cannot mix them up. That is the real value Scrum provides – always focus on the highest priority business value, deliver small pieces of potentially shippable product iteratively, and be able to inspect and adapt frequently and repeatedly.

Quantitatively Measure Story Points in Color
https://blogs.perficient.com/2010/12/07/quantitatively-measure-story-points-in-color/
Tue, 07 Dec 2010 07:38:28 +0000

Story Point estimation is a comparative estimation approach, where Story Points represent the relative size of the User Stories. At Perficient, when doing Story Point estimation, we select one small User Story which every individual on the team is familiar with and feels comfortable committing to deliver in a short period of time, and make this story the comparison base. We assign a small Fibonacci number (e.g., 2 or 3) to this story, and we then assign Fibonacci numbers (0.5, 1, 2, 3, 5, 8, 13, 20, …) to the rest of the User Stories based on the ‘feeling’ of how many times bigger they are compared to the base story.

Over the past two years I’ve been practicing Story Point estimating with various teams, and it is quite common for the team to get confused about how to objectively compare the Stories with the base one, no matter how experienced or mature the teams and individuals are. Sometimes, even after playing Planning Poker, my team members still cannot convert their thinking from giving hours estimates to giving comparison results – they still depend on an invisible “formula” to convert hours to points (see my previous post How Story Points in Scrum can reveal more than hours tracking for why this doesn’t work).

That is not the right way to go if we really want to continuously improve our estimating/tracking system, but we weren’t able to come up with many ideas on how to deal with that “formula”. It seemed the only thing we could do was to emphasize again and again in the planning meetings that we don’t do that, but that doesn’t work when the team cannot find a quantitative way to compare. If the estimate is based on gut feeling, the team would rather go back to using hours to make themselves feel more comfortable.

Recently some new ideas came up accidentally when we introduced Agile Estimating 2.0 into our toolbox to replace the time-consuming Planning Poker approach. This is an improved Story Point estimating technique that also uses Fibonacci numbers (see my previous post Agile Estimating 2.0), and one step in this new approach lets the team categorize the different factors that impact the estimates using different colors. E.g., if we feel that “technical complexity” is going to impact our estimates, we assign a color to this aspect and tag all the User Stories with high technical complexity with this color.

I realized that this is actually a way to quantitatively measure Story Points. Using different colors for the categories provides a straightforward way to help the team compare the User Stories with each other. Below is one example:

I consider the following 3 aspects when comparing each User Story with the base: the number of Acceptance Tests, the technical complexity, and the dependencies.

For the given User Story, my estimate is that it has 3 times the Acceptance Tests, twice the technical complexity, and the same level of dependencies compared with the base:

So, as a result, I decide that for this User Story, thinking from the ATDD perspective, its size is basically three times the base; and considering that the higher technical complexity impacts half of my implementation, I’ll scale the size up by 50%. Since the base story is sized at 2 points, I easily get a number from this formula: 2 × 3 × 1.5 = 9. And since 8 is the closest Fibonacci number to 9, my final estimate is 8.
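The arithmetic above can be sketched in a few lines of Python. This is only an illustration of the calculation, not a tool we actually use; the function name and factor values are hypothetical:

```python
# Hypothetical sketch of the color-based scaling described above.
# Assumptions: the base story is 2 points, acceptance tests triple the size,
# and higher technical complexity adds 50%; the raw result is then snapped
# to the nearest planning Fibonacci value.

FIBONACCI = [0.5, 1, 2, 3, 5, 8, 13, 20, 40]

def estimate_points(base_points, test_factor, complexity_scale):
    """Scale the base size, then snap to the nearest Fibonacci estimate."""
    raw = base_points * test_factor * complexity_scale
    return min(FIBONACCI, key=lambda f: abs(f - raw))

# 2 points x 3 (acceptance tests) x 1.5 (complexity) = 9 -> closest is 8
print(estimate_points(2, 3, 1.5))  # 8
```

The snap-to-Fibonacci step matches the last sentence above: 9 is not a planning number, so the estimate rounds to the nearest one, 8.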

Thus, as an individual team member, I finally got a better, quantitatively measured Story Point estimate. Another thing that is really exciting to me is that it’s possible to use color not only in Agile modeling but also in Agile estimating. Cool, isn’t it?

Agile Estimating 2.0
https://blogs.perficient.com/2010/11/25/agile-estimating-2-0/
Fri, 26 Nov 2010 00:58:58 +0000

Planning Poker is a widely accepted estimating technique used by almost all the teams in Perficient GDC. It “combines expert opinion, analogy, and disaggregation into an enjoyable approach to estimating that results in quick but reliable estimates” (Agile Estimating and Planning, Mike Cohn).

However, we found that sometimes this poker game takes quite a long time to play, especially when the team is big. Ken Schwaber suggests in his book Agile Project Management with Scrum that the Sprint Planning session should be time-boxed to 4 hours, but practically if a meeting lasts 4 hours I believe most of the attendees would be tired and lose their focus – then the meeting wouldn’t be fun anymore.

We tried better preparation to reduce meeting time. We had each individual spend more time reading through the requirements to shorten the Q&A session, and we set up meeting rules to keep the discussions (sometimes debates) in good order, but the Sprint Planning meeting still takes time. Imagine that there are 15 User Stories on our backlog and 10 team members sitting together in the meeting. It’s quite common for those 10 people to need 5 minutes to finish one round of discussion before making a decision, and coming up with the estimate for one User Story often takes 2–3 rounds of discussion. Thus the time spent just playing Planning Poker can easily exceed 2 hours.

This has been bothering us for quite a long time until Brad Swanson and Björn Jensen introduced us to the Agile Estimating 2.0 approach at the Scrum Gathering in Shanghai on April 19, 2010. This new estimating approach is also a combination of expert opinion and analogy, and it also uses Fibonacci numbers, but it is significantly less time consuming.

The first step is to have the PO introduce every User Story to the team, making sure there are no requirement-related questions left before we estimate.

Then the whole team participates in the planning game. There is only one rule – one person at a time places one Story Card on the whiteboard in a certain order: smaller on the left, larger on the right, similar-sized ones grouped together in one vertical pile. The whole team moves the Story Cards in turns, over and over, until the entire team agrees on the right order.

The third step is to assign Story Points to each User Story or story pile. In our team we prefer to use team voting to decide which Fibonacci number goes to which User Story.

And there is still a last step – use different colors to represent the different aspects that impact the estimates, and rethink whether the estimates should change. For example, RED stands for User Stories which cannot be covered by automated testing; for those red User Stories we might consider larger estimates, because over time we might have to put more effort into manual regression testing.

We’ve played this game multiple times, and we’re quite happy with the result. The team is more confident with the estimation accuracy and it takes only ½ the time we spent before. The following article describes the approach in detail. Perhaps you might also want to give it a try.

http://properosolutions.com/wp-content/uploads/2010/05/agile_estimation_2.0-for-pdf.pdf

Agile Noodle
https://blogs.perficient.com/2010/10/29/agile-noodle/
Fri, 29 Oct 2010 14:36:42 +0000

I’m very lucky because there is a wonderful small restaurant right opposite to my apartment. They sell various noodles that taste great. Actually this restaurant is so famous that it attracts people from different areas of this city. Usually from 10:00 am there is already a long queue waiting for lunch. If you don’t get there before 11:00 am you might have to wait more than an hour before you get your noodles.

That bothered me a lot. I hate waiting for an hour, and I hate going there at 10:00 am for lunch; it’s too early for me. I noticed that most customers order one of three popular noodles (I really have no idea how to translate the names into English), and ordering any of the three takes the longest time. Unfortunately these three noodles are my favorites. Sometimes my wife and I walk past to see if the queue is reasonable; most of the time we have to give up and find another place for lunch.

Last week, all of a sudden, we found there were fewer people waiting even during the rush hour. We were quite happy to see that even between 11:00 am and 12:00 pm there appeared to be only 20–30 people waiting. We were quite interested in what had happened to reduce the queue length by almost half. Hopefully they hadn’t changed their cooks and the taste was still attractive.

After some investigation we found the answer, which was quite a big surprise to me: they had improved their working process in an Agile way! I was really excited to find a group of Chinese cooks applying Agile delivery in a restaurant!

The big change I found was that they had hung a big sign at the entrance announcing that the three most popular noodles are only served between 11:30 am and 12:30 pm, and that every day they only sell 100 bowls of each. Here I saw a fixed timebox with a capacity well estimated from their historical data. 300 bowls of noodles every day is already a big number for that small restaurant; it’s reasonable and also understandable to the customers. Whether they realized it or not, they’re delivering noodles in Sprints!

Their new policy continues: in order to reduce the waiting time, any customer interested in these 3 noodles can pre-order by getting a ticket with a number on it. They set up a blackboard near the entrance showing, in chalk, the current orders, the number currently being cooked, and the numbers they’ve finished. To me this is an excellent board illustrating their velocity; it’s almost exactly the same as the Scrum task board with its To Do, In Progress and Done columns. I didn’t have a camera with me to take a picture, so I’ve drawn it on a sticky note from memory.

It’s really cool to see Agile everywhere in our everyday life, especially in a small Chinese noodle restaurant, isn’t it?

The real life of a Scrum team – with photos
https://blogs.perficient.com/2010/08/26/the-real-life-of-a-perficient-scrum-team/
Fri, 27 Aug 2010 04:01:22 +0000

Recently while cleaning up my photo albums I found some interesting old pictures which were captured while I was leading a Scrum project. These white board pictures illustrate how we incrementally deliver from scratch. Looking at these pictures I really enjoy recollecting the days when I was working together with my team; days we spent suffering, learning and growing together.

Sprint # 0.

At that time the team was just busy like crazy. We got all the infrastructure ready in this period of time (we call it Sprint 0). The team also made some smart decisions; one of them was to use a whiteboard as our User Story completion tracking system, with different colored sticky notes for different types of User Stories.

Sprint # 1.

This picture was taken in the middle of our first Sprint. We got our Product Backlog prioritized and started from some foundational technical tasks. Fortunately, my team was strong enough so that we also delivered some of the highest priority (which also happened to be some of the simplest) User Stories.

Sprint # 2.

This picture was taken on the last day of our second Sprint. Things were going smoothly. We delivered all the planned User Stories and had a very successful Sprint Review meeting with our client. The client was happy, and we did the Sprint Planning for the next Sprint right after the Sprint Review meeting. The team was excited and confident that we could deliver more story points in the next Sprint.

Sprint # 3.

Maybe the team was too excited in Sprint # 2 to realize life never becomes easy. Looking at the picture we took in the middle of Sprint # 3, I can still feel the pain the team was suffering at that time when we had to admit we could not deliver all the planned work.

Sprint # 4.

Things became even worse. This picture was taken on the last day of Sprint # 4. That day the team was really frustrated because we were hit by a significant failure: we failed to deliver most of the planned stories. For two consecutive Sprints the team had missed its commitments. We spent 2 hours in a serious retrospective meeting deciding how we could adjust and catch up with the plan in the next Sprint.

Sprint # 5.

One action the team took was to communicate honestly with our client about the problems the team was having. The client re-prioritized our Product Backlog so that we got extra time to clean up our technical debt in Sprint # 5. Thankfully our client understood the team needed more time to learn and grow, although they might not have been that happy. In any case, the team learned from the failure and got extra time to fix the issues.

Sprint # 6.

The team worked extremely hard in Sprint # 5 and 6. Sprint # 5 was a milestone – we not only cleaned up our technical debt but also delivered several additional User Stories. By the end of that Sprint we were finally feeling that we were taking back control.

Sprint # 7.

I lost the picture of that Sprint.

Sprint # 8.

We were getting closer to the end of the project. As we often experience, requirement changes started coming into our backlog. All the high priority User Stories on our Product Backlog had been delivered, and now the client was adding changes almost every day. Managing those changes became the biggest headache of the Sprint.

Sprint # 9.

The additions from Sprint 8 had been implemented, and only a few low priority stories were left on our To-do list. The client was planning to drop them entirely, and the team started doing regression testing again and again to make sure we delivered fewer bugs. The biggest lesson we learned in Sprint 9 was that we should have written enough automated functional test scripts so that we wouldn’t have needed to work overtime until midnight that Sprint.

Sprint # 10.

This was the last Sprint of the project. We had done enough testing, and we were very confident about the quality we delivered. The client was also satisfied; they were ready to do the big final demo to the sponsor. Several key team members started taking vacation. They needed to compensate their families for those crazy days and nights they had stayed in the office.

That is the typical life inside the Perficient China office: full of the happiness of experiencing new things every day, full of the pain of dealing with different challenges and issues all the time, full of the excitement of continuously learning and growing. I feel lucky that all of this keeps happening in my life in this office.

Manage requirement changes inside a Sprint
https://blogs.perficient.com/2010/08/11/manage-requirement-changes-inside-a-sprint/
Thu, 12 Aug 2010 01:20:21 +0000

Ideally a Scrum team should be protected from any requirement change during a Sprint. But that is IDEAL. Often we face clients who are unable to assign a real Product Owner to the team, and we seldom win the battle of trying to convince our client to add changes to the Product Backlog and prioritize to schedule the changes for the next Sprint. The clients always want to see quicker responses to their requests – “You embrace changes, that’s what you told us”.

Realistically, many of our projects require us to manage changes inside a Sprint. From our experience with successful deliveries over the past 5+ years, we have accumulated a number of practices for managing changes inside a Sprint.

JIT (Just In Time) Sprint Planning

Ken Schwaber described the Sprint planning rule in his book Agile Project Management with Scrum:

(For a 30 day Sprint), the Sprint planning meeting is time-boxed to 8 hours and consists of two segments that are time-boxed to 4 hours each. The first segment is for selecting Product Backlog; the second segment is for preparing a Sprint Backlog.

But if we can predict that the User Stories will change someday in the near future, why waste effort planning for changes? We just make sure we focus ONLY on the highest priority User Stories in our Sprint Backlog, which we’re confident won’t change before we deliver them. We have them decomposed into tasks and estimated, and start to work – “Responding to change over following a plan”.

Shorten the Sprint duration if possible

Typically our Sprints are 2 or 3 weeks long, which in some cases is still too long to satisfy our clients’ expectations for change. If this is the reason the client wants to change requirements during the Sprint, we probably want to make our Sprints shorter. This doesn’t mean we change the Sprint duration completely; we usually stick to our original decision on Sprint duration, but internally we can treat every week as a short sub-Sprint. This shortens our response cycle to a minimum of one week. According to the 20/80 principle, I believe implementing the most important changes in the next week will satisfy the client in most situations.

Change freeze

It’s also very common that the change itself changes. I’ve experienced several times that while we’re spending lots of effort changing from requirement A to B, we get an e-mail saying that what the client really wants is not B, but C. This is understandable: before the client actually uses the product, how can they know whether or not it meets their real needs? One useful solution is to maintain a change backlog – we log all the changes onto a list, get the items prioritized, and ask the client to wait several days to make sure the top 3 changes are stable and no new thoughts will affect the decisions we’re going to make. Usually our clients are patient enough to wait several days.

Do not use Scrum

Sometimes changes happen so frequently that we cannot even get an initial Product Backlog that would allow us to start a Sprint. In this situation we might need to seriously consider whether it’s still suitable to use Scrum. Besides Scrum, which is widely used in most of our projects, we have an alternative process for these types of projects – we call it Ticket Driven Development. Sam Tong contributed his experience using Kanban techniques for ticket-driven projects in one of his posts.

I’m not trying to explain how we defend ourselves against changes; on the contrary, these practices have proven helpful when we want to manage change better – not only for our own benefit but for the benefit of our clients. We embrace changes, that is true.

How Story Points in Scrum can reveal more than hours tracking
https://blogs.perficient.com/2010/08/08/story-points-is-over-hours/
Mon, 09 Aug 2010 05:47:53 +0000

My team recently produced a couple of very interesting burndown charts in our previous Sprint, and we’ve had a very good discussion about how this happened. We feel this case is convincing evidence that using Story Points to estimate is better than using actual hours.

Before we look at the charts (real charts grabbed directly from our task tracking systems), let me introduce a little bit about our current estimation/tracking process.

  • We estimate Story size using Planning Poker: we give each Story a number of Story Points by comparing the size with our base story.
  • We break the User Stories down into detailed tasks and estimate how many hours we need to complete each task because our client requires us to estimate and track our velocity and capacity in hours.

We use the client’s Jira site (Atlassian Jira + GreenHopper) to track task completion using an hour-based burndown chart. Our policy is that each time an individual logs actual effort into Jira, they re-estimate how many hours remain before the task can be completed.

At the same time we’re still using our own whiteboard-based tracking system which uses a Story Point burndown chart produced by hand to publish the day-to-day User Story completion status.

You may have already realized what is happening in our project – we use two different ways to estimate and track:

  • Hours-based estimation for the detailed tasks, and an hours-based burn down chart tracking the task completion.
  • Story Point based estimation for the User Stories, and a whiteboard Story Point burndown chart tracking the Story completion.

This provided a great opportunity to compare which works better for us using a single set of project data.

Now let’s take a look at the real interesting things, the two different burn down charts for our last Sprint.

Hour-based burndown chart:

Story Point-based burndown chart:

If we just look at the hours burndown, you’ll see that everything was going perfectly. The burndown trend was as good as any Scrum example: of the planned 240 hours, we had only 9 left on the last day. But if you look at the Story Point burndown, things are totally different. By estimation, the planned Stories totaled 50 Story Points, but by the last day we had delivered just 16 points – that was definitely a significant failure.

My team analyzed how this situation came about. In the hours burndown chart generated by Jira, the system doesn’t care whether or not the User Stories are really completed; it just adds up all the hours we logged to the tasks and uses that aggregated number as the “completed” work. Jira is calculating with this formula:

Time spent = Work done

But that formula is NOT telling us the whole truth; the relationship between time and delivery is indirect:

  • The tasks which one team/individual can finish in 10 hours may cost 100 hours for another team/individual.
  • It’s possible that the team spends 100 hours while delivering nothing.
  • A User Story is either “Done” or “Not Done”; we cannot use a ratio (as we easily do when estimating how much effort remains) to call it “Partially Done” – we cannot burn down 33.3% of the story points for a User Story that isn’t done yet.
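The contrast between the two views can be sketched with made-up numbers (a minimal illustration, not our actual Jira data): the hours view counts every logged hour as progress, while the Story Point view burns points only for Stories that are Done:

```python
# Hypothetical Sprint data: one story is only "partially done".
stories = [
    {"points": 5, "hours_logged": 40, "done": True},
    {"points": 8, "hours_logged": 60, "done": False},  # partially done
    {"points": 3, "hours_logged": 30, "done": True},
]

# Hours view (like the Jira chart): every logged hour counts as work done.
hours_burned = sum(s["hours_logged"] for s in stories)

# Story Point view: a story's points burn only when the story is Done.
points_burned = sum(s["points"] for s in stories if s["done"])

print(hours_burned)   # 130 hours look "burned"
print(points_burned)  # but only 8 of the 16 points are actually delivered
```

The hours view reports steady progress even though half the points are stuck in an unfinished story – exactly the gap between our two charts.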

The conclusions my team reached were:

  • Hours estimation is never accurate; different people working on the same task take different numbers of hours.
  • Hours spent doesn’t equal task completion; “partially done” is a dangerous status that hides problems.
  • Story Point estimation is simpler and more Agile although it’s not easy.

A Google search will turn up a lot of discussion around how to do Story Point estimation. We were also questioning it, and some of our team members were still hesitant to use Story Points because they’re not easy to understand and use; but after this Sprint we realized we should give up traditional hours estimation, because it gives us a false sense of progress and leads us in the wrong direction. In the retrospective meeting my team decided that we’ll practice more in the future and incrementally make our Story Point estimation accurate. I’ll be glad to share our experience if anything interesting happens along the way.

Use Earned Value Analysis to quantitatively measure schedule deviation in Scrum projects
https://blogs.perficient.com/2010/08/08/use-earned-value-analysis-to-quantitatively-measure-schedule-deviation-in-scrum-projects/
Sun, 08 Aug 2010 13:47:09 +0000

A Burndown chart is the most important tool in our Scrum toolbox for representing work left over time. We use this diagram to measure current progress and assess how healthy the project status is by looking at the trend. It also gives the team a good way to see the deviation of the current team velocity from the historical velocity, if we use historical data to estimate.

Let’s take a look at some sample burndown charts I created using sample data, so that we can easily see how burndown charts illustrate project status. Notice that all these projects are running in parallel and have the same start date.

Project A, each Sprint lasts 3 weeks, the historical team velocity is around 165 Story Points per Sprint.


Project B, each Sprint lasts 2 weeks, the historical team velocity is around 30 Story Points per Sprint.

Project C, each Sprint lasts 1 week, the historical team velocity is around 50 Story Points per Sprint.

If you are the ScrumMaster taking care of one of these projects, these burndown charts are already good enough. But what if you are a PMO manager or a Project Portfolio Manager taking care of all three projects at the same time? Are you comfortable that at any single moment you can identify which project has the least schedule deviation? If you put those diagrams together and try to come up with a quantitative comparison, you might notice the following problems:

  • The Sprint duration varies. If I want to compare Project C Sprint 1 (1 week) with Project A Sprint 1 (3 weeks) I have to wait until we complete Project A Sprint 1. But by that time Project A is in Sprint 3 already.
  • The team velocity varies. Team B completes around 30 Story Points in 2 weeks while Team C completes around 50 in 1 week. Which team is delivering more?
  • The comparison base standard varies. All three projects use different technologies to build different products, so 1 Story Point in Project A definitely doesn’t equal 1 Story Point in Project C: how can we say which project has the smaller schedule problem if both A and C have 20 points left unfinished?

To resolve these problems, the natural solution is to normalize both the horizontal and vertical dimensions of the burndown charts, replacing the real values (Story Points and days) with ratios (percentages). Projects with very different characteristics then produce comparable burndown charts, and the values on them become directly comparable – looking only at the schedule perspective, we can easily say that a project with a 5% schedule delay is performing better than one with a 20% delay.
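That normalization can be sketched in a few lines of Python. The helper name and the sample numbers below are mine, purely illustrative – they only mirror the kind of data described in this post:

```python
def normalize_burndown(points_remaining, total_points, days_elapsed, total_days):
    """Convert raw burndown values (Story Points, days) into percentages
    so that projects with different Sprint lengths and point scales
    become directly comparable."""
    time_elapsed_pct = 100.0 * days_elapsed / total_days
    work_remaining_pct = 100.0 * points_remaining / total_points
    return time_elapsed_pct, work_remaining_pct

# Hypothetical snapshots:
# Project A: 15 working days per Sprint, 165 points planned, 110 left on day 5.
a = normalize_burndown(110, 165, 5, 15)   # (33.3%, 66.7%) -> exactly on plan
# Project C: 5 working days per Sprint, 50 points planned, 35 left on day 2.
c = normalize_burndown(35, 50, 2, 5)      # (40.0%, 70.0%) -> behind plan
```

With both projects expressed as (time elapsed %, work remaining %), a portfolio manager can compare them at any moment without waiting for Sprint boundaries to align.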

While thinking about a quantitative calculation method, I recalled the Earned Value Analysis technique from PMI. It uses a very simple formula to calculate schedule deviation:

SPI (Schedule Performance Index) = EV (Earned Value) / PV (Planned Value)

As the numerator, EV (Earned Value) is the sum of the original estimates (Planned Value) for the stories completed as of today.

In other words, this “PV (Completed)” simply translates to “the original estimates of the completed work”. Since we don’t usually re-estimate when using Story Points, the value is easy to calculate with an Excel formula: add up the Story Points of all User Stories marked as “Done”.

The denominator, PV, is trickier, because in Scrum we don’t plan exactly how much work will be completed by when. However, we can use our own definition to get a rough “Planned Value”. Since ideally the Scrum team delivers some piece of working software every day, I simply assume a linear plan: PV = (total Story Points committed to the Sprint) × (days elapsed / total Sprint days).

With that definition, SPI is simply a ratio (a percentage) representing, as of the current moment, the work that has been done over the work that should have been completed.
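Under a linear-PV assumption of this kind, the SPI calculation is a one-liner. This sketch uses my own function name and hypothetical numbers, not data from the post:

```python
def spi(completed_points, committed_points, days_elapsed, total_days):
    """Schedule Performance Index = EV / PV.

    EV: sum of the original estimates for stories already marked Done.
    PV: a linear plan -- assumes the team burns an even share of the
        committed points each day (our own definition, not PMI's).
    """
    ev = completed_points
    pv = committed_points * days_elapsed / total_days
    return ev / pv

# Hypothetical snapshot: 165 points committed to a 15-day Sprint,
# 44 points Done after 5 days.  PV = 165 * 5/15 = 55, SPI = 44/55 = 0.8,
# i.e. the team is delivering at 80% of the planned pace.
current_spi = spi(44, 165, 5, 15)
```

An SPI below 1.0 signals a schedule delay; above 1.0 means the team is ahead of its (assumed) plan.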

Let’s use Project A as an example. Below are the metrics we collected daily, and the SPI values calculated with the formula above.

Now we have a mathematical basis for calculating the SPI value for different projects. If we put all three projects’ data together and calculate the SPI on a weekly basis, we get the table below:

If you’re a PMO manager or a Project Portfolio Manager, I bet you’ll be very happy, because with these data you can easily make a quantitative comparison of the schedule deviation across projects with very different characteristics:

When practicing this quantitative measurement technique, we need to define a PV that best fits the project’s reality in order to calculate the SPI. We also need to collect the metrics on a regular basis; our practice is to do it weekly and generate a quantitative report across the projects. If you’re interested, we invite you to apply this to your projects and contribute more ideas. We look forward to hearing about others’ experiences and best practices.

Is your Burn Down chart good enough? https://blogs.perficient.com/2010/07/21/is-your-burn-down-chart-good-enough/ Wed, 21 Jul 2010 10:40:45 +0000

We use Burn Down charts to illustrate our task completion status in the Scrum world. However, if you’re still using hours to estimate your user stories/tasks, are you using the burn down chart in an appropriate way? Is your Burn Down chart really demonstrating your current progress and team velocity?

Let’s look at the example below.

Assume we’re running 5-day sprints, and before each sprint we hold a sprint planning session in which we break user stories down into tasks and give an original estimate (in hours) for each task. As time goes by we record two metrics every day – the hours spent on each task, and the newly estimated remaining hours for each task.

In this case, how do you feel when you look at the burn down chart below?

I bet you’ll feel everything’s going well. Although we still have 2 hours left at the end of the Sprint, it looks like no big deal – we’re still on track, we’re good.

But if you take a second look at the data sheet this burn down came from, you’ll probably have a different view – every user story contains one unfinished task, with only 0.5 hour left. Although the estimated total work remaining is just 2 hours, it still results in zero delivery in this Sprint. Nothing at all would be delivered, and that would be a significant failure. The Burn Down chart lied to you.
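A tiny sketch of that data sheet makes the problem concrete. The story names and numbers below are illustrative, mirroring the scenario just described:

```python
# Each row of the data sheet: (story, story_points, hours_remaining, is_done)
stories = [
    ("Story 1", 5, 0.5, False),   # one 0.5h task still open
    ("Story 2", 3, 0.5, False),
    ("Story 3", 8, 0.5, False),
    ("Story 4", 5, 0.5, False),
]

# The hours burndown only sees 2 hours of work left -> "we're on track"
hours_left = sum(hours for _, _, hours, _ in stories)

# ...but not a single story is Done, so nothing is shippable
delivered = sum(points for _, points, _, done in stories if done)
```

Two remaining hours spread as one open task per story means every story is undeliverable, yet the hours-based chart cannot show that.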

How can we fix this problem? How can we make sure the Burn Down chart represents your real progress?

Solution 1: use Story Points instead of hours when estimating

When using Story Points for estimation, points can be “burned” only when a task is marked as done. That helps highlight unfinished tasks/stories on the Burn Down chart, because the trend is magnified when “In Progress” effort is not counted. Below is an example using Story Point estimation with the exact same task completion status. The result is quite close to the real situation.
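Solution 1 can be sketched as follows, reusing the same hypothetical four-story snapshot: because points burn only when a story is Done, the remaining-points line stays high and exposes the problem the hours chart hid.

```python
def story_points_remaining(stories):
    """Burn a story's points only once it is marked Done; 'in progress'
    effort burns nothing, which is what keeps the chart honest."""
    return sum(points for points, done in stories if not done)

# Same end-of-Sprint snapshot as before: every story still has one small
# task open, so NO points have burned -- the chart shows zero delivery.
remaining = story_points_remaining([(5, False), (3, False), (8, False), (5, False)])  # 21
```

Plotting `remaining` each day gives the Story Point burndown: it only drops when something is actually finished.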

Solution 2: create another “task completion Burn Down chart” in addition to your current estimation Burn Down chart, regardless of whether your Burn Down is based on hours or Story Points

If you put the “Tasks Burn Down Chart” and “Hours Burn Down Chart” side by side, they might provide you with more information to judge what the real completion status is.

I’ve been using both of these solutions in several projects. My personal favorite is to use a combination of “Task Burn Down” plus “Story Point Burn Down”. They always tell me the real task completion status and never lie to me. 🙂

Test Case Driven Requirement – A New V-Model https://blogs.perficient.com/2010/07/01/test-case-driven-requirement-a-new-v-model/ Fri, 02 Jul 2010 01:37:05 +0000

We’re all familiar with the traditional V-model, shown below:

Even when implementing an Agile SDLC, we sometimes still treat this model, to some degree, as important guidance when defining our development and testing activities. We have to admit that the traditional deliverables and documents used to define the product are still widely accepted by senior developers and testers who have been educated on, and acclimated to, UML and RUP. It’s very normal for a Scrum team to use Requirement Specification documents and Test Cases as the required work products driving the corresponding programming and testing work separately.

I’ve seen the following documents/formats used in several different projects:

Workpiece | Format | Usage
RUP SRS (Software Requirement Specification), or any other Requirement Document | UML Use Cases + natural narratives | Defines the product from the end-user perspective
Functional Test Cases | Test steps + expected results | Defines the expected behavior of the product from the end-user perspective
UI design | Mockups, screenshots | Reference used from both the requirement and testing perspectives

Usually, the SRS or Requirement Document is used as the single official input to both the development and testing teams for technical design/implementation and for test case design/execution. Let’s take a look at what we usually do when using the traditional V-model:

That approach is not agile enough. In a Scrum process we have a very short period of time to get everything done, we deal with a lot of change sprint by sprint, our teams are cross-functional – i.e., not responsible for only one area or role – and we hate unnecessarily heavy documentation.

In my past experience I’ve seen numerous times that my Scrum teams got a big headache from dealing with requirement/test case documents. They spent a comparatively huge effort synchronizing two important documents: the SRS (or Requirement Document) and the Test Cases.

  • We host long review meetings trying to address all the mismatches between the test cases and the requirements.
  • It’s normal for the testing team and the development team to have different understandings of the requirements even when there is no gap between the documents.
  • The requirement documents are never detailed enough – we do 3 rounds of review meetings, but after we start the implementation we still find we need more detailed specifications.
  • Issues are always exposed at the last minute – when system testing is performed and bugs are raised, the teams start to argue whether or not the implementation aligns with the requirement.
  • It’s almost impossible to keep the requirement documents and the test cases up to date – some details live in e-mails, wikis, phone calls, or issue tickets, while test cases must be updated on a regular basis. It’s very hard to trace these back to detailed requirements or requirement changes after 3 or 4 sprints.

We started to think about how to deal with these kinds of headaches. Then our friend Ken McCorkell introduced “Test Case Driven Requirements” to one of our projects. This approach uses Functional Test Cases to directly represent the requirements, and makes that representation the only required document for both the development and testing teams. That sounded really interesting to us, because we realized that from the developer/tester perspective the test cases have the same objective as the requirement documents but are usually much more detailed, with more information.

We’ve tried this approach in 2 or 3 projects (including big projects with complex requirements), and we now feel it could be a good option for Scrum teams that want to escape heavy documentation.

  • In one of my past projects, we still used a requirement document in Use Case format, but it stayed at a high level, capturing only the business value and the most important business rules.
  • We defined a new User Acceptance Test Case format, making it suitable for describing the typical behaviors of one User Story.
  • We built a hierarchical structure between User Acceptance Test (UAT) Cases and Functional Test Cases: we added multiple Functional Tests under one UAT to cover all the major flows and alternative flows.
  • The developers worked together with the testers to refine every Functional Test Case; they added more detailed specifications until they felt comfortable starting development. Both teams worked together to maintain the Functional Test Cases, since those became the new single formal input to the development/testing work.
  • We didn’t spend much time in requirement meetings any more. Instead, we talked about detailed and specific questions, which were documented as “test steps” and “expected results”.
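That UAT-to-Functional-Test hierarchy can be modeled very simply. The class and field names below are my own sketch, not the project's actual format or tooling:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FunctionalTestCase:
    """Detailed spec for one flow: test steps plus expected results."""
    title: str
    steps: List[str]
    expected_results: List[str]

@dataclass
class UATCase:
    """User-Story-level document; its Functional Tests cover the main
    flow and every alternative flow."""
    user_story: str
    functional_tests: List[FunctionalTestCase] = field(default_factory=list)

# Hypothetical example of one User Story with its main flow documented
uat = UATCase("As a shopper I can check out my cart")
uat.functional_tests.append(
    FunctionalTestCase(
        title="Main flow: successful checkout",
        steps=["Add an item to the cart", "Click Checkout", "Confirm payment"],
        expected_results=["Order confirmation page is shown"],
    )
)
```

The point of the structure is that the “test steps” and “expected results” fields carry the detail that would otherwise live in a requirements specification.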

Using Test Case Driven Requirements, we came up with a new, more Agile-like V-model:

A couple of best practices:

  • An “Integrated Development and QA team” is the key to using this model successfully. Please refer to my previous posts on how that team model works in the “Implementation” box in the diagram above.
  • We can still use a traditional format to describe the requirements. The difference is that we keep that document at a high level and put the details into our Functional Test Cases.
  • We have two levels of Functional Test Cases to represent requirements – the UAT Case is a User-Story-level document defining all the main flows and alternative flows, while the Functional Test Cases spell out the details of each flow. Below is one simple example of the UAT case format:

We’re still refining this new approach in our projects; in the meantime I’d like to share the idea with anybody who is interested. We invite you to try this on your project and share your results and feedback so that we can make improvements together.

Another option of doing estimation for your first sprint: Just don’t do it https://blogs.perficient.com/2010/05/16/another-option-of-doing-estimation-for-your-first-sprint-just-dont-do-it/ Mon, 17 May 2010 00:35:34 +0000

Everybody knows how important planning and estimation are inside a Scrum development cycle – they help the Scrum team communicate and break the User Stories down into measurable pieces; they help estimate how much work the team can finish within a short period of time, so that the team makes commitments it feels comfortable with; and they establish the basis for tracking and evaluating, including the team’s velocity.

But what is the best way to do planning, and especially estimation, for one specific project? From my point of view, even an experienced Scrum team can hardly know the answer before they really deliver something – a team is unlikely to have a universal estimation model that fits all the different business solutions, technologies, clients, and timeframes; or at least it is unlikely to be able to pick the most suitable estimation technique from its toolbox in the first sprint. The reason is simple: every project’s nature and basic characteristics make it different from the next.

But we still do estimation in our first Sprint. In one of my previous projects, we spent 6 hours planning our first 3-week Sprint: reading through the necessary material, discussing the details, and then breaking the work down into tasks and estimating them in hours. The whole team was worn out after that long session, and a little frustrated – we were not confident about the outcomes, since there were so many new technologies the team was not familiar with, and we felt that hours were probably not the best unit for our estimates.

Unfortunately our concerns came true. After we failed to deliver the planned work in the first Sprint, we realized that Story Points probably fit us better – different team members had different levels of knowledge of the new technologies, and the estimated hours became meaningless when different people picked tasks up from our Sprint backlog. In the Sprint retrospective, a junior team member raised an interesting question that nobody could answer: since most of the estimated hours were wrong, nobody felt comfortable committing to those numbers, and we finally decided to change to Story Points anyway, why didn’t we just skip the estimation session until we got enough of a sense of those tasks?

That question gave us an idea for doing first-Sprint estimation more effectively: if the team is not comfortable calculating estimates and committing to them – we just don’t do it. We still collect effort data, so that in the next Sprint we can learn from that data and decide what the best way to estimate is.

I practiced this approach in another project. In Sprint 1 we skipped estimation on purpose, but we still broke the work down into tasks, and the team built a spreadsheet to record all the effort we put into each specific task.

In the Sprint 2 planning session, the first thing we did was have the whole team go through that spreadsheet. We picked out the notable data points, analyzed the root causes, and then made a team decision on which metrics the team would use in the following Sprints. All team members agreed that we should go with Story Points, and after that it became much easier to finish the remaining steps in a much shorter time.

We value estimation and believe it’s one of the key techniques for keeping our projects on track, but we should use it in an Agile way. Remember, estimation is never accurate. The more we practice by delivering real work, the closer we get to a steady team velocity – and that takes time. That’s why I suggest we put “not doing estimation in our first Sprint” into our toolbox as well.

Developer involved testing establishes new low defect rate benchmark https://blogs.perficient.com/2010/04/26/developer-involved-testing/ Tue, 27 Apr 2010 05:02:55 +0000

Last week I facilitated an interesting conversation between my development team and testing team. The great part of that conversation was hearing how the two teams broke down their silos to secure code quality together at an earlier stage. I related a story about how one team made a significant difference in code quality. They set a goal that developers should deliver fewer bugs the first time they checked code into the repository, and they made it happen – they saved at least one “stabilization” Sprint, and compared with historical data, the number of identified bugs decreased by about 50%. Here are several key things they did:

  • The testing team worked together with the developers to conduct functional verification on the developers’ local machines before code check-in.
  • Every day the whole team ran a 30-minute smoke test after the daily build, verifying the new functionality integrated that day.
  • The team committed to resolving today’s issues today.

But by the end of that project the team found they could do more. The testers’ effort was heavier than normal: when smoke testing on the development workstations before code check-in, the developers relied too much on the testers even though they had the required testing skills, and it was common to have arguments about the understanding of a requirement, which in turn determined the definition of a bug. Fortunately, the same team later got another chance to push their code quality further on a brand new project with fewer technical risks, so the team could put more focus on testing and engineering practices. This time the team decided to chase even more aggressive targets:

  • Continue improving code quality – decrease the number of identified bugs by more than 50%.
  • Release the testers from regular functional testing work, letting them take on more valuable work such as performance testing and automated testing.
  • Practice Test Case Driven Requirements, making it possible to throw away obscure requirement documents and build a shared understanding baseline across all teams.

When defining our team model and Sprint process, we realized that our developers already had enough manual functional testing experience from the prior projects to take over most of the testing work, but they were not yet experienced enough to design high-quality test cases. If our testing team could teach them how to design good test cases, the developers could handle the functional testing entirely, from a technical skills perspective. That left only one problem with having the developers completely take over the functional testing: it’s harder for a developer to test his or her own code – people always find fewer bugs when testing their own work.

But again, this creative team resolved that problem. They decided to do cross testing among developers, because it’s always easier to find others’ bugs. And to secure the quality of the testing, they realized they still needed a dedicated functional tester role – one whose responsibility would not be doing the testing work itself, but providing direct support to the developers while they test. That role provides on-the-job training: how to write professional test cases, how to design test data, how to decide the best timing for regression tests, and so on.

And since the whole team (including the testers) already had good experience with TDD (Test Driven Development), it was easy for them to accept the concept quickly and to design the activities inside a Test Driven Requirement Analysis cycle. The testing team provided a standard format for functional test cases, and based on that the team came up with a hierarchical structure for decomposing high-level requirements into user stories and functional test suites, plus a mechanism for maintaining requirement traceability.

The team felt ready to go. They started development using the new approach, and they were successful again. The code quality was even better than before: the number of identified bugs per KLOC was only 1/3 of our organizational benchmark – the development team delivered only a third as many bugs as other teams. More importantly, almost all of the testing work was performed by developers – only the test lead spent part of his time supporting the development team, while the rest of the testers did automated testing and performance testing. As time went by the team kept improving the approach iteratively, and by the time the project finished they had a formally defined team process for their day-to-day work.

Below is a brief introduction to their final development process, which can be summarized as several key development roles, activities, and principles/commitments.

The key development team roles and their responsibilities:

  • Story Owner – each user story has an owner who is responsible for its final delivery with high quality. A Story Owner should NOT do any development work inside that user story, although the person in that role may be developing another user story. Instead, a Story Owner takes care of test case development and makes sure all the necessary testing happens to cover that feature.
  • Quality Goalkeeper – the development team needs an experienced functional tester to provide ongoing support and to measure the current quality status, e.g., quality statistics analysis, overall quality reporting, and technical support. The Quality Goalkeeper is the go/no-go decision maker for the Sprint from the quality perspective.

The key activities inside a development lifecycle:

  • Develop functional test cases and use them as the single documents representing the high-level requirements.
  • Test Driven Development, with a specific target for Unit Test code coverage.
  • Continuous code review and local functional verification on development workstations before code check-in.
  • Continuous Integration and test automation. Test automation is a key enabler of frequent inspection – it removes the huge manual regression effort required if we want to do functional testing every day.
  • Daily Functional Testing – verify the functionality newly integrated the same day. This is a whole-team activity which on average takes 30 minutes per day.
  • A Sprint regression test before the Sprint Demo, and a final quality inspection before the product is delivered.

Several principles and team commitments:

They defined three Quality Gates and made corresponding commitments:

  • Local Testing before code check-in – developers will resolve today’s issues today.
  • Daily Functional Testing after the daily build – the Story Owner makes sure all tests pass on the development server.
  • Sprint Functional Testing before the Demo – the testing team makes sure no bug of minor or higher severity remains in that Sprint.

The following diagram illustrates this simple development approach:

By applying an empirical approach to determine how to keep improving our overall code quality, we have been able to develop a more cross-functional team – more testing is performed by developers, with more specialized testing performed by dedicated testers – while significantly increasing code quality. We are also able to deliver a runnable product at the end of every business day, and a potentially shippable product at the end of every Sprint. We invite you to try these techniques for yourself; we would like to hear about your results.
