How to Avoid ‘Hitting the Wall’ in Your Agile Transformation (18 Oct 2018)
https://blogs.perficient.com/2018/10/18/how-to-avoid-hitting-the-wall-in-your-agile-transformation/

I’ve been an Agile practitioner and evangelist for more than 20 years. That’s roughly two-thirds of my career as a person who builds software.

Over those years, I’ve seen mindsets change on Agile and what it should and shouldn’t be used for. In general, the ‘shouldn’t be used for’ list has gone down. A progression from ‘should not’ to ‘could’ to ‘should’ to ‘absolutely should’ is the right direction.

But over that same time period, I’ve watched a number of organizations, populated with very smart, creative and passionate people, struggle with their own Agile journeys. Consequently, I’ve spent a lot of time working with those organizations and reflecting on my own question: how do we make the Agile transformation journey more efficient, more fruitful and more enjoyable?

Ironically, the way most organizations look to start or advance a stuck Agile journey is to seek out a formula or ‘best’ practices as input into their enterprise plan. Think about that for a moment – taking a plan-based approach to move away from a plan-based approach. That’s an anti-pattern and the antithesis of the Agile mindset.

So what should we do?

The best way to speed up your Agile journey is not by reading articles, books or blog posts, or by seeking some kind of formula to put together a plan. While those things can educate and even inspire, they maintain the slower path – albeit one that generally ‘feels safer’ for some.

A far better prescription is to follow an approach to the Agile journey that is in itself – Agile.

Now, in sort of a cruel twist, the further along your Agile journey you are, the more obvious and second nature this advice seems. The hardest move to make in the Agile journey is ‘starting’ it in earnest – letting go of the waterfall / plan-based handrails and pushing off into a new way of thinking.

It’s actually quite difficult to prescribe a formula or recommend specific practices to follow, without an honest assessment of where you are truly at in your own journey. This is why Retrospectives (as a ceremony specifically defined by Agile) are especially important at the start (or to get unstuck).

A Retrospective is not the same as an Assessment.

A Retrospective is far less ambitious and is thus far more responsive to discovery and learning that comes with taking incremental steps. Retrospectives are more – well – Agile.

Now – once you’ve had your first true retrospective:

  1. Pick a few changes that balance significant value to the organization with challenging but achievable results by the team.
  2. Make those results quantifiable and measurable.
  3. Start practicing those changes with weekly team retrospectives to gauge how things are going.
  4. As the discomfort and awkwardness starts to fade, take on another change to practice.
  5. If at any time things aren’t measuring up to your goals, you may need to adjust or totally pivot. Sometimes you may just need to keep plugging away; giving the current change more time to sink in and make it your own.
  6. Repeat often.

Sounds simple – right? Of course the devil is in the details. Along each step in the journey, there is an opportunity to wander down the wrong path, get stuck or even flail around for a bit. All of which are perfectly OK – you’re learning! You just have to gauge and balance it against your organizational tolerance for expediency. Certainly, if you want to move faster, some truly experienced coaching or project-by-example approaches can help move things along.

So why all the disinformation?

So if it’s as simple and intuitive as taking an Agile approach to an Agile transformation, why do so many people promote a plan-based approach to Agile transformation? Why do so many vendors present a cookie-cutter, formula-based approach to the whole thing?

My guess is that some of it is due to inexperience. Quite a number of consulting firms have jumped on the Agile bandwagon. Glad to have them finally aboard, but lots of these folks were dismissing Agile or hedging with hybrid-Agile just a few years back. Some still are. If you’re still stuck in a plan-based mindset, then everything looks like a plan-based nail.

Others are simply trying to monetize the Agile wave. While there is nothing inherently wrong with ‘selling Agile’ (books, training, certifications, tools, etc.), always keep in mind that what you’re buying is the *ability* to be Agile – not actually *being* more Agile. I can read a bunch of books, train and get certified in speaking a foreign language. However, nothing will replace practicing it – especially practicing it in difficult, non-contrived situations.

Taking an Agile approach to the Agile journey is what Perficient believes in. It is the basis for how we approach Agile transformation with our most valued and strategic customers.

OK, sure…. but where are the ‘tips’?

Not knowing the specifics about your organization, your team members or where you are at in your own journey – there are some general things that seem to always bubble to the surface in the customer retrospectives I’ve been a part of through the years. Some basic rules-of-thumb that may or may not specifically apply to your particular situation:

Do you REALLY have Executive Sponsorship?

Be realistic about how much “executive sponsorship” you really have. True executive sponsorship means accepting that a large part of the Agile journey is going to be about change. Change and learning new things is hard, awkward and messy. Mistakes will be made and it’s going to cost more for a while. You can avoid some common pitfalls and speed things up with proper coaching and training. If your leaders (business and IT) don’t accept that either less is going to get done and / or that things are going to cost more, then you don’t actually have executive sponsorship.

Start in the right place….

Companies often start their Agile transformation in the wrong place. They focus too much on the development teams. They train and certify developers and project managers who, in their first real Agile project, run into some pretty complex, real-world scenarios and challenges that they don’t yet have the skills and experience to deal with.*

*Side note: getting ScrumMaster certified only gives you a Scrum 101 understanding – but calling ScrumMaster certification something like ‘ScrumAware’ wouldn’t sell as well, so – there you go. To use the language metaphor again: it would be like taking two days of high school Spanish and then being dropped into a remote village in Chile where nobody speaks English.

Product Owners make the Agile world go round…

A far better approach is to start with the business side, focusing on your business owners. Work with them to develop a strong Product Owner mindset (training, certification, coaching and practicing). Transform your current ‘requirements’ process into a Lean one (i.e. fine-grained, value-based, quicker time to market, self-funding / self-perpetuating, measuring and adjusting – even pivoting).

Even if your development team continues to operate using waterfall, it will at least start to shorten development cycles naturally (Agile-Fall) – and when it’s time to train / certify / transform the team, a massive road-block will already have been removed, since the business will already be operating in an Agile way.

You’ll also avoid the emergence of things like Product Owner ‘proxies’, decomposition and reconstitution of stories from requirements (and the accompanying traceability matrices) and rework waste due to gaps created by up-front, thud-factor requirements approaches.

Failing to start in the right place with your Agile journey is why most Agile transformations are so painful and often perpetually stall.

DevOps is a two-way street

The next place to tackle is deployment. Make sure you focus not only on a ‘push to deploy’ mentality, but just as importantly on a ‘push to back out’ mentality. Both need to be automated to the point where everyone has the utmost confidence that not only can something be deployed quickly, but if there are problems it can be safely backed out just as quickly. Agile / Lean organizations absolutely rely on both of these. Failure to get this in place will short-circuit your Agile journey with just one bad deployment. Or at the very least, make everyone hate each other.

An automated test is a predictable test

Next stop – QA / Testing. Quite simply, you need to be at 70% test automation or more before you try shortening iterations to anything short enough to be called Agile. Otherwise, your iteration cadence will continue to creep outwards as you accumulate more and more manual regression debt. Then you might even do something really silly like shaving down your regression testing and accelerating full throttle into total mayhem and unpleasantness.
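
To see why the debt creeps, here is a minimal back-of-the-envelope sketch (all numbers are made-up assumptions, not data from any real project): each iteration adds test cases that must be re-run forever, and whatever fraction isn’t automated piles up as manual regression time.

```python
# Illustrative model of manual regression debt. Every value here is an assumption
# chosen only to show the shape of the problem, not a benchmark.

NEW_TESTS_PER_ITERATION = 40     # test cases added by new features each iteration
MINUTES_PER_MANUAL_TEST = 10     # average manual execution time per test case
QA_HOURS_PER_ITERATION = 80      # manual QA capacity available each iteration

def manual_regression_hours(iteration: int, automation_rate: float) -> float:
    """Hours needed to manually re-run the accumulated regression suite."""
    total_tests = iteration * NEW_TESTS_PER_ITERATION
    manual_tests = total_tests * (1 - automation_rate)
    return manual_tests * MINUTES_PER_MANUAL_TEST / 60

for rate in (0.30, 0.70):
    iteration = 1
    # Find the first iteration where manual regression alone exceeds QA capacity.
    while manual_regression_hours(iteration, rate) <= QA_HOURS_PER_ITERATION:
        iteration += 1
    print(f"At {rate:.0%} automation, regression overflows the iteration by iteration {iteration}")
```

The exact numbers don’t matter; the point is that the manual share of the suite grows linearly with delivered features, so a low automation rate puts a hard ceiling on how short your iterations can stay.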

Ok…. NOW – you can start the transformation of the development team. Life will be good.

Distributed teams, multi-shore and Agile…

About ten years ago, my answer to a panel question on whether you could do Agile with a multi-shore team was,

 “Not only CAN you use Agile with multi-shore teams, but in my opinion – Agile is the ONLY methodology you should use with multi-shore delivery” – me…. ten or so years ago…on this panel thingy…

I remember it being a pretty cool moment. A bunch of representatives from the large (especially offshore) consulting companies had just answered that same question along the spectrum of ‘it’s not recommended’ to ‘absolutely not’. I answered last and it felt like the perfect setup. It was awesome.

I still believe what I said and we have lots of proof that it works. In fact it works great. The sense of team connectedness, transparency to progress and ability to quickly course correct that Agile brings is the perfect antidote for what people most dislike about working with offshore teams.

And – here’s the kicker: If you empower your offshore teams with Agile, it takes them to a whole new level of value in the lifecycle. Being a part of our global delivery teams for the past 12+ years has been the most rewarding collection of experiences in my entire career.

But make no mistake. Distributed Agile is not easy. It’s not ‘ScrumAware 101’ type stuff. So here’s the caveat: If you really want to be successful in Agile with distributed teams (especially offshore), you need to first make sure that your onsite Agile is really ‘Agile’. If your onsite day-to-day isn’t in order, then trying to inject a distributed team dynamic into it (especially offshore) is going to induce the worst of both worlds (plan-based and pseudo-Agile). Everyone will end up hating each other and you’ll (wrongly) draw the conclusion that Agile offshore doesn’t work. But just to remind you – it can. It does. And in fact Agile is the BEST way to get the most from your offshore team. <boom… mic drop>

Beware the ‘hybrid chatter’…

At any point during your Agile journey, beware when someone starts using the word ‘hybrid’ (or is champing a little too eagerly at the bit to introduce a scaling framework like SAFe). Really, really take a step back and ask yourself whether you need to overlay non-Agile / non-Lean practices. This is ESPECIALLY true if you start hearing things like, “Well, our business is unique and we need to ‘adjust’ Agile to make it work”.

I’m not saying it doesn’t happen – and I honestly can argue both sides of the hybrid coin – I’m just saying be skeptical and hypercritical of any ‘hybridization’ rationalizations.

I also want to make a distinction here: ‘self-organizing’ and ‘adapting’ are meant to be applied at the team level. Be wary of ‘institutionalizing’ things a bit too aggressively. When you start institutionalizing adaptations, you start locking down levers – levers the team can pull (and un-pull) as things change along the Agile journey: accumulation of team member experience, new business imperatives, distribution of the team (geographically, chronologically and even culturally) and aspects of the environment itself.

It’s simple – just perfectly balance pragmatism, open-mindedness and skepticism. Ok?

Some final caveats…

The above isn’t a complete list, and each item only scratches the surface of its topic area. I also reserve the right to change my mind on any of the above at any time.

They are also not the only way to get there. For example: Can you still make progress on your Agile journey without true executive sponsorship? Yep. How about without the ability to start with the Product Owner? You bet.

And they are most certainly not meant to be simply written down, collated into a giant document / deck with other ‘best practices’ harvested from other sources, and force-fed to the organization. Because… yeah… that’s kind of the definition of ‘institutionalize’ – see above.

Rather – the intent here is for each of them to ignite hours of healthy exploration. Conversation that leads to concrete, incremental new things to practice, measure and adjust. All the while on your own Agile journey. A journey that will hopefully be a bit more fruitful and expedient. And please have some fun while you’re at it.

Perficient Establishes Domestic Delivery Center in Lafayette, LA (4 Sep 2014)
https://blogs.perficient.com/2014/09/04/perficient-established-domestic-delivery-center-in-lafayette-la/

This morning I had the privilege to stand alongside Perficient President and CEO Jeff Davis, as well as Louisiana Governor Bobby Jindal, to announce that Perficient will establish a domestic delivery center in Lafayette, Louisiana. The addition of the Lafayette-based delivery center will augment our global delivery centers in China, India and Macedonia, which for more than 10 years have provided critical offshore capabilities to our customers while complementing our technology, delivery management and industry vertical expertise and capabilities across North America.

I’m so pleased to share this exciting news with the multi-shoring community. At Perficient we not only strive to provide the best experience to our customers but pride ourselves on being a trusted advisor to our clients as well.

I believe Jeff Davis said it best this morning when he stated that being a trusted advisor to our clients depends on having the capability and the expertise to staff the right skills, in the right place, at the right time. We strongly believe our new Lafayette-based center will broaden our flexibility and capacity to serve our growing roster of clients.

It’s an ideal location with an educated workforce and proximity to several universities and technical colleges, and is well supported by local and state leaders focused on economic growth.

Additionally, the domestic delivery center will be patterned off the same proven global delivery model and Agile methodology that we’ve used in our existing development centers. We’ll meld in best practices that we employ today throughout our existing U.S. office locations, and will add capabilities and improved service levels that cover the entire spectrum of the software development lifecycle, including:

  • Up-front business analysis using leading-edge visualization techniques and high fidelity prototyping
  • Software development using a wide range of platforms from Perficient’s strategic partners such as IBM, Microsoft, Oracle and salesforce.com
  • Quality assurance and testing
  • Up to 24×7 support services through a managed service offering called SupportNet

As we look to the future, Perficient remains committed to providing its clients an optimized global delivery approach. The domestic delivery center will ensure we continually deliver high-quality solutions in the most cost-effective way for our customers.

 

 

Measuring the Performance of Delivery Teams (Conclusions) (11 Jun 2012)
https://blogs.perficient.com/2012/06/11/measuring-the-performance-of-delivery-teams-conclusions/

This is the final segment of a six-part series.

Part I introduced the concept of analytically measuring the performance of delivery teams.

In Part II – We talked about how Agile practices enhance our ability to measure more accurately and more often.

Part III defined a system model for three dimensions of performance (Predictability, Quality and Productivity), and then got into the specifics of how to measure the first dimension: Predictability.

Part IV covered the second of the three dimensions: Quality.

Part V covered the most controversial and often misunderstood dimension of them all: Productivity.

The intent of this series wasn’t to present a one-size-fits-all measurement model for all IT organizations. Rather, the goal was to emphasize the value – and thus the priority – of taking a more analytical approach to measuring the performance of your multi-shore (and even single-shore) delivery teams, and to provide a concrete, real-world example of how we have helped some of our customers move beyond subjective evaluation of predictability, quality and productivity in their organizations.

In the end, you may leverage all, parts or none of the specifics in this white-paper. But hopefully it’s gotten the conceptual ball rolling and shown that measuring IT performance is certainly not the hardest engineering problem IT management faces. It merely requires adopting a process and leveraging some fairly basic tools that you probably already have (or can easily adopt from the Open Source community).

Oh and – yes, you may have to let go of some old-school, subjective and emotional baggage. But the result of integrating measurement as a core practice within your software delivery methodology will yield an IT organization that is far better positioned to deliver measurable value to the business in the most efficient manner possible.

Further Reading

  • A Practical Guide to Feature-Driven Development – Stephen R. Palmer and John M. Felsing
  • Agile and Iterative Development: A Manager’s Guide – Craig Larman
  • Succeeding with Agile: Software Development Using Scrum – Mike Cohn
  • Agile Estimating and Planning – Mike Cohn
  • Balancing Agility and Discipline: A Guide for the Perplexed – Barry Boehm
  • Applied Software Measurement: Global Analysis of Productivity and Quality – Capers Jones
  • The Yourdon Report (blog) – http://www.yourdonreport.com/
  • IT Value Metrics: How to Communicate ROI to the Business (CIO Magazine) – http://www.cio.com/article/144451/IT_Value_Metrics_How_to_Communicate_ROI_to_the_Business (also contains links to other useful articles such as “The Metrics Trap”)

Measuring the Performance of Delivery Teams (Part V – Productivity) (4 Jun 2012)
https://blogs.perficient.com/2012/06/04/measuring-the-performance-of-delivery-teams-part-v-productivity/

This is Part V in a multi-part series.

Part I introduced the concept of analytically measuring the performance of delivery teams.

In Part II – We talked about how Agile practices enhance our ability to measure more accurately and more often.

Part III defined a system model for three dimensions of performance (Predictability, Quality and Productivity), and then got into the specifics of how to measure the first dimension: Predictability.

Part IV covered the second of the three dimensions; Quality

In this part, we broach the often controversial subject of ‘Productivity’, demonstrating that you can think analytically even about the most sensitive and seemingly ambiguous topics.

So how ‘productive’ are your developers?

This is probably the most highly charged and contentious area of measurement, not only because of the difficulty in normalizing and isolating its measurement, but also because of the sensitivity to the implications of the measurement. Nobody likes to hear that their team has ‘low productivity’ – and in fact, most IT managers would probably be very surprised at how much ‘non-productive’ time is spent developing software.

If you doubt the last statement, let me just relate that I’ve often observed the following:

  1. IT Managers that measure productivity purely by how many lines of code a developer is churning out – regardless of whether that code is efficiently written, matches the requested requirements, is maintainable, or meets quality expectations (code reviews, test-first approach, fully regression tested against breaking other code), etc.
  2. IT Managers that don’t want developers to spend time testing.
  3. Software organizations that rely too heavily on tacit knowledge, usually constrained to a few developers – which ensures job security for a few at the cost of increased risk to the organization.
  4. IT Managers (and Executives) that proclaim their development teams operate at near 90% ‘efficiency’ and want to use 40 hours per week per developer as the baseline for team velocity calculations (yet generally have no previous iteration metrics – probably because the development team games the metrics to meet the 40-hour-per-week velocity expectation).

By the way – I’m not kidding about the last one. It comes up more often than you’d think. In fact, I cannot even tell you how many times I’ve talked to an IT Manager who claims their organization is expert in Scrum and yet is unaware that typical velocities are generally far below ideal hours.
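
To make the gap concrete, here is a minimal sketch using the rule-of-thumb numbers mentioned in this series (40 ideal hours versus a realistic 25-30 hours of feature work per week); the team size and iteration length are illustrative assumptions:

```python
# Commitment inflation when planning against ideal hours instead of measured velocity.
# The 25-30 hour range comes from the text; team size and iteration length are assumptions.

TEAM_SIZE = 6
WEEKS_PER_ITERATION = 2

planned_hours = TEAM_SIZE * 40 * WEEKS_PER_ITERATION        # the 'ideal hours' baseline
realistic_hours = TEAM_SIZE * 27.5 * WEEKS_PER_ITERATION    # midpoint of 25-30 h/week

overcommitment = planned_hours / realistic_hours - 1
print(f"Planned capacity : {planned_hours:.0f} hours")
print(f"Likely capacity  : {realistic_hours:.0f} hours")
print(f"Overcommitment   : {overcommitment:.0%}")           # roughly 45% more work than the team can absorb
```

An organization that insists on the 40-hour baseline is, in effect, planning each iteration with roughly half again as much work as the team can actually absorb, which is exactly the gap that tends to get ‘gamed’ away in the metrics.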

But perceptions aside, let’s drift back to our original model and break down that Delivery Channel as a black box a little bit further. In doing so, we’ll highlight the various challenges with such a simplistic model.

As you can see from the above diagram, there are a number of complexities that make the normalization of the inputs and outputs to a Delivery Channel challenging, including the inter-dependencies of other dimensions of measurement (Quality and Predictability) themselves.

But does this mean we just throw up our hands and say that it’s impossible to analytically compare and contrast the effectiveness of various delivery teams? Well, this is exactly what some IT organizations do. But as we’ll see, there are indeed concrete ways to normalize the inputs and outputs of a Delivery Channel (Units of Work, Working Software and Costs) in such a way as to make measurement, comparison and trending practical.

First, let’s look at a “Unit of Work”. The key question here is how to standardize the measurement of a unit of work across an IT organization. This turns out to depend more on adopting a rigorous process of estimation and tracking than on any purely theoretical modeling exercise. It’s interesting to note that over the years, several studies have been done to ascertain what effect technology and industry domain have on the actual complexity of software development.

 

As much as most IT organizations would like to believe otherwise, the answer to both is ‘not much’. With few exceptions, almost ALL industries carry with them a similar level of complexity in their business domain problems, and almost ALL technologies carry with them a similar level of complexity in their use. This explains why, despite all of the increased abstraction of software frameworks and tools through the years, software development hasn’t really gotten ‘faster’. As tools become more powerful, we leverage them to design more complex solutions. Most of today’s solutions could not be built from scratch in assembly language in anywhere near the time they are built today.

This also explains the anecdotal evidence I’ve observed, where almost every single executive in every single industry and customer I’ve ever met with makes a point to stress something along the lines of, “Our industry is far more complex than other industries”. The only conclusion to be drawn from this is what I mentioned earlier – ALL industries carry with them a unique set of complexities that need to be addressed, and it’s actually the ‘uniqueness’ of those complexities that makes domain knowledge in a particular industry so valuable.

So let’s assume, then, that we can measure units of work in a way that can at least be incrementally standardized across a particular IT organization – by following a consistent process and making adjustments as we get better and better at estimating work, as measured through the Predictability improvements discussed earlier.

The actual mechanics of the process are less important than following a process in a consistent manner. Over the years, there have been several different yardsticks by which to measure software development units of work. Some are better than others.

Measurement Method – Pros / Cons

Lines of Code
  • Rather dated and dependent on technology
  • Not recommended for use across an enterprise
  • Properly ‘counting’ can be difficult (copy/paste re-use credits, adherence to standards, code commenting, etc.)

Function Points
  • Better than LOC
  • A lot written about this approach / dated but inherently sound
  • Requires another layer of analysis and metrics to relate to features
  • Sometimes tough to correlate to specific business value (ROI)

Feature Driven Development
  • A lot written about this approach as well
  • Natural fit for Agile (and even non-Agile methodology)
  • Metrics are tracked at the feature (or feature-point) level as part of the process – no need for correlation matrices
  • Feature granularity can be ‘relatively’ standardized using ‘coarseness’ guidelines and enterprise-level review to standardize the process
  • Sign-offs by PMO and stakeholders assure ‘fair’ weighting
  • Easier correlation to actuals, especially over time
So if the above matrix seems transparently weighted towards a Feature-Driven approach, that’s on purpose. Over time, especially in the Agile world, there has been an evolution towards using ‘features’ to define units of work, because it is especially efficient at minimizing ancillary documentation throughout the process – even if the business analysts initially resist leaving the island of large, ‘thud factor’ requirements documents. The extra step is required in that case regardless of the approach.

While Feature Driven Development (FDD) also describes an Agile process in itself, note that for our purposes here (standardization of Units of Work), I am simply borrowing the concept of ‘Features’ to decompose and draw boundaries around the work. This part of FDD can actually be incorporated into a waterfall process (if you absolutely must).

As I stated earlier, however, it’s more important that an IT organization have some standard of measure than how specifically it chooses to measure. Rather than getting hung up on a specific method, pick one and start measuring. You can always improve, refine and even change measurement methods over time. Don’t get caught up in finding the ‘perfect’ measurement approach or arguing endlessly about which is better. Remember that if you did nothing more than count lines of code, you’d probably be doing better than the two-thirds of organizations out there that don’t consistently measure anything. In other words – you’d be in the top third of all IT organizations, just by doing something.

So assuming you are measuring something, there are a few things to be conscious of with regard to how you estimate costs associated with that unit of work:

Be conscious of TCO

Although your department charge-back model may not account for everything, these are still real costs to the company. Be a good corporate citizen (or at least balanced with respect to evaluating alternatives) and account for:

  • Employees –  salary and benefits, utilization, training time, training investments, hiring / turn-over, infrastructure and management
  • Contractors – rates, conversion costs, utilization, ramp-up time, turn-over, infrastructure, management.
  • Offshore – infrastructure (localized environments, communications, licensing, etc.), audits, management, turn-over, ramp-up / transition time, knowledge management / transition (KM/KT) costs, etc.

Use dollars instead of hours

Too many organizations get hung up on hours – but it all really boils down to costs, and in fact it’s difficult to capture the full TCO described above with pure hours calculations; costs tend to leak out of the picture that way. Using costs also ensures that conversion factors are consistent and allows a more direct correlation to business value and a cleaner comparison of alternatives (build vs. buy vs. SaaS / hosted).

One standard measure that tends to apply well is what I’ll call the “Effective Blended Rate”. In short, Effective Blended Rate (EBR) is the cost spent per feature-point (where a feature-point is your standardized measure of features). You could also easily substitute cost per KLOC (thousand lines of code) or whatever your measure is.

Having an EBR is important when comparing multiple delivery channels. For example, in a build vs. hosted (SaaS) decision, you could estimate the features for a particular solution and easily compare the cost to build against the cost to host. Sounds too simple, right? Well – the key that makes this simple is deciding to use a measurement of work (feature-points) that directly correlates to business value.

Another type of EBR might be cost per feature-point per resource:

EBR = $ / feature-point / resource

Assuming that you can keep feature-points roughly calibrated to ideal hours, this works nicely to get you into the true ballpark of an effective ‘bill rate’ for a resource. (Although, given realistic velocities of, say, 25-30 hours per week, you have to recalibrate your thinking about what an effective rate per hour actually looks like – it will be higher than the contractual rate per hour.)

This works well with multi-shore comparisons to pure onshore (US) teams. Consider the following example.

  • You are trying to compare two teams, one completely US-based (onshore / onsite) and the other a multi-shore team (30% US / 70% offshore). The US team is composed of 10 developers, each with a TCO of $100 / hour.
  • The multi-shore team produces the same number of feature-points every iteration, but is composed of 4 US developers and 9 offshore developers (total team size = 13, or 30% more hours per iteration than the completely US-based team).
  • Let’s also say that the offshore developers fully loaded cost is $35 / hour.
  • At those rates, the blended rate of the US team is $100 / hour and the blended rate of the multi-shore team is $55 / hour. But looking at just that statistic would be a mistake since the all onshore / onsite team is more ‘productive’ than the multi-shore team (most likely from a combination of communication efficiencies, higher industry domain knowledge in the developers and perhaps slightly higher overall seniority of the onshore team).
  • But let’s normalize this around feature-points. Let’s say that the current number of feature-points per three week iteration (by both teams) is 780 feature-points. Using this statistic, the Effective Blended Rate of each team now works out to:

EBR (onshore) = $154 / feature-point = (10 devs × $100 / hour × 120 hours) / 780 feature-points

EBR (multi-shore) = $110 / feature-point = ((4 × $100 / hour) + (9 × $35 / hour)) × 120 hours / 780 feature-points

  • The above still leans towards the multi-shore team being more ‘efficient’ with regard to overall implementation cost, even though that model requires 30% more contingency hours to get the same work done. In fact, a team of 5 onshore and 12 offshore (17 total developers), which represents a contingency of 70% additional hours, still results in an EBR of $141 / feature-point.
  • I think you can see from the above example why there is such a strong push towards multi-shore teams for organizations that take the time to make them work effectively (typical contingency ‘adds’ across the industry are roughly 20% – 50% depending on project complexity). You can also see why pure financial models – or getting hung up on pure ‘developer hours’ – can be so misleading, since hours alone tell you very little. The arithmetic is sketched in the short example below.
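
A minimal sketch of the EBR calculation above, in code. The rates, team mixes, 120-hour iteration and 780 feature-points are the figures from the example; the helper function and variable names are just mine:

```python
# Effective Blended Rate (EBR): cost per feature-point for a delivery team.
# All input figures are taken from the worked example in the text.

ITERATION_HOURS = 120      # hours per developer in a three-week iteration
FEATURE_POINTS = 780       # feature-points each team delivers per iteration

def ebr(team, feature_points=FEATURE_POINTS, hours=ITERATION_HOURS):
    """team: list of (headcount, fully loaded hourly rate) pairs."""
    iteration_cost = sum(count * rate for count, rate in team) * hours
    return iteration_cost / feature_points

onshore = [(10, 100)]                 # 10 US developers at $100/h
multi_shore = [(4, 100), (9, 35)]     # 4 US + 9 offshore at $35/h
larger_mix = [(5, 100), (12, 35)]     # the 70%-contingency variant

print(f"Onshore team     : ${ebr(onshore):.2f} per feature-point")      # 153.85 -> the $154 in the text
print(f"Multi-shore team : ${ebr(multi_shore):.2f} per feature-point")  # 110.00 -> the $110 in the text
print(f"5 + 12 variant   : ${ebr(larger_mix):.2f} per feature-point")   # 141.54 -> the ~$141 in the text
```

Expressing every delivery channel as dollars per feature-point, rather than as a blended hourly rate, is what lets the onshore, multi-shore and even build-vs.-host comparisons line up on the same axis.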

Quite simply – if an IT organization doesn’t make measuring a priority, then it is missing an opportunity to manage its costs effectively and to truly understand its break-even points when making offshore decisions.

So if you’re not yet convinced, consider also that the productivity of multi-shore service arrangements tends to increase in the near term, but decrease over a period of 1-2 years. The reasons have to do with complacency, diminishing returns on additional cost cutting (yielding more junior resources), growth in project turn-over, and so on.

But if an IT organization does a good job of either measuring directly, or having its service provider regularly report (auditable) metrics, then you place the challenge where it belongs: on maximizing your organization’s Effective Blended Rate and maximizing the value of your IT spend.

What sort of dashboard might you expect from a service provider in this regard? The diagram below provides an actual (scrubbed) example:

Notice that we’ve added something as well. The gray area represents how we can account for the interdependence of the other measurable areas (quality and predictability) as well as external influences – things the team dealt with but had no control over, such as a missed dependency or, in this case, having operations perform a VM upgrade mid-iteration. Also notice that we have a factor for where the team sits on the schedule elasticity curve (i.e. Brooks’ Law / the mythical man-month). More specifically, if a team is pushed harder than it recommended in order to make a date, it effectively gets ‘credit’ for the inefficiencies of operating further up the curve.

Notice that there is a certain element of subjectivity to these influencers, at least in the short term. But over a very short period of time (3-4 iterations), an IT organization that puts a priority on capturing metrics will be able to quantify these factors better. In fact, you would be surprised at how frighteningly accurate these metrics and factors can become over time – given analytical attention to measuring and tracking.

Whew…. we got through that (even with a lot on the table to digest and discuss!)

In our final segment we’ll summarize and wrap this topic up.

 

Measuring the Performance of Delivery Teams (Part IV – Quality) (29 May 2012)
https://blogs.perficient.com/2012/05/29/measuring-the-performance-of-delivery-teams-part-iv-quality/

This is Part IV in a multi-part series.

In Part I – We introduced the concept of analytically measuring the performance of delivery teams.

In Part II – We talked about how Agile practices enhance our ability to measure more accurately and more often.

In Part III – We introduced a system model for three dimensions of performance (Predictability, Quality and Productivity), and then got into the specifics of how to measure the first dimension: Predictability.

In this part, we talk about how to measure the second dimension: Quality.

There has probably been more written about quality metrics than about any other aspect of software development. And in fact, most quality tools produce a plethora of statistics, graphs, dashboards and ‘eye candy’, all asking for your attention – as the following collage demonstrates.

The real key in quality metrics is knowing which ones will give you the biggest bang for your attention – which ones require the least effort to capture, yet answer the questions you most want answered.

While all projects ‘ask’ different questions – and sometimes at different times depending on where in their lifecycle they currently are – some general ‘tried and true’ metrics are worth mentioning here.

  • # Open Defects (weighted / normalized*) – Current number of open defects (operational and functional) weighted by severity.
  • Defect Arrival Rate (weighted / normalized*) – Rate at which defects are being discovered (weighted by severity). Should trend downwards as QA completes.
  • Defect Closure Rate (weighted / normalized*) – Measure of development team capacity to close defects as well as an indirect measure of code structural integrity, decoupling, cross-team training, etc.
  • Total Defects Discovered (weighted / normalized*) – Number of defects opened during formal QA, weighted by severity. This is tracked independently by the QA team. All things being equal, a lower number indicates a higher level of quality. Note that this is closely linked to productivity and predictability metrics since ‘gaming’ those metrics will naturally result in a higher weighted defects metric.

Note that for all the above quality metrics, the key is to ‘weight and normalize’: weight each metric to account for the severity of the defects under measure, and normalize those defects against the complexity (amount) of the work produced. The second concept – normalization – is needed to compare multiple delivery channels or variations in the work being done over time; more complex development, or a larger number of development hours, tends to produce more defects. One way to normalize is against the total ‘development hours’ that went into the code being tested, although in the next section on productivity we’ll establish a better metric of ‘work completed’ and even tie all of these metric dimensions (predictability, quality and productivity) together into a comprehensive dashboard.
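
A minimal sketch of the weight-and-normalize idea (the severity weights and the per-100-feature-point scale are assumptions for illustration – use whatever scale your defect tracker and estimation process actually give you):

```python
# Weighted, normalized defect metric: weight each defect by severity, then
# normalize by the amount of work delivered so teams of different size and
# output can be compared fairly. The weights below are assumed, not standard.

SEVERITY_WEIGHTS = {"critical": 8, "high": 5, "medium": 2, "low": 1}

def weighted_defects(severities):
    """severities: iterable of severity labels, e.g. exported from your defect tracker."""
    return sum(SEVERITY_WEIGHTS[sev] for sev in severities)

def normalized_defect_density(severities, feature_points_delivered):
    """Weighted defects per 100 feature-points of delivered work."""
    return 100 * weighted_defects(severities) / feature_points_delivered

# Two teams with the same raw defect count look very different once severity
# and delivered work are taken into account.
team_a = ["critical", "high", "medium", "medium", "low"]    # delivered 780 feature-points
team_b = ["low", "low", "medium", "medium", "low"]          # delivered 400 feature-points

print(f"Team A: {normalized_defect_density(team_a, 780):.2f} weighted defects / 100 feature-points")  # ~2.31
print(f"Team B: {normalized_defect_density(team_b, 400):.2f} weighted defects / 100 feature-points")  # ~1.75
```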

For now, let’s assume you either normalize or you decide to simply weight and leave it at that. There are quite a number of useful graphs that can be produced just by having the statistics listed above. And the nice thing here is that most defect tracking tools will produce these graphs right out of the box. All you have to do is diligently capture the data.

So where is the best place to capture testing metrics? Obviously in an Agile project, testing is something woven into the very fabric of the development iterations themselves. But to really measure the quality a delivery team is producing, you should be performing an independent quality check at the end of the iterations.

This final Quality Gate can be of relatively short duration and may even leverage the automated tests and tools produced during the development iterations (since the quality of what is produced can also be measured by how completely the team tested the code prior to this quality gate – test scripts are a good measure of rigor in this department). But an effective Quality / Validation Gate will also add its own testing, including ad-hoc testing and business-transaction-level testing. In this approach, Quality Gate testing follows more of a Quality Control model, popular in manufacturing, rather than serving as a dumping ground for poorly tested software.

Next week in Part V, we’ll cover the final (and often most controversial) dimension: Productivity. Then we’ll wrap it all up in the final Summary post.

Measuring the Performance of Delivery Teams (Part III) (22 May 2012)
https://blogs.perficient.com/2012/05/22/measuring-the-performance-of-delivery-teams-part-iii/

This is Part III in a multi-part series.

In Part I – We introduced the concept of analytically measuring the performance of delivery teams.

In Part II – We talked about how Agile practices enhance our ability to measure more accurately and more often.

In this part, we’ll talk about “Which Dimensions are Most Important to Measure”

As we’ve stated in the previous sections, we could certainly (and do) measure lots of things in a project. What matters most, though, is knowing which measurements will give us a meaningful set of metrics on which to rate our sense of progress on a project, or to compare various delivery channels / teams / vendors / approaches.

To accomplish this, we start with a black-box ‘value statement’ of an IT delivery organization. At the highest level, an IT organization is a ‘factory’: we put requirements in and get working software out. Taking this model a little further, consider that our ‘factory’ is actually composed of three different delivery channels, or assembly lines. Each delivery channel takes in a set of requirements and produces working software (and production metrics) that is then integrated into a production environment (which also produces some useful day-to-day metrics on how each of those software ‘products’ is performing once in production).

If I keep the complexity out of the model, and assume that I can normalize all my metrics, then these metrics should be able to tell me how each delivery channel is doing compared to one another or how they have improved or regressed over time.

So what are these metrics? Well to start, let’s break these metrics up into three dimensions:

Predictability – A measure of how close estimates come to actuals with regard to both delivery costs and deadlines. A key variable in measuring predictability is lead time: specifically, what is the measured level of predictability at various points in the project lifecycle? Predictability should increase as quickly as possible as lead time shortens.

Quality – A collection of measures that ensure the overall integrity of the delivered code is tracking properly toward a baselined, production-level acceptance, from both an operational support perspective and a business user perspective.

Productivity – Measurements that assess the amount of work completed as a function of cost. These metrics can be used to compare two different teams (such as a pure onshore vs a multi-sourced team) to assess the efficiency of delivery using either approach – assuming predictability and quality are the same.

Let’s also realize that these dimensions are inter-dependent. Over emphasis of one can often lead to a decline in another. But we’re going to save that interdependence for a little later in this paper and tackle these one by one first.

Predictability

Predictability isn’t just about how close you can come to your estimates. It’s also about when you gave those estimates. Consider the following two scenarios, each of which represents a 15% slip (inaccuracy against the original estimate) on a 9-month project:

Scenario 1: The 15% slip occurs a few weeks into the project, when a new technology is discovered (through ‘spiking’ / prototyping) to have some significant shortcomings in its ability to deliver as advertised. In this case a mitigation plan can be put in place, re-architecture can be done and perhaps even some of that 15% slip can be mitigated by trading off certain features or functions – managing change early with the business.

Scenario 2: The 15% slip occurs late in the project (say the last 2 weeks before go-live), when it’s discovered that the new technology platform doesn’t scale properly. The problem surfaced late because performance testing was not scheduled until the last few weeks of the project. In this case, the same 6-week slip is going to have much more serious consequences for a wider range of stakeholders who must now react to the change.

As contrived as the above examples appear, scenario 2 actually occurs more often than it should. And in fact, while both projects missed their go-live date by 6 weeks, the very different impacts of that slip demonstrate how important lead time is as a measurement of predictability.

Obviously, the best way to avoid scenario 2 is to measure sooner and more often throughout the entire lifecycle. The process flow diagram below is from an actual Agile project. The ‘stars’ capture the places where estimates are captured.

Ballpark estimates represent the first level estimates for the project. They are used to create the overall release plan and are typically done (as a matter of need) without having fully detailed requirements.

Once the project is underway, estimates done at the start of each iteration provide another sanity check against the original ballpark estimate. Rather than being top-down estimates, iteration estimates are generated through task decomposition and bottom-up estimation of those tasks at a feature level (more on this in the productivity section).

Finally, iteration actuals provide the true measure of predictability, both against the original estimates and against the iteration estimates. Because this occurs every one to three weeks in an Agile project, the chance for early course correction (based on incorrect early assumptions) is raised and the chance that a ‘scenario 2’ will occur is decreased.

Although an Agile methodology was used here, the more important concept is to measure early and measure often during a project. And the more we can tie these measurements to working code, the more faith we can put in predicting how the project will turn out based on those metrics.
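
As a minimal sketch of what ‘measure early and often’ can look like in numbers (the iterations, hours and variances below are invented for illustration), predictability is simply how far actuals land from estimates, tracked per iteration and trending toward zero as lead time shortens:

```python
# Predictability tracked per iteration: how far actuals land from estimates.
# A healthy trend is the variance shrinking as the release date approaches.
# All figures below are illustrative, not from a real project.

iterations = [
    # (iteration number, estimated hours, actual hours)
    (1, 400, 520),
    (2, 420, 470),
    (3, 410, 430),
    (4, 400, 405),
]

for number, estimated, actual in iterations:
    variance = (actual - estimated) / estimated
    print(f"Iteration {number}: estimated {estimated}h, actual {actual}h, variance {variance:+.0%}")

# Re-forecast the release using the observed estimate-to-actual ratio rather than
# the original ballpark; the earlier this ratio stabilizes, the more predictable the team.
total_estimated = sum(estimated for _, estimated, _ in iterations)
total_actual = sum(actual for _, _, actual in iterations)
print(f"Observed estimate-to-actual ratio so far: {total_actual / total_estimated:.2f}")
```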

So what might a dashboard of such metrics look like? Take a while to look at the actual dashboard example provided (and scrubbed) below. How useful might some of the insights gained through these metrics be in establishing how well a development team was performing with regard to predictability?

 

In Parts IV and V we’ll look at the other two dimensions of performance: Quality and Productivity.

 

Measuring the Performance of Delivery Teams (Part II – Agile) (4 May 2012)
https://blogs.perficient.com/2012/05/04/measuring-the-performance-of-delivery-teams-part-ii-agile/

How Agile methodology can enable more accurate and timely measurement

Not surprisingly, development organizations that operate with a truly Agile methodology tend to have far more meaningful, quantitative and frequent measurements of their operational performance than those using more classical (i.e. waterfall) methodologies. That isn’t to say that practitioners of waterfall methodology don’t generate a deluge of metrics. It’s just that many of those metrics are not as meaningful or practical as they sound (e.g. percentage complete, number of documents and specifications delivered, bugs per KLOC, etc.).

The reason that Agile methodologies tend to produce more effective and practical measurements is that the principles of measurement are built into the DNA of Agile practices themselves.

Agile Methodology ‘DNA’ – Result in the measurement dimension

Promotion of a high degree of transparency and openness
Since the basis of the methodology itself is to be as transparent as possible, the degree to which metrics can emerge is enhanced. In short, all development team members are always trying to be very open and clear about where things are at – such that even a subjective ‘feel’ for project progress is more accurate.

Planning is a fully integrated activity, with daily management and tracking of change built into the methodology at the developer level
Many implementations of ‘plan-based’ methodologies have a planning cycle that is disconnected from actual development. Planning is done extensively up front, and project plans are generally ‘owned’ by a project manager. In fact, most of the time developers do not even refer back to the plan unless forced to by the PM seeking updates on progress – which is tracked at the ‘activity’ level rather than the ‘working features’ level. In Agile, by contrast, while there is absolutely some planning done up front, the executable plan is embodied in the actual working documents of the project (iteration plan, release plan, etc.) that are updated by developers as a matter of course in their day-to-day activity. There isn’t an artificial ‘planning update’ overlay – it’s done as part of the development activity with very little overhead.

Iterations and adaptation to change
Work in Agile is broken up into iterations, which are, in simple terms, small segments of requirements, design, construction and test. Rather than being ‘mini-waterfalls’, iterations are highly integrated as part of an overall release plan: features can move back and forth between iterations, and guidance is still provided by some degree of up-front requirements, architecture and decomposition. It is because of this interplay between iterations that the participants in an Agile project get very well practiced not only at adapting to change, but at knowing concretely how that change will affect other iterations (and the project as a whole). Change is measured at a feature level (and often a ‘task’ level) – and project-level change can be measured from the bottom up on a daily basis.

Focus on delivering features / stories
The key concept required for effective measurement of productivity is the ‘unit of work’ itself. This unit of work must result in something concrete. While a requirement or design can be ‘cut short’ and still produce a document, working code cannot be similarly cut short and still pass more easily measured criteria such as 85% test coverage, passing code quality analysis with automated tools like Sonar, and working per very specific, test-case-driven requirements.

There are certainly other aspects of Agile that aid in measurement – the above are intended just to provide a few key examples, not to completely dissect Agile methodologies with respect to measurement. The intent is simply to give a feel for how Agile can help the process of measurement.

In the next few posts in this series, we’ll start getting into the nitty gritty of what to measure and how.

Measuring the Performance of Delivery Teams (Part I) (4 May 2012)
https://blogs.perficient.com/2012/05/04/measuring-the-performance-of-delivery-teams-part-i/

The Challenges of measuring performance in software delivery

Surveys on software development metrics are nothing new. This topic has been a source of discussion for decades with little change to the dichotomy of findings. To quote just one example:

A recent global survey of over 150 CIOs found that while over 75% of them recognized a strong business need to measure performance of their IT organizations, less than 1/3 of those same IT organizations actually measured their performance.

The above is definitely consistent with what I find when speaking to executives in the IT space, and is fairly uniform regardless of industry, size and maturity level of development methodology. The only exceptions are fairly small IT organizations (fewer than 50). These smaller organizations ‘tend’ to have a better handle on the performance and measurement of their organizations – although one could easily argue this has more to do with the fact that smaller organizations have an easier time getting their arms around their day-to-day activities. In other words, in smaller organizations, even subjective evaluations coupled with a small amount of sampled metrics seem to reasonably predict how the organization is faring.

Certainly this is not true of all small IT organizations, and for the majority of IT organizations of any moderate size, subjective evaluations of performance at the executive management level are often quite far removed from how things are actually getting done.

Furthermore – over 70% of mid-size to large IT organizations currently engage in some sort of multi-shore arrangement, and 90% of IT organizations are evaluating it as part of their strategic planning. Yet according to a 2009 Forrester Research survey, less than half of IT organizations are ‘satisfied’ with the results they are getting from their multi-shore efforts.

So if the majority of IT leadership agrees that metrics are important, why are so few actually measuring?

Based on my own observations through the years, I’ve heard a wide range of answers:

  • Lack of time and / or money
  • Lack of discipline, priority or perceived incremental ROI
  • Lack of effective tools
  • Not knowing what metrics to standardize on
  • Lack of industry metrics to compare to / apples-to-oranges across industries
  • Procrastination (always ‘working on it’)
  • Inability to project meaningful metrics at an executive summary level

But all of this really boils down to a matter of prioritization. IT folks are generally pretty good problem solvers given the opportunity and priority. The key is to integrate measurement as part of the development lifecycle so metrics become a natural output of that lifecycle.

In the next post of this series, we’ll explore how Agile methodology can enable more accurate and timely measurements.

Measuring the Performance of Delivery Teams (Overview) (4 May 2012)
https://blogs.perficient.com/2012/05/04/measuring-the-performance-of-delivery-teams-overview/

How much can you save by using a multi-sourced team, where some of the work is done offshore?

Depending on whom you ask, the answer can vary wildly. The reason is that a truly accurate answer from a mature IT organization takes more into account than simply multiplying the ‘rates’ by the ‘body count’.

Mature IT organizations look at the bigger picture. In simple terms – “What does it cost me to deliver a particular business function or collection of features from a particular delivery ‘channel’?” But the answer is not so simple to come by – for a variety of reasons.

First and foremost – even many mature IT organizations do not have an established measurement process or baseline metrics for their existing organization. Most often, the perception of development efficiency is based on ‘gut feel’ or, at best, on loose correlations to historical budget numbers. In short – there is no standard process for comparing delivery teams to one another.

Secondly, the performance of a delivery team tends to change over time. Everyone knows about the ‘ramp-up’ time to get a team efficient, but few ever discuss (or manage) the natural decay in productivity that occurs over time. Additional oscillations can occur due to a variety of outside influences and factors as well – further clouding the true operating value.

Third, industry standards and rules of thumb on efficiency are often out-dated, misquoted or misused, sparking theoretical arguments – none of which bring you any closer to making proper adjustments, let alone enacting wholesale change, in your IT organization.

And finally, having teams that are geographically, chronologically and culturally dispersed further complicates the picture.

Faced with the above, many organizations simply throw up their hands and proclaim the problem 'too hard' or 'not worth it'. But in a day when the business scrutinizes every dollar of IT spend, it is more important than ever for IT organizations to become analytical about how well they are performing and improving as a delivery-oriented organization.
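As a toy illustration of the earlier point that 'rates' times 'body count' is not the whole story, the sketch below (every number is invented) compares the cost per completed story point for two delivery 'channels' once throughput is taken into account. Whatever unit of work you prefer, the comparison only means something once predictability and quality are factored in as well.

```python
# Hypothetical, purely illustrative numbers: rate x headcount alone can be
# misleading, so compare cost per completed unit of work per delivery channel.
channels = {
    "onshore":     {"hourly_rate": 110, "team_size": 6,  "points_per_month": 90},
    "multi_shore": {"hourly_rate": 55,  "team_size": 10, "points_per_month": 105},
}

HOURS_PER_MONTH = 160  # simplifying assumption

for name, c in channels.items():
    monthly_cost = c["hourly_rate"] * c["team_size"] * HOURS_PER_MONTH
    cost_per_point = monthly_cost / c["points_per_month"]
    print(f"{name}: ${monthly_cost:,.0f}/month, ${cost_per_point:,.0f} per story point")
```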

This series of posts will take a thoughtful and analytic approach to the following topics:

  1. The challenges of measuring performance in software delivery
  2. How Agile methodology can enable more accurate and timely measurement
  3. What dimensions of performance are most important to measure?

 

Agile and our office move https://blogs.perficient.com/2011/05/09/agile-and-our-office-move/ https://blogs.perficient.com/2011/05/09/agile-and-our-office-move/#comments Mon, 09 May 2011 21:42:56 +0000 http://blogs.perficient.com/delivery/?p=927

User Story: As a development team, we need to have enough office space and capacity to grow to over 2x our current size.

Ok, I’m kidding a little. Not in the above statement – which was our true ‘epic’ requirement over a year ago and one that we realized this past weekend in our move to a new office space.

Rather, I'm kidding about the simplicity with which the above was worded, and about even calling it a user story to begin with. But as I reflected on our recent move, I realized how important it was to keep the original intent in mind as we wound our way through the myriad requirements, details and changes over the past year-plus. Without a guiding intent, it's easy to get lost and off track. To forget what your priorities are. To lose sight of the fact that all the project plans, contracts, designs and risk mitigation plans are a means to an end. The 'user' in me just wanted us to be in a bigger room. That's all.

I'm also reminded that, between an emphasis on a 'plan' and the ability to adapt to change, I'll value the ability to adapt to change any day of the week. Mostly because our 'plan' changed daily.

At the start of this all, we got a project plan from our primary general contractor. But having a detailed project plan provided me very little comfort when I first saw it. It was impressive. There were lots of items on it, some I didn’t even know what they were or did. There were dependencies linking every which way. I think it was in color.

Then almost immediately, change started to occur. I won’t bore you with the details, but I can think of few things that remained intact and can’t really estimate what percentage of tasks kept to the original sequencing. Honestly, I haven’t seen that project plan in a couple months. I’m sure it exists, it’s just not the way I (nor others) judged progress.

Now you can guess what gives Agile advocates like me a warm fuzzy feeling that progress is being made. I didn't ask to see the updated plan. I didn't ask what percentage complete each task on the plan was. Instead, I and others walked around the site as it was being built out. When I wasn't there, I got updates via ad-hoc walk-around pictures posted in one of my colleagues' MobileMe galleries. Every few days I got a 'release' that I could see and assess. It was crude, but much more effective – because, honestly, you either have the HVAC installed or you don't. Wiring is either snaked through the drop ceiling or it's still spooled up on the floor. The floor is either leveled or still covered in debris. The 'demo' wasn't scripted or planned. It was a random assessment of progress. Yet there was a gut feel for how things were going that pretty much everyone agreed upon. And it helped us make day-to-day decisions based on our discussions and collective agreement on the progress the pictures represented.

Besides the 'demos' (reviews), we planned for changes that we hoped wouldn't occur. There were daily stand-ups across shores and offices – not just the formal ones but informal ones as well. There were (albeit informal) sprints and work-arounds and annotated design documents. There were re-prioritizations of work based on project needs (e.g. do we bring up the phones first, or the VPNs?). And there was a final delivery this past weekend. Oh – and of course there will be some minor bug fixing that will likely continue for weeks to come as we shake things out.

But the most important recollection of the experience is that there was change. Lots and lots of change. And as an Agile team – we adapted. And I’m as pleased with that as with the final result. Which was quite good.

Many thanks to our IT and Operations folks at Perficient (both in China and in the US). They took the job personally and to heart – and it showed. Thanks also to the on-the-ground project teams, who went through every project and ensured we had continuity plans in place in case something went wrong.

The whole team displayed a truly Agile mindset in the way they approached the move. And the final 'release' showed just how competent they are and how powerful Agile is as a mindset.

Thanks guys!

I’m sure a ton of pictures and commentary will be posted shortly. I can’t wait to see it in person in a few weeks!!

Measuring Performance of Delivery Teams – ‘starter’ metrics https://blogs.perficient.com/2011/01/24/measuring-performance-of-delivery-teams-starter-metrics/ https://blogs.perficient.com/2011/01/24/measuring-performance-of-delivery-teams-starter-metrics/#respond Mon, 24 Jan 2011 21:52:49 +0000 http://blogs.perficient.com/delivery/?p=853

Recently I was asked about ‘starter metrics’ for projects (both multi-shore and single shore) looking to transition to a much more objective measure of delivery team performance.

Here are the first-tier metrics that I would recommend as a good starting point. There is a lot more detail in the webinar and associated white-paper on Perficient.com:

www.perficient.com/webinars (March 2009)
www.perficient.com/whitepapers (March 2010)

The framework defined in the above divides metrics into three inter-dependent categories:

Predictability – A measure of how close estimates come to actuals with regard to both delivery costs and deadlines. A key variable in measuring predictability is lead time: specifically, what is the measured level of predictability at various points in the project lifecycle? Predictability should increase as quickly as possible as lead time shortens.

Quality – A collection of measures that ensure the overall integrity of the delivered code is tracking properly toward a baselined level of production acceptance, from both an operational support perspective and a business user perspective.

Productivity – Measurements that assess the amount of work completed as a function of cost. These metrics can be used to compare two different teams (such as a pure onshore vs a multi-sourced team) to assess the efficiency of delivery using either approach – assuming predictability and quality are the same.

The Productivity metrics require the most rigor to measure accurately and to compare different delivery teams to one another. However, there are some thoughts below on how to get started.

For PREDICTABILITY, I would start with ensuring there are multiple measurement points in the delivery lifecycle. At a minimum, you should capture the following:

  • Budgetary Estimates (at the use case level, or better yet at the feature level) – captured prior to any detailed requirements or decomposition (bottom-up, task-based estimation). This estimate is usually used for budgetary purposes and precedes final project prioritization / ordering in the project portfolio.
  • Development Estimates – these are the bottom-up / decomposition estimates that development does once requirements are fairly complete (waterfall) or at the start of each 2-3 week iteration (iteration planning). They are done at a task level (functional and engineering tasks) and are then rolled up for comparison to the Budgetary Estimates at the use case level. Framework costs should be spread across the use cases relative to their weight (the relative size of each budgetary estimate).
  • Completion Actuals – these are the final actuals captured (hopefully by task, but at the very least at the use case level).

You can then compare the variances between these three measurement points during a project / release level retrospective. During that retrospective, the variances should be explained in terms of accepted change requests, missed dependencies, and 'white-space' issues that arose during the project (things that were not anticipated, such as defects in vendor libraries or a key team member not being fully available to the project).
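A minimal sketch of that variance comparison, assuming the three measurement points are captured per use case (the use case names, hours and field layout below are illustrative, not from the webinar or white-paper):

```python
# Hypothetical sketch: per use case, compare Budgetary Estimates,
# Development Estimates and Completion Actuals (hours are illustrative).
use_cases = [
    # (name, budgetary_est_hrs, development_est_hrs, actual_hrs)
    ("Customer search", 120, 140, 150),
    ("Order checkout",  200, 230, 226),
    ("Nightly ETL feed", 80,  70,  95),
]

def pct_variance(baseline, actual):
    """Signed percentage variance of actual against a baseline estimate."""
    return (actual - baseline) / baseline * 100

for name, budget, dev, actual in use_cases:
    print(f"{name:18s}"
          f"  budget->dev {pct_variance(budget, dev):+6.1f}%"
          f"  dev->actual {pct_variance(dev, actual):+6.1f}%"
          f"  budget->actual {pct_variance(budget, actual):+6.1f}%")
```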

For QUALITY, I would look simply at the number of defects (at each severity level) in delivered code, over time, normalized to the total project weight (Completion Actuals). Project actuals can be used as a crude indicator of project complexity and the weight of development. Units for this statistic could be 'defects per 1,000 development hours' (or whatever works to normalize across multiple projects). This alone will give you tremendous insight into delivered quality. Notice, too, that code that has been 'short-cut' with regard to maintainability and scalability will drive a higher defect-to-project-actuals ratio in subsequent releases.
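Here is a small sketch of that quality metric, assuming you can export the defect list with severities and know the release's Completion Actuals (the field names and data are made up for illustration):

```python
# Hypothetical sketch: defects per 1,000 development hours by severity,
# normalized to the release's Completion Actuals.
from collections import Counter

def defect_density(defects, completion_actual_hrs):
    """defects: iterable of dicts, each with a 'severity' key."""
    per_severity = Counter(d["severity"] for d in defects)
    return {sev: round(n / completion_actual_hrs * 1000, 2)
            for sev, n in per_severity.items()}

release_defects = [
    {"id": 101, "severity": "critical"},
    {"id": 102, "severity": "major"},
    {"id": 103, "severity": "major"},
    {"id": 104, "severity": "minor"},
]
print(defect_density(release_defects, completion_actual_hrs=2400))
# -> {'critical': 0.42, 'major': 0.83, 'minor': 0.42}
```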

Finally, for PRODUCTIVITY, you may have to do some additional analysis, since you won't have normalized requirements (see the explanation in the white-paper for how to normalize requirements). What you could do here is pick particular use cases from each team that result in similar task breakdowns (for example, use cases that require integrations, database / web service access, ETL, or front-end development). The measure of productivity will be at the task level – taken from Development Estimates that had small variances to Completion Actuals, or from actuals that tie back to tasks, such as within an Agile iteration plan. You can then compare and contrast the variances in similar technical tasks (accounting for the complexity of each task). Granted, some conversation will need to occur to normalize the tasks, but at least you'll be comparing apples-to-apples development activities against measured actuals (rather than an anecdotal claim from one developer that the task would only take them half the time). You also want to make sure to account for any differences in delivered quality.
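And a rough sketch of that task-level comparison, assuming the tasks have already been matched across teams and given a rough complexity rating through the normalization conversation described above (teams, task types and hours are all invented):

```python
# Hypothetical sketch: compare actual hours per complexity point for similar
# technical tasks across two delivery teams (all data is illustrative).
from collections import defaultdict

tasks = [
    # (team, task_type, complexity_points, actual_hrs)
    ("onshore",     "web service integration", 5, 38),
    ("onshore",     "front-end screen",        3, 21),
    ("multi_shore", "web service integration", 5, 44),
    ("multi_shore", "front-end screen",        3, 19),
]

totals = defaultdict(lambda: [0, 0])  # (task_type, team) -> [points, hours]
for team, task_type, points, hours in tasks:
    totals[(task_type, team)][0] += points
    totals[(task_type, team)][1] += hours

for (task_type, team), (points, hours) in sorted(totals.items()):
    print(f"{task_type:26s} {team:12s} {hours / points:5.1f} hrs per complexity point")
```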

The above are obviously just a start, but they would go a long way toward more rigorous project delivery metrics without a lot of time investment or changes to existing artifacts or process. The next levels of sophistication are described in the white-paper.

Webinar – Establishing a Successful Multi-Shore Support Arrangement https://blogs.perficient.com/2010/12/02/webinar-establishing-a-successful-multi-shore-support-arrangement/ https://blogs.perficient.com/2010/12/02/webinar-establishing-a-successful-multi-shore-support-arrangement/#respond Thu, 02 Dec 2010 16:58:59 +0000 http://blogs.perficient.com/delivery/?p=797

A while back I posted “Establishing a Successful Support Arrangement – 4 Key Process Steps”

This really was intended to be just the tip of the iceberg on this topic. On Thursday, December 16th, I'm going to be presenting a webinar that dives a little deeper into these waters.

During this hour long webinar, we’ll cover topics that go beyond just the simple contractual nature of such arrangements and lay out some key steps to keep control of the quality and value in your application support arrangements. Topics we’ll cover include:

  • A structured approach for assessing, mobilizing and launching a high-quality application support model
  • Being honest about your support priorities
  • Key steps that are often ‘skipped’ and end up costing you later
  • Avoiding the long-term erosion of ROI through well-managed Service Level Agreements

To register for the event, visit the registration page at http://www.perficient.com/webinars/

Hope to see you there!

Kevin
