Common Reasons for Adapting
Applies to: Agile
This topic contains the following sections.
- Adapting During an Iteration
- Three Ways Acme Media Adapted during Its First Iteration
- Adapting at the End of an Iteration
Take the world as it is, not as it ought to be.
Acme Media adapted to a revised product vision during feasibility and reacted to feature discoveries during planning. In this chapter, we’ll discuss how to react and adjust to information you discover during development iterations.
No matter what methodology you use, you’ll always have to deal with issues and challenges during development. Your advantage is that you’re expecting change and you have tools and processes in place that support and embrace adaptation.
Managing changes and decisions during development is still a difficult feat. You’re trying to stay on schedule, meet the customer’s needs, and support nonfunctional requirements such as performance needs. Discoveries require diligent, collaborative decision making. You’ll refine requirements, reprioritize the work, and re-plan based on what you encounter.
Teams that are new to Agile often have questions about the timing of adapting. Here are three common questions:
Can we adapt at any time? Yes, you can and do adapt at any time.
If we adapt all the time, how do we get any work done? This is a superb question. Many anti-Agile folks want to know how we get any work done if we spend all our time talking about it instead of doing it. That is a fair question. The answer is that there is a fine line between work labeled adapting and work labeled development. Are you adapting when you’re stuck on a technical constraint and you’re Googling for a workaround? Are you performing development when you refine requirements with a customer or analyst? At the end of the day, it’s all work that supports delivering the correct solution to the customer.
How do we adapt at the end of an iteration? Acme Media will demonstrate a solid process for gathering feedback at the end of an iteration and recalibrating the project based on the customer response.
Let’s start by discussing common reasons for adapting.
Common Reasons for Adapting
When you need to adapt, you go back to one of the Agile core principles: How can you deliver the most important features early? You still want to hit iteration delivery dates, and you still want to hit your deployment dates, but your main goal is to deliver value as soon as you can within the reality of your constraints. The common reasons for adapting are illustrated in Figure 20.1. Here are some common issues that come up during development and some of the ways we’ve seen teams adapt to them.
Figure 20.1 Adapting occurs throughout an iteration and following an iteration review by the customer.
Feature is Larger than Expected
Frequently a feature will surprise you when you start developing it. The code takes longer than expected, or you underestimated complexity. We also see this when teams are working with an off-the-shelf application. Sometimes the software provider promises functionality that isn’t quite there, and you have to figure out how to close the gap. Here are some of the ways we’ve seen teams adapt to feature overrun:
Work with the customer to prioritize the functionality in the feature and potentially reduce the scope. Try to deliver the highest-priority functionality within the iteration schedule.
Accept the discovery and continue the work into the next iteration. Try to demonstrate the state of the feature at the end of the iteration if possible, with a test harness or limited user interface.
Cancel the feature, and re-evaluate the feasibility of the project. If a feature is too large, the cost may exceed the benefit, and the feature shouldn’t be pursued. But if a critical feature is cancelled, the value of the project may be lost. The team needs to reassess the project’s viability.
Another reason a feature could grow is that a customer begins to understand their needs more and they need to refine their requirements. Let’s look at this issue in more detail.
Customer Refinement of Requirements
This may be the most frequent reason to adapt. On occasion, we’ve worked with teams that fought the customer during refinement discussions. Because a refinement request could make the project run longer, these teams worked hard to talk the customer out of the request.
An Agile team takes a different approach. You want to be good listeners and make sure you understand the request. If the request will have an impact on your capacity, you can adapt:
Revisit the entire scope of the feature. Can other parts of the feature be sacrificed for the refinement request?
Delay other work. Explain the impact to the customer, and show them how other work may be delayed or pushed into subsequent iterations.
When you kicked off your project, the customer provided their priorities in the tradeoff matrix (schedule versus resources versus costs, and so on). You can use this matrix to help the customer decide how to react to the change in requirements and how to best triage the discovery.
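One lightweight way to make the tradeoff matrix usable during triage is to capture it as simple data the team can consult. Here's an illustrative sketch in Python; the dimensions and flexibility levels shown mirror the kickoff example above, but the structure and function names are hypothetical, not a prescribed format:

```python
# Hypothetical tradeoff matrix: each project dimension gets a
# flexibility level agreed on with the customer at kickoff.
FLEXIBILITY = {"fixed": 0, "light": 1, "high": 2}

tradeoff_matrix = {
    "schedule": "fixed",   # the delivery date cannot move
    "resources": "light",  # small staffing changes are possible
    "scope": "high",       # features can be cut or deferred
}

def most_flexible(matrix):
    """Return the dimension the team should adjust first when a change lands."""
    return max(matrix, key=lambda dim: FLEXIBILITY[matrix[dim]])

print(most_flexible(tradeoff_matrix))  # scope
```

When a refinement request arrives, the team looks up the most flexible dimension first; with a fixed schedule, the answer is almost always to negotiate scope.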
The Business Need Changes
The world doesn’t care about your project and goals. Frequently, the playing field will reset during your project. Imagine what would happen if Craigslist decided to start offering free auctions while Acme Media was building the Auctionator. How would Acme Media adapt to this mid-project? Would the company cancel the project or identify a feature that would still separate it from the competitor?
Another example that comes to mind is desktop search functionality. Many people started using Google’s desktop search utility a few years ago. The utility was superb and free. It was a great marketing tool for the Google brand. We’re confident the folks at Microsoft cringed at seeing Google invade their world and be successful. We believe desktop search was a medium-priority feature at Microsoft, with no urgency to get it to market. Google changed all that with its success; 2 months later, Microsoft released its own desktop search utility. We believe Microsoft adapted to the change in the competitive climate and reprioritized the feature midstream, moving it up to an earlier iteration/release.
Here are some ways we see teams adapt to a change in the business climate:
Reprioritize features, just like Microsoft did.
Add a new feature.
Cancel a feature. Sometimes a feature loses its value during a project.
In drastic situations, you may find that the need changes enough to cancel the entire project.
A Technical Constraint Is Discovered
This issue is related to a feature being larger than expected. How many times have you encountered a technical issue during a project? Perhaps a better question is, how many times have you not encountered a technical issue?
We’ve seen issues with performance, browser compatibility, security, and product compatibility. The list of issues you can encounter is infinite.
Here are some ways you can resolve technical constraints:
Speak to software vendors for guidance.
Look at blogs and internet postings where others have solved the issue.
Research other technology options.
Have a discussion with the experts within your company who may be able to help.
If the issue can’t be resolved within the iteration, you can:
Ask the customer if you can remove the feature from the project.
If the feature is of a critical or high priority, discuss extending the work into the next iteration.
Delay the feature for another iteration.
Remove the functionality from the requirement that leads to the technical constraint.
A Team Member Is Unavailable
What do you do if a team member becomes unavailable during an iteration? What if they’re sick, or they have to address a production issue? If you lose a team member, is the iteration in jeopardy?
If a team member misses a day or two, the team can frequently keep the iteration intact. Other team members may be able to take on some of the work, or the work may get reprioritized to work around the missing team member.
In a worst-case situation, an iteration may have to be stopped and restarted when resources can re-engage. We’ve seen this happen for serious production issues, where the majority of the team was tied up for days when a server or database went down.
A Third Party Doesn’t Deliver
Third parties are the highest-risk area for any project. If you have an issue internally, you can triage however you like; but if you have an issue with a third party, you have limited control.
Note that third party means any group outside your project team. You may have little influence on groups that support your project. This is common in large companies where individual groups control areas such as data centers, load balancers, system monitoring, or shared infrastructure such as virtual machine environments.
Your focus should be to work with third parties as early as possible to give you the most time to resolve issues. But what do you do if they still don’t deliver?
One option is to do the work yourself. Does the third party provide a service that you can’t perform or choose not to because it isn’t a core competency? A few years ago, Greg worked with a team that created an online advertisement site for travel businesses. If you were a hotel, you could create your own website and advertise in a major online travel directory. The team Greg worked with decided to outsource the application because they didn’t want to make a heavy technology investment. The travel directory was a beta test for future directory models, and they didn’t want to create code that had potential for being disposed. But after the project started, the vendor came up short on critical requirements such as integration with the existing user-registration application. Greg’s team was halfway through iteration 1 when they made a decision to release the vendor and develop the travel directory in house.
Team Throughput Is Lower than Expected
We’ve discussed story points and how you can use them to determine your capacity for an iteration. Even though your capacity estimate is based on real work, sometimes you’ll underestimate the time needed to complete an iteration. This isn’t an exception. Your velocity will fluctuate with each iteration. Over time, you’ll have more consistency, and your estimates will become more accurate; but there will always be iterations that exceed or beat your estimate.
For example, say you average 30 story points per iteration but only complete 20 in a particular iteration. A few features are in a partial status or not started. What are your options?
You’re fighting two Agile goals when you can’t complete all the work in an iteration. First, you’re trying to deliver the minimal level of functionality needed to support a release. If you don’t complete all features, you probably don’t have a releasable product.
Second, you want to demonstrate status to customers and stakeholders at the end of an iteration. That is difficult to do when features are incomplete.
Here are three common strategies we see teams use when the work isn’t complete by the target date:
Continue the incomplete features into the next iteration. When you do this, you should still demonstrate status of the incomplete work. You need to have a feel for the work remaining so you can estimate it for the next iteration. You should also get feedback from the customer. Anything you can demonstrate will help you with this goal. Teams frequently create temporary user interfaces or test harnesses to demonstrate status on incomplete features.
Stretch the iteration. As a rule of thumb, this isn’t a good practice. The team will give less respect to the deadline if it can always be stretched. But if this is your last iteration, you must stretch the iteration or reach agreement on leaving out features or pieces of their functionality.
Remove the feature(s), or deliver them in a partial state. This is frequently a customer and team decision. The team outlines the repercussions for removing or partial delivery, and the customer makes the call about what they want to do. Partial delivery is usually an iffy proposition. If a feature is incomplete, it will probably need some level of cleanup to be usable.
You may also wonder about determining capacity for the forthcoming iteration. If you underestimate this iteration, what stops you from underestimating the next?
Because your capacity estimate is based on real work, this iteration should be an anomaly. The team will discuss it during the adapt week and see whether a resource change or another factor has affected the accuracy of the running capacity estimate. At a minimum, this low-throughput iteration will be averaged into the existing capacity algorithm, and your estimate for the next iteration will be lower.
Adapting During an Iteration
We’ve seen two schools of thought about adapting during an iteration. One viewpoint is that you can adapt for technical issues, but you don’t want a lot of customer interaction during the iteration because the customer will get confused by seeing a partial product and won’t provide valuable input. Teams that take this approach also like to provide a level of isolation for the development team. The developers are given a 2-week timeline to deliver a working product. The team feels that if the developers have frequent customer interaction, they will lose momentum and miss the deadline.
The second school of thought is that you embrace customer interaction in parallel with dealing with technical issues. You demonstrate your work during the iteration, ask the customer clarifying questions, and try to deliver the iteration on schedule.
Which method is the best? If your customer is new to Agile, you may be better off going with the first method and gathering customer feedback during the adapt week. You may also find that your development team is more productive if they can be isolated and allowed to focus on delivering code.
But if you stop and look at this approach, you may wonder if we’re discussing an Agile process or a waterfall approach broken into iterations. If developers are isolated from the customer, how can you build the solution together? Your goal is to build the desired solution on time. Meeting the deadline provides no value if the result is not what the customer needs.
We have empathy for teams when the customer is too involved and hurts the process more than helps it. We’ve seen this on occasion, and we believe it’s more about training the customer than the fact that the customer is involved.
Should You Hide the Developers?
On occasion, we’ve supported having a developer work from home when they needed uninterrupted focus. But over the last few years, we’ve seen developers adapt to a collaborative environment and learn how to get their privacy while sitting with the project team. Some developers put on headphones to isolate themselves, and others set their IM status to Busy.
We do a lot of interaction with developers by walking up and asking them if they have a moment to discuss a feature. In the old days, everyone was polite and said “Sure.” These days they frequently ask if we can come back later because they’re in the middle of solving something. We like this new attitude. Although no one likes to hear “No, I don’t have time to speak with you,” we like the fact that developers are performing self-management and looking out for the project.
It reminds Greg of a manager he worked with 10 years ago. On occasion, the manager got under a tight deadline and hung a sign on his cubicle entrance that read, “Unless my cubicle is on fire, don’t disturb me!!” If you peeked inside his cube, you saw him with headphones on, hammering away on the keyboard.
As much as Agile is about collaborating, there are times where you need to give the developers the privacy they need to bring home a solution.
We personally embrace customer involvement during an iteration. Just like the team, the customer needs to be trained on how to be collaborative and productive. You can achieve this over time.
Three Ways Acme Media Adapted during Its First Iteration
To illustrate adapting during an iteration, let’s return to Acme Media. We’ll start by looking at a request to modify the search feature.
A Change in Feature Scope
As you may recall, Ryan, the designer, noticed that the customer, Jay, hadn’t requested the ability to filter searches by location. After a quick discussion, Jay agrees that the filter is needed. Ryan feels the additional work can be completed with minimal effort and the feature doesn’t need to be re-estimated. Ryan discusses the change with the project team, and everyone agrees that the additional work is minimal and can be easily added to their existing tasks.
An Issue with Performance
As you may recall, Acme Media had been burned by not performing load testing on features in the past. During iteration 1, Matt, the developer, identifies a potential load issue with the auction engine.
Matt and Jay, the customer, estimate that as many as 100 people can be bidding on an item concurrently. Matt uses the load-simulation tool to simulate concurrent bidding and notes that the server is maxing out at around 75 concurrent bidders.
Matt creates a queuing process for the bids to minimize the impact to the end user, but bids can take as long as 10 seconds to process when 100 people are bidding at the same time.
Matt researches various technical options and notes that he’s doing little caching and that every request is going to disk. Matt finds that he can cache most of the bidding page, which reduces the peak response time to 5 seconds. Jay agrees to this performance level and doesn’t think it will be a usability issue. Jay will be happy if they get as many as 100 people bidding at one time.
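Matt's fix can be pictured with a small sketch. The following is an illustrative in-memory cache with a short time-to-live, not Acme Media's actual implementation; `render_bid_page` and `fetch_bids_from_disk` are hypothetical names standing in for the real bidding page and its data source:

```python
import time

_cache = {}          # item_id -> (expires_at, rendered_page)
CACHE_TTL = 2.0      # seconds; a briefly stale bid list is tolerable

def fetch_bids_from_disk(item_id):
    # Hypothetical slow path: in the real system this hits the database.
    return f"<ul><li>bids for item {item_id}</li></ul>"

def render_bid_page(item_id):
    """Serve the bidding page from cache when possible."""
    now = time.monotonic()
    entry = _cache.get(item_id)
    if entry and entry[0] > now:
        return entry[1]                   # cache hit: no disk access
    page = fetch_bids_from_disk(item_id)  # cache miss: go to disk
    _cache[item_id] = (now + CACHE_TTL, page)
    return page
```

Under concurrent bidding, most requests within the TTL window hit the cache instead of disk, which is the effect Matt relied on to cut the peak response time.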
Underestimating the Registration Need
Acme Media wants to let potential buyers bid on items without creating an account. They can bid by providing their email address, and by design the Auctionator will encrypt their email address and store the bid. The encrypted email address will represent the bidder ID during the auction.
The issue is that the bidder has no idea what their encrypted ID is. If they view the bid list for an item, they can’t tell if they’re the highest bidder. They only see an encrypted string.
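The problem is easy to see with a quick sketch. If the bidder ID is derived one-way from the email address (a hash is used here purely for illustration; the chapter says "encrypt" and doesn't specify the Auctionator's actual scheme), the bid list shows strings no bidder can recognize as their own:

```python
import hashlib

def bidder_id(email):
    # One-way hash standing in for the chapter's "encrypted" ID;
    # hypothetical scheme, for illustration only.
    return hashlib.sha256(email.lower().encode()).hexdigest()[:12]

bids = [("alice@example.com", 50), ("bob@example.com", 55)]
for email, amount in bids:
    print(bidder_id(email), amount)
# Alice sees two opaque IDs in the bid history and cannot
# tell which bid is hers.
```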
Matt discusses this issue with Jay. They think of two options:
Email the bidder, and tell them their encrypted ID so they can recognize it.
Require registration to perform bidding.
The first option will work, but even then it may be hard for the buyer to discern their encrypted bidding ID when viewing bid history.
The second option is more palatable. Requiring registration for users will make it easier to design the overall system and provide benefits to the user. The user won’t have to submit their credentials for every bid if they’re registered and logged in. Jay, the customer, agrees to this option, and the team pursues creating a system that requires registration for bidding.
Adapting at the End of an Iteration
When an iteration ends, you focus on four areas:
Demonstrating and gathering feedback
Re-evaluating priorities
Reviewing team performance and velocity
Re-planning and reacting
Let’s look at each of these in detail.
Demonstrating and Gathering Feedback
Demonstrations can take many forms. The most common forms are as follows:
Impromptu. This type of demonstration usually happens during development. A developer or designer can show the customer working code, a proposed UI, or anything where feedback will help guide the team. We also see informal presentations between team members during a project. For example, developers can review early functionality with the team to discuss usability and performance.
Structured. Greg was taught to use a more prearranged demonstration technique at the end of an iteration. This format works well when you have a short amount of time for review and you want to quickly gather feedback from many customers and stakeholders.
User Acceptance Testing (UAT). This technique is great for getting focused feedback from the customer. It also works well in a regulatory environment where formal approval is required.
Which technique is best at the end of an iteration? Similar to the menu you use at the start of a project, the team should make the call about the best way to demonstrate. Smaller teams and smaller projects can probably go informal throughout the project. As projects get larger and have more customers and stakeholders, it may be best to do formal demonstrations in conjunction with User Acceptance Testing.
Let’s consider Acme Media’s Auctionator project. The project has several stakeholders and one person playing the customer role. The project team is composed of nine people. We consider this a medium-size project.
When the Acme team reviews the project, they decide to present structured demonstrations and customer UAT at the end of the iterations, due to project size and the number of people affected. The entire team and stakeholders will participate in the formal 1-day review at the end of each iteration, and the analyst will lead a UAT session with the customer in subsequent days.
Re-evaluating Priorities: What Are Your Options?
In a perfect world, you’d go through the demonstration cycle, and the customer would be 100 percent satisfied. In the real world, you’ll see some of the following:
The identification of issues, both functional and technical
Requests to modify features in progress
Requests for new features
Requests to decrease the scope of features
Managing and prioritizing all this information is a cerebral process. How do you determine what is truly critical and what adds minimal value? What foundational information can you use to help triage?
In Acme’s case, the tradeoff matrix indicates that the schedule is fixed. The team must meet their project date. They have light flexibility with their resources and high flexibility with scope. They should focus on delivering the minimal amount of functionality needed to support the Auctionator. The date is critical, and they may enhance the functionality with a future project or release.
Reviewing priorities is critical to this triage process. The team needs a guiding light, or they may get lost in a sea of potential options for each issue. With date being the driving force, and knowing that adjusting resource levels won’t help much at the end of an iteration, they need to focus on scope. What does the team have to deliver to go live with the project? What is the minimal set of functionality they must deliver?
Figure 20.2 illustrates this point.
Figure 20.2 When you discover an issue, you have many options. The team uses their collaborative knowledge to choose the best solution.
You have many options as you review issues within your team. Some common options are as follows:
Modify the requirements. This may sound unusual, but if you encounter a constraint that can’t be realistically overcome, the customer may change their requirements. This happens frequently when you’re constrained by a commercial software package and you don’t have the ability to modify it.
Identify a workaround. In many applications, you can accomplish an objective more than one way. For example, if you create a search engine and can’t get the category functionality to work, the user may be able to perform a workaround by entering a category title with their search string.
Do nothing. You’ll often do nothing when an issue is low priority, such as a barely noticeable cosmetic issue. There isn’t enough value in pushing out the project to make a fix for a minor issue.
Write additional code. Sometimes you have to edit or create more code to meet a basic need. This can be caused by identifying a missing critical requirement during demonstrations.
Purchase a solution. In some cases, a missing requirement can’t be easily supported in house. You may have to buy some functionality to support the requirement.
Defer the issue. Deferring is different than doing nothing. Sometimes you’ll defer an issue until you see how forthcoming features relate to it. A feature that is being delivered in subsequent iterations may remove the issue.
Redesign. A requirement may change so completely that you can’t use any of the work you completed during the iteration. You’ll need to revisit the design and start coding from scratch. These types of changes are usually driven by a change in the business environment rather than the customer.
Note As you review these ways to adapt to change during development, you may be thinking, “I already do these.” That makes sense; these are common ways to adapt regardless of whether you follow Agile principles. What is unique is that you identify the issues much earlier than with classic techniques, and you highly involve the customer in the triage process.
Now, let’s look at another aspect of adapting: analyzing team performance during the iteration.
Reviewing Team Performance and Velocity
When you complete an iteration, you measure how many story points you’ve completed. You continually measure the number of story points you complete and add them into your running average. Your running average is the number you use to determine capacity for the next iteration you pursue. For this process to work, you must keep your iteration length consistent and the people on your team the same. If you lose or gain a team member, you should begin recalculating your run rate based on the team change.
Acme Media didn’t have a running average for its first-ever iteration, but the team estimated their story points so they could initiate the averaging process.
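The running-average mechanics above can be sketched in a few lines. This is an illustrative model only: it assumes one completed-points figure per iteration, and the reset on a team change follows the rule stated in the text:

```python
class VelocityTracker:
    """Running average of completed story points per iteration."""

    def __init__(self):
        self.completed = []   # points completed in each past iteration

    def record_iteration(self, points):
        self.completed.append(points)

    def team_changed(self):
        # Per the text: gaining or losing a team member invalidates
        # the history, so the running average starts over.
        self.completed = []

    def capacity_estimate(self):
        if not self.completed:
            return None       # first iteration: estimate by hand
        return sum(self.completed) / len(self.completed)

tracker = VelocityTracker()
for points in (30, 30, 20):   # one low-throughput iteration
    tracker.record_iteration(points)
print(tracker.capacity_estimate())  # ≈26.7: the low iteration drags the average down
```

This matches the earlier point about underestimating: a single 20-point iteration doesn't break the process; it simply lowers the capacity estimate for the next iteration.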
Re-planning and Reacting
After you finish gathering feedback and reviewing team performance, you review the existing plan for the next iteration and make appropriate changes. Your changes are based on discoveries during development, feedback and testing at the end of the iteration, team performance, and changes in the business climate. You may remove features previously assigned to the iteration or add new features based on your discoveries.
©2009 by Manning Publications Co. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by means electronic, mechanical, photocopying, or otherwise, without prior written permission of the publisher.