15 signs you’re doing agile wrong

Misconceptions and ‘best practices’ may have your team spinning its wheels rather than continuously churning out productive code

It’s easy to jump on a bandwagon and end up in a ditch. Nowhere is this maxim more evident than in agile development. Plenty of organizations jump into agile in pursuit of its advantages—ease of embracing change, decreased cycle times, evolutionary architecture, and so on—only to find their best agile practitioners leaving the company, and the uneasy remainder unable to fix a development process gone wrong.

The problem with most approaches to agile is not a problem with agile; it’s a problem with Agile, the Capitalized Methodology. Agile isn’t a methodology. Treating it as one confuses process with philosophy and culture, and that’s a one-way ticket back into waterfall—or worse.

Fortunately, it is not difficult to recognize the signs of an agile approach gone wrong and to take steps to restore harmony. Here we examine 15 signs you’re doing agile wrong. Even one of these can seriously derail your software development efforts.

Doing agile vs. being agile

Agile begins with attitude. If your company emphasizes doing agile rather than being agile, you’re on the wrong foot right from the start. Agile is a paradigm, a mental shift in how you approach software development. The specific techniques and ceremonies come later, and they’re the least important part. The point is to be agile; embrace and employ the philosophy outlined in the Agile Manifesto, and you will “do” agile automatically.

Be sure to look carefully at the manifesto; its choice of words is no accident. Think about the implications: shed useless ceremonies, administration, and paperwork; focus on working code and fast feedback cycles; self-organize, self-examine, and self-optimize. This is the revolution. The specific practices of how to go about achieving what the manifesto outlines continue to evolve.

If you’re following a one-size-fits-all agile “process” mandated for all teams, you’re doing it wrong. The notion of a “standard” agile process is contradictory—agile means adapting and improving, continuously.

To remedy this, remember that the main goal is to deliver working software, not to follow a recipe; no recipe works for every project and team. Let each team adopt its own practices and take responsibility for adjusting and improving them.

Treating story points as goals

User stories are a key facet of agile, capturing a software feature requirement from the user’s perspective. Stories are assigned point values to estimate the level of effort necessary to implement the story.

These story points are neither a promise nor a goal. They have no intrinsic meaning or measure. They are an informal agreement among team members reflecting a common understanding of a project’s complexities and the team’s capabilities.

Your team’s three-point story might be another team’s five-point story. Using story points as a measure of success destroys their usefulness as a means of estimation and invites “gaming the system” to appear successful (velocity X achieved) without actually being successful (working, useful software delivered).

The fix is simple: agree on and measure useful goals with the product owner (or better yet, the users). Don’t mistake conformance-to-estimate or compliance-with-plan for “success”; success is value delivered.

Comparing velocities of teams or individuals

Obsessing over metrics is almost second nature to most programmers. Velocity is the average number of story points a team completes per iteration, and its purpose is sprint planning. If your team treats it as a point of comparison, you’re doing it wrong.

Again, velocity is a neutral metric intended only for estimation. Comparing team velocities is meaningless because the basic unit (a story point) is “defined” differently for each team. Because teams are unique, comparison of velocities has no value, and doing so can encourage negative behaviors and interteam competition instead of cooperation.
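To illustrate with made-up numbers: a team that completed 19, 23, and 21 points in its last three sprints has a velocity of about 21 points per sprint, and the only legitimate use of that figure is helping that same team decide how much work to pull into its next sprint.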

The same goes for the individuals who make up a team. An individual’s contribution to the story-point effort is fractional, and, as above, story points themselves are not metrics. Comparing velocities of individuals, even on the same team, is meaningless. The only metric that matters is a subjective one: value delivered via working software.

The easiest fix for this: Stop. It’s counterproductive and a waste of time.

Writing tasks instead of stories

The agile story template is useful for framing a feature in terms of its benefits for a particular user/role. This reminds us that our goal is to deliver working software to someone who expects specific benefits from using it. If most of your “stories” are tasks in disguise, then the development process becomes task-focused (doing things) instead of delivery-focused (creating value). It’s important for the development team to stay connected to users; software with no users is useless.
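A hypothetical illustration of the difference: “As a returning customer, I want to save my payment details so that I can check out faster” is a story; “add a column to the payments table” is a task in a story’s clothing, because it describes work to be done rather than a benefit someone will receive.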

The fix for this is balance: There will always be some tasklike items that must be done, but a story should be sized so that it can be completed in a single iteration, which makes breaking it into tasks pointless. A 75 percent “done” story is useless. Do or do not; there is no partial credit. If a story is so complex that it cannot be done in one iteration and does not naturally divide into substories, play it across more than one iteration (see the next section).

Never iterating stories

If you’re decomposing larger stories into smaller stories merely so they can be completed in a single sprint, you’re doing it wrong. The result of this kind of practice is a set of less-cohesive, task-oriented “stories.” Instead, stick to the larger, more natural story and let it play out over multiple sprints. Attack the story end to end, starting with the smallest “walking skeleton” of functionality that enables the intended capability, then layer in additional behaviors and elements in later sprints. This lets the story stay intact, evolving from walking skeleton into usable software.
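For a hypothetical search story, the walking skeleton might be a query box that returns unranked matches from a single field, working end to end; ranking, filters, and pagination are layered in over the following sprints.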

Once the walking skeleton is in place, its structure and behavior may imply substories, or you may find in the next sprint that priorities have changed, in which case the walking skeleton can be put aside. But if you decomposed the story into tasks because it seemed easier to complete one task as a “story,” the resulting software will have no discernible added value, because tasks tend to focus on disconnected components, not connected value streams.

Mistaking scrum for agile

Scrum is a process-management technique, not a software-development technique. Ditto for Kanban. Scrum and Kanban without strong agile principles eventually revert to waterfall. This is exacerbated in many enterprise environments by huge initial backlogs (inviting waterfall designs instead of incremental evolution) and “standardized” agile practices.

Huge backlogs

If you’re concerned about feature lead time—how long it takes an idea to go from conception to production—the best way to kill it is to have large queues. Unfortunately, many organizations still plan, authorize, and execute software development projects in large chunks, resulting in huge backlogs from the get-go and guaranteeing that features at the end of the queue will have terrible lead times.

Suppose you’re going on a run to find a hidden lake you’ve heard about. Would you load up a backpack with everything you own or pack only what you’ll need so that you can keep up a good pace? Huge backlogs are like this; you’re looking to discover/validate feature value as quickly as possible, but your backpack is overloaded from the start.

Projects don’t really exist; they’re a mental model, not an actual thing. We invented projects so that we could talk about a nebulous stream of work as if it were a single block of time and effort. There are no projects; there are only products. The key is to pare down. Organize work around an initial set of features that can deliver measurable value, followed by “waves” of small, measurable enhancements.

Never pairing (or always pairing)

Pair programming is loved by some and despised by others. It’s a tool, folks, not a religion. It should be used where it’s appropriate—and yes, some level of it is almost always appropriate.

Pairing spreads knowledge of the system, tools, techniques, tricks, and so on across the team; reinforces human connections; supports mutual mentoring; and in many cases can produce higher-quality software faster than developers working alone. If you look at a story and think “two heads would be better than one on this,” then pairing is an obvious choice. If anyone on the team can implement a story, then pairing may not be helpful. Like every agile practice, pairing is a tool; use it when and where it is effective.

Not refactoring

Refactoring not only improves the mechanical quality of code; it also helps you learn from your code. When refactoring, you converge on better models. Right now, your code works, but it may feel strained, even a bit brittle. Refactoring reveals the implicit model, which deepens your understanding of the domain. In the test-driven development cycle of red-green-refactor, “refactor” is not optional, lest you accumulate technical debt and fail to learn from the coding experience.
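As a minimal, hypothetical sketch of the “refactor” step (the function and the discount rules are invented for illustration), consider collapsing duplicated branches once the tests are green:

# Before: works, but the domain rule (stacking discounts) is buried in branches.
def price(total, is_member, has_coupon):
    if is_member and has_coupon:
        return total * 0.80
    if is_member:
        return total * 0.90
    if has_coupon:
        return total * 0.90
    return total

# After: behavior is identical, but the implicit model is now explicit:
# each qualifying condition contributes a 10 percent discount.
def price(total, is_member, has_coupon):
    discount = 0.10 * is_member + 0.10 * has_coupon
    return total * (1 - discount)

The tests that were green before the change must stay green after it; what you gain is a clearer statement of the rule you have learned from the code.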

Stand-ups that don’t end

It’s easy for stand-ups, which are supposed to be brief team-sharing ceremonies, to turn into extended meetings. Limit the conversation to simple statements about things the entire team should know—what you did yesterday, what you’re doing today, and any blockers or help needed. In addition, a sentence or two on what you’ve learned can be helpful. That’s it. You may do this round-robin, by “walking the story wall,” or however your team prefers.

Stand-ups are not venues for technical discussions, making decisions, proposing designs, swapping war stories, reorganizing a sprint, or doing anything other than communicating what’s necessary for group coordination. Come prepared, so you can listen to what others have done and are doing and decide if it’s relevant to you, instead of thinking about what you’re going to say. Anything that comes up outside of the mutual status update should be deferred to a huddle or email. A stand-up should take no more than 15 to 30 seconds per team member.

No retros

Agile teams should self-organize, choosing practices and ceremonies that fit their collective behavior. This, too, should be examined periodically, with everyone contributing ideas for improving the process and taking the corresponding actions. This is often called a “retrospective,” and it is a neutral way to fix processes without wasting time blaming people.

For example, one team member may notice that feedback from production users is coming in too late and suggest that shorter sprints would help. The team agrees, tries a shorter sprint, and reconvenes at the next retro to see whether it helped. In this way, the team’s processes are continuously fine-tuned.

One-size-fits-all “agile” often results in teams skipping retrospectives or reducing them to rote ceremonies with no actionable learning. If you’ve noticed a problem with your team’s process but are afraid to bring it up in a retrospective, the retro has been reduced to rote. An unexamined process cannot be improved, and team members must feel safe and encouraged to examine it.

Manual testing (or no testing)

Testing is essential to producing operational software, and if you haven’t automated your testing, you’re missing out on significant efficiency and accuracy. Lightweight test-specification techniques like behavior-driven development (BDD) make excellent companions to agile stories. In waterfall terms, BDD descriptions define use cases, specify requirements, and are acceptance tests, in one very compact form.
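Here is a minimal sketch in that spirit; the free-shipping rule, the Cart class, and the test name are all invented for illustration. The point is that the Given/When/Then structure reads as use case, requirement, and acceptance test at once:

from dataclasses import dataclass, field

@dataclass
class Cart:
    items: list = field(default_factory=list)

    def add_item(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

    def shipping_cost(self):
        # Business rule under test: orders over $50 ship free.
        return 0.0 if self.total() > 50 else 4.99

def test_orders_over_fifty_dollars_ship_free():
    # Given a cart holding $60 of items
    cart = Cart()
    cart.add_item("book", 60.00)
    # When shipping is calculated at checkout
    cost = cart.shipping_cost()
    # Then the customer pays nothing for shipping
    assert cost == 0.0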

Automating these test cases—along with the rest of the “test pyramid” (technical unit tests, functional integration tests, interface contract tests, user acceptance tests)—provides an efficient and reliable option to verify that a code change works as intended without breaking anything. Automated tests are a safety net, providing the team with confidence and courage.

Skipping modeling and design completely

Favoring working software over documentation does not mean “skip all modeling and design activities and only write code.” What you are trying to avoid is spending endless hours undertaking the speculative task of creating detailed diagrams and specifications. After all, the only way to know if the model/design is right is to test it by writing code.

But if you have to solve a really hard problem, use any and all means necessary to figure it out. A low-fidelity model/design can be brain-tested on the story’s test cases, and different designs can be explored rapidly. You might even want to time-box this activity based on the story size to start: for example, five minutes for a one-point story to review the basic flow and touchpoints, 15 minutes for a two-point story to see whether there’s hidden complexity, and so on.

Your model/design should speak to the story’s benefits and give you a jump-start on the solution, which should be tested in code. Use your judgment as to how much design, at what fidelity, using what methods, and for how long, for each story; don’t feel like you “can’t” model or design because you’re “doing agile.”

Avoiding devops

If something is painful, do it more often; the pain creates the incentive to automate it away.

Treat machines as cattle, not pets, by automating infrastructure using tools like Ansible, Chef, Puppet, and so on. Make running tests and deploying software automatic or at least push-button triggered. Get the infrastructure out of the way; make it invisible by incorporating it as another part of the code base and/or using self-service platforms like AWS. Cycle time—the time required to process a code change into a production release—can be drastically reduced by automation, which allows for faster feedback cycles and, in turn, accelerates learning. Accelerated learning leads to higher-quality software delivered more frequently.
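As a minimal, hypothetical sketch of what “push-button” means (the test command, inventory file, and playbook name are assumptions, and a real pipeline would typically live in a CI system rather than a local script), the whole test-then-deploy cycle can be reduced to a single command:

import subprocess
import sys

def run(cmd):
    # Run one pipeline step and stop everything at the first failure.
    print("->", " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)

if __name__ == "__main__":
    run(["pytest"])  # the automated safety net comes first
    run(["ansible-playbook", "-i", "inventory", "deploy.yml"])  # infrastructure as code does the rest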

Adopting “best practices”

There is no such thing as a universal “best” practice. What works well for one team may not work well for another, even at the same company, even on the same project. Everything we build is a unique snowflake of design and circumstances; every team is a unique combination of personalities, skills, and environment. Read about practices that have been effective for others, and give them a trial run if they seem applicable, but don’t adopt them automatically because some authority says they’re “best.” Someone else’s “best” practices may be your team’s straitjacket.

Copyright © 2016 IDG Communications, Inc.