Tuesday, March 01, 2011

A Simple, Definitive Metric of Software Development Productivity

Developer productivity is made of two components: (1) the speed at which work is done when work is getting done, and (2) the amount of time that is lost when anything stands in the way of getting work done, or more specifically, of getting "value-added" work done.

The most common mistake in measuring software developer productivity is the failure to take both components into consideration.

Productivity is what's left over when you subtract the amount of time lost to non-value-added work from the total time spent working. If you only ever measure your pace when doing value-added work, the results will be a massive over-statement of developer productivity.

Complicating matters further, developers' natural pride in workmanship can make classifying work as "non-value-added" a difficult task. Without an environment that is entirely supportive of transparency, learning, and improving, truthful measurements of productivity are just not possible. Far too many personal preservation mechanisms will get in the way.

The hard, cold truth about developer productivity is that a truthful measure of productivity on a typical software development team is in the 10% to 30% range. Getting a clear picture of the reality of software productivity on your team remains a necessary effort - even in light of the difficulties of undertaking it.

Non-Value-Added Work

Pride in workmanship is what keeps most developers engaged in the work, but it's a double-edged sword. Peter Drucker's famous quote applies: "There is nothing so useless as doing efficiently that which should not be done at all".

We have to come to terms with non-value-added work. We have to consent to it. Non-value-added work simply is.

If we're willing to consent to the reality of non-value-added work, then we'll be open to seeing it. Seeing it is the first step toward measuring productivity meaningfully, which is the first step to developing an action plan for restoring productivity.

If you can get comfortable with the reality of non-value-added work, then you can get comfortable with using its proper name: waste.

Here are a few things that we do that are waste - regardless of any pride we might have in the hard work and perseverance involved:

  • Teasing apart code to understand how it works, especially when this effort is repeated whenever a team member confronts that code
  • Performing deep and complex setups of data in an external system (or systems) in order to prove that changes to a local system are correct
  • Hiding distributed systems behind interfaces that pretend that remote objects are local objects
  • Failing to create transparency in objects' interfaces as to their underlying latency, or network and storage protocols
  • Creating development infrastructure, tools, or automation that is only readily-understood by people who have had extensive experience with them
  • Failing to properly codify or tag tests, so that it's obvious which tests to run - and how to run them - when a change is made (see the sketch after this list)
  • Designing tests whose failures aren't definitive proof of either de facto functional errors or mistakes in setting up or operating the test environment
  • Failing to recognize excess tests as coupling problems that obstruct needed design changes
  • Designing tests with excess specificity, granularity, or penetration into adjacent modules
  • Underestimating the destructiveness of ambiguity in any code or any other development artifact in general
  • Amongst many other things that can appear as hard-won effort
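
On the test-tagging point above: here's a minimal sketch of what codified tests can look like, assuming pytest and its marker feature (the module, markers, and subject matter are hypothetical, and markers would be registered in pytest.ini in a real setup). The point is that "which tests do I run for this change, and how?" has an obvious, queryable answer:

    import pytest

    def invoice_total(line_items):            # stand-in production code for the example
        return sum(line_items)

    @pytest.mark.billing                      # select with: pytest -m billing
    @pytest.mark.fast                         # cheap enough to run on every save
    def test_invoice_total_sums_line_items():
        assert invoice_total([10, 20, 5]) == 35

    @pytest.mark.billing
    @pytest.mark.slow                         # exclude from quick runs: pytest -m "not slow"
    def test_invoice_total_of_empty_invoice_is_zero():
        assert invoice_total([]) == 0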

In short, any work you do that isn't directly related to adding desired capabilities to the system is waste. Any work that you do to resolve obstructions to getting your work done is waste - even if that work improves the overall quality of the system. The reason is simple: obstructions are mistakes that you haven't yet learned not to make.

Some design challenges are due to unforeseen changes in direction in technology or business. As long as you can make those changes without also having to deal with negligent design choices and expediencies of the past, then you're still adding value. The moment you have to deal with obstructions to the work, you're incurring waste - no matter how attached you might be to the designs, mechanisms, and even human organizations and processes that underlie the obstructions.

There may always be some waste in the work, but the presence of it doesn't excuse an aversion to seeing it for what it is, accepting that it's there, consenting to its presence, and developing the means to aggressively and progressively eliminate it.

Measuring Productivity

Here's how to get a clear, honest look at software development productivity:

The moment you're forced to divert to non-value-added work, write down the time. The moment you're back on track doing value-added work, write down that time. At the end of the day, add up all the non-value-added time and subtract it from your total work time.

Your work time shouldn't include things that aren't time spent on developer activities unless they're directly related to dealing with an obstruction. For example, a lunch break isn't non-value-added time. However, a walk to the break room to blow off the steam building up from dealing with frustrating design mistakes is counted as non-value-added time. A standup meeting isn't non-value-added time, but a meeting to address an obstruction or to resolve an ambiguity is non-value-added time.
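
As a minimal sketch of the arithmetic - assuming nothing more than the written log of diversion intervals described above - the day's bookkeeping amounts to this (the times and reasons are hypothetical):

    from datetime import datetime

    # Each entry is (start, end) of a diversion into non-value-added work,
    # written down the moment it happens.
    diversions = [
        ("09:40", "10:25"),  # teasing apart an ambiguous module before changing it
        ("13:10", "14:05"),  # meeting to resolve an ambiguity in a story
        ("15:30", "16:10"),  # repairing a brittle test environment
    ]

    def minutes(start, end):
        fmt = "%H:%M"
        return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).seconds / 60

    work_minutes = 8 * 60                                     # total developer work time
    lost_minutes = sum(minutes(s, e) for s, e in diversions)  # 140 minutes in this example
    value_added = work_minutes - lost_minutes

    print(f"Productivity: {value_added / work_minutes:.0%}")  # "Productivity: 71%"

Note how quickly a few modest diversions pull a day into the productivity range quoted earlier.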

You'll inevitably come across grey areas where some time spent isn't easily classified as non-value-added or value-added. This is especially true of explicit improvement efforts. Discussing these grey areas with your team and coming to some conclusion is a valuable part of the larger improvement journey that you're on.

Make all the factors feeding your considerations explicit and come to the best decision you can for the circumstances you face. And most of all, understand that different circumstances might lead to different conclusions. Make these differences explicit as well. Open dialog and flexibility is an absolute must, but so is rigor. Keep an open mind (an open heart is also helpful), but be vigilant against allowing a discussion to become a zoo.

If you do a daily standup or status meeting, or have an activity stream app like Campfire, an IRC channel, or even email, report on your productivity every day. Even if you're the only person on your team undertaking the exercise (note: this isn't really sustainable), you can report your productivity.

The goal is to be transparent about the truth of your software development productivity. Once the truth is laid out for all to see, it has a chance of affecting the moment-by-moment decisions that you and your teammates make that continue to confound progress.

Next Steps

Wherever there's waste, there's an accumulation of excess complexity that encourages even more waste. The goal is to fix the design problems that cause it, and to restore productivity to sensible levels. And ultimately, to fix the processes that encourage the waste that exacerbates the design problems that take up long-term residence.

We have to remain utterly unattached to the efforts we invest in solving problems that should not have existed in the first place. If we're not capable of adopting such a disposition, we're incapable of harvesting enough learning about counter-productivity to restore productivity.

If you can measure your productivity truthfully, then you can work with your team to collectively improve the situation, and to continue to improve it.

The start of an improvement effort begins with recognition of a problem and a pathology. Being able to make measurements of the problem is essential to recognizing the problem and to monitoring progress. By developing the ability to recognize waste, you've also given yourself some basic tools to monitor your progress. Measurement might be a bit depressing at first, but it also becomes encouraging as you make progress.

The anatomy of an improvement effort is the subject for another day, but suffice it to say that what you're trying to create is a Learning Culture. Doing this isn't a quick fix, but undertaking it will yield some benefit immediately, and continue to yield a constant stream of value along the way.



Tuesday, January 18, 2011

Kaizen: "Relentless", Rather than "Continuous"

"Kaizen" is usually translated from Japanese to English as "Continuous Improvement". It loses its power in the translation.

"Continuous" is perfectly reasonable and correct, but it doesn't really convey Kaizen as the underpinnings of a way doing work. Kaizen is certainly continuous, but it's continuous as a side effect. More importantly than being continuous, Kaizen is relentless.

"Continuous Improvement" can become a flaccid thing - yet another empty corporate mission. At its worst, it reinforces workers' cynicism about cultural initiatives that don't really make work or outcomes any better. "Continuous Improvement" is passive. It's even apologetic and deferential. "Relentless Improvement" is active. It's even fair to say that Kaizen is aggressive.

Often, the real problems robbing an organization of its productivity are the smaller, pervasive problems that are finely-woven into the fabric of the way that work is done and conceptualized. When you solve problems that are inherent in the DNA of the work (and consequently, the organization), you'll likely free up more resources than you might have imagined. A Kaizen culture understands that microscopic, pervasive rot causes a kind of osteoporosis in the bones of the work that can't be seen without actively hunting it with fine optics, or until the bones start to break.

A Kaizen culture doesn't wait for problems to show up. It looks for problems. It's imbued with an awareness that negligible problems often aren't negligible at all - especially when they're pathological and pervasive. It knows that pathological and pervasive problems are often charged with the greatest potential for destruction. Kaizen is literally "looking for trouble", knowing that if it doesn't, trouble will find it. It challenges its belief that these negligible problems are not worth effort or attention. It grounds its exploration in meaningful but practicable, and accessible measurements, rather than in methodology mysticism. In progressively learning what measurements are interesting and practical, it learns to ask better questions.

When a Kaizen culture doesn't see problems, it wonders first whether its vision needs improvement, and sets out to sharpen its eyes rather than rest on its laurels and dull its senses. The adage, "If it ain't broke, don't fix it", is a fallacy. Waiting for problems to come to you means that you're dancing to their tune. The downward, entropic cycle of fire-fighting and decay isn't transformed into a greater capacity to keep moving forward without constantly losing ground.

In the end, "Continuous Improvement" is useful because it's a standard term. But when it's time to actually do Kaizen, "Relentless Improvement" sets more realistic expectations about how it achieves pervasive dominance over pervasive problems.



Thursday, October 28, 2010

Testing User Stories - Problem Development Before Solution Development

User stories can be wrong. Business Analysts and Product Owners can be wrong. They can be the best representatives of user interests, but they're often still proxy users. The most expensive mistakes in software product development are made when we get the requirements wrong. A quick way to increase the risk of proceeding to solution development on the basis of incorrect requirements is to not prove that they're right. Or, in other words, to not test them.

The only way to definitively validate that the problem statements for a software product are right is to put the working software in users' hands.

User interviews, focus groups, simulations, and mockups are steps along the way, but their purpose is to reduce the time and cost of getting users in front of an experience that is immersive enough to validate assumptions. Allowing users to experience our proposed solutions to their problems clarifies whether the problem is understood.

Generally, we're taking too long to get stories validated - even in colloquial Agile development. Sending a user story through an entire development process in order to validate it is too much. In order to validate user stories, we need to collapse the solution development timeline so that feedback can be had without the expense of the solution development. We commit to the solution development cost once we've validated the problem.

There are two distinct (but inextricably-intertwined) steps that make up a development process that recognizes the value of reducing the time-to-feedback for testing user stories. The later step is the plain-old Solution Development that we use regardless of whether we recognize a preliminary first step. The earlier step is Problem Development.

Problem Development is software development. There's code. There's deployment. There's testing. There's design and analysis. There's process. There's just much more of it in Solution Development (even Agile development) than in Problem Development. And there are different kinds and variations of these activities in each. Problem Development is software development that works hand-in-hand with business development.

Problem Development and Solution Development have different goals. Problem Development is concerned with feeding Solution Development with input that has less risk (at a time when risk is the riskiest).

Assumptions are liabilities. User stories are assumptions until they're validated. To validate user stories, deliver software experiences that propose to solve the problems that user stories describe, and measure the feedback. There's delivery in Problem Development, but not final delivery.

Final delivery is a concern of Solution Development. Final delivery requires long-term sustainability. Solution Development is concerned with long-term sustainability. When we work toward achieving long-term sustainability before we need it, we end up waiting until the longer Solution Development process plays out before getting feedback on user story validity. Which is too long.

Problem Development is concerned with short-term sustainability. The reason that it has to be sustainable at all is that the output of Problem Development is the input to Solution Development, and these two steps are ultimately one process that flows together.

It's far too easy to see Problem Development as "throw-away coding", but it's absolutely not. Problem Development is a way to manage some of the biggest risks in software development - risks that often go un-addressed and are frequently realized as wasted effort and resources, and lower productivity. This kind of risk isn't effectively managed with practices that are satisfied by "throw-away" code.

If throw-away code is the output of Problem Development, then Solution Development has to undergo a (mostly) cold start, benefiting from far less of the momentum created by Problem Development. It might be necessary to make the kind of tradeoff that creates these kinds of batch transfer problems, but making a habit of it is a path to a cumulative productivity deficit.

The kinds of practices that characterize Problem Development might look like software development heresy (or at least "Agile" heresy) to many disciplined solution developers. The realities of Problem Development mean that traditional aspects of Solution Development are short-circuited - after all, Problem Development is looking for a shorter circuit.

The Problem Development practices might even seem irresponsible, but recognize whether you're looking through a Solution Development lens. For just as Problem Development practices can seem irresponsible from a Solution Development perspective, applying Solution Development practices to the point in the timeline best suited to Problem Development can be equally as irresponsible.

Problem Development is about testing or validating user stories' problem statements. Solution Development is about building software based on validated problem statements.

The whole of development process is concerned with finding the way with the least wasted effort in getting from concept to cash.

Feeding un-validated user stories into the Solution Development transformer is often the most efficient way to increase wasted effort.




Monday, October 25, 2010

The Myth of Iteration 0

"Iteration 0" is the initial phase on an Agile project when collaboration tools such as a source code repository server, a continuous integration server, and distributed build and test agents, as well as document tools like wiki servers, and other tools are set up. It's often also a time to configure individual workstations and team members' tools in preparation for everyone starting work.

"Iteration 0" is a bit of a popular misconception that is a side-effect of software projects initiated by technologists - which sounds a bit ridiculous at first. After all, who else but technologists to do software projects? Of course technologists are the right people to do software projects, but are all technologists the right people to execute project startup? And for that matter, what is "project startup" as differentiated from the rest of the work? Is it something that should be approached differently than the rest of the work?

The technology-focused project initiation isn't necessarily the wrong thing to do, but it's often only the right thing to do from within the technologist's perspective, and that perspective can be limited. A tool-biased perspective is a challenging thing to overcome by someone whose moment-by-moment work is necessarily suffused with a constant focus on tools, or whose initial career path passes through a lengthy time of tool focus.

If we literally codify the "first iteration" as "Iteration 1", we technologists can use a bit of geekery to make allowances for a pre-iteration focused on technology. As programmers, we're accustomed to counting a list of things starting from zero. If the list of iterations starts at the "zeroth" position, but work is only scheduled to start on the first position, then we get an extra "unclaimed" iteration to work with: "Iteration 0".

Any way you cut it, the first iteration is the first iteration, regardless of the numerical designation assigned to it - and regardless of creative license and word games. If you choose to have a zero-based project schedule, then the schedule should naturally state, "Deliver something user-valued in Iteration 0". Which, consequently, is not an invitation to technologists to insert an "Iteration -1" at the head of the schedule.

Over the past few years the caution to not exit any iteration having only shipped technology has been whittled down to a more technologist-friendly caution against exiting an iteration having only shipped frameworks. It's a narrower interpretation of a deeply-powerful principle. It sets the stage for the institution of a technology-focused "Iteration 0".

Nonetheless, it's sometimes necessary to devote a period of work entirely to technology concerns - especially if technology concerns become obstructions to user-valued concerns. And it's all too easy for technology to become an obstruction to delivery in a technology product delivery effort. But this isn't the problem with "Iteration 0".

"Iteration 0" is a project startup issue that is technology-focused startup rather than a product-focused project startup. There's an implicit assumption in "Iteration 0" that - while perfectly reasonable from a technologist's perspective - isn't necessarily an optimal path for project startup.

If a project startup is executed with "all hands on deck", then coordination of the people involved becomes an issue that must be dealt with right off the bat, including the setup of tools that support the coordination.

So, is it necessary to start a project with a number of people that necessitates coordination and collaboration tools? If project startup is executed with a small workcell of skilled pathfinders, do they need the tools that necessitate the "Iteration 0" work? Can a project startup be executed without the need for the collaboration tool setup?

And more importantly: Is there a benefit to executing project startup without the full complement of technologists that we'll need for full-steam-ahead solution development?

The answer to this is usually a resounding "Yes", there is a benefit. A good bit of the reasoning and guidance is found more in the Lean Startup and Lean Product Development bodies of knowledge than the colloquial Agile body of knowledge.

There's a time to activate a full complement of development staff, but the optimal time is rarely the start of development work.



Wednesday, October 20, 2010

The Least Way: Pivoting Away from Agile Excess

One of the most compelling justifications for Agile development methods and techniques is made through a comparison of the cost curves of both Agile and "traditional", or "non-Agile" methods.

When Agile was new and its case was pleaded more frequently, the following graphic was quite common:

NOTE: The above graphic is intended to represent the relative shape of the cost curves, not to convey exact scales for your organization. Your own organization's cost curve mileage may vary.

The Cost of Traditional Development

The essential proposition is that the cost of a unit of software development effort gets exponentially higher as a project progresses. The same curve represents the cost of change of existing features equally as well as the cost of adding new features.

The underlying principle is pretty straight-forward: the more stuff you have, the more stuff will go wrong in ways you can't predict and don't detect. The more stuff going wrong, the more rework in-progress. Rework is the crumbly raw material that makes all foundations shaky - be they the foundations of software, or the foundations of the plan.

The number of modules in a software project increases as a project progresses. The number of inter-relationships between modules grows as the number of modules grows. The number of unpredictable repercussions of any change to existing modules or addition of new modules, as well as the extent that repercussions dissipate throughout a system of modules, increases as the number of inter-relationships increases.
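
A back-of-the-envelope sketch makes the shape of that growth concrete. Assuming, as a worst case, that any module can come to depend on any other, the number of potential inter-relationships grows quadratically with the number of modules:

    def potential_interrelationships(modules):
        # n choose 2: every module paired with every other module
        return modules * (modules - 1) // 2

    for n in (5, 10, 50, 100, 200):
        print(f"{n:>4} modules -> {potential_interrelationships(n):>6} potential inter-relationships")

    # 5 -> 10, 10 -> 45, 50 -> 1225, 100 -> 4950, 200 -> 19900

Real systems don't realize the worst case, but the shape of the curve - not its exact values - is the point.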

When there are fewer modules, it's much more straight-forward to add new modules. However, at a critical turning point, this ease rapidly erodes and it becomes exponentially more difficult to get work done without undoing work that was done previously.

The more modules in a system, the more likely that any new additions or changes will cause adjacent modules to no longer work as they once had. The longer they stay in this incorrect state, the more that other adjacent modules will be built incorrectly, or built in a way that reduces development productivity.

The more modules that are in a system, the less likely it is that its programmers can assess any detrimental effects brought about by changes. The system becomes an increasingly unsolvable puzzle that must be solved every time a change is made to add new code or to change existing code. Early on in the life of a system, the puzzle is dead-simple to solve. As the number of modules grows, the possibility of solving that puzzle with every new change closes in on improbable. This is the source of the extra effort that accretes around software development as time goes on.

How Agile Flattens the Traditional Costs

The Agile cost curve is fundamentally different. It was such a radical re-imagining of managing the factors that influence the cost curve that it was frequently dismissed out-of-hand, despite affirmations and reports from people who had given it an honest empirical and experiential trial.

Agile development's power rests in the drive to recognize the conditions that cause the cost of effort to increase over time. It uses techniques that keep effort leveled over the length of a software project, rather than allowing it to spiral toward the inconceivable, unpredictable, and unmanageable.

The cost of change is controllable only when you can predict whether any change will have detrimental consequences. You need to be able to apply countermeasures immediately when any of these detriments are created. The longer the wait to detect a design flaw, the faster the cost grows. This ultimately leads to the vicious circle of rework, where problems are found after the work is delivered to the next phase in the development process, while developers continue to build on that work, believing it to be a sound foundation.

In other words, Agile's radical re-shaping of the traditional cost curve comes from the ability for workers to easily and continuously check that their work isn't a cost magnet while that work is still open on their workbenches. They embrace a principled understanding of the kinds of software designs that are directly and easily tested, and a diligence paid to recognizing, avoiding, and remediating designs that aren't.

And of utmost and paramount concern - concern that can make or break an Agile effort: that organizations realize that it's the rigidity of organizations and organizational systems and behaviors that obstructs software development from being done in a way that allows software workers and teams to rehabilitate the traditional cost curve. The impact to organizations - especially entitlements to silos and rigidity - can be profound. Organizations adopting Agile have a responsibility to shift from defensive, garrisoned blame cultures to proactive, optimized learning cultures. If the organizational and management system locks the steering wheel into one, largely immovable position, there should be little surprise when projects end up in the productivity ditch.

The Agile cost curve also suggests that there's a bit of a startup tax to pay up front, but that once the initial bump is navigated, productivity stabilizes and becomes much more predictable.

This is in contrast to traditional methods where the inherent absence of complications early-on lulls developers and managers into a false sense of success. They fail to practice productive countermeasures from the very start by failing to recognize the technical and organizational naivete inherent in the work. The success doesn't last. It's outlived by the realities of the exponentially-increasing number of complications that are natural side-effects of the progress of a software development effort.

The Agile Project Startup Tax

The graphic depicting the Agile cost curve overlaid on the traditional cost curve tells this whole story at a glance. This is what makes it so compelling. And yet, while the Agile cost curve is incredibly compelling, the most interesting point of the graphic isn't so much the benefits that Agile methods avail, but the point on the curve where the lines intersect:


Before that point, Agile development has a higher cost. Before that point, traditional methods should have a lower cost. The time spent using by-the-book, colloquial Agile development techniques before the point where traditional techniques become too costly is likely a volunteer mission for exacerbated costs.


The Best (or Least) of Both

A methodology that uses aspects of traditional development up to the point the costs of traditional methods are on the verge of spiraling out of control, and then makes a methodological "pivot" to techniques that can be considered "Agile" can reap the benefits of each approach's strengths.

Such a "blended" approach takes advantage of traditional methods that, while unsustainable, can deliver more bang for the buck up-front. The resulting path has the least amount of cost if the pivot is well-executed.


The resulting methodology is both traditional and Agile based on where work is in the project timeline and in the smaller development cycles. Neither its traditional aspects nor its Agile aspects are left unchanged by the imperatives of blending what is usually seen as incompatible methods into a continuum. But the most effective and salient principles of development productivity remain in-play.

My own practice of that continuum is informed by Lean Development, and more recently by the Lean Startup movement and its understanding of up-front phases that are often not mentioned by predominating methods like Scrum, and traditional methods that came before.

The Least Way

I was bemoaning failures with Agile and Agile community to Jason Cohen a couple of years ago. He offered the "The Least Way" as a name for the approach that I was embracing to deal with a number of the issues with Agile stasis and orthodoxy that I was trying to work around. I'll ultimately add more meat to those bones on LeastWay.org, and talk about how it underlies some tooling that I'm working on, and how it has affected work on projects. Maybe you'll find it as useful as I have if packaged up in a consumable form.

One of the key differences between an approach that merely cherry-picks techniques from different methodologies and one that executes a full methodological pivot is the explicit management of that switch. Planning for it, preparing for it, recognizing the right time to make the pivot, and making a clean entrance and exit from it is a bonafide part of the practice.

Missing the pivot is like missing the last exit before entering the Lake Pontchartrain Causeway. By the time you've realized that you've zoomed past your best chance to take the turnoff, you've likely been captured by the gravity well that will start you toward the acceleration of costs. The cost curve begins its rapid incline immediately after the pivot. The longer it takes to realize that you've missed the turnoff, the faster you'll be traveling, and the more you'll have to spend to put on the brakes and get back on track toward managing the cost-per-effort.


Shaping the whole methodology around the pivot changes some of the conditions that drive staffing and work management at different phases of a whole software effort. It's not just "changing gears", but more like changing vehicles while they are still moving: you'd want to have vehicles designed for this specific purpose. Modifying the existing vehicles is a good place to start.

With the pivot being a full-fledged part of the methodology, with practices in place to recognize its approach and to execute it cleanly, the cost curve should be smoother, representing the best of what traditional methods have to offer - up to the point that they become unsustainable, and the best of what Agile has to offer - but not before it is sensible.


It's easy to see the methodologies as "one vs. the other" issues - "traditional vs. agile", for example. This perspective can even be helpful when more radical approaches are needed to shake up a rigid, entrenched, silo culture. However, plotting methodologies along a continuum helps us to see those parts of the spectrum where one blends into the other, and to learn to recognize and take advantage of the distinct transitional phases between them that we might otherwise miss or ignore.

Our tendency to look at some colors as "primary" and those in-between as "secondary" is largely an arbitrary side-effect of perceptual bias. The spectrum is one, fluid continuum of material that is neither primary nor secondary. When we choose to see transitional states as secondary, we naturally tend to tune them out. The powerful continuum of abilities that flow smoothly from each other is turned into a checkerboard of disconnected, shallow monotones.

Rapid feedback is the lifeblood of software development that doesn't spiral out of control. Agile protects vital feedback mechanisms from becoming calcified after a project's initial phases. Traditional methods provide rapid - but unsustainable - access to product and customer feedback early on. Leveraging both, and planning for and executing the transition as a fully-fledged part of the methodology rather than a secondary interest, is a good part of navigating a "least way" around development obstructions (and costs).



Monday, October 04, 2010

User Stories Belong to Everyone

When Agile development is done well, user stories are always visible to everyone involved in turning ideas into working products. And they're visible all the time. When individuals or individual specializations involved presume a sense of "ownership" over user stories, doing Agile development well becomes difficult if not entirely impossible, save for a veneer of "Agile Theater".

As user stories flow through the whole development process from conception to delivery, they pass through the hands of a number of job functions and workers. Any time that people working in the vicinity of a given work step are given to believe that they "own" the user story in their current purview, they are likely to displace the user story from the most public and most visible medium that is common to the whole team. User stories are often then removed to the bowels of tools that are practical and accessible only to workers at the current work step and its immediate vicinity.

User stories can, do, and should change as they march forward through elaboration from concept to working product. Product development is a process of constantly unearthing a clearer understanding of the work we're doing while we're doing it. No amount of up-front analysis and design can stop this from happening. It's the nature of the kind of work that software development is, i.e., product development.

If software development were manufacturing work, we'd know what would be happening in each step of the process before it happened. But then, we'd have to be creating exactly the same product again and again, which would be nothing short of an absurdity for software development.

The further we get into the process of turning an idea into concrete, material product, the greater the clarity and amount of thinking that gets invested into what we're doing. The more thinking invested, the more we come to understand the subject matter we're working with, the circumstances of the product's intended audience, and the fit of the decisions we've made earlier in the process to the reality waiting at the end of the process, when real workers will try to do real work with the delivered product.

Analysis and design are present in absolutely every kind of work done in software development, and present in every stage and every work step. And while we may indeed have work steps early in software development processes that appear to be characterized by work exclusively in analysis and design, this is ultimately a consequence of not having concrete product yet in-hand at those early stages.

These early stages are less "Analysis and Design" stages than they are "Absence of Material Product" stages. We resort to seeing early stages as "Analysis and Design" due more to a habit of human perception that draws attention to things that are present rather than to things that are absent. But later stages are no less "Analysis and Design" stages, just as a "Testing" stage is no less a "Development" stage.

It's the same cognitive wrinkle that makes negative space visualizations like Rubin's Vase or the FedEx logo so compelling when we finally recognize the secondary images conveyed by the space that our cognitive faculties filter out. It's also what keeps us from being in a constant state of anxiety due to cognitive overload, and what allows us to be mindful of predators (not to mention unmindful of camouflage).

The kinds of work we do in software product development are rarely unique to any given stage. The kinds of work we do accumulate continuously, so that later stages have responsibilities for nearly all of the kinds of work done in software development throughout the entire process. That's not to say that front-loading software development work with conceptual work is a bad idea. It's obviously an essential part of doing product development work successfully. But it's easy to mistake early-stage work as the domain of "analysis and design" due to the absence of material product this early in the game. The absence of material product at early stages should be a conspicuous absence, but the conspicuousness is typically lost to cognitive filters.

The disastrous side effect of failing to recognize this negative space perspective is the mistaken conclusion that analysis and design is the sole domain of early-stage work. And it starts the people involved in software work on the slippery slope toward believing that work on requirements analysis only happens early in the process, and that requirements - once defined - will not have to change. As if we could really know anything so conclusive so early in the process of uncovering clarity and understanding.

In the end, early stage work is naturally limited in that it can usually only be conceptual work. Conceptual work is realistically the only work that can be done so early. While the accumulation of responsibilities at work steps later in the process continues, analysis and design never wane. New clarity and new understanding never subsides. In fact, clarity and understanding only get sharper. This continues even once a software product has been made operational.

Because understanding continues to get sharper the further along we are, any concrete record of our understanding must be kept up-to-date to make sure that our collective clarity remains collective. Otherwise, workers at different stages of the whole process will have different understandings of what the goal is - as currently defined by all recent learning.

Making sure that everyone is on the same page is of paramount importance in any development effort. Locking down the definition of what's needed isn't the way to do this - although it's often the mistaken path that software organizations take when they fail to recognize the cumulative nature of understanding and clarity as product development unfolds.

To keep everyone on the same page, user stories must be accessible to everyone involved in a development effort, from early-stage conceptual work, all the way through delivery when placeholders like user stories are replaced with concrete, material product. They must be accessible with the least amount of effort by all people involved and at all stages and work steps.

At any point in the whole process, anyone who is empowered to add user stories, change existing ones, remove user stories, or otherwise make use of them should not have to go through a worker with specialized tool skills due to user stories having been removed to a tool that is not generally-accessible with an absolute minimum of effort and a maximum of immediacy.

Stated as a general principle: No single work group at any work step should remove user stories to a medium that, while more expedient to their work, causes user stories to become less accessible by others with different skills and responsibilities.

The inevitable result of workers at a work step feeling that user stories can be removed to a more locally-optimized medium is that separate copies of a user story will be used by workers at different work steps, and these copies will inevitably diverge. When multiple copies of the same user story diverge, workers at different work steps have different understandings of expectations and goals, and work at cross purposes.

In the worst of these situations, workers at different work steps aren't even aware of the others' divergent versions of the truth, and labor under the resulting conflict without any idea as to why expectations are consistently not being met. This drives up rework, reduces the credibility of the team, increases stress and conflict on the team, and generally leads back to the chaos that Agile development was supposed to address.

And yet, there are indeed advantages to specialized tools for specialized work steps. Great care should be taken though when considering specialized tools for artifacts that aren't specialized. It's often a mistake and an oversight when specialized tools are put in place at a work step that makes collective artifacts less accessible for the duration of the work step, or from that work step onward. In the case of user stories, it's hard to find a single artifact that is more collective, public property and thus less amenable to workstep-specialized tools.

At every stage of development, workers can back-slide into misconceptions of authority over artifacts that are responsible for unifying the whole effort and for stitching diverse workflows and workers together. The further we get into a software process, the more likely it is that workers at a later stage will remove user stories from more generally-accessible media and sequester them into the bowels of tools that are accessible only to people with specialized skills.

For example, consider the Cucumber framework and tools developed in the Ruby community, and the family of derivatives and clones that have since been created in a host of programming languages.

This style of tool encourages the displacement of user stories from more generally-accessible media into programmer-specific media. It doesn't do so accidentally. It's done as a recommended practice - reinforced by articles, books, and software conference presentations and workshops.

While Cucumber is built on some fairly compelling technology, and is itself an impressive work, it's not exactly methodologically-sound in that it fails to recognize its own contradiction of Agile development's core value of individuals and interactions over processes and tools.

As an aside, I brought up this issue with Aslak Hellesøy, the Principal of the Cucumber project, during a trip to Sweden last year. At the time, he informed me that work was being done on a tool to extract user stories from the specialized media to make them generally-accessible. But there are three non-trivial oversights inherent with this solution:

Firstly, it disregards the a priori value of whole-team organizational methodologies like Agile by allowing collective and communal artifacts that are fundamental to higher-order productivity to become appropriated by one specialization.

Secondly, it seeks to address a problem that exists because of overly-elaborate tooling by creating even more elaborate tooling. The path back to simplicity and clarity - i.e., "elegance" - likely also suggests removing the first wave of tooling rather than adding a second wave of compensating tooling. There are presently solutions that are arguably more holistically-effective, as well as arguably more simple and clear. They're based on less elaborate programming technology, though, and this is likely why the programming specialization remains distracted from them.

Thirdly, it doesn't address the need to allow user stories to be added, removed, combined, divided, or changed by anyone with the authority to do so regardless of whether he or she is comfortable with tools, approaches, and perspectives that are more natural to the programmer specialization.

A Cucumber-style tool can be justified arguably as a general tool because it serves the whole of the process with testing and specification, but merely serving the whole of the process doesn't atone for the reduction in accessibility and immediacy of collective artifacts.

To Aslak's credit, he demonstrated a way of using Cucumber to me where user stories are not removed to its media. Nonetheless, removing user stories to Cucumber media continues to be a practice perpetuated in the user community, and it remains a non-trivial methodological problem - one likely rooted in the narrowing of focus of specialized workers due to impressive tooling.

NOTE: There are more methodological oversights inherent in these tools, but that's a subject for a subsequent article.

Here are some principles to keep in the forefront of your Agile practice amidst the distraction of so many Agile-targeted tools:

1. Individuals and interactions over processes and tools has profound meaning. It doesn't suggest that you shouldn't use tools or processes, but that these often become distractions from more powerful means of ensuring success. Consider tooling choices very carefully - especially specialized tooling. Understand how it might inadvertently narrow the perspectives and values of its users.

2. User stories are a whole team artifact. Any tool that moves user stories to a medium that the whole team doesn't have at their fingertips should only be used when there are no viable alternatives. Forcing some team members to go through other team members to work with collective artifacts is categorically not what is intended by individuals and interactions over processes and tools - even if it does cause people to work together to gain access to resources. These resources are supposed to be generally and immediately accessible to begin with.

3. If you choose to remove collective artifacts to workstep-specific tools, put protocols in place to ensure discrete hand-offs between tools, so that multiple versions of the truth are not available to different workers who work in different work steps. When a user story moves from one system to another, destroy the previous version and leave behind a placeholder - a kind of forwarding address for a user story - that tells the interested party how to find the user story's new home (see the sketch after this list).

4. Don't fool yourself into believing that you'll be able to keep multiple versions of user stories in-sync through the life of a project. This kind of work is too costly to keep up and will be quickly abandoned by the project team, leading straight back to the multiple versions of the truth problems. Adding customized electronic synchronization automation to this problem is also not a good answer. This just adds more elaborateness to a situation that is possibly already too inelegant. It's a problem that likely shouldn't exist in the first place.
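
A minimal sketch of the hand-off in principle 3, with invented names and plain dictionaries standing in for real tools: one master copy at a time, and a forwarding address left behind when the story moves.

    # The card wall is the generally-accessible medium; the tracker is the
    # workstep-specific tool the story is moving into. Both are hypothetical.
    card_wall = {"US-42": "As a clerk, I can void an invoice before it posts."}
    tracker = {}

    def hand_off(story_id, source, destination, forwarding_address):
        destination[story_id] = source[story_id]  # move the single master copy
        source[story_id] = forwarding_address     # destroy it at the source, leave a pointer

    hand_off("US-42", card_wall, tracker, "Moved to the team tracker, board 3")
    print(card_wall["US-42"])  # "Moved to the team tracker, board 3"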

The inescapable truth of user story management is that the least-elaborate technology is often the most productive solution - even if the technology is no more elaborate than a ten-foot by five-foot cork board with index cards pinned to it. In the very least, this keeps user stories immediately visible and accessible, and makes changing them no more difficult than putting pen to paper.

Of course, the tried and true cork board and index card solution isn't a good fit for all teams. That said, as you seek more elaborate solutions, take tiny steps forward in settling on your solution. Leaping at the most elaborate solution for user story management usually ends up planting a whole team in a tool that serves the specialized perspective of the people doing the tool selection - often to the disservice of everyone else involved in the whole effort.

It's vital that we're not lured into local optima interpretations like, "individuals and interactions over tools and processes except when the tools are intellectually stimulating". All tools are intellectually stimulating. In many cases, tools can be stimulating to distraction. The problem is what we're being distracted from: the essential root cause of success in software product development. Namely, the magnification of understanding and clarity that comes from interactions between individuals over the course of time. The deeper problem is that we believe ourselves to be impervious to this kind of distraction because we have such positive feelings about our tool choices.

The immediacy and accessibility of user stories is a foundational cornerstone that your Agile house is built on. Whether it stands strong and continues to be built upon, or whether it crumbles under its own weight, can be decided entirely by this salient factor.

Do everything you can to make sure that user stories continue to belong to everyone, in every minute of every day. And be vigilant against any backsliding into specialized expediencies that remove user stories to any medium that is less immediate or less accessible than the simplest and clearest of tools at our disposal. An elegant solution is often not an elaborate one, but greater whole-team productivity is always more stimulating than elaborate, specialized tools.



Tuesday, September 28, 2010

User Stories are Temporary

It's obvious, but warrants mention: What we do in the future is likely to be different from what we're doing today. The implications for user stories should be obvious: User stories are temporary. Saving them for posterity doesn't serve the primary purpose of user stories, and doing anything that makes them less temporary can turn user stories from benefit to detriment.

User stories are a bit of semi-refined raw material that systems are created from. They're a way to represent users' needs. Most importantly, they communicate an understanding of users' context to the people who are building the software. User stories help to make it more difficult for software development teams to succumb to the ever-present threat of losing track of the user's bigger picture.

Aside from the raw human ideas that user stories represent, they are the most flexible artifact in software development. They can be created at any time. They can be changed at will. They can be combined. They can be subdivided. They're not intended to become fixed, and anything that serves to make them fixed is a disservice to the value they offer. User stories aren't fixed at a point in time, just as needs aren't fixed. They change because conditions change. They change because our own understandings change and improve as we gain the clarity that comes from working with user stories, and from building the products that satisfy needs.

When we realize new clarity, our responsibility is to reflect that new clarity in the form of new stories, or new subdivisions of existing user stories, or through the combination of existing user stories, and even from the removal of user stories from the product definition entirely.

The user stories that describe our needs tomorrow are going to be different from today's. Yet knowing this as we do, we have gone from recognizing the transient, temporary nature of user stories to the point where we now tend to capture them in tools that make them harder to change, and harder to take in at a glance and glean new clarity from. When we do this, we're increasingly unlikely to update user stories to reflect the improved understanding we cultivate as we work with them, with users, and with the software that we're developing.

The more that we calcify user stories in ever more concrete media, the less we make necessary course corrections along the way. There is little that so obviously contradicts "Agile" more than inviting anything that adds to the calcification of requirements. A principal duty of user story management in Agile development is to protect user stories from becoming encumbered by the notions of permanence that obstructed the software development methodologies that predate the effectiveness of Agile methods.

We pat ourselves on the back for having evolved beyond the quintessentially-primitive "300-page requirements document", but often all we're doing is bringing some of the same old problems forward to our "new" requirements formats. Sure, we've outgrown the massive, big-batch requirements doc, but the genetic material of the disease that infected those documents has evolved into a newer problem for newer artifacts, and ultimately, the problem is still the same essential problem. We're so pleased with ourselves for our new Agile ways that we fail to see that we're repeating the same mistakes, except with new material.

Here are some things to keep in mind that can help guard against the relapse into the same old mistakes:

1. When you forget why you're doing something the way you do, the answer isn't in a user story archive. The answer is in revisiting what you're doing to make sure that it still holds true. Do this with interactions and with people. Use tools and processes to support the human interactions.

2. User stories are not artifacts for regulatory compliance. User stories are a scratch pad of human understanding at the present moment. They are chuck for the software development grinder. What comes out the other end should be the software that itself explains why things are as they are. If you need a permanent record for the purposes of regulatory compliance, go ahead and create one. Nothing in Agile stops you from doing that. Just don't hang a stasis anchor around the neck of user stories. If you do, you'll strip them of the benefit that they bring to software delivery.

3. Have one master copy of a user story while that user story is in-progress. The master copy should be the one that is most immediately accessible to the most people who are involved in turning ideas into software products. It should be posted using the most visible medium possible - one that requires the least amount of human interaction to casually scan a project's gallery of user stories.

4. Throw user stories away once you deliver the working software that addresses the needs expressed by user stories. The software and its supports are enough to describe why it is the way it is. If you no longer remember, and if the software and its supports and the mechanisms and practices of institutional memory aren't sufficiently self-descriptive, revisit all of these issues as an opportunity to improve the production system in a way that doesn't invite more tool-centric bureaucracy.

Here's a bit of a side note to consider: The principal sponsors of the Agile 2010 conference are both "Agile" tool vendors.

If one of Agile development's primary tenets is to value people and interactions over processes and tools, how is it that the only Agile organizations producing enough disposable income to support Agile development itself are tool and process companies specializing in Agile requirements management and requirements compliance? Somehow, we've walked in a circle - yet again. And one of the things we've left on the trail behind us is the understanding of how things went wrong last time, as well as our commitment to avoid expediencies that don't serve sustainability.

The process methodologies that predate Agile are often roundly dismissed by agilists. But there wasn't really anything inherently wrong with them. Things back then started getting silly when tools started getting silly - when we began to defer to tools as authorities rather than to the people with needs and the people solving them.

One of the first definitive steps into the downward spiral of mindless process bureaucracy hell is the calcification of requirements. And no matter how tempting it is to succumb to the materialism of hardened collections of user stories, we don't get to hell without passing through those gates.


Ampersand GT

Working with software developers and organizations to help realize the potential of software product development through higher productivity, higher quality, and improved customer experience

Learn more about my work and how I can help you at ampgt.com