The common ground of #Estimates and #NoEstimates

One of the criticisms of the #NoEstimates (NE) stance is the view that even contemplating not estimating is a non-starter – because estimates are “for those paying our salaries”, not those doing the work: the business folk in our organisations need to know what will happen, and when, in order to run their companies successfully.

OK, even if NE advocates could successfully argue against that assertion, perhaps it is time we started to acknowledge the “impediments” of the debate – the back-and-forth arguments that prevent us from moving forward to a more constructive place.

Perhaps it is time to find the common ground, and build on that.

A simple truth is that business wants (needs) both speed and predictability. I think we can all (mostly) agree on that 🙂

Some NE critics argue that we should learn better estimation skills such that our predictability improves. Yep, sure. Difficult to argue that learning to do something better is a bad thing.

Given that we have to do a lot of estimating as software practitioners, learning and using more effective estimation techniques seems a good idea.

However, in return for NE advocates acknowledging that we need to provide estimates for those asking, and get better at doing so, I think it’s time for the critics to acknowledge that presenting better estimation as the answer to all the dysfunctions surrounding software estimation is itself an impediment to the debate moving forward.

I see common ground in that we are all trying to create better predictability for our teams, customers and internal stakeholders. If we put aside “better estimation” as a way of doing that, how else might we do it?

Better predictability can be achieved in many other ways:

  • Stability and autonomy of teams
  • Limited WIP of initiatives (1 per team at any given time)
  • Frequency of delivery of “done” software
  • Cadence of collaboration – planning, building and reviewing together
  • High visibility and transparency of work and progress
  • Shared understanding of variability, its effects on long range planning and how/when to try to minimise it or leverage it to our advantage

to name but a few.
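The last bullet – a shared understanding of variability and its effect on long-range planning – is often put into practice with probabilistic forecasting from a team’s own delivery data rather than task-level estimates. Here is a minimal sketch; the weekly throughput figures and backlog size are invented for illustration:

```python
import random

# Hypothetical weekly throughput (items finished per week), as a real team
# would record it from recent history.
weekly_throughput = [4, 7, 3, 6, 5, 8, 4, 6]

BACKLOG = 60      # items remaining in the initiative
TRIALS = 10_000   # Monte Carlo trials

def weeks_to_finish(throughput, backlog):
    """Sample past throughput repeatedly until the backlog is exhausted."""
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(throughput)
        weeks += 1
    return weeks

random.seed(42)
results = sorted(weeks_to_finish(weekly_throughput, BACKLOG)
                 for _ in range(TRIALS))

# Report percentiles rather than a single-point answer.
for p in (50, 85, 95):
    print(f"P{p}: {results[int(len(results) * p / 100)]} weeks")
```

Reporting P50/P85/P95 weeks instead of a single date makes the variability visible to “those paying our salaries” – which is exactly the predictability conversation this post is arguing for.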

To take the NE debate forward, we need to find ways to provide “those paying our salaries” with the predictability they need, while at the same time moving away from the dysfunctional behaviours associated with holding teams accountable to estimates in highly unpredictable environments.

What is an unpredictable software development environment? One in which one or more of the things listed above are not being addressed. It might not be a stretch to suggest that’s pretty much every software shop on the planet.

There is common ground between critics and advocates in this debate. Let’s move on from “no estimates for anyone!” and “just learn better estimation techniques” – these arguments will perpetuate butting heads.

Instead, let’s explore – together – how we might create more predictable environments for our software teams, such that estimation becomes easier (and, in some cases, redundant).

“We are uncovering better ways of developing software by doing it and helping others do it.”

~ Agile Manifesto

9 thoughts on “The common ground of #Estimates and #NoEstimates”

  1. Neil, I’d suggest predictability is very difficult to achieve in the presence of uncertainty. An estimate has two attributes: it is accurate and it is precise. The values of these two attributes are what those needing the estimates are after. With knowledge of these two values, the decision makers can then assess the “value” of the estimate. It may well be that all they need is an order of magnitude (10X). “How much does it cost to install 50 sites of SAP, with 100 users at each site?” – a question asked of our firm in the past. 100M, 200M, 500M? Or something more accurate and precise: “Are these two Features, 2015007 and 2015008, going to make it into the next Cadence release, scheduled for the end of November?”

    Predictability is not actually an attribute without knowing the “desired” accuracy and precision. That requirement comes from those asking for the estimate. In our domain that starts with a Broad Agency Announcement, which provides a “sanity check” for those asking for the solution to “test the bounds of the budget they may need to acquire the capabilities from the project.”

    Regarding the dysfunctions of business that are connected to estimates – there are many, in our domain and most other domains. But that dysfunction is not “caused” by the estimate. It may be a “symptom” of poor estimates, but not the “cause.” Without first identifying the Root Cause of the symptom, no suggestion for improvement will be effective.

    Finally without a context for asking for the estimate and providing the estimate, the assessment of the needed precision and accuracy cannot be determined.
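    The accuracy/precision distinction above can be made concrete: accuracy is how close estimates sit to actuals on average (bias), while precision is how tightly the errors cluster. A toy sketch, with invented estimate/actual pairs:

```python
from statistics import mean, stdev

# Invented estimate/actual pairs (effort in days), for illustration only.
estimates = [10, 20, 15, 30, 25]
actuals   = [14, 27, 20, 41, 33]

# Relative error per item: positive means the work was under-estimated.
errors = [(a - e) / e for e, a in zip(estimates, actuals)]

accuracy_bias = mean(errors)   # systematic over/under-estimation (accuracy)
precision = stdev(errors)      # scatter of the errors around that bias

print(f"mean relative error (accuracy): {accuracy_bias:+.0%}")
print(f"spread of errors (precision):   {precision:.0%}")
```

    In this invented data the estimator under-estimates by roughly a third but does so consistently – precise yet inaccurate – which is exactly the kind of bias that past-performance data can calibrate away.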

    My suggestion for a common ground is to establish:

    ■ the understanding that estimates are needed to make decisions by those providing the money
    ■ the needed precision and accuracy of that requested estimate
    ■ the available information (past performance) and model (parametric or probabilistic) that the estimate will use, and how that information will impact the accuracy and precision
    ■ confirmation that those making the estimate have the skills, experience, and knowledge needed to produce an accurate and precise estimate meeting the needs of those requesting it
    ■ acknowledgement on both sides of the conversation that making decisions in the presence of uncertainty requires making a decision based on an estimate of the outcomes of that decision. This estimate may be “quick and dirty,” even to the point of “it’s my feeling this will be the outcome,” or a detailed bottom-up Basis of Estimate for spending millions if not billions of the customer’s money. This is the “Value at Risk” conversation: “What are you willing to risk if you make that decision without the accuracy and precision needed to protect your decision from a loss?”

    Then those exchanging ideas about the need for estimates can have a common set of principles for the exchange. At this point there are no common sets of principles. I’d suggest that the #NoEstimates advocates have not provided the principles by which decisions can be made without estimating the impact of those decisions.

  2. As always, good points. However, reducing contributions to uncertainty from the work environment may not be achievable. The pace of business has increased dramatically in the last 15 years. In many organizations, the trend is for the workforce to evolve and become more virtualized, work in progress to be continually re-prioritized, and the market to shift quickly in directions that can’t always be anticipated. We should anticipate that teams will be continually reformed, roles will change, goals will change, and so on. To bowdlerize The Dread Pirate Roberts, “Get used to uncertainty.”

    Let me suggest that the goal of estimating should be to reduce uncertainty to a level where decisions and commitments can be made. Not commitments to a precise cost or schedule, but simply to proceed, in pursuit of specific benefits. And that decision to proceed must always be contingent on evidence of progress commensurate with expectations. We should not manage on a ballistic trajectory, where nothing can change between pulling the trigger and impact on the target. Like modern guided missiles, continuous corrections must be made, in order to optimize the effects of scarce resources – human, financial, and calendar. Thus, estimates without assumptions, a definition of done, or a specific way to measure progress contribute little to the goals of reducing uncertainty or achieving benefits.

    The best estimates open a dialog. You wouldn’t seriously support the Hashtag, “#NoCommunicating,” would you?

    1. Dave, uncertainty is of two types – irreducible (aleatory) and reducible (epistemic). These are the ONLY sources of uncertainty from a modeling and assessment point of view.
      Effort to reduce uncertainties is only applicable to “reducible” uncertainties. Irreducibles are naturally occurring in all project work. Research shows the IRREDUCIBLES are the primary source of cost and schedule delays (NASA Software Intensive Systems).

      What you suggest – “the goal of estimating should be to reduce uncertainty to a level where decisions and commitments can be made. Not commitments to a precise cost or schedule, but simply to proceed, in pursuit of specific benefits” – IS the basis of all good estimating processes.

      When #NoEstimates uses “commitment,” it’s a fallacy in the estimating world. But estimates do have precision and accuracy measures, so precision is important. It’s critical to define how precise and how accurate the estimate must be to make a decision for the “Value at Risk” resulting from that decision.

      1. Glen, while I won’t challenge NASA’s research, let’s agree that their problem domain is a bit specialized. Customer-facing software development, packaged software implementation, and SaaS deployments in a commercial environment generally have more reducible uncertainty, since we generally aren’t pushing the boundaries of physics and the failures tend to be less catastrophic / more recoverable. With that proviso: while aleatory uncertainty can be estimated and factored into an executive decision, epistemic uncertainty is of more immediate interest to commercial entities, since the organization can decide whether or not to apply resources to reduce it. Which brings me back to my punch line: the best estimates open a dialog.

  3. “it’s time for the critics to acknowledge that arguing better estimation as the answer to all the dysfunctions surrounding software estimation”

    I totally agree. I’ve actually never thought that ‘better estimates’ is the real root cause of any dysfunction. All we can do is honestly – and in good faith – say what we think. And that should be dealt with honestly and in good faith as well. Perhaps have a discussion about how to deal with all this in a healthy and honest way?

    As an aside, ‘better estimates’ are implicit and inevitable. We learn all the time. Just as we learn to become better at writing code – we learn by doing. The same goes for estimates – we learn by doing.

    And BTW, I love Dave Gordon’s comment: ‘The best estimates open a dialog. You wouldn’t seriously support the Hashtag, “#NoCommunicating”’ 🙂

    Kind regards,

    1. A better argument is “find the root cause of the dysfunction” and provide the corrective action. If poor estimating is actually the Root Cause, then getting better at estimating is the corrective action.

      I suspect there are many other Root Causes of management dysfunction.

      But asking 5 whys is NOT Root Cause Analysis. It’s a mechanism used during RCA, but much more is needed. The notion of 5 whys suggested by #NE is seriously flawed.


  4. Dave,

    You may be surprised to learn much of the NASA SW we work on is customer facing, packaged (COTS and GOTS and integration of COTS & GOTS) and many systems deployed SaaS in the cloud.

    My colleague – a former NASA cost director – and I have done many Root Cause Analyses of these types of projects.

    The conclusion for the RC is one of three:

    1. We couldn’t have known – It was a science project, and we’re inventing things that have never been done before.
    2. We didn’t know – because we were either too lazy to look for the uncertainties and the resulting risks, or we didn’t have enough time to look, or the customer didn’t fund that investigation.
    3. We don’t want to know – because if we knew, we’d cancel this program. This latter RC accounts for slightly over 50% of the programs that triggered an “over target baseline” or a Nunn-McCurdy breach.

    When there is a preponderance of Reducible Uncertainties, I’d suggest that indicates low visibility into the emerging risk created by those uncertainties. We start our Root Cause Analysis with the questions: “Why didn’t you see this coming?” “Was it knowable?” “If so, why did you not take action to prevent the unfavorable impact to the project?” From this last question, you can pick from (1), (2), or (3) above.

    So yes, estimates of any kind are the basis of improving visibility into the emerging problems that will reduce the probability of project success.

    1. Dave, another thought – reducible uncertainty is event based – the probability that something will happen in the future – there is a 65% probability that the database server cannot keep up with the transaction demands from the user community. The uncertainty creates a risk of poor performance. And can be “reduced” by having a scalable backend to meet that demand. This can be done in the server farm, or in the cloud, but it is event driven.
      Irreducible uncertainty is naturally occurring variance in the project or the external environment. These variances are statistically described by the distribution function for the population of possible behaviors. The only protection from irreducible uncertainties is “margin” or “reserve” for cost, schedule or performance. For schedule, it’s schedule margin to protect a deliverable date. For cost it’s reserve. For performance it’s surge capacity – running the DB servers at 60% capacity as a norm, and providing surge to 80% before the “event” of > 80% load is reached.
      So unless your COTS product domain has no natural variabilities, you are correct in your instinct that the uncertainties are dominated by Reducible uncertainty only if you know about them and have mitigations in place to “reduce” their impact.
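      The “margin” idea for irreducible uncertainty can be sketched numerically: simulate the natural variability, then carry the gap between the median outcome and the confidence level you want to protect. All distribution parameters below are invented for illustration:

```python
import random

random.seed(7)

TASKS = 20        # tasks on the critical path (invented)
TRIALS = 10_000   # Monte Carlo trials

def project_duration():
    # Each task's duration drawn from a triangular distribution:
    # best case 3 days, worst case 12, most likely 5 (invented values).
    return sum(random.triangular(3, 12, 5) for _ in range(TASKS))

runs = sorted(project_duration() for _ in range(TRIALS))
p50 = runs[len(runs) // 2]
p80 = runs[int(len(runs) * 0.80)]

# Schedule margin protects the committed date against natural variability.
print(f"P50 duration: {p50:.0f} days")
print(f"P80 duration: {p80:.0f} days")
print(f"margin to carry: {p80 - p50:.0f} days")
```

      The P80 − P50 gap is the schedule margin: it buys no scope, it simply absorbs the natural variance that no amount of investigation can reduce.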
