What would my #NoEstimates book look like?

I mentioned on Twitter today that, at one stage a few years ago, I was close to writing a book on #NoEstimates. I was then asked what the theme of my book would be, in contrast to that of Vasco Duarte’s popular and controversial book on the topic. The answer was way too big for Twitter, so I decided to write this post.

Despite the encouragement of many kind folks for me to write the book, I decided against it – partly because of work commitments, but partly because I didn’t want to risk becoming associated with the hashtag as a brand (I’ve kind of failed with that anyway, but hey ho!).

I was also wary of developing a bias toward whatever ideas made it into the book. I always want to remain as impartial as I can to respectful critique of my ideas, as well as new and challenging ideas from others, and putting my current opinions down in a book would give me a vested interest to defend them. With blogging there is the space for opinions to evolve relatively safely. For instance, my views have evolved significantly on SAFe since my 2012 rant piece, and they have similarly evolved on #NoEstimates.

All that said, if I were to write a book on the topic, what would it look like?

While there are many cross-overs in my ideas and opinions with those of Vasco, who (with far more bravery and commitment than I could muster) did take the plunge and write a book, my book would certainly come at things from a different angle.

Right from when I started blogging and speaking about #NoEstimates around 5 years ago, the most interesting aspects of the topic to me were less about the actual act of estimating and more about the dysfunctions surrounding it. I actually have no issue with estimating (I even enjoy estimating! – shhhhh), and the angle of “estimates don’t work” is not one I subscribe to.

But there are certainly dysfunctions. Ask any audience at a software conference – or managers and teams alike at almost any software organisation – if they have any problems with estimation in their company. The energy in the room immediately changes to one of awkwardness.

Make no mistake, software folks have major issues with estimation.

This is even more interesting when you consider that organisations are supposedly trying to leverage the benefits of Agile Software Development – iterative and incremental delivery of value-generating software, with the ability to quickly jump on and deliver new ideas for the customers’ competitive advantage – but are struggling to adapt their planning and scheduling techniques accordingly. How do we keep to schedules now that we’re doing this Agile thing? – they ask. How can we ensure we meet our business goals if we’re focusing on customer goals? It’s no surprise that “scaled Agile” frameworks such as SAFe and LeSS are gaining popularity.

So, with that precursor, here are some of the topics I would explore in my hypothetical #NoEstimates book. They are in no particular order, but I have separated them into two different perspectives – that of the provider of estimates and the requestor.

Provider (e.g. developers, teams)

  • Shooting yourself in the foot
    Not incorporating appropriate levels of uncertainty into an estimate (e.g. giving a precise date when a set of scope will be delivered, saying a precise set of scope will be delivered on a fixed date, not calling out risks/assumptions/issues/dependencies as part of the estimate, etc.)
  • Not revisiting an estimate
    i.e. not monitoring progress toward the goal upon which the estimate is based and revising the estimate accordingly
  • Not understanding/asking for the reason behind the request for an estimate, and thus providing the wrong kind
    This is a problem for the requestor also (see below)
    e.g. “How long will this take?” might indicate a need for a relative comparison of two pieces of work for ROI purposes rather than a lead time assessment of one piece of work
  • Being solution rather than problem focused, so the wrong thing gets estimated
    This is a problem for the requestor too (see below)
    e.g. building an automated email system, integrated with MailChimp, when a manual email with no integration is all that is needed to deliver customer value and determine feature viability

Requestor (e.g. managers, business stakeholders)

  • Treating an estimate as a commitment (i.e. “you said it would be done by today” mentality)
    Likely to be an issue with the estimate not incorporating the appropriate level of uncertainty, as described above, or management not allowing it to
    Leads to a situation where everything has deadlines, most of which are arbitrary, so real deadlines don’t get prioritised and treated as such
  • Not understanding the devastating effect of queues on lead time
    Queues are typically the biggest contributor to cycle time in the current software product management paradigm
  • Not understanding and addressing other causes of variability
    Such as volatile teams, too many projects in flight (WIP), complicated technical/schedule/other dependencies
    Predictability comes from having a predictable environment, not from making predictions – I’ve likened this in the past to the Shinkansen bullet train system – building a network for predictable, high speed trains rather than trying to make trains faster or more predictable
  • Treating effort estimates as calendar time
    A symptom of ignorance of queues, WIP and other sources of variation, as described above
    “It will take 6 weeks” is often a best-case effort estimate, where many assumptions are made
    The actual time it will take (without compromising quality) is typically way longer, even if the actual effort on that work item accumulates to just 6 weeks
    The relationship between effort and cycle time is often referred to as “flow efficiency”, and is an interesting factor to consider when discussing this topic – given how low flow efficiency is in your typical software development organisation
  • Poor/no prioritisation
    With no actual order/sequence of desired work items, or one that changes constantly, it is very difficult for teams to make reliable estimates or sound trade-off decisions
  • Ignorance of cost of delay
    If economic urgency is not incorporated into work prioritisation, work will become urgent when it is too late to trade off other commitments – this causes compromises in quality and predictability, and means deadlines are more likely to be missed
  • Not understanding/asking for the reason behind why they need to make a request for an estimate, and thus asking for the wrong kind
    This is a problem for the provider also (see above)
    The requestor might also be the provider for another requestor (e.g. boss or client), so there can be a chain of misunderstanding
    They need to know why they need the estimate, and what kind, so they can give this information to the provider and have a productive dialogue about the best approach
  • Being solution rather than problem focused, so the wrong thing gets estimated
    This is a problem for the provider also (see above)
    e.g. asking the team to estimate how long it will take to build a fully fledged reporting system when there is a far simpler and quicker way of giving the customer what they need
    This not only reduces predictability but also removes the team’s ability to get value to the customer (and thus the business) sooner
  • Asking individuals and teams to be predictive beyond a short 1-4 week timespan rather than using empirical process control
    Due to the complex nature of product development, teams should only estimate individual backlog items and forecast how much they can do in a short timespan (e.g. 2-week sprint/iteration)
    For batches of items beyond this short timeframe (e.g. releases), empirical forecasting using “yesterday’s weather” should be used to answer “how long” or “when” questions, not asking developers
  • Asking for development estimates before a value and capacity assessment has been made
    How long a feature or project might take is utterly irrelevant if the anticipated business value of building that thing isn’t high enough compared with other opportunities, or there will not be capacity available soon enough to do so
    Yet often the requestor takes a back-to-front approach and wants to find cheap/quick options, or ones that will fit into a pre-defined schedule, rather than doing their homework with understanding value and capacity
    This leads to developers being constantly interrupted to provide estimates for low value work they may never do or, perhaps more worryingly, doing lots of low value work because it is estimated to be cheap
    On a related and ironic note, the higher the anticipated business value, the less it matters how long the work would actually take (assuming a standard cost of delay profile and that the value can be generated early enough to fund the development team) – a higher precision estimate (and actual) is needed when there is a lower margin between R (business value) and I (cost of the development team)
  • Not allowing a truly iterative approach, rendering the use of experiments by teams to “buy down” knowledge (and reduce risk) impossible
    Strongly linked with being solution focused (see above)
    If the team cannot iterate to solve a problem, they may be locked into building a solution which is later learned to be an incorrect one
    When the customer inevitably changes the requirements due to the emerging information, the organisation might be too invested in the current solution to throw it away (“sunk cost fallacy” comes in here), thus scope grows and expectations become misaligned with reality
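The “effort estimates as calendar time” dysfunction above lends itself to a small worked example. The sketch below (every number is invented purely for illustration) shows how low flow efficiency turns “6 weeks of effort” into a far longer elapsed time:

```python
# Illustration only: why "6 weeks of effort" rarely means 6 calendar weeks.
# Flow efficiency = touch time / total cycle time (elapsed time).

def flow_efficiency(touch_time_days: float, cycle_time_days: float) -> float:
    """Fraction of elapsed time the item was actively being worked on."""
    return touch_time_days / cycle_time_days

# Suppose an item needs 30 working days of actual effort (~6 weeks),
# but spends far longer waiting in queues (reviews, dependencies, WIP).
effort_days = 30
waiting_days = 90   # hypothetical time spent queued or blocked
cycle_time_days = effort_days + waiting_days

eff = flow_efficiency(effort_days, cycle_time_days)
print(f"Cycle time: {cycle_time_days} days, flow efficiency: {eff:.0%}")
# At 25% flow efficiency, 6 weeks of effort becomes ~24 elapsed weeks.
```

The point is not the specific numbers, but that queues and WIP – not the work itself – dominate the gap between an effort estimate and the date the work actually lands.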

I’m sure there are far more #NoEstimates topics I would cover in my book, but stopping short of actually writing it I think I’ll end here :).

#NoEstimates isn’t just about estimating

The #NoEstimates conversation is largely about estimating nowadays rather than NOT estimating.

Estimating, but in a probabilistic way. People often refer to this type of estimating as forecasting. Using cycle time. Throughput. Variance. Little’s Law. Monte Carlo.

All famously good stuff.
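As a rough sketch of what this probabilistic style can look like in practice, here is a minimal Monte Carlo “how long?” forecast driven by historical weekly throughput. The throughput samples, backlog size and confidence levels are all invented for illustration:

```python
import random

# Minimal Monte Carlo forecast: "when will these 40 items be done?"
# answered from historical weekly throughput samples (items finished
# per week). All numbers are hypothetical.
historical_throughput = [3, 5, 2, 4, 6, 3, 4, 5]
backlog_size = 40
simulations = 10_000

random.seed(42)  # reproducible demo
weeks_needed = []
for _ in range(simulations):
    done, weeks = 0, 0
    while done < backlog_size:
        done += random.choice(historical_throughput)  # sample one "week"
        weeks += 1
    weeks_needed.append(weeks)

weeks_needed.sort()
p50 = weeks_needed[simulations // 2]          # median outcome
p85 = weeks_needed[int(simulations * 0.85)]   # more conservative answer
print(f"50% confidence: {p50} weeks, 85% confidence: {p85} weeks")
```

The answer comes back as a distribution – “85% of simulated futures finish within N weeks” – rather than a single date, which is precisely the shift from deterministic estimation to forecasting.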

But I don’t want people thinking that’s all there is to the conversation. Many folks have interpreted it that way.

For me, larger questions remain. For example, is it possible, in certain situations, to deliver value to the customer at a rate which negates the need for doing any estimating at all, both up front and ongoing? Quick enough that they do not need to make any decisions or commitments based on anticipated delivery, only what was actually delivered?

Beyond whether this is possible or not in certain contexts, why might it actually be important or desirable to be in this state of not needing estimates? I can get away with not eating apples, but is it actually useful for me to not eat apples?

Well, the fact that estimates are usually needed implies that decisions and commitments of some form are made based on them. This is a common argument cited as to why estimating is unavoidable when working with customers in uncertain domains.

However, often the knock-on effects of an initially inaccurate estimate are damaging financially or culturally. So I can imagine, in certain situations, it might be possible, and desirable, for the customer to ask for delivery of tiny working increments which can provide value for them right away and, explicitly, no estimates are asked for because doing so would create potentially irreversible knock-on effects. Perhaps losing another customer’s trust by not meeting your “commitment” to them. Perhaps having to trash another project for which you had a team lined up if things “went to schedule”.

I can imagine a few reasons why we might want to enter a working relationship in which we explicitly value the rapid delivery of added value over the anticipated delivery of value at some future point. Not to mention the trusted working relationship side of things. “Customer collaboration over contract negotiation”.

These are the broader questions I’m interested in. We get it, we can forecast with data to avoid deterministic estimation rituals and provide more solid, transparent estimates of when we will be done, or what will be done by when.

But can #NoEstimates thinking actually take us further? Into whole new ways of working with our stakeholders and customers?

My Slicing Heuristic Concept Explained

This is a concept I devised a couple of years ago, and it seems there is a new #NoEstimates audience that would like to know more about it.

A Slicing Heuristic is essentially:

An explicit policy that describes how to "slice" work Just-In-Time to help us create consistency, a shared language for work and better predictability.

Crucially, the heuristic also describes success criteria to ensure it is achieving the level of predictability we require.

The Slicing Heuristic is intended to replace deterministic estimation rituals by incorporating empirical measurement of actual cycle times for the various types of work in your software delivery lifecycle. It is most effective when used for all levels of work, but can certainly be used for individual work types. For a team dabbling in #NoEstimates, a User Story heuristic can be an extremely effective way of providing empirical forecasts without the need for estimating how long individual stories will take.

However, if you are able to incorporate this concept from the portfolio level down, the idea is that you define each work type (e.g. Program, Project, Feature, User Story, etc.) along with a Slicing Heuristic, which forms part of that work type’s Definition of Ready.

For example,

"A feature ready to be worked on must consist of no more than 4 groomed user stories"

or

 “A user story ready to be worked on must have only one acceptance test”.

The success criteria will describe the appropriate level of granularity for the work type. For example, you might want user stories to take no more than 3 days, and features no more than 2 weeks.

Here is the really important part. The idea is not to slice work until you estimate it will take that long. You never explicitly estimate the work using the Slicing Heuristic. Instead, as the work gets completed across the various work types you use the heuristic(s) to measure the actual cycle times, and then inspect and adapt the heuristic(s) if required.

At the user story level, I’ve found the “1 acceptance test” heuristic to be consistently effective over different domains for creating an average story cycle time of 3 days or less. However, there are alternatives. Instead of acceptance tests you could use, e.g., the number of tasks:

 "A user story must have no more than 6 tasks".

Here is an example Slicing Heuristic scenario for a Scrum team using the feature and user story heuristics described above:

  • Product Owner prioritises a feature that she wants worked on in the next Sprint
  • PO slices feature into user stories
  • If feature contains more than 4 stories, it is sliced into 2 or more features
  • PO keeps slicing until she has features consisting of no more than 4 user stories; they are now ready to be presented to the team
    Note: Unless this is the very first feature the team is developing, the PO now has an estimate of how long the feature(s) will take, based on historical cycle time data for the feature work type; no need to ask the team how long it will take
  • In Sprint Planning, team creates acceptance tests for each user story
  • If there is more than 1 acceptance test, story is sliced into 2 or more stories
  • Team keeps slicing until all stories consist of only one acceptance test
    PO now has an even more reliable forecast of when the feature(s) will be delivered because she can now use the user story cycle time data in conjunction with the feature data
  • Team delivers each story, and records its cycle time in a control chart
  • If a story is taking longer than 3 days, it is flagged for conversation in Daily Standup
  • Multiple outliers are a sign that the heuristic should be adapted in the Sprint Retrospective
  • When the feature is delivered, its cycle time is measured also
  • Again, if features are taking longer than is acceptable for the heuristic, the heuristic should be adapted to improve predictability (e.g. reduce maximum number of user stories per feature to 3)
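The PO’s empirical forecast in the scenario above can be derived directly from the recorded cycle times. Here is a minimal sketch (the cycle-time data and percentile choices are hypothetical):

```python
# Sketch: answering "when will this feature be done?" from the team's
# recorded feature cycle times, with no developer estimate required.
# The data and thresholds below are hypothetical.
feature_cycle_times_days = [8, 12, 9, 14, 10, 11, 13, 9]

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    index = max(0, round(pct / 100 * len(ordered)) - 1)
    return ordered[index]

# The PO quotes confidence levels instead of a single-point estimate.
print(f"50th percentile: {percentile(feature_cycle_times_days, 50)} days")  # 10
print(f"85th percentile: {percentile(feature_cycle_times_days, 85)} days")  # 13
```

Outliers beyond the heuristic’s success criteria (e.g. features taking longer than 2 weeks) then feed back into adapting the heuristic, as in the retrospective step above.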

CAIN (Continuous Attention to Individuals’ Needs) – An #AntimatterPrinciple approach to retrospectives

This is an idea for a series of focused retrospectives called CAIN (Continuous Attention to Individuals’ Needs). It is inspired by, and based upon, Bob Marshall’s Antimatter Principle (Bob is @flowchainsensei on Twitter – you can find all related posts and tweets here).

The premise of the Antimatter Principle is simple – Attend to folks’ needs.

Think of CAIN as one example of a concrete implementation of this principle, or a method.

CAIN adapts the typical retrospective questions of “What’s working well?”, “What’s not working well?” and “How can we improve?” to directly address the needs of folks in a team in a systematic way.

The team is looked upon as a group of individuals with unique human needs rather than purely a homogeneous unit. Continuous improvement efforts are focused on the habitual attendance to each individual team member’s needs (hence the name CAIN) rather than trying to ascertain the needs of the team as a whole.

From a Toyota Kata perspective, the current condition is the number of unmet needs in the team. The target condition is zero unmet needs. The team as a whole will continuously endeavour to reduce the number of unmet needs of the individuals in the team via deliberate actions and experiments identified in the retrospective.

What’s working well for me? (needs being met)
What’s NOT working well for me? (needs NOT being met)

Each team member spends time individually reflecting on events since the last retrospective that directly addressed one or more of their innate needs, and those that did the opposite. They are also invited to highlight needs that feel unmet due to something that didn’t happen.

:) "I feel very valued this week, and that I am starting to form friendships in this team."
:( "I feel our work at the moment is quite mundane and uncreative."
:( "I did not receive recognition for my efforts this week, which leaves me feeling somewhat deflated."

For this exercise, the team might find it useful to refer to a model for representing human needs, such as Maslow’s Hierarchy of Needs.

[Image: Maslow’s Hierarchy of Needs pyramid]

Folks are invited to consider their emotional response to events rather than trying to be rational or scientific about what is working and not working. How they feel about what happened (or didn’t happen) rather than an objective assessment of what is effective or not.

For this to happen, it is more important than ever that the team feels they are in a safe environment, so the need for a safety check is paramount. Folks are being invited to share intimate thoughts and needs as human beings, so a high degree of trust is required. Consider that CAIN might also be used as an approach to build this required trust in the first place.

It’s also worth pointing out here that reflecting on one’s actual needs is not a simple task*. It is far easier for us to talk about what we want, or think we want, rather than what we actually need.

However, there is much inherent value in simply talking about needs – especially those of the deepest human kind – even if the “needs” that get identified are not truly the innate needs of the individuals. With practice, the team will become more effective at identifying genuine needs, and in the meantime they will at least be talking about them, building trust and perhaps making their work environment more joyful in small ways.

*This is also apparent in so-called requirements elicitation, where folks try and identify what the customer needs by asking them what they want. Actual needs are somewhat intangible in practice, and tend to emerge over time rather than be identifiable in the present.

How might we attend to my unmet needs?

Having celebrated the needs of its members that are being met, the team will turn its attention to addressing unmet needs. Folks are invited to spend time individually thinking of ideas that might reduce the unmet needs count.

All ideas are presented, and the group votes for the one they think might have the biggest impact.

An experiment is formed, and each team member goes back to her/his work routine, hopefully with an enriched view of her/his own needs as well as those of their colleagues.

What next?

My hypothesis is that CAIN might reap better results than other retrospective approaches because it reduces the risk of groupthink. No attempt is made to collate things identified as (not) working well into a team consensus. It is always the needs of the individuals that are focused on.

CAIN also might reap positive results because it focuses on the strongest lever for improving effectiveness: mindset. The conversations that arise when folks are unravelling their personal and professional needs will reveal differences in mindset – dissonance – which, left unaddressed, will perpetuate ineffective strategies for getting needs met, leading to conflict, competition and poor results for the team and, ultimately, the entire organisation.

I invite you to give CAIN a try in your next team retrospective, and share your experience 🙂

8 signs of an agile organisation

1. Folks have shared goals with the organisation at a shared time

— i.e. they are synchronised with each other and the organisation’s goals

When we think about what makes Agile such an effective approach to software product development, we think about a single team, working toward a single product vision, happily iterating and incrementing toward a common goal.

As soon as you add just one more team or product in the mix (often referred to as “scale”), you have already added significant complexity to the situation in terms of product prioritisation, team processes, methods, estimation, relationships, dependencies and more. In short, keeping the magic of single team/single goal becomes increasingly difficult — seemingly impossible.

Well, it’s certainly not impossible. Agile organisations make principle-led decisions that allow them to keep the single team/product magic alive. They ensure that every person in the organisation has a clear goal at any given time — the same goal as that of the team they work with — which in turn is aligned to the correct organisational goal at that precise time.

Principles are one thing, but structurally this might sound too hard. Again, yes it’s hard, but it can be achieved via clear and ongoing prioritisation of initiatives (the things of [assumed] value that we want to achieve as an organisation), and forming autonomous teams/squads/tribes around them. If there are dependencies between teams, act to minimise them — remove them completely if possible — for your highest priority initiatives. Push the dependencies down the priority list.

It can also be achieved by forming teams around long-lived themes, such as customer capability. For example, MYOB builds accounting software. If I were to ask one of our SME customers “What does MYOB software enable you to do?”, the type of answer that would come back would be “banking”, “taxes”, “payroll”, “reporting”, etc.

These functions are all candidates for forming cross-functional squads around — squads that include all folks required to deliver end-to-end value within that area of capability for the customer. Suddenly, our business mission of “Making business life easier” is broken down into “Making banking easier”, “Making taxes easier”, “Making payroll easier”, and so on.

As a side note – this kind of customer-centric approach is also a hallmark of a truly agile organisation.

2. Decisions are made quickly and daily by all

— via ubiquitous information, not based on rank

In knowledge work organisations, thousands of little decisions are made every day. If folks do not have good information with which to make those decisions — or they are not empowered to make them for some other reason — there can be a huge impact on the organisation, both culturally and economically.

For example, if I am asked to deliver two outcomes, and it becomes apparent it is only possible for me to deliver one of them, what should I do? Do I have the appropriate information (and authority) to choose which one to sacrifice? What about technology choices? How should I deal with this customer situation? Should I fix this bug? Should I ship this feature now or delay delivery for a week?

If I cannot make these decisions quickly — in a way that is consistent with how others would decide (because we all have access to the same information), and without fear of being punished for making the wrong decision — the organisation might effectively grind to a halt.

An agile organisation makes decisions quickly — based on clear decision frameworks that enable everyone from the CEO to the cleaner to fearlessly make good decisions every day.

3. “How the work works” is optimised for sustainable responsiveness to customer needs, not output nor strategic certainty

— i.e. responding to change over following a plan

An agile organisation is not [necessarily] one that can churn out masses of features every week. It is also not one that jumps around, shifting priorities from one week to the next. Instead, it is an organisation that can seize opportunities very quickly — opportunities that arise either via internal ideation or changes in the market.

An agile organisation should have a strategy, sure, but it recognises that the strategy might be misguided, or needs to change for some other reason, so it ensures that teams can adapt quickly to a change in strategy rather than require a painful restructure. It does not put all its eggs in one basket and optimise for delivery of the strategy. It actually embraces agility itself as a strategy.

Lead time is a common metric for lean/agile organisations to focus on, and rightly so – how quickly can we turn an idea or request into real value for a customer?

In theory, it is easy to turn that killer idea into a real thing for a customer quickly, even if your organisation does not yet have the necessary infrastructure for true agility. You can achieve it via an authority figure usurping other “priorities” and thus seeing their request expedited by a team swarming around it, doing what they are told to do as quickly as possible at the expense of everything else. Look how quickly critical production issues are resolved when all the key people are thrust together immediately with a single, shared focus and goal.

This is why organisational agility requires not just responsiveness but sustainable responsiveness. The expedite situation above can never be sustained. True agility is being able to turn on a sixpence due to work being done in small enough batches — and with enough slack in the system for folks to be quickly available when new and better opportunities arise than the ones we currently have.

4. Where rituals and practices are required in order to achieve 1-3, the preference is always for individuals and interactions over processes and tools

— and conveying information to and within a team via face-to-face conversation.

Organisations typically default to processes and tools when trying to address improvements in performance. Truly agile organisations are full of folks who recognise that improving the quality and quantity of their interactions with other folks is the key to improved performance.

For example, frequent all-hands gatherings to plan together, celebrate and review achievements, learning and progress toward goals — over trying to get everyone to put all their tasks in Jira.

5. All hiring is for mindset over skills over experience, which allows for implicit trust that folks will always commit to doing their best

— i.e. build projects [sic] around motivated individuals – give them the environment and support they need, and trust them to get the job done

This one almost speaks for itself. If every hiring manager in the organisation hires other folks with a mindset aligned to the desired collective organisational mindset, they are [by design] hiring people they trust and who will be motivated to achieve the organisation’s goals.

There is no place — nor need — for hierarchical authority, carrots and sticks in effective agile organisations.

6. Organisational performance improvement is addressed via deliberate action toward systemic change (that is hypothesised to improve the organisation in the desired manner)

— not via the sum of collective individual performance appraisal

Managers and leaders who focus their efforts on trying to improve the performance of individuals in their teams are playing a low percentage game in terms of the likelihood of any significant effect on organisational performance. Managers are far better served addressing “the way the work works” — the system conditions — to achieve the kind of improvements they are being asked to make.

If you want your organisation to get better in a particular area (e.g. efficiency), fix the environment to support improved efficiency for the whole eco-system, not any one team or individual.

(I’ve referred in my talks to this systems approach to improvement as “building a network for high speed trains” rather than “trying to make trains faster”).

7. There is no delineation of “business” and “IT”

— technology folks work daily with other folks who share the primary concern of serving customer needs, thus the organisation operates as one “we” rather than many “them’s”

As I’ve already alluded to, organisational agility requires high trust between individuals, teams and departments. Unfortunately, typically organisations are instead built around silos, which encourages and then perpetuates a low trust environment. This is why folks in, say, development teams, end up referring to folks in, say, sales or marketing teams as “the business”.

High level strategy and day-to-day task execution must be mutually respected in equal measure by the folks who are responsible for them. Everyone in an organisation is “the business”, and until folks recognise this and live it daily they cannot be part of a true agile organisation.

8. There is continuous attention to [technical] excellence, simplicity, learning and improvement

Organisational agility requires an embedded continuous learning and improvement culture. There are always better ways of doing things in complex environments (such as knowledge work organisations). “Best practice” and cookie-cutter processes will never achieve agility.

Instead, we must experiment with models, heuristics and methods that allow us to adapt, pro-act, react and enact with respect to what we’re building and how we’re building it.

What other signs are there of an agile organisation? I’m sure there are more — please share 🙂

#NoEstimates is neither the destination nor the journey

Being one of the early contributors to the #NoEstimates (NE) hashtag, and a regular blogger on the topic, I am understandably regarded as a “#NoEstimates advocate”. When I get introduced to folks at meetups and conferences, a typical exclamation is “Hey, you’re the #NoEstimates guy!”

Another consequence of my reputation as a pioneer of the “movement” is that I will often get asked questions that, when answered, are deemed to represent the views of all NE advocates or, more bizarrely, NE itself. It’s as if NE is a thing that can have an opinion, or is a single method/approach. “What does NE say about X?” or “Here’s what the NE’ers think”.

What some don’t realise is that there are wide and varied disagreements between so-called NE advocates. It’s similar to the variety of viewpoints that you would get within, say, a political party. The party represents a set of values and principles, but there will rarely be a situation where all the members agree with every policy proposed or pushed through in the party’s name. I guess the same could be said of Agile too.

Folks are naturally interested in the practicalities of what a #NoEstimates approach might look like. This is fantastic, and I welcome questions and discussion on this. I engage in such conversations often. But I do want to make a point about an underlying presumption behind most of the questions I receive. Here are some of the most typical ones:

“How do you prioritise at the portfolio level without estimates?”
“How can you make decisions in the presence of uncertainty without estimates?”
“How do you convince senior management to implement #NoEstimates?”
“How can we minimise the number of things we need to estimate?”

What these questions have in common is the presumption that “not estimating” at all levels of work is where we want to head. That the goal is to reduce our estimates across the portfolio, with zero estimates as utopia. That the premise of #NoEstimates is that the less we estimate, the more effective we will be.

For me, DOING NO ESTIMATES, or even FEWER ESTIMATES, has never been the destination.

My focus has always been on improving the way we work such that estimating becomes redundant.

This means understanding our business better. Becoming more stable and predictable in our method of working. Building relationships based on high levels of trust and respect. Reducing dependencies between teams. And so on.

People ask “So, Neil, how do we get started with #NoEstimates? Should we simply stop estimating and see what happens?”

The answer to this is a categorical “NO“, at least from where I sit. There is a set of minimum conditions (or “barriers to entry”) that must be met before you can get anywhere near an environment where you do not need to estimate. Other NE’ers might not answer in the same way, but that has always been my stance. Read my earlier #NoEstimates posts if you don’t believe me!

My views have certainly evolved on the topic, and some of my early work might take a slightly more extreme stance. But I would never advise people to stop doing anything without knowing anything about their context. Even if I did know their context, I would be suggesting small experiments rather than simply stopping doing something that may be of value to that team and/or wider organisation.

Some people see #NoEstimates as meaning “NO ESTIMATES”, and can’t see beyond that.

I see it more along the lines of:

#NoEstimatesAreImmutable
#HowMightDoingNoEstimatesAffectTheWayWeWorkAndTheOutcomes?
#NoEstimatesShouldBeDeterministic,TheyShouldBeProbabilistic
#NoEstimatesShouldBeOnesWhereTeamsAreAskedToEstimateBeyondTheNext2-4WeeksButInsteadShouldBeEstimatesBasedOnEmpiricalData
#NoEstimatesAreToBlame,ButGivenTheOngoingProblemsWithSoftwareEstimationWeMightWantToExploreAlternatives
#AreSomeDecisionsBetterMadeWithNoEstimates?
#NoEstimatesAreCommitments
#NoEstimatesAreAReplacementForAuthenticAdultConversationsAboutProgressAndDecisionsAboutWhatWe’reBuilding

If anyone wants to start tweeting to these hashtags, go ahead! I prefer to tweet to where the conversation actually is (and shorter hashtags :)), and trust that the reader does their own research and understands the nuances of the debate. You need to scratch well beneath the surface to find where the “NE’ers” agree and disagree.

The destination, in our jobs as software professionals, is becoming more effective at building great software for our customers. The journey is one of continuous improvement via experimentation. We can use Agile, Lean and Kanban principles to help us with that. We can use Scrum, XP, the Kanban Method, SAFe, LeSS and other methods to help us with concrete implementations of those principles.

#NoEstimates started as just another Twitter hashtag. It has since become an enduring symbol of an industry that is unhappy with the prevailing way estimation is done, and the effect that has on what we’re trying to achieve professionally and personally. Some critics have cited “poor management” as the root cause of the dysfunctions we see around estimation. If that’s true, and estimates aren’t to blame, what next? How do we address a widespread problem with poor management?

Simply telling people how to estimate better won’t do the trick. #ShouldWeDoNoEstimates? Perhaps, perhaps not. Either way, let’s at least have a bloody good debate about how we go about things in the workplace. Let’s put our heads together and “uncover better ways of working”.

Behind the NE hashtag is a world of opinion, ideas, principles and approaches that may be worth exploring and experimenting with on your journey to becoming more effective at software development. Many have done so. Many continue to do so.

I hope you do too 🙂
