What would my #NoEstimates book look like?

I mentioned on Twitter today that, at one stage a few years ago, I was close to writing a book on #NoEstimates. I was then asked what the theme of my book would be, in contrast to that of Vasco Duarte's popular and controversial book on the topic. The answer was way too big for Twitter, so I decided to write this post.

Despite the encouragement of many kind folks for me to write the book, I decided against it – partly because of work commitments, but partly because I didn’t want to risk becoming associated with the hashtag as a brand (I’ve kind of failed at that anyway, but hey ho!).

I was also wary of developing a bias toward whatever ideas made it into the book. I always want to remain as impartial as I can to respectful critique of my ideas, as well as new and challenging ideas from others, and putting my current opinions down in a book would give me a vested interest to defend them. With blogging there is the space for opinions to evolve relatively safely. For instance, my views have evolved significantly on SAFe since my 2012 rant piece, and they have similarly evolved on #NoEstimates.

All that said, if I were to write a book on the topic, what would it look like?

While there are many cross-overs in my ideas and opinions with those of Vasco, who (with far more bravery and commitment than I could muster) did take the plunge and write a book, my book would certainly come at things from a different angle.

Right from when I started blogging and speaking about #NoEstimates around 5 years ago, the most interesting aspects of the topic to me were less about the actual act of estimating and more about the dysfunctions surrounding it. I actually have no issue with estimating (I even enjoy estimating! – shhhhh), and the angle of “estimates don’t work” is not one I subscribe to.

But there are certainly dysfunctions. Ask any audience at a software conference – or managers and teams alike at almost any software organisation – if they have any problems with estimation in their company. The energy in the room immediately changes to one of awkwardness.

Make no mistake, software folks have major issues with estimation.

This is even more interesting when you consider that organisations are supposedly trying to leverage the benefits of Agile Software Development – iterative and incremental delivery of value-generating software, with the ability to quickly jump on and deliver new ideas for the customers’ competitive advantage – but are struggling to adapt their planning and scheduling techniques accordingly. How do we keep to schedules now that we’re doing this Agile thing? – they ask. How can we ensure we meet our business goals if we’re focusing on customer goals? It’s no surprise that “scaled Agile” frameworks such as SAFe and LeSS are gaining popularity.

So, with that preamble, here are some of the topics I would explore in my hypothetical #NoEstimates book. They are in no particular order, but I have separated them into two different perspectives – that of the provider of estimates and the requestor.

Provider (e.g. developers, teams)

  • Shooting yourself in the foot
    Not incorporating appropriate levels of uncertainty into an estimate (e.g. giving a precise date when a set of scope will be delivered, saying a precise set of scope will be delivered on a fixed date, not calling out risks/assumptions/issues/dependencies as part of the estimate, etc.)
  • Not revisiting an estimate
    i.e. not monitoring progress toward the goal upon which the estimate is based and revising the estimate accordingly
  • Not understanding/asking for the reason behind the request for an estimate, and thus providing the wrong kind
    This is a problem for the requestor also (see below)
    e.g. “How long will this take?” might indicate a need for a relative comparison of two pieces of work for ROI purposes rather than a lead time assessment of one piece of work
  • Being solution rather than problem focused, so the wrong thing gets estimated
    This is a problem for the requestor too (see below)
    e.g. building an automated email system, integrated with MailChimp, when a manual email with no integration is all that is needed to deliver customer value and determine feature viability

Requestor (e.g. managers, business stakeholders)

  • Treating an estimate as a commitment (i.e. “you said it would be done by today” mentality)
    Likely to be an issue with the estimate not incorporating the appropriate level of uncertainty, as described above, or management not allowing it to do so
    Leads to a situation where everything has deadlines, most of which are arbitrary, so real deadlines don’t get prioritised and treated as such
  • Not understanding the devastating effect of queues on lead time
    Queues are typically the biggest contributor to cycle time in the current software product management paradigm
  • Not understanding and addressing other causes of variability
    Such as volatile teams, too many projects in flight (WIP), complicated technical/schedule/other dependencies
    Predictability comes from having a predictable environment, not from making predictions – I’ve likened this in the past to the Shinkansen bullet train system – building a network for predictable, high speed trains rather than trying to make trains faster or more predictable
  • Treating effort estimates as calendar time
    A symptom of queue, WIP and other variation ignorance, above
    “It will take 6 weeks” is often a best-case effort estimate, where many assumptions are made
    The actual time it will take (without compromising quality) is typically way longer, even if the actual effort on that work item accumulates to just 6 weeks
    The relationship between effort and cycle time is often referred to as “flow efficiency”, and is an interesting factor to consider when discussing this topic – given how low flow efficiency is in your typical software development organisation
  • Poor/no prioritisation
    With no actual order/sequence of desired work items, or one that changes constantly, it is very difficult for teams to make reliable estimates or sound trade-off decisions
  • Ignorance of cost of delay
    If economic urgency is not incorporated into work prioritisation, work will become urgent when it is too late to trade off other commitments – this causes compromises in quality and predictability, and means deadlines are more likely to be missed
  • Not understanding/asking for the reason behind why they need to make a request for an estimate, and thus asking for the wrong kind
    This is a problem for the provider also (see above)
    The requestor might also be the provider for another requestor (e.g. boss or client), so there can be a chain of misunderstanding
    They need to know why they need the estimate, and what kind, so they can give this information to the provider and have a productive dialogue about the best approach
  • Being solution rather than problem focused, so the wrong thing gets estimated
    This is a problem for the provider also (see above)
    e.g. asking the team to estimate how long it will take to build a fully fledged reporting system when there is a far simpler and quicker way of giving the customer what they need
    This not only reduces predictability but also removes the team’s ability to get value to the customer (and thus the business) sooner
  • Asking individuals and teams to be predictive beyond a short 1-4 week timespan rather than using an empirical process
    Due to the complex nature of product development, teams should only estimate individual backlog items and forecast how much they can do in a short timespan (e.g. 2-week sprint/iteration)
    For batches of items beyond this short timeframe (e.g. releases), empirical forecasting using “yesterday’s weather” should be used to answer “how long” or “when” questions, not asking developers
  • Asking for development estimates before a value and capacity assessment has been made
    How long a feature or project might take is utterly irrelevant if the anticipated business value of building that thing isn’t high enough compared with other opportunities, or there will not be capacity available soon enough to do so
    Yet often the requestor takes a back-to-front approach and wants to find cheap/quick options, or ones that will fit into a pre-defined schedule, rather than doing their homework with understanding value and capacity
    This leads to developers being constantly interrupted to provide estimates for low value work they may never do or, perhaps more worryingly, doing lots of low value work because it is estimated to be cheap
    On a related and ironic note, the higher the anticipated business value, the less it matters how long the work would actually take (assuming a standard cost of delay profile and that the value can be generated early enough to fund the development team) – a higher precision estimate (and actual) is needed when there is a lower margin between R (business value) and I (cost of the development team)
  • Not allowing a truly iterative approach, rendering the use of experiments by teams to “buy down” knowledge (and reduce risk) impossible
    Strongly linked with being solution focused (see above)
    If the team cannot iterate to solve a problem, they may be locked into building a solution which is later learned to be an incorrect one
    When the customer inevitably changes the requirements due to the emerging information, the organisation might be too invested in the current solution to throw it away (“sunk cost fallacy” comes in here), thus scope grows and expectations become misaligned with reality
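
Several of the dysfunctions above – queues, high WIP, effort estimates being read as calendar time – can be made concrete with a small flow efficiency calculation. This is a sketch using invented numbers, not data from any real team:

```python
# Flow efficiency = active work time / total elapsed (cycle) time.
# In queue-heavy, high-WIP organisations flow efficiency is low, so a
# "6 week" effort estimate implies far more elapsed calendar time.
# All figures below are illustrative.

def calendar_weeks(effort_weeks: float, flow_efficiency: float) -> float:
    """Elapsed time implied by an effort estimate at a given flow efficiency."""
    if not 0 < flow_efficiency <= 1:
        raise ValueError("flow efficiency must be in (0, 1]")
    return effort_weeks / flow_efficiency

effort = 6  # "It will take 6 weeks" (a best-case effort estimate)

for fe in (1.0, 0.4, 0.15):  # 15% is not unusual in software organisations
    print(f"flow efficiency {fe:>4.0%}: ~{calendar_weeks(effort, fe):.0f} weeks elapsed")
```

The point is not the specific numbers but the shape of the relationship: unless work flows with almost no queueing, effort and elapsed time diverge dramatically.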

I’m sure there are far more #NoEstimates topics I would cover in my book, but stopping short of actually writing it I think I’ll end here :).

#NoEstimates isn’t just about estimating

The #NoEstimates conversation is largely about estimating nowadays rather than NOT estimating.

Estimating, but in a probabilistic way. People often refer to this type of estimating as forecasting. Using cycle time. Throughput. Variance. Little’s Law. Monte Carlo.

All famously good stuff.
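
As a sketch of what that probabilistic style looks like in practice, here is a Monte Carlo forecast built by resampling a team's historical weekly throughput. The throughput history and backlog size are invented for illustration:

```python
import random

def monte_carlo_forecast(throughput_samples, backlog_size, trials=10_000, seed=42):
    """Simulate completion times by resampling historical weekly throughput.

    Returns the sorted list of simulated durations in weeks.
    Assumes every sample is > 0 (a zero-throughput week would never finish).
    """
    rng = random.Random(seed)
    durations = []
    for _ in range(trials):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            remaining -= rng.choice(throughput_samples)  # replay a past week
            weeks += 1
        durations.append(weeks)
    return sorted(durations)

# Hypothetical history: items completed in each of the last 8 weeks.
history = [3, 5, 2, 6, 4, 3, 5, 4]
results = monte_carlo_forecast(history, backlog_size=40)

# Report percentiles rather than a single date.
for pct in (50, 85, 95):
    weeks = results[int(len(results) * pct / 100) - 1]
    print(f"{pct}th percentile: done within {weeks} weeks")
```

The output is a range with confidence levels ("85% chance of being done within N weeks"), not a single deterministic date.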

But I don’t want people thinking that’s all there is to the conversation. Many folks have interpreted it that way.

For me, larger questions remain. For example, is it possible, in certain situations, to deliver value to the customer at a rate which negates the need for doing any estimating at all, both up front and ongoing? Quick enough that they do not need to make any decisions or commitments based on anticipated delivery, only what was actually delivered?

Beyond whether this is possible or not in certain contexts, why might it actually be important or desirable to be in this state of not needing estimates? I can get away with not eating apples, but is it actually useful for me to not eat apples?

Well, the fact that estimates are usually needed implies that decisions and commitments of some form are made based on them. This is a common argument cited as to why estimating is immutable when working with customers in uncertain domains.

However, often the knock on effects of an initially inaccurate estimate are damaging financially or culturally. So I can imagine, in certain situations, it might be possible, and desirable, for the customer to ask for delivery of tiny working increments which can provide value for them right away and, explicitly, no estimates are asked for because doing so would create potentially irreversible knock on effects. Perhaps losing another customer’s trust by not meeting your “commitment” to them. Perhaps having to trash another project for which you had a team lined up to work on if things “went to schedule”.

I can imagine a few reasons why we might want to enter a working relationship in which we explicitly value the rapid delivery of added value over the anticipated delivery of value at some future point. Not to mention the trusted working relationship side of things. “Customer collaboration over contract negotiation”.

These are the broader questions I’m interested in. We get it, we can forecast with data to avoid deterministic estimation rituals and provide more solid, transparent estimates of when we will be done, or what will be done by when.

But can #NoEstimates thinking actually take us further? Into whole new ways of working with our stakeholders and customers?

My Slicing Heuristic Concept Explained

This is a concept I devised a couple of years ago, and it seems there is a new #NoEstimates audience that would like to know more about it.

A Slicing Heuristic is essentially:

An explicit policy that describes how to "slice" work Just-In-Time to help us create consistency, a shared language for work and better predictability.

Crucially, the heuristic also describes success criteria to ensure it is achieving the level of predictability we require.

The Slicing Heuristic is intended to replace deterministic estimation rituals by incorporating empirical measurement of actual cycle times for the various types of work in your software delivery lifecycle. It is most effective when used for all levels of work, but can certainly be used for individual work types. For a team dabbling in #NoEstimates, a User Story heuristic can be an extremely effective way of providing empirical forecasts without the need for estimating how long individual stories will take.

However, if you are able to incorporate this concept from the portfolio level down, the idea is that you define each work type (e.g. Program, Project, Feature, User Story, etc.) along with a Slicing Heuristic, which forms part of that work type’s Definition of Ready.

For example,

"A feature ready to be worked on must consist of no more than 4 groomed user stories"

or

"A user story ready to be worked on must have only one acceptance test"

The success criteria will describe the appropriate level of granularity for the work type. For example, you might want user stories to take no more than 3 days, and features no more than 2 weeks.

Here is the really important part. The idea is not to slice work until you estimate it will take that long. You never explicitly estimate the work using the Slicing Heuristic. Instead, as the work gets completed across the various work types, you use the heuristic(s) to measure the actual cycle times, and then inspect and adapt the heuristic(s) if required.

At the user story level, I’ve found the “1 acceptance test” heuristic to be consistently effective across different domains for creating an average story cycle time of 3 days or less. However, there are alternatives. Instead of acceptance tests you could use, for example, the number of tasks:

"A user story must have no more than 6 tasks"

Here is an example Slicing Heuristic scenario for a Scrum team using the feature and user story heuristics described above:

  • Product Owner prioritises a feature that she wants worked on in the next Sprint
  • PO slices feature into user stories
  • If feature contains more than 4 stories, it is sliced into 2 or more features
  • PO keeps slicing until she has features consisting of no more than 4 user stories; they are now ready to be presented to the team
    Note: Unless this is the very first feature the team is developing, the PO now has an estimate of how long the feature(s) will take, based on historical cycle time data for the feature work type; no need to ask the team how long it will take
  • In Sprint Planning, team creates acceptance tests for each user story
  • If a story has more than 1 acceptance test, it is sliced into 2 or more stories
  • Team keeps slicing until all stories consist of only one acceptance test
    PO now has an even more reliable forecast of when the feature(s) will be delivered because she can now use the user story cycle time data in conjunction with the feature data
  • Team delivers each story, and records its cycle time in a control chart
  • If a story is taking longer than 3 days, it is flagged for conversation in Daily Standup
  • Multiple outliers are a sign that the heuristic should be adapted in the Sprint Retrospective
  • When the feature is delivered, its cycle time is measured also
  • Again, if features are taking longer than is acceptable for the heuristic, the heuristic should be adapted to improve predictability (e.g. reduce maximum number of user stories per feature to 3)
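
The measurement side of that scenario – recording story cycle times, flagging outliers against the heuristic's success criteria, and deciding whether the heuristic needs adapting – could be sketched like this. The data, target and tolerance are all illustrative assumptions, not part of the heuristic itself:

```python
from statistics import mean

STORY_TARGET_DAYS = 3    # success criterion for the user story heuristic
OUTLIER_TOLERANCE = 0.2  # adapt the heuristic if >20% of stories are outliers

def review_heuristic(cycle_times_days):
    """Check recorded story cycle times against the heuristic's success criteria."""
    outliers = [t for t in cycle_times_days if t > STORY_TARGET_DAYS]
    return {
        "average_days": mean(cycle_times_days),
        "outliers": outliers,
        "adapt_heuristic": len(outliers) / len(cycle_times_days) > OUTLIER_TOLERANCE,
    }

# Hypothetical cycle times (days) for stories sliced to one acceptance test each.
completed = [2, 1, 4, 2, 5, 2, 3, 1, 2, 8]
report = review_heuristic(completed)

print(f"average story cycle time: {report['average_days']:.1f} days")
for t in report["outliers"]:
    print(f"story took {t} days -> flag for conversation in Daily Standup")
if report["adapt_heuristic"]:
    print("multiple outliers -> revisit the heuristic in the Retrospective")
```

In real use the cycle times would come from the team's control chart, and the tolerance would be whatever level of predictability the organisation actually requires.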

#NoEstimates is neither the destination nor the journey

Being one of the early contributors to the #NoEstimates (NE) hashtag, and a regular blogger on the topic, I am understandably regarded as a “#NoEstimates advocate”. When I get introduced to folks at meetups and conferences, a typical exclamation is “Hey, you’re the #NoEstimates guy!”

Another consequence of my reputation as a pioneer of the “movement” is that I will often get asked questions that, when answered, are deemed to represent the views of all NE advocates or, more bizarrely, NE itself. It’s as if NE is a thing that can have an opinion, or is a single method/approach. “What does NE say about X?” or “Here’s what the NE’ers think“.

What some don’t realise is that there are wide and varied disagreements between so-called NE advocates. It’s similar to the variety of viewpoints that you would get within, say, a political party. The party represents a set of values and principles, but there will rarely be a situation where all the members agree with every policy proposed or pushed through in the party’s name. I guess the same could be said of Agile too.

Folks are naturally interested in the practicalities of what a #NoEstimates approach might look like. This is fantastic, and I welcome questions and discussion on this. I engage in such conversations often. But I do want to make a point about an underlying presumption behind most of the questions I receive. Here are some of the most typical ones:

“How do you prioritise at the portfolio level without estimates?”
“How can you make decisions in the presence of uncertainty without estimates?”
“How do you convince senior management to implement #NoEstimates?”
“How can we minimise the number of things we need to estimate?”

What these questions have in common is the presumption that “not estimating” at all levels of work is where we want to head. That the goal is to reduce our estimates across the portfolio, with zero estimates as utopia. That the premise of #NoEstimates is that the less we estimate, the more effective we will be.

But DOING NO ESTIMATES, or even FEWER ESTIMATES, has never been the destination from my point of view.

My focus has always been on improving the way we work such that estimating becomes redundant.

This means understanding our business better. Becoming more stable and predictable in our method of working. Building relationships based on high levels of trust and respect. Reducing dependencies between teams. And so on.

People ask “So, Neil, how do we get started with #NoEstimates? Should we simply stop estimating and see what happens?”

The answer to this is a categorical “NO“, at least from where I sit. There are a set of minimum conditions (or “barriers to entry”) before you can get anywhere near being in an environment where you do not need to estimate. Other NE’ers might not answer in the same way, but that has always been my stance. Read my earlier #NoEstimates posts if you don’t believe me!

My views have certainly evolved on the topic, and some of my early work might take a slightly more extreme stance. But I would never advise people to stop doing anything without knowing anything about their context. Even if I did know their context, I would be suggesting small experiments rather than simply stopping doing something that may be of value to that team and/or wider organisation.

Some people see #NoEstimates as meaning “NO ESTIMATES”, and can’t see beyond that.

To me, I see it more along the lines of:

#NoEstimatesAreImmutable
#HowMightDoingNoEstimatesAffectTheWayWeWorkAndTheOutcomes?
#NoEstimatesShouldBeDeterministic,TheyShouldBeProbabilistic
#NoEstimatesShouldBeOnesWhereTeamsAreAskedToEstimateBeyondTheNext2-4WeeksButInsteadShouldBeEstimatesBasedOnEmpiricalData
#NoEstimatesAreToBlame,ButGivenTheOngoingProblemsWithSoftwareEstimationWeMightWantToExploreAlternatives
#AreSomeDecisionsBetterMadeWithNoEstimates?
#NoEstimatesAreCommitments
#NoEstimatesAreAReplacementForAuthenticAdultConversationsAboutProgressAndDecisionsAboutWhatWe’reBuilding

If anyone wants to start tweeting to these hashtags, go ahead! I prefer to tweet to where the conversation actually is (and shorter hashtags :)), and trust that the reader does their own research and understands the nuances of the debate. You need to scratch well beneath the surface to find where the “NE’ers” agree and disagree.

The destination, in our jobs as software professionals, is becoming more effective at building great software for our customers. The journey is one of continuous improvement via experimentation. We can use Agile, Lean and Kanban principles to help us with that. We can use Scrum, XP, Kanban Method, SAFe, LeSS and other methods to help us with concrete implementations of the principles.

#NoEstimates started as just another Twitter hashtag. It has since become an enduring symbol of an industry that is unhappy with the prevailing way estimation is done, and the effect that has on what we’re trying to achieve professionally and personally. Some critics have cited “poor management” as the root cause of the dysfunctions we see around estimation. If that’s true, and estimates aren’t to blame, what next? How do we address a widespread problem with poor management?

Simply telling people how to do better estimations won’t do the trick. #ShouldWeDoNoEstimates? Perhaps, perhaps not. Either way, let’s at least have a bloody good debate about how we go about things in the workplace. Let’s put our heads together and “uncover better ways of working”.

Behind the NE hashtag is a world of opinion, ideas, principles and approaches that may be worth exploring and experimenting with on your journey to becoming more effective at software development. Many have done so. Many continue to do so.

I hope you do too 🙂

The common ground of #Estimates and #NoEstimates

One of the criticisms of the #NoEstimates (NE) stance is the view that even contemplating not estimating is a non-starter – because estimates are “for those paying our salaries”, not those doing the work. That the business folk in our organisations need to know what will happen and when in order to run their company successfully.

OK, even if NE advocates could successfully argue against that assertion, perhaps it is time we started to acknowledge the “impediments” of the debate? The back and forth arguments that prevent us from moving forward to a more constructive place.

Perhaps it is time to find the common ground, and build on that.

A simple truth is that business wants (needs) both speed and predictability. I think we can all (mostly) agree on that 🙂

Some NE critics argue that we should learn better estimation skills such that our predictability improves. Yep, sure. Difficult to argue that learning to do something better is a bad thing.

Given that we have to do a lot of estimating as software practitioners, learning and using more effective estimation techniques seems a good idea.

However, in return for NE advocates acknowledging that we need to provide estimates for those asking, and get better at doing so, I think it’s time for the critics to acknowledge that arguing for better estimation as the answer to all the dysfunctions surrounding software estimation is another impediment to the debate moving forward.

I see common ground in that we are all trying to create better predictability for our teams, customers and internal stakeholders. If we put aside “better estimation” as a way of doing that, how else might we do it?

Better predictability can be achieved in many other ways:

  • Stability and autonomy of teams
  • Limited WIP of initiatives (1 per team at any given time)
  • Frequency of delivery of “done” software
  • Cadence of collaboration – planning, building and reviewing together
  • High visibility and transparency of work and progress
  • Shared understanding of variability, its effects on long range planning and how/when to try to minimise it or leverage it to our advantage

to name but a few.

To take the NE debate forward, we need to find ways to provide “those paying our salaries” with the predictability they need, while at the same time moving away from the dysfunctional behaviours associated with holding teams accountable to estimates in highly unpredictable environments.

What is an unpredictable software development environment? One in which one or more of the things listed above are not being addressed. It might not be a stretch to suggest that’s pretty much every software shop on the planet.

There is common ground between critics and advocates in this debate. Let’s move on from “no estimates for anyone!” and “just learn better estimation techniques” – these arguments will only perpetuate the butting of heads.

Instead, let’s explore – together – how we might create more predictable environments for our software teams, such that estimation becomes easier (and, in some cases, redundant).

“We are uncovering better ways of developing software by doing it and helping others do it.”

~ Agile Manifesto

Context is no longer King

One of my frustrations as a software practitioner is our seemingly programmed human bias toward keeping the status quo.

I guess it wouldn’t be so bad if the status quo was actually something approaching effective, inspiring or at least motivating. But unfortunately the reality for many (most) people making their living in the crazy (in a bad way) world of software development remains one of boredom, dysfunction, wasting time on unimportant things, going along with stupid decisions (or lack of them), stress, hatred of Mondays, being put in our place by our “superiors”, et cetera, et cetera.

“23,858 tweets and counting. Worthwhile or a colossal waste of time?”

I tweeted this yesterday. Often I wonder why I stay in an industry that suffers from the afflictions listed above. My work mood swings from utter dejection to tremendous elation. Like the software we create, the variability in my mental state is subject to wild fluctuations.

Here’s the thing. The reason I do this; the reason I stay in the industry, tweet opinions, tips and debate; the reason I write these blog posts; the reason I give a significant portion of my time freely, mostly at my own cost, to talk at meetup groups, conferences and company brown-bag lunches; is…

Because I want to play a small part in creating a better world of work for those involved in software development.

Particularly developers, who I believe have been treated for years like some kind of underclass in organisations of all sizes and industries. Crammed like sardines into some dark, dingy corner of the building, given to-the-letter specifications of some crappy software system that will keep them busy for a few months and then will never be used by a soul. Forced to commit to an estimate of how long this will all take (minus whatever needs to be trimmed off because the estimate doesn’t fit into the already agreed timelines). Constantly being micro-managed and asked “why is this taking so long?” and “why is this so hard?”.

Yes, I’m angry about this. And I want things to change. So I’m trying to do that in my own little way.

I want us to start treating smart, motivated people with the respect they deserve – right from the moment we hire them. Why on earth companies put engineers through 3 or 4 rounds of interviews and then fail to actually trust them once they get the job is beyond me. Managers continue to spoon feed solutions to their subordinates because they “can’t be trusted” to solve business problems quickly and efficiently enough.

This is why I am challenging the status quo in our industry. Sometimes what I write or say is found provocative by some. One dimensional. Context-less. “It depends on the context”, people say. “There’s no one right way. No advice is universal.”

I get disappointed (sometimes annoyed) when people who have never met me and know nothing about my professional reputation and abilities confuse what I tweet as “professional advice”, and then start questioning my integrity and ability as a consultant. It is hypocritical and way off the mark.

The reason why people write blog posts with provocative titles, and tweet with controversial hashtags, is because it is interesting. It invites conversation and debate. It stirs things up a bit. God knows (and so should the rest of us) that this industry is in dire need of some stirring up.

I was questioned by a couple of people about a tweet I wrote recently:

“In fact my tip is NEVER do a MoSCoW prioritisation. The implied fixing of scope makes change very difficult. Order things instead.”

A tweet, I might add, that was retweeted dozens of times, so obviously resonated with many.

I was told that my opinion was “unjustified”. That I shouldn’t make “categorical statements”. That “never is a long time”. That some poor soul may take my advice (assuming a tweet constitutes professional advice?!) and destroy a project because I am uninformed about their “context”.

I am constantly told the same kind of things about the #NoEstimates debate. That I can’t tell people not to estimate because I don’t know their context. Their boss might need estimates. Sometimes we need them, sometimes we don’t. Et cetera, et cetera.

With all due respect to these people, they are completely missing the point. For a start, I think it’s ridiculous to suggest that people would read a tweet from little old me and that would somehow create a chain of events that would destroy a project. Even if I were someone with anywhere near the influence and expertise of the great Ron Jeffries or Kent Beck, I don’t think I would wield that kind of power over people.

I do not use Twitter to dish out free professional advice. It is a forum for opinion, conversation and debate. Well written tweets resonate with people in some way, such that they retweet them, favourite them or, preferably, start conversations about them.

Perhaps reading a tweet like the one above will encourage someone to think a bit more about a practice that they have always done without question. To look into alternative ways of organising and prioritising work. To completely reject what I’m saying. Good tweets create a reaction, and whether this reaction is an angry disagreement or a nodding of the head, it has done its job.

Twitter is not to be taken too seriously, but the conversations it can create are serious and, I believe, are helping us as an industry to increasingly question long established practices. This can help us improve the way we work. The way we think. It is vitally important for us to have our world view challenged on a regular basis. This is how we learn and evolve.

I don’t just want to read tweets saying that “it depends on context”. Stuff that confirms my world view. Stuff that I agree with all the time. If every piece of advice or opinion “depends on context” then we might as well just give up trying to improve things.

“Depending on your context, you might want to consider alternatives to MoSCoW prioritisation. However, if it works for you then fine, just keep on doing it.”

Politically correct, perhaps, but it’s not exactly going to give me a reaction. I’ll probably not even notice that tweet on my timeline. “Be happy”. Ooh, can’t say that, it depends on context.

Moving away from social media for a second and into the real world of professional coaching and consulting: as Agile coaches, I believe we can do much, much more for our clients. If someone tells me that I’m being unprofessional for suggesting better alternatives to MoSCoW then we are on different planes, I’m afraid. I know that there are certain principles and practices that have proved effective for me time and time again.

I’m not alone on this. I believe some statements are universally applicable, regardless of context. Questioning the way we do things doesn’t depend on context. Respecting each other and striving to work more collaboratively doesn’t depend on context. Adopting good engineering practices will help you to deliver incrementally and iteratively at a constant pace over time – this is universally applicable also.

Of course context is important – to me that’s so obvious that I can’t believe people keep saying it. We know that. It goes without saying.

But it’s not the point. The point is that many, many companies are still struggling to grasp the principles and practices that we in the Agile and Lean community know can increase effectiveness. Our clients deserve better advice from us than “well, if that’s working for you then keep on doing it”. We all know that something “working” is a perception and may actually be destroying the morale of the employees, or even putting the business as a whole at risk.

It is not “professional” for us to keep playing the context card. We need to be bold in our decisions and advice giving. Take risks. Challenge the status quo. Encourage innovation, not just of products but of process also. Be a true change agent, not just blend into the environment.

If you like what I tweet and blog, that’s wonderful so please do keep following! If you don’t like it, please unfollow. Twitter is wonderful because it is the ultimate pull system. If we don’t like what we see we can block and unfollow. We can filter out content that doesn’t interest us. It’s brilliant. And I shall continue to use it to challenge, provoke and generate conversation and debate. I cannot begin to measure how much I have learned and evolved my thinking thanks to conversations on, or starting on, Twitter. I’m pretty sure others will say the same.

And I will continue to help clients, in their context, get better whilst trying to create happy and humane workplaces. I want to live in a world where people enjoy going to work. It’s time away from our family and friends, and we spend most of our time there, so for God’s sake if we’re not enjoying it then what are we doing?

I don’t get it right all the time. Probably not even most of the time. But I do this because I care. I will continue to risk getting lambasted by people and losing the respect of gurus and experts. Like the rest of us, I don’t know it all – far from it. But I do not learn by being uncontroversial and not pushing the boundaries of what I believe or how I think things should work.

Thanks for listening 🙂

Note: I will write a follow-up post about MoSCoW prioritisation itself. Aside from the fact that it perpetuates the myth of “requirements” (if something is not a “must-have”, how can it be a requirement?), I’m not including my further ideas on the topic here because it’s not really what this post is about.

Many have already written about the damage it can do and some better alternatives to set you on the road to delivering a successful project (read: building a successful product). For starters, Joakim Holm wrote a great post about it the other day. And there’s lots more to investigate using our friend Google!