The curse of knowledge and the danger of fuzzy statements

Fuzzy statements in lessons learned are very common, and are a result of “the curse of knowledge”.


I blogged yesterday about Statements of the Blindingly Obvious, and how you often find these in explicit knowledge bases and lessons learned systems, as a by-product of the “curse of knowledge”.

There is a second way in which this curse strikes, and that is what I call “fuzzy statements”.

It’s another example of somebody writing something down as a way of passing on what they have learned, and writing it in such a way that the meaning is obvious to them, but carries very little information for the reader.

A fuzzy statement is one built around an unqualified adjective, for example:

  • Set up a small, well qualified team… (How small? 2 people? 20 people? How well qualified? University professors? Company experts? Graduates?)
  • Start the study early… (How early? Day 1 of the project? Day 10? After the scope has been defined?)
  • A tighter approach to quality is needed… (Tighter than what? How tight should it be?)

You can see that, in each case, the writer has something to say about team size, schedule or quality, but hasn’t really said enough for the reader to understand what to do, other than in a generic “fuzzy” way, using adjectives like “small”, “well qualified”, “early” and “tighter” which need to be quantified.

In each case, the facilitator of the session or the validator of the knowledge base needs to ask additional questions. How small? How well qualified? How early? How tight?

Imagine if I tried to teach you how to bake a particular cake, and told you “Select the right ingredients, put them in a large enough bowl. Make sure the oven is hotter”. You would need to ask more questions before you could follow this recipe.

Again, it comes back to Quality Control.

Any lessons management system or knowledge base suffers from Garbage In, Garbage Out, and the unfortunate effect of the Curse of Knowledge is that people’s first attempt to communicate knowledge is often, as far as the reader is concerned, useless garbage.
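
This kind of quality control can even be partially automated. As a rough illustration (the adjective list and the matching rule here are my own assumptions, not any standard), a validator could run a simple check that flags unquantified adjectives for follow-up questioning:

```python
# Sketch of a quality-control check that flags "fuzzy statements" in lessons.
# The FUZZY_TERMS list and the word-boundary match are illustrative assumptions.

import re

# Unquantified adjectives that usually signal a fuzzy statement
FUZZY_TERMS = {
    "small", "large", "early", "late", "tighter", "better",
    "well qualified", "enough", "effective", "careful",
}

def flag_fuzzy_terms(lesson: str) -> list[str]:
    """Return the fuzzy terms found in a lesson statement.

    A non-empty result is a prompt for the facilitator to ask
    follow-up questions (How small? How early? How tight?).
    """
    text = lesson.lower()
    return sorted(t for t in FUZZY_TERMS
                  if re.search(r"\b" + re.escape(t) + r"\b", text))

lesson = "Set up a small, well qualified team and start the study early."
print(flag_fuzzy_terms(lesson))  # ['early', 'small', 'well qualified']
```

A flagged term does not make the lesson wrong; it is simply a prompt for the facilitator or validator to ask “how small?”, “how early?”, “how tight?” before the lesson enters the knowledge base.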

Apply quality control to your lessons, and de-fuzz the statements.


The curse of knowledge, and stating the obvious

The curse of knowledge is the cognitive bias that leads to your Lesson Database being full of “statements of the obvious”.

There is an interesting exercise you can do, to show how difficult it is to transfer knowledge.

This is Elizabeth Newton’s tapper-listener exercise from 1990.

Form participants into pairs. One member is the tapper; the other is the listener. The tapper picks out a song from a list of well-known songs and taps out the rhythm of that song to the listener. The tapper then predicts how likely it is that the listener will correctly guess the song based on the tapping. Finally, the listener guesses the song.

Although tappers predicted that listeners would be right 50% of the time, listeners were actually right less than 3% of the time.

The difference between the two figures (50% and 3%) is that to the tapper, the answer is obvious. To the listener, it isn’t.

This is the “curse of knowledge”.

Once we know something—say, the melody of a song—we find it hard to imagine not knowing it. Our knowledge has “cursed” us. We have difficulty sharing it with others, because we can’t readily re-create their state of mind, and we assume that what is clear to us, is clear to them.

Transferring knowledge through the written word (for example in lessons learned, or in online knowledge bases) suffers from the same problem as transferring a song by tapping. People THINK that what they have written conveys knowledge, because they can’t put themselves in the mind of people who don’t already have that knowledge.

Just because they understand their own explanations, that does not mean those explanations are clear to the reader.

This effect can be seen in written knowledge bases and lessons databases, and often appears as Statements of the Blindingly Obvious (SOTBOs).

These are statements that nobody will disagree with, but which obviously carry some more subtle import to the writer which the reader cannot discern. These include statements like:

  • “It takes time to build a relationship with the client” (Really? I thought it was instantaneous). 
  • “A task like this will require careful planning”. (Really? I thought careless planning would suffice)
  • “Make sure you have the right people on the team.” (Really? I thought we could get away with having the wrong people)
  • “Ensure that communication and distribution of information is conducted effectively”. (Really? I thought we would do it ineffectively instead)
The writer meant to convey something important through these messages, but failed completely. Why is this? Often because the writer had no help, no facilitation, and was not challenged on the emptiness of their statements.

In each case, any facilitator who had been involved in the capture of the knowledge, or any validator of the knowledge base, would ask supplementary questions:

  • How much time does it take? 
  • What would you need to do to make the planning careful enough? 
  • What are the right people for a job like this? 
  • What would ensure effective communication?
This further questioning is all part of the issue of knowledge quality assurance, to filter unhelpful material out of the knowledge base, or lessons management system, and to turn an unintelligible set of taps into a full tune.

Without this, people rapidly give up on the knowledge base as being “unhelpful”, and full of SOTBOs.


14 barriers to lesson learning

Lesson learning, though a simple idea, faces many barriers to its successful deployment. Here are 14 of them.

I posted, back in 2009, a list of 100 ways in which you could wreck organisational lesson-learning. These were taken from my book, The Lessons-Learned Handbook, and represent the many ways in which the lessons supply chain can be broken or corrupted.

Here’s an alternative view.

From the paper “Harvesting Project Knowledge: A Review of Project Learning Methods and Success Factors” by Martin Schindler and Martin J. Eppler, we have a list of 14 barriers to effective lesson learning through project debriefs, drawn from an extensive literature review.

  1. High time pressure towards the project’s end (completion pressure, new tasks already wait for the dissolving team).  
  2. Insufficient willingness for learning from mistakes of the persons involved.  
  3. Missing communication of the experiences by the involved people due to “wrong modesty” (with positive experiences) or the fear of negative sanctions (in case of mistakes). 
  4. Lacking knowledge of debriefing methods. 
  5. Underestimation of process complexity which a systematic derivation of experiences brings along. 
  6. Lacking enforcement of the procedures in the project manuals.  
  7. Missing integration of experience recording into project processes.  
  8. Team members do not see a (personal) use of coding experience and assume to address knowledge carriers directly as more efficient.  
  9. Difficulties in co-ordinating debriefings. 
  10. Persons cannot be engaged for a systematic project conclusion, since they are already involved in new projects. 

In those cases where a lessons learned gathering takes place, the gained knowledge is often not edited for reuse, or not accepted as valuable knowledge by others. If debriefings are conducted, there is still a certain risk that the results (i.e. the insights compiled by a project team):

  11. are not well documented and archived, 
  12. are described too generically or are not visualized where necessary, which prevents reuse due to a lack of context (e.g. it is too difficult to understand or not specific enough for the new purposes), 
  13. are archived in a way so that others have difficulties retrieving them, 
  14. are not accepted, although they are well documented and easy to locate (the so-called “not invented here” syndrome).


5 success factors for project learning

Learning effectively from projects is a goal for many organisations. Here are some ways to do it.

The list below of success factors for project-based learning was proposed by Schindler and Eppler, two researchers working out of the University of St. Gallen (Switzerland), in their paper “Harvesting Project Knowledge: A Review of Project Learning Methods and Success Factors“.

It’s a pretty good list! Their text is in quotation marks; my commentary on each point follows in normal font.

  1. “From single review to continuous project learning – we stress the necessity for continuous project learning through regular reviews”. In Knoco, we recommend any project define the methods and frequency up front through the use of a Knowledge Management Plan, and that suitable processes are the After Action Review and the Retrospect, or Lessons Capture meeting, or even a Learning History in the case of mega-projects. This is the Process component of project-based learning. Learning within the project is covered by After Action Reviews; export of learning to other projects is covered by Retrospects.
  2. “New project roles and tasks – the need for new roles for project knowledge management should have become obvious”. This is the Roles and Accountabilities component of project-based learning – we recommend that someone in the project team itself – a project Knowledge Manager – takes accountability for ensuring learning processes are applied, making use of facilitation skills as appropriate. This need not be a full-time role, but it should be a single point of accountability.
  3. “Integration of learning and knowledge goals into project phase models – project learning is too important to be left to chance or to the initiative of motivated individuals”. This is what we include as part of the Governance component of project-based learning. By embedding knowledge management processes into the project phase models or project management framework, we set a very clear expectation that project learning is important and part of normal project activity. Lesson learning should be emphasised in the project management policy.
  4. “Integration of learning and knowledge goals into project goals – adding knowledge goals to every project step can foster systematic reflection about every milestone in a project”. This we also include as part of the Governance component of project-based learning. This is the performance management element of Governance. If learning is in the project goals or the project objectives, then the project team will be judged and rewarded by whether they learn, as part of judging and rewarding whether they met their goals. 
Schindler and Eppler therefore cover three out of the four enablers for Knowledge Management – Roles and Accountabilities, Processes, and Governance. 
The one they do not cover is Technology, perhaps because there were few effective lessons-management technologies around in 2003. Therefore I would like to propose a 5th enabler, as follows:
  5. Application of an effective lessons management system, including workflow to ensure the lessons reach those who must take action as a result, and a tracking system to track the effectiveness and application of learning.
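
To make this last enabler concrete, here is a minimal sketch of the workflow such a lessons management system might implement. The state names, the `Lesson` fields and the single linear workflow are illustrative assumptions on my part; real systems vary.

```python
# Minimal sketch of a lessons-management workflow with tracking.
# The WORKFLOW states and Lesson fields are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Optional

# A lesson is routed through these states until the resulting action is
# embedded in process documentation and the lesson can be closed.
WORKFLOW = ["identified", "validated", "assigned", "action_complete", "closed"]

@dataclass
class Lesson:
    title: str
    action_owner: Optional[str] = None
    state: str = "identified"
    history: list = field(default_factory=list)

    def advance(self, action_owner: Optional[str] = None) -> str:
        """Move the lesson to the next workflow state, recording it for tracking."""
        i = WORKFLOW.index(self.state)
        if i == len(WORKFLOW) - 1:
            raise ValueError("Lesson is already closed")
        if action_owner:
            self.action_owner = action_owner
        self.state = WORKFLOW[i + 1]
        self.history.append(self.state)   # the audit trail is the tracking system
        return self.state

lesson = Lesson("Define scope before selecting the team")
lesson.advance()                          # validated by a knowledge-base owner
lesson.advance(action_owner="PMO")        # assigned to someone who must act
print(lesson.state, lesson.action_owner)  # assigned PMO
```

The point of the workflow is that a lesson cannot simply sit in a database: each state change pushes it towards a named owner, and the history gives the tracking needed to measure whether learning is actually being applied.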


The Risk Radar – what could possibly go wrong?

An analysis of your past lessons can be used to create a Risk Radar for future projects.

Michael Mauboussin, in his book “Think Twice”, talks about the planning fallacy, and compares the inside and outside view.

He points out that people planning a task or a project tend to take the inside view – they think through how they will do the project, they think about the risks and challenges they will face, and how they will overcome them, they add up the times for each step, and use that as their baseline. Almost always they are over-optimistic, unless they also use the outside view, and calibrate their baseline against what usually happens.

Mauboussin says:

“If you want to know how something is going to turn out for you, look how it turned out for others in the same situation.”

To find out what happened to people in similar situations, we can use Lessons Analysis. If you have an effective lesson learning system you may have records from hundreds or thousands of past projects, and you have a huge database of “how things turned out”. Through lessons analysis you can recognise common themes across a number of lessons, and see the challenges and risks that projects are likely to meet, and identify those which are most common and most impactful. You can then use these to create a risk radar for future projects.

In the 90s, I was in charge of a lesson-learning system in Norway. At one point we had lessons from over 300 projects in the system, and we grouped them into a series of “things that commonly go wrong in our projects”.

In Norway, we treated our list as our “Risk Radar” – the more of these problem areas that a project faced, the greater the risk to effective project delivery. The top 7 problem areas were as follows (remember these were small office-based projects):

  1. The project team is new, or contains a significant number of new members
  2. There is no clear full-time customer for the project 
  3. Satisfactory completion of the project relies on an external supplier. 
  4. The project is a low priority ‘back-burner’ project. 
  5. The project scope is poorly defined. 
  6. The project relies on an unusual degree of IT support. 
  7. The project involves consultation with large numbers of staff.

Any project where more than one of these was true had “Danger on its Radar”.

Such a project therefore needed to pay considerably more attention than usual to project management, and was advised to scan back through the relevant lessons and guidance, and speak to the people involved, in order to find out how the danger could be averted.
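
The scoring behind such a radar can be as simple as a checklist count. The sketch below paraphrases the seven problem areas above and uses the “more than one factor” threshold from the text; the wording of the factors and the representation are my own illustrative choices:

```python
# Sketch of a Risk Radar: count how many known problem areas apply to a project.
# Factor wording paraphrases the seven areas above; threshold follows the text.

RISK_FACTORS = [
    "new team, or many new members",
    "no clear full-time customer",
    "relies on an external supplier",
    "low-priority back-burner project",
    "poorly defined scope",
    "unusual degree of IT support",
    "consultation with large numbers of staff",
]

def risk_radar(applies: list[bool]) -> tuple[int, bool]:
    """Return (score, danger) for a project's yes/no answers to each factor."""
    score = sum(applies)
    return score, score > 1   # more than one factor true = danger on the radar

# Example: a well-scoped project with a new team and an external supplier
answers = [True, False, True, False, False, False, False]
score, danger = risk_radar(answers)
print(score, danger)  # 2 True
```

A project that trips the threshold is flagged for extra oversight, much as the “Train Wreck Indicator” described below later flagged large engineering projects for additional help.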

A few years later, the organisation applied the same concept to big engineering projects, and came up with what they called the “Train Wreck Indicator” – a similar indicator based on common reasons for failure, determined through lessons analysis. Any project that scored a Red Light on the train wreck indicator was given additional help and oversight in order to help it steer a path through the problem areas, and avoid becoming a train wreck.

Tools such as this help projects gain the Outside View to aid them in their planning.

If you have an effective lessons learned system, and are building up a library of project lessons, then consider a similar analysis, to build your own “Risk Radar“.

 Contact us if you want to know more about our lessons analysis service.


A lesson is not just something you learned, but something you can teach

People who have learned from experience must understand their responsibility to teach others.

I often say at the start of Lessons learned meetings, that when identifying and recording lessons we should think of them not as something we have learned, but as something we can teach others.

This is a subtle shift in emphasis from looking inwards to looking outwards, and from looking backward to looking forwards. It also identifies the responsibility of the knowledgeable person; a responsibility to others rather than to themselves.

For much of the lessons workshop, the participants are looking back at what happened. “We had a difficult time with the client”, they might decide, and then follow this Observation with a whole set of reminiscences about how difficult the client was, and what trouble it caused.

With good facilitation, they can also reach Insights about why the problems happened.

However the facilitator then has to turn the conversation outwards, and ask – “based on what you have learned from your reflection on the client difficulties, what can we teach the organisation about how to better deal with clients?” The participants need to stop analysing history, and start looking at generic learning they can pass on to others.

That is a critical value-added step, and it is that step, and the subtle mindset shift from passive learners to active teachers, that allows the participants to turn an observation and the subsequent insights into a Lesson.


Knowledge Management in mega-projects

KM in mega-projects is much the same as KM in any project, but at a larger scale and with a greater degree of rigour.


Knowledge Management as applied to projects is a pretty well-understood field (see for example my book on Knowledge Management for Teams and Projects). It consists of a rigorous structure of Learning Before, During and After, and drawing on the knowledge of others in the organisation in order to anticipate, avoid, and (if necessary) solve problems.

So what’s the difference between KM in projects, and KM in mega-projects?  The answer is, Nothing much! Other than the scale, the principles and practices are identical.

The advice below is for the megaproject leadership team.

Setting up the KM framework. The KM system for a mega-project needs to be more robust, and better resourced, than for a normal project. You will need:

  • a KM policy for the megaproject
  • a good and robust KM plan, including a definition of all the unknowns, and how to make them known
  • a dedicated knowledge manager (potentially full time)
  • a lesson management system for the megaproject
  • a wiki, and blogs, for building knowledge as the project continues
  • a set of KM processes, as described below.

Learning Before. Because the costs, risks and unknowns are far greater in mega-projects, “Learning Before” is especially important. This includes learning from the project management structures of successful mega-projects (according to the book “Megaprojects and Risk”, a lot depends on how the incentives are assigned and how the risks are allocated, for example), and learning from the typical reasons for cost and schedule over-run of megaprojects. One of the biggest causes of overrun is project wishful thinking – ignoring the unknown unknowns. These are things like:

  • discovering the soil conditions are far worse than expected
  • finding unknown archaeological sites (Such as the unexpected discovery of 150-year-old revolutionary-era sites and Native American artefacts on the Boston “Big Dig”)
  • changes in government, leading to the need to renegotiate
  • changes in commodity price
You can’t know in advance what these are likely to be, but you can add in contingency, you can make a probabilistic risk, cost and duration estimate with some of these unknown unknowns included, and you can gather enough knowledge to understand the major categories of risk, and have contingencies to deal with them. Knowledge management and risk management are closely allied in projects. Any megaproject should dedicate as much time as possible to Learning Before – forewarned is forearmed.
Learning During. Mega-projects are incredibly complex, and if they are to learn, they need a comprehensive and integrated system for “Learning During”:

  • learning events such as After Action Reviews need to be a requirement for all contractors
  • there need to be Lessons Learned Integrators in all teams and all contractor organisations
  • each contractor needs a lessons log, with cross-team lessons escalated to a lessons management system
  • there needs to be a learning team at project leadership level, part of whose role is to pick up the weak signals and the first inklings that problems need to be fixed.

This is a military learning model, and many mega-projects are military in scale. Learning During is not something a mega-project can afford to ignore – rapid learning can save you millions – and the megaproject should develop and implement its own internal knowledge management framework, complete with governance.

Learning After. The megaproject needs to hold Retrospects after every major milestone, and the learning needs to be not just about engineering, but about the way the whole project is integrated, the reason for any delays and overruns, and also the softer aspects such as culture, behaviours and communication. It may be politically difficult for megaprojects to produce open, honest and public lessons after the completion of the project, given the implications of liquidated damages, and given the typical ties between megaprojects and politics. That should not stop them from trying, however, especially if the intent is to provide guidance for future megaprojects. Certainly every company involved needs to collect and document their own internal lessons for future use. The megaproject leadership team should even consider the appointment of learning historians, so that the Learning History of the project can be constructed.

Drawing on the knowledge of others. There may be online global communities of practice for megaprojects that you can draw on such as the PMI and the CII, and you can potentially convene an advisory group of past mega-project managers who can act as a sounding board and who can provide advice and experience during the course of the project.

Knowledge management, if correctly applied, can be a major factor in the success of projects, driving down costs, duration and risk.

Where megaprojects are concerned, with their complexities, unknowns, and political pressures, Knowledge Management becomes absolutely essential.


Lesson learning as a supply chain

Another reprise from the archives – the idea of lessons being the “car parts” of knowledge

This post is a combination of three ideas, to see if they come up with something new.

  • Idea number 1 – the idea of an organisation as a knowledge factory, sparked by Lord Browne’s quote – “anyone in the organization who is not directly accountable for making a profit should be involved in creating and distributing knowledge that the company can use to make a profit”  

  • Idea number 2 – the idea that corporate process is a compilation or synthesis of all the lessons learned over time  

So the combination idea looks like this:

The inner ring is a supply chain where components are manufactured, and assembled into products (like a car plant, or a construction site).

The outer ring is the lesson learning cycle, one of the procedural loops in Knowledge Management. Please note that this is only one of the many ways in which KM works; this is the systematic push-driven cycle involving the collection of explicit lessons, and there are many other types of interaction in KM (push and pull, connect and collect).

In our analogy, we have lessons from experience being collected, distributed through lesson management, and assembled into continuously improving corporate processes, rather like car parts are created, distributed, and assembled into cars. The links within this chain are as follows:

  1. The raw materials for the supply chain are the experiences of the individuals in the workplace, who are trying to apply the processes in different contexts, in a changing world.
  2. The supplier of the raw materials therefore are the individuals themselves.
  3. Experiences are manufactured into lessons through processes of analysis and discussion – team meetings such as Retrospects, and After Action Reviews. Through discussion and analysis, individual unconscious knowledge is made conscious, and the experiences of many individuals are combined into the lessons of the team or the lessons from an event. These lessons are the components – the car parts within the supply chain. 
  4. Now we get into the Distribution part of the supply chain. We need to get those parts to the assembly plant. This is a part where many lessons learned systems break down. They leave those parts (lessons) in the warehouse (database), and expect people to come and find them (remember that scene from Raiders of the Lost Ark?). We need instead to have active lessons management, to push the lessons to those who need them.
  5. Those who need them are (primarily) the people in charge of corporate process, who need to keep those processes fresh and updated as new learning comes in. The Process owners, or SMEs.
  6. However that is not the end of the story. The assembled knowledge needs to get to the consumer – through the equivalent of car showrooms (community portals), or supermarkets (Intranets) or street markets (wikis).
  7. The consumer is the knowledge worker. They apply the new knowledge, and in doing so, gain new experience. 
And so the cycle begins again.


Why so much knowledge sharing, so little knowledge seeking?

Knowledge Management requires knowledge seeking and knowledge sharing. But why so much focus in internal processes on sharing and so little on seeking?

Image: “Learning Happens” by shareski, on Flickr

One of the standard models for Knowledge Management in project environments is the idea of “Learning Before, During and After“.

Ideally these three activities should be embedded in project process, so that a project

  1. Starts by reviewing and accessing all the knowledge it needs,
  2. Learns as it goes, improving its processes during the course of the project, and
  3. Identifies, analyses, documents and shares the new knowledge it has gained, for the sake of future projects. 
For the project itself, the most powerful of the three is “Learning Before”. If a project can maximise its knowledge up front, especially if the team can discover the things it doesn’t know that it doesn’t know, then success is much more likely. “Learning Before” activities such as Knowledge Gap Analysis, KM planning or Peer Assist can overcome some of the more serious cognitive biases for KM, and are the nearest thing to KM Silver Bullets that we have. Learning Before activities drive receptivity, increase absorptive capacity, and help teams “want to learn”.
And yet, when you look at internal company project frameworks, or even at generic frameworks such as Prince 2 or ISO, there is almost always a requirement for capturing and sharing lessons after the project, and no such requirement for Learning Before. According to our global survey, 68% of companies require their projects to do some sort of Learning After, but only 15% require them to do Learning Before. Prince 2 has a required, and well documented, step at the end, for creating lessons (although this could be much improved!), but has no step at project start-up requiring a search for, and review of, existing knowledge.
This astounds me.

Why even bother to collect lessons at the end of a project, if nobody reviews them at the start of the next project?

I think the answer is that it is psychologically easier to share than it is to learn.  A project team can feel proud and recognised (even a little smug at times) for sharing lessons, while asking for lessons can feel like an admission of incompetence (“can anyone help me with this?”). 
Learning After is Teaching – Learning Before is Learning, which is Much Harder. Knowledge reuse is more difficult than knowledge sharing, yet that is all the more reason we should make it a focused and deliberate step. 
You can get around some of these barriers by introducing non-judgemental techniques such as Peer Assist and Knowledge Management plans, which take the exposure out of asking for help or seeking knowledge. And you also address it by developing a culture of Asking, rather than a culture of Sharing.

Please, let’s introduce the full cycle of Learning Before, During and After, and let’s not skip the Before step!


How emergency services developed a capability for Lessons Management

Lesson Management is a core component of Lesson Learning. Here is the story of how this capability was developed in Australian emergency services.

This comes from the recent issue of the Australian Journal of Emergency Management, where Heather Stuart and Mark Thomason describe how Lesson Management was first recognised as important, and then developed across the services.

The need for systematic lesson management was recognised in the aftermath of some serious learning events, for example the following:

The South Australian Country Fire Service (CFS) had developed a lessons capacity following the Wangary Fires in 2005. The Wangary fire and other fires on that day were the most destructive fires, in terms of loss of life and property, that the CFS had seen since the Ash Wednesday fires in 1983. Given the losses, community grief and the Coronial inquest into this event, CFS recognised that a more formal approach into learning from these events was required and that the service owed it to the community to demonstrate improvements as soon as possible. This was the first time that a formal approach had been utilised in CFS for collecting, analysing and theming lessons.

Emergency Management Australia – the coordination body for the state emergency services – hosted a conference in 2011 to discuss lesson learning, and found that many of the state bodies were at a similar stage, with the lessons process being driven by one or two passionate individuals. A scan of the current state found:

  • a strong culture of identifying themes, trends and lessons but not much success at ensuring lessons were learnt by creating lasting behaviour change 
  • no consistent model for capturing, analysing, sharing and implementing lessons, leading to poorly defined roles, responsibilities and expectations
  • “blame and shame”, although diminished, was still prevalent in some parts of the sector 
  • a lack of visibility in the process of developing lessons, leading to a perception that personal observations and contributions were not influencing change
  • many champions of learning practice in the field but there was a risk of losing momentum because of the perceived information ‘black holes’ 
  • emergency management agencies (e.g. responder agencies, government departments and non- government partners) were working separately on lessons management, creating silos of knowledge and disconnected learning opportunities 
  • there was a limited understanding of principles and benefits of lessons

The first step to address lesson management across the emergency services was to build a practitioners network, and then to draft a Lesson Management Handbook (one of the best I have seen, by the way). This led to a standard terminology across the states, and increasingly a standard lesson management model (Read about the State of Victoria model here). A common gap was identified – data analysis of lessons collection – and addressed through training, which led to the development of “national lessons”; lessons which appear across all states.

Through this process, lesson management, both within the individual states and across the nation, was born, defined, standardised and deployed.

As the authors conclude:

Learning lessons as a lessons practitioner is greater than the process itself and an individual agency’s activities… The synergies gained through collaboration between lessons practitioners across the emergency management sector has contributed to strengthening the lessons capability in each of the participating agencies and has resulted in greater achievements in this sphere than agencies would have achieved working in isolation.
