A case study of a failed learning system

When lesson learning failed in the Australian Defence Force, they blamed the database. But was this all that was at fault?

Here’s an interesting 2011 article entitled “Defence lessons database turns off users”. I have copied some of the text below to show that, even though the lessons management software seems to have been very clumsy (which is what the title of the article suggests), there was much more than the software at fault.

 “A Department of Defence database designed to capture lessons learned from operations was abandoned by users who set up their own systems to replace it, according to a recent Audit report. The ADF Activity Analysis Data System (ADFAADS) was defeated by a “cultural bias” within Defence, the auditor found. Information became fragmented as users slowly abandoned the system”.

So although the article title is “defence lessons database turns off users”, the first paragraph says that it was “defeated by cultural bias”. There’s obviously something cultural at work here ……

“Although the auditor found the structure and design of the system conformed to ‘best practice’ for incident management systems, users found some features of the system difficult to use. Ultimately it was not perceived as ‘user‐friendly’, the auditor found. Convoluted search and business rules turned some users against the system”. 

….but it also sounds like a clumsy and cumbersome system

“In addition, Defence staff turnover meant that many were attempting to use ADFAADS with little support and training”.

…with no support and no training.

“An automatically-generated email was sent to ‘action officers’ listing outstanding issues in the system. At the time of audit, the email spanned 99 pages and was often disregarded, meaning no action was taken to clear the backlog”.

There needs to be a governance system to ensure actions are followed through, but sending a 99-page email? And with no support and follow-up?

 “It was common for issues to be sent on blindly as ‘resolved’ by frontline staff to clear them off ADFAADS, even though they remain unresolved, according to the auditor”.

Again, no governance. There needs to be a validation step for actions, and sign-off for “resolution” should not be devolved to frontline staff.

 “Apart from a single directive issued by Defence in 2007, use of the database was not enforced and there were no sanctions against staff who avoided or misused it”.

There’s the kicker. Use of the lessons system was effectively optional, with no clear expectations, no link to reward or sanction, no performance management. It’s no wonder people stopped using it.

So it isn’t as simple as “database turned off users”. It’s a combination of

  • Poor database
  • Poor notification mechanism
  • No support
  • No training
  • No incentives
  • No governance
  • No checking on actions

It’s quite possible that if the other items had been fixed, then people might have persevered with the clumsy database, and it’s even more likely that if they had built a better database without fixing the other deficiencies, then people still would not use it.

What they needed was a lessons management system, not just a database.
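
As an illustration of the difference, here is a minimal sketch of the kind of record a lessons management system needs (the field names are hypothetical, not the ADFAADS schema): a named owner, a deadline, an independent validation step, and a status that governance can report on, rather than free text sitting in a database.

    # Minimal sketch only - hypothetical field names, not the ADFAADS schema.
    # A managed lesson has an owner, a deadline, a validation step and a status
    # that governance can report on.
    from dataclasses import dataclass
    from datetime import date
    from enum import Enum
    from typing import List, Optional

    class LessonStatus(Enum):
        IDENTIFIED = "identified"
        ACTION_ASSIGNED = "action assigned"
        ACTION_COMPLETE = "action complete"   # claimed complete by the action officer
        VALIDATED = "validated"               # independently checked, not self-signed-off
        EMBEDDED = "embedded"                 # reflected in doctrine, process or training

    @dataclass
    class Lesson:
        title: str
        recommendation: str
        action_officer: str                   # a named owner, not "frontline staff"
        due_date: date
        status: LessonStatus = LessonStatus.IDENTIFIED
        validated_by: Optional[str] = None    # sign-off comes from someone else

    def overdue(lessons: List[Lesson], today: date) -> List[Lesson]:
        """Governance report: open lessons past their due date - a short, targeted
        list per owner, rather than one 99-page email to everyone."""
        open_states = {LessonStatus.IDENTIFIED, LessonStatus.ACTION_ASSIGNED}
        return [l for l in lessons if l.status in open_states and l.due_date < today]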

So what was the outcome? According to the article,

…..establish a clear role and scope for future operational knowledge management repositories, and develop a clear plan for capturing and migrating relevant existing information ….. prepare a “user requirement” for an enterprise system to share lessons.

In other words – “build a better database and hope they use it”. Sigh.

View Original Source (nickmilton.com) Here.

What are the outputs of the KM workstream?

KM organisations need a Knowledge workstream as well as a Product/Project workstream. But what are the knowledge outputs?

I have blogged several times about the KM workstream you need in your organisation; the knowledge factory that runs alongside the product factory or the project factory. But what are the outputs or products of the knowledge factory?

The outputs of the product factory are clear – they are designed and manufactured products being sold to customers. The outputs of the project factory are also clear – the project deliverables which the internal or external client has ordered and paid for.

We can look at the products of the KM workstream in a similar way. The clients and customers for these are knowledge workers in the organisation who need knowledge to do their work better; to deliver better projects and better products. It is they who define what knowledge is needed. Generally this knowledge comes in the following forms:
  • Standard practices which experience has shown are the required way to work. These might be design standards, product standards, standard operating procedures, norms, standard templates, algorithms and so on. These are mandatory, they must be followed, and have been endorsed by senior technical management.
  • Best practices and best designs which lessons and experience have shown are currently the best way to work in a particular setting or context. These are advisory, they should be followed, and they have been endorsed by the community of practice as the current best approach.
  • Good practices and good options which lessons from one or two projects have shown to be a successful way to work. These might be examples of successful bids, plans, templates or designs, and they have been endorsed by the community of practice as “good examples” which might be copied in similar circumstances, but which are not yet robust enough to be recognised as “the best”. 
  • More generic accumulated knowledge about specific tasks, materials, suppliers, customers, legal regimes, concepts etc.
The project/product workstream also creates outputs which act as inputs to the knowledge workstream; these are the knowledge deliverables, the lessons which capture hindsight, and the useful items which can be stored as good practices and good options. The link between lessons and best practices is described here, and shows how the two workstreams operate together to gather and deliver knowledge to optimise results. 
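
As a sketch only (the class and level names are mine, not a standard taxonomy), the distinction between the output types above is essentially one of endorsement and how binding they are, which a knowledge base can make explicit:

    # Illustrative sketch only: the output types above differ mainly in who has
    # endorsed them and how binding they are.
    from dataclasses import dataclass
    from enum import Enum

    class PracticeLevel(Enum):
        STANDARD = "mandatory - endorsed by senior technical management"
        BEST = "advisory - endorsed by the community of practice as the current best"
        GOOD = "good example - endorsed as an option, not yet proven to be the best"
        REFERENCE = "accumulated knowledge about tasks, materials, suppliers, customers etc"

    @dataclass
    class KnowledgeAsset:
        title: str
        level: PracticeLevel
        endorsed_by: str    # e.g. senior technical management, or a community of practice
        context: str        # the setting in which the practice applies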

View Original Source (nickmilton.com) Here.

Observations, Insights, Lessons – how knowledge is born

Knowledge is born in a three-stage process of reflection on experience – here’s how.

I think most people accept that knowledge is born through reflection on experience. The three-stage process in which this happens is the core of how the military approach learning from experience, for example as documented in  this presentation from the Australian Army (slide 12).

The three stages are the identification of  Observations, Insights and Lessons, collectively referred to as OILs. Here are the stages, using some of the Australian Army explanation, and some of my own.

  • Observations. Observations are what we capture from sources, whether they be people or things or events. Observations are “What actually happened” and are usually compared to “What was supposed to happen”. Observations are the basic building blocks for knowledge but they often offer a very limited or biased perspective on their own. However storing observations is at least one step better than storing what was planned to happen (see here). For observations to be a valid first step they need to be the truth, the whole truth (which usually comes from multiple perspectives) and nothing but the truth (which usually requires some degree of validation against other observations and against hard data).
  • Insights. Insights are conclusions drawn from patterns we find looking at groups of observations.  They identify WHY things happened the way they did, and insights come from identifying root causes. You may need to ask the 5 whys in order to get to the root cause.  Insights are a really good step towards knowledge due to their objectivity.  The Australian Army suggests that for the standard soldier, insights are as good as lessons. 
  • Lessons. These are the inferences from insights, and the recommendations for the future. Lessons are knowledge which has been formulated as advice for others, and the creation of lessons from insights requires analysis and generalisation to make the insights specific and actionable. The Australian Army defines lessons as “insights that have specific authorised actions attached…. directed to Army authorities to implement the stated action”, and there is a close link between defining an actionable lesson, and assigning an action to that lesson.

This progression, from Observation to Insight to Lesson, represents the methodology of learning by reflection. The Retrospect meeting and the (smaller scale) After Action Review both provide a structured discussion format which moves increments of knowledge through the three stages.
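
As an illustration only (this is my own sketch, not the Australian Army’s system), the Observation-to-Insight-to-Lesson progression can be thought of as three linked records, each building on the one before:

    # Minimal illustrative sketch of the Observation -> Insight -> Lesson progression.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Observation:
        what_happened: str        # "what actually happened"
        what_was_expected: str    # "what was supposed to happen"
        source: str               # the person, event or data it came from

    @dataclass
    class Insight:
        root_cause: str               # the WHY, e.g. reached by asking the 5 whys
        evidence: List[Observation]   # the pattern of observations behind it

    @dataclass
    class Lesson:
        insight: Insight
        recommendation: str           # advice formulated for others
        assigned_action: str          # the specific, authorised action
        action_owner: str             # the authority directed to implement it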

In other organisations these three stages are separated. Observations are collected, analysts use these to derive insights, and then an authoritative body adds the action and turns the insights into lessons. My personal preference is to address all three steps as close as possible to the action which is being reviewed, using the same team who conducted the action to take Observations through to Lessons.

But however you divide the process, and whoever conducts the steps, these three stages of Observation, Insight and Lesson are fundamental to the process of learning from experience. 

View Original Source (nickmilton.com) Here.

The "One Year After" knowledge capture event.

Many of us are used to holding knowledge capture events at the end of a project.  There is also merit in repeating this exercise one year (or more) later.

Imagine a project that designs and builds something – a factory, for example, or a toll bridge, or a block of student accommodation. Typically such a project may capture lessons throughout the project lifetime, using After Action Reviews to capture project-specific lessons for immediate re-use, and may then capture end-of-project lessons using a Retrospect, looking back over the life of the project to identify knowledge which can be passed on to future projects. This end-of-project review tends to look at the efficiency of the practices used during the project, and how these may be improved going forward.

The review asks “Was the project done right, and how can future projects be done better?”. However, what the review often does not cover is “Was the right project done?”

At the end of the project the factory is not yet operational, the bridge has only just opened to traffic, and you have just cut the ribbon on the student accommodation block. You do not yet know how well the outcome of the project will perform in practice.

This is where the One-Year operational lessons review comes in.

You hold this review after a year or more of operation. 
  • You look at factory throughput, and whether the lines are designed well, how they are being used, how effective the start-up process was, whether there are any bottlenecks in dispatch and access, and even whether the factory is in the correct location. 
  • You look at traffic over the bridge – is it at expected levels? Is it overused or underused? Is it bringing in the expected level of tolls? Does the bridge relieve congestion or cause congestion somewhere else? Does the road over the bridge have enough lanes?  Does it ice up in winter?
  • You look at usage of the student accommodation. Is it being used as expected? Are the kitchens big enough? Are there enough bike racks? Where is the wear and tear on the corridors? Where are accidents happening? What do the neighbours think?
In this review you are looking not at whether the project was done right, but whether it was the right project (or at least the right design). The One Year operational learning review will generate really useful lessons to help you improve your design, and your choice of projects, in future. 
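
As a minimal illustration of the distinction (the question wording below is mine), the two reviews interrogate the same asset from different angles:

    # Illustrative only: the end-of-project review and the one-year-after review
    # ask different questions about the same asset.
    END_OF_PROJECT_REVIEW = [
        "Was the project done right?",
        "How can future projects be delivered better?",
    ]
    ONE_YEAR_AFTER_REVIEW = [
        "Was it the right project, and the right design?",
        "How is the asset actually performing in operation?",
        "What would we design, or choose, differently next time?",
    ]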

Don’t stop collecting lessons at the end of the project, collect more once you have seen the results of a year or more of operations.

Contact Knoco for help in designing your lessons learned program.

View Original Source (nickmilton.com) Here.

Why storing project files is not the same as storing project knowledge

There is often an assumption that storing project files equates to managing knowledge on behalf of future projects. This is wrong, and here’s why.

For example, this video from the USACE Knowledge Management program says “if you digitise your paper files, throw in some metadata tagging, and use our search appliance, finding what you need for your [future] project is easy”. (I have added the word [future] because this was proposed as a solution for the next project, anticipating its needs in advance.)

However there is a major flaw with just digitising, tagging and filing the project documents and assuming that this transfers knowledge, and the flaw is that the project may have been doing things wrong, and almost certainly could have done things better with hindsight. Capturing the files will capture the mistakes, but will not capture the hindsight, which is where the learning and the knowledge resides.

It is that hindsight you need to capture, not the files themselves.

  • Don’t capture the bid package presented to the client, capture what you should have bid, the price you should have quoted, and the package you should have used. All of these things should come from the post-bid win/loss review.
  • Don’t capture the proposed project budget, capture the actual budget, where the cost overruns were, and how you would avoid these next time. This should come from the post-project lessons review.
  • Don’t capture the project resource plan, capture the resource plan you should have had, and the resourcing you would recommend to future projects of this type. This also should come from the post-project lessons review.
  • Don’t capture the planned product design, capture the as-built design, where the adjustments were made, and why they were made. (See my own experience of working from stored plans rather than the as-built design, which cost me £500 and ten dead trees.)
  • And so on. You can no doubt think of other examples.
Capturing the hindsight is extra work, and requires analysis and reflection through Knowledge Management processes such as After Action Review and Retrospect. These processes need to be scheduled within the project plan, and need to focus on questions such as
  • What have we learned?
  • What would we repeat?
  • What would we do differently?
  • What advice and guidance, with the benefit of hindsight, would we give to future projects?
These are tough questions, focused on deriving hindsight. Deriving hindsight is not easy, which is why these Knowledge Management processes need to be given time, space, and skilled facilitation. However they add huge value to future projects by capturing the lessons of hindsight. Merely filing and tagging the project files is far easier, but will capture none of the hindsight and so none of the knowledge.
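
As an illustration (the structure is mine, not taken from USACE or any specific program), the difference between filing a document and capturing hindsight can be made concrete: the hindsight record pairs what was planned with what actually happened, and with the advice for next time.

    # Illustrative sketch only: a hindsight record stores more than the filed document.
    from dataclasses import dataclass

    @dataclass
    class HindsightRecord:
        topic: str                         # e.g. bid price, resource plan, product design
        what_we_planned: str               # what the filed project document says
        what_actually_happened: str        # the as-built / as-delivered reality
        what_we_would_repeat: str
        what_we_would_do_differently: str
        advice_to_future_projects: str     # the hindsight, turned into foresight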

Capturing documents from previous projects and repeating what they did will cause you to repeat their mistakes. Better to capture their hindsight, so it can be turned into foresight for future projects. 

View Original Source (nickmilton.com) Here.

How the BBC learned from their Olympic coverage

Here is a case study of one organisation – the BBC – learning from experience. 

The Olympics was a massive event, on a scale that is unprecedented in peacetime. It is the biggest project a country will ever undertake, other than a war. I have already blogged about the Olympic Games KM program, but it’s not just the Games organisers that need Knowledge Management, it’s everyone involved, especially those involved in new or development areas.

One such area was Digital Broadcasting.  The London Olympics were the Digital Olympics, with more HD broadcasts, web feeds, twitter feeds etc than any other Games before. And to deliver the Digital Games, the country’s main broadcaster, the BBC, needed to develop and apply a raft of new technologies.

This post, from the BBC Internet Blog, shows how they used lesson-learning, in a structured and planned way, to ensure these products were delivered on time and to specification, and also to ensure that subsequent exercises will learn from this one.

I quote the relevant section

Lessons Learned 
 We captured the lessons from the programme as we went along, from end of sprint retrospectives and the rich data captured in our information systems above. At the end of the Olympics the project managers facilitated workshops to capture additional successes and improvement opportunities and share these with their colleagues.  

From these on-line surveys and interviews with stakeholders, over 300 lessons were captured in our project register. The key lessons touched on above were the importance of organising and planning the work amongst self-directed, multi-disciplinary teams, with a layer of information and communication support provided by the management team. The ability to prioritise the scope and deliver it incrementally with frequent opportunities to test at scale and in a live environment, contributed to the success of a once-in-a-lifetime sporting event for the BBC’s on-line audiences. 

 The experience and lessons learned in delivering this exciting programme will be carried forward by the team members into their next projects, while the environment and process limitations identified, will drive improvements in technology provision and uptake of best practices.

We can see in-project learning (end of sprint retrospectives) and post-project learning (at the end of the Olympics  – workshops to capture additional successes and improvement opportunities) – both activities built into the work program.

We worked with the BBC “live and learn” team about 10 years ago to introduce some of these learning principles, and they have subsequently been MAKE award finalists for many years. This blog shows that KM and learning practices are still alive and thriving at the BBC.

View Original Source (nickmilton.com) Here.

Is Learning from Failure the worst way to learn?

Is learning from failure the best way to learn, or the worst?

[Image: Classic Learning, by Alan Levine on Flickr]

I was driven to reflect on this when I read the following quote from Clay Shirky:

Learning from experience is the worst possible way to learn something. Learning from experience is one up from remembering. That’s not great. The best way to learn something is when someone else figures it out and tells you: “Don’t go in that swamp. There are alligators in there.”

Clay thinks that learning from (your own bad) experience is the worst possible way to learn, but perhaps  things are more complex. Here are a few assertions.

  • If you fail, then it is a good thing to learn from it. Nobody could argue with that!
  • It is a very good plan to learn from the failure of others in order to avoid failures of your own. This is Clay’s point; that learning only from your own failures is bad if, instead, you can learn from others. Let them fail, so you can proceed further than they did. 
  • If you are trying something new, then plan for safe failure. If there is nobody else to learn from, then you may need to plan a fail-safe learning approach. Run some early stage prototypes or trials where failure will not hurt you, your project, or anyone else, and use these as learning opportunities. Do not wait for the big failures before you start learning.
  • Learn from success as well. Learn from the people who have avoided all the alligators, not just from the people that got bitten. And if you succeed, then analyse why you succeeded and make sure you can repeat the success.
  • Learning should come first, failure or success second. That is perhaps the worst thing about learning from experience – the experience has to come first. In learning from experience “the exam comes before the lesson.” Better to learn before experience, as well as during and after.  

Learning from failure has an important place in KM, but don’t rely on making all the failures yourself. 

View Original Source (nickmilton.com) Here.

5 reasons why organisations don’t learn lessons.

If lesson learning is so simple, why do organisations so often fail to learn the big lessons?

We seem to be able to learn the little lessons, like improving small aspects of projects, but the big lessons seem to be relearned time and time again. Why is this?

Some of the answers to this question are explored in the article “Lessons We Don’t Learn: A Study of the Lessons of Disasters, Why We Repeat Them, and How We Can Learn Them” by Amy Donahue and Robert Tuohy. In this article they look at lessons from some of the major US emergency response exercises, and find that many of them are repeated time and again.

In particular, repeated lessons are found in the areas of

  • Failed Communications
  • Uncoordinated Leadership
  • Weak planning
  • Resourcing constraints
  • Poor Public relations

In fact these lessons come up so often that staff in disaster response exercises can almost tell you in advance what is going to fail.  People know that these issues will cause problems, but nobody is fixing them.  
You could draw up a similar list for commercial projects and find many of the same headings, with the possible addition of issues such as
  • Scoping and scope control
  • Subcontracting
  • Pricing
Donahue and Tuohy explore why it is so difficult to learn about these big issues, and come out with the following factors:
  1. Lack of motivation to fix the issues. As Donahue and Tuohy explain, 

“Individual citizens rarely see their emergency response systems in action. They generally assume the systems will work well when called upon. Yet citizens are confronted every day by other problems they want government to fix – failing schools, blighted communities, and high fuel prices. Politicians tend to respond to these more immediately pressing demands, deferring investments in emergency preparedness until a major event re-awakens public concern. As one incident commander put it, “Change decisions are driven by politics and scrutiny, not rational analysis.” 

In addition, they identify the sub-issues of a lack of ability to sustain commitment, a lack of a shared vision of what to do about the lessons, and a lack of willingness of one federal or local body to learn from others.

All of these issues are also seen in commercial organisations. There is a reluctance to make big fixes if it’s not what you are being rewarded for, a reluctance to learn from other parts of the organisation, and difficulties in deciding which actions are valid.

  2. An ineffective lessons capture process. Donahue and Tuohy identify the following points:

“While some (AAR) reports are very comprehensive and useful, lessons reporting processes are, on the whole, ad hoc. There is no universally accepted approach to the development or content of reports… often several reports come out of any given incident… agencies or disciplines write their own without consulting each other. These reports differ and even conflict … there is no independent validation mechanism to establish whether findings and lessons are “right” … concern about attribution and retribution is a severe constraint on candour in lessons reporting …  the level of detail required to make a lesson meaningful and actionable is lost … meaning is also diluted by the lack of a common terminology … AARs typically focus on what went wrong, but chiefs want to know what they can do that is right. Reports tend to detail things that didn’t work, without necessarily proposing solutions. … those preparing the reports need to understand not only what happened, but also why it happened and what corrective action would have improved the circumstances. Reports of this depth and quality are relatively rare … many opportunities to learn smaller but valuable lessons are foregone (and) there is no mechanism by which these smaller lessons can be easily reported and widely shared”. 

That’s quite a list, and again we can see these issues in industry as well. Lesson learning crucially needs a consistent, validated and candid capture process.

  3. An ineffective lessons dissemination process. Donahue and Tuohy make the following points:

“The value of even well-crafted reports is often undermined because they are not distributed effectively. Most dissemination is informal, and as a result development and adoption of new practices is haphazard. Generally, responders must actively seek reports in order to obtain them. … There is no trusted, accessible facility or institution that provides lessons learned information to first responders broadly, although some disciplines do have lessons repositories. (The Wildland Fire Lessons Learned Center and the Center for Army Lessons Learned are two prominent examples.)”

In fact, the Wildland Fire lessons center and the Center for Army Lessons Learned represent good practice (not just in technology, but in resourcing and role as well) and are examples that industry can learn from. However the issue here is not just dissemination of lessons, but synthesis of knowledge from multiple lessons – something the emergency services generally do not do.

  4. An ineffective process for embedding change. Donahue and Tuohy address this under the heading of “learning and teaching”.

“Most learning and change processes lack a formal, rigorous, systematic methodology. Simplistically, the lesson learning and change process iterates through the following steps: Identify the lesson > recognize the causal process > devise a new operational process > practice the new process > embed/institutionalize and sustain the new process.  It is apparent in practice that there are weaknesses at each of these steps….

The emergency response disciplines lack a common operating doctrine…. Agencies tend to consider individual incidents and particular lessons in isolation, rather than as systems or broad patterns of behavior. … Agencies that do get to the point of practicing a new process are lulled into a false sense that they have now corrected the problem. But when another stressful event happens, it turns out this new process is not as firmly embedded as the agency thought … Old habits seem “safer,” even though past experience has shown they do not work. 

Follow-up is inadequate … Lessons are not clearly linked to corrective actions, then to training objectives, then to performance metrics, so it is difficult for organizations to notice that they have not really learned until the next incident hits and they get surprised”.

This is the issue of lesson management, which represents Stage 2 of lesson learning maturity. Many organisations, such as the ones involved in emergency response, are stuck at Stage 1. Lesson management involves tracking and supporting lessons through the whole lifecycle, from identification through to validated and embedded change.

There really is little point spending time collecting lessons if these lessons are then not managed through to resolution.
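
As a minimal sketch (my own illustration of the traceability the quote calls for, not the authors’ tooling), lesson management means each lesson can be traced through to a corrective action, a training objective and a performance metric, and anything that cannot is flagged rather than quietly closed:

    # Illustrative only: trace each lesson through corrective action, training
    # objective and performance metric, and surface the ones that stall.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class ManagedLesson:
        lesson: str
        corrective_action: Optional[str] = None
        training_objective: Optional[str] = None
        performance_metric: Optional[str] = None

    def unmanaged(lessons: List[ManagedLesson]) -> List[ManagedLesson]:
        """Lessons not yet traced through to an embedded, measurable change."""
        return [l for l in lessons
                if not (l.corrective_action and l.training_objective and l.performance_metric)]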

  5. A lack of dedicated resources. Donahue and Tuohy again –

“Commitment to learning is wasted if resources are not available to support the process. Unfortunately, funds available to sustain corrective action, training, and exercise programs are even leaner than those available for staff and equipment”.

Lesson-learning and lesson management need to be resourced. Roles are needed such as those seen in the US Army and the RCAF, or in Shell, to support the process.  Under-resourcing lesson learning is a major reason why lesson learning so often fails.

Conclusions.

Donahue and Tuohy have given us some sobering reading, and provided many reasons why lesson learning is not working for disaster response. Perhaps the underlying causes are a failure to treat lesson learning as a system rather than as a product (i.e. a report with documented lessons), and a failure to treat lesson learning with the urgency and importance it deserves.

Make no mistake, many commercial organisations are falling into the same pitfalls that Donahue and Tuohy describe.

If learning lessons is important (and it usually is), then it needs proper attention, not lip service.

View Original Source (nickmilton.com) Here.

How can we learn lessons when every project is different?

This is another one to add to the “Common Knowledge Management Objections” list, and it’s worth thinking in advance what your counter-argument might be.

It’s a push-back you hear quite often in project organisations:

“We can’t do Knowledge Management, especially lessons learned, as all our projects are different”.

I last heard this from a technology company, and by saying “every project is different”, they meant that “every project has a different client, different product, different technical specifications”.

To some extent, they are correct, and this project variation reduces some of the impact of lesson learning. Certainly lessons add the most value when projects are the most similar. But even when projects change, learning still adds value.

Firstly, even on those technology projects, the process will be the same. 

The process of building an understanding of the client requirements, choosing and forming the team, selecting and managing subcontractors, balancing the innovation against the risk, communicating within the team, keeping the client requirements always in mind, managing quality, managing cost, managing time, managing expectations, managing risk, and so on.

There is a huge amount of learning to be done about the process of a project, even when the tasks are different.

Secondly, the other common factor for this technology company was that every project was a variant on their existing product. 

They learned a lot about the way the product worked, and the technology behind the product, with every new project. If this additional knowledge was not captured, then they would have to rediscover it anew every time.  If the knowledge is captured, then each project is an exploration into the technology, and builds the company understanding of the technology so that new products can be developed in future.

So even if every project is different, every project can still be a learning opportunity. 

View Original Source (nickmilton.com) Here.

Sharing knowledge by video – a firefighting example

The US Wildfire community is an area where Knowledge Management and Lesson Learning have been eagerly embraced, including the use of video.

The need for Knowledge Management and Lesson Learning is most obvious where the consequences of not learning are most extreme. Fire-fighting is a prime example of this – the consequences of failing to learn can be fatal, and firefighters were early adopters of KM. This includes the people who fight the ever-increasing numbers of grass fires and forest fires, known as wildland fires.

The history of lesson learning in the Wildfire community is shown in the video below, including the decision after a major tragedy in 1994 to set up a lesson learned centre to cover wildfire response across the whole of the USA.

The increase in wildland fires in the 21st century made it obvious to all concerned that the fire services needed to learn quickly, and the Wildland Lessons Learned Center began to introduce a number of activities, such as After Action Reviews and the collection of lessons from across the whole of the USA. A national wildfire “corporate university” is planned, of which the Lessons Learned Center will form a part.

The wildfire lessons center can be found here, and this website includes lesson learned reports from various fires, online discussions, a blog (careful – some of the pictures of chainsaw incidents are a bit gruesome), a podcast, a set of resources such as recent advances in fire practice, a searchable incident database, a directory of members, and the ability to share individual lessons quickly. This is a real online community of practice.

Many of the lessons collected from fires are published as short videos on the Wildland Lessons Center YouTube channel, available to firefighters on handheld devices. These videos share lessons from a particular fire and speak directly to the firefighter, asking them to imagine themselves in a particular situation. See the example below from the “close call” deployment of a fire shelter during the Ahorn fire in 2011, which includes material recorded from people actually caught up in the situation.

Sometimes lessons can be drawn from multiple incidents, and combined into guidance. Chainsaw refueling operations are a continual risk during tree felling to manage forest fires, as chainsaw fuel tanks can become pressurised, spraying the operator with gasoline when the tank is opened (the last thing you want in the middle of a fire). Lessons from these incidents have been combined into the instructional video below.

This video library is a powerful resource, with a very serious aim – to save lives in future US Wildland fires. 

View Original Source (nickmilton.com) Here.
