The NATO Lessons Learned Portal

The video below is a neat introduction to the concept behind the new Lessons Learned Portal at NATO.

The video is publicly available on the YouTube channel of the JALLC, the Joint Analysis and Lessons Learned Centre at NATO.

The YouTube description is as follows:

The NATO Lessons Learned Portal is the Alliance’s centralized hub for all things related to Lessons learned. It is managed and maintained by the JALLC, acting as NATO’s leading agent for Lessons Learned. 

Observations and Best Practices that may lead to Lessons to be Learned can be submitted to the Portal, and the JALLC will ensure that these Observations find their way through the NATO Lessons Learned Process. 

The information shared on the NATO Lessons Learned Portal can help save lives. The little piece of information you have may be the fragment missing to understand the bigger problem/solution – make sure you share it.


A case study of a failed learning system

When lesson learning failed in the Australian Defence Force, they blamed the database. But was this all that was at fault?

Here’s an interesting 2011 article entitled “Defence lessons database turns off users”. I have copied some of the text below to show that, even though the lessons management software seems to have been very clumsy (which is what the title of the article suggests), there was much more than the software at fault.

“A Department of Defence database designed to capture lessons learned from operations was abandoned by users who set up their own systems to replace it, according to a recent Audit report. The ADF Activity Analysis Data System (ADFAADS) was defeated by a “cultural bias” within Defence, the auditor found. Information became fragmented as users slowly abandoned the system”.

So although the article title is “defence lessons database turns off users”, the first paragraph says that it was “defeated by cultural bias”. There’s obviously something cultural at work here ……

“Although the auditor found the structure and design of the system conformed to ‘best practice’ for incident management systems, users found some features of the system difficult to use. Ultimately it was not perceived as ‘user‐friendly’, the auditor found. Convoluted search and business rules turned some users against the system”. 

….but it also sounds like a clumsy and cumbersome system

“In addition, Defence staff turnover meant that many were attempting to use ADFAADS with little support and training”.

…with no support and no training.

“An automatically-generated email was sent to ‘action officers’ listing outstanding issues in the system. At the time of audit, the email spanned 99 pages and was often disregarded, meaning no action was taken to clear the backlog”.

There needs to be a governance system to ensure actions are followed through, but sending a 99-page email? And with no support and follow-up?

 “It was common for issues to be sent on blindly as ‘resolved’ by frontline staff to clear them off ADFAADS, even though they remain unresolved, according to the auditor”.

Again, no governance. There needs to be a validation step for actions, and sign-off for “resolution” should not be devolved to frontline staff.

 “Apart from a single directive issued by Defence in 2007, use of the database was not enforced and there were no sanctions against staff who avoided or misused it”.

There’s the kicker. Use of the lessons system was effectively optional, with no clear expectations, no link to reward or sanction, no performance management. It’s no wonder people stopped using it.

So it isn’t as simple as “database turned off users”. It’s a combination of

  • Poor database
  • Poor notification mechanism
  • No support
  • No training
  • No incentives
  • No governance
  • No checking on actions

It’s quite possible that if the other items had been fixed, then people might have persevered with the clumsy database, and it’s even more likely that if they had built a better database without fixing the other deficiencies, then people still would not use it.

What they needed was a lessons management system, not just a database.
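
As an illustration of the difference, here is a minimal Python sketch (my own, not a description of ADFAADS or any real system) of what “management” adds to a database: each lesson has a named action officer, “resolution” must be validated by someone other than that officer, and reminders are targeted per person rather than sent as one enormous email.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    SUBMITTED = "submitted"
    ASSIGNED = "assigned"
    ACTION_TAKEN = "action taken"
    VALIDATED = "validated"

@dataclass
class Lesson:
    title: str
    action_officer: str = ""
    status: Status = Status.SUBMITTED

    def assign(self, officer: str) -> None:
        self.action_officer = officer
        self.status = Status.ASSIGNED

    def record_action(self) -> None:
        self.status = Status.ACTION_TAKEN

    def validate(self, validator: str) -> None:
        # Governance: sign-off for "resolution" is not devolved to the person
        # who took the action; an independent validator closes the loop.
        if validator == self.action_officer:
            raise ValueError("resolution must be validated independently")
        self.status = Status.VALIDATED

def reminders(lessons):
    """Send each action officer only their own outstanding items,
    rather than one 99-page email listing everything."""
    outstanding = {}
    for lesson in lessons:
        if lesson.status in (Status.ASSIGNED, Status.ACTION_TAKEN):
            outstanding.setdefault(lesson.action_officer, []).append(lesson.title)
    return outstanding

lesson = Lesson("Radio batteries failed in cold conditions")
lesson.assign("officer_a")
print(reminders([lesson]))   # {'officer_a': ['Radio batteries failed in cold conditions']}
```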

So what was the outcome? According to the article,

…..establish a clear role and scope for future operational knowledge management repositories, and develop a clear plan for capturing and migrating relevant existing information ….. prepare a “user requirement” for an enterprise system to share lessons.

In other words – “build a better database and hope they use it”. Sigh.


How the Australian Emergency Services manage lessons

Taken from this document, here is a great insight into lesson management from Emergency Management Victoria. 

Emergency Management Victoria coordinates support for the state of Victoria, Australia during emergencies such as floods, bush fires, earthquakes, pandemics and so on. Core to their success is the effective learning of lessons from various emergencies.
The diagram above summarises their approach to lesson learning, and you can read more in the review document itself, including summaries of the main lessons under 11 themes.
  • They collect Observations from individuals (sometimes submitted online), and from Monitoring, Formal debriefs, After Action Reviews and major reviews.
  • These observations are analysed by local teams and governance groups to identify locally relevant insights, lessons and actions required to contribute to continuous improvement. These actions are locally coordinated, implemented, monitored and reported. 
  • The State review team also take the observations from all tiers of emergency management, and analyse these for insights, trends, lessons and suggested actions. They then consult with subject matter experts to develop an action plan which will be presented to the Emergency Management Commissioner and Agency Chiefs for approval.
  • The State review team supports the action plan by developing and disseminating supporting materials and implementation products, and will monitor the progress of the action plan.

This approach sees lessons taken through to action both at local level and at State level, and is a very good example of Level 2 lesson learning.
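
As a rough illustration of that two-tier flow (a sketch of my own; the field names and threshold are not EMV's), local teams act directly on their own observations, while the State review team looks across all tiers for recurring themes that warrant a state-level action plan:

```python
from collections import Counter

# Illustrative observation records; the field names and values are my own.
observations = [
    {"source": "after action review", "tier": "regional", "theme": "communications"},
    {"source": "online submission",   "tier": "local",    "theme": "communications"},
    {"source": "formal debrief",      "tier": "state",    "theme": "resourcing"},
]

def local_analysis(obs, tier):
    """Local teams and governance groups act on their own tier's observations directly."""
    return [o for o in obs if o["tier"] == tier]

def state_analysis(obs, threshold=2):
    """The State review team looks across all tiers for recurring themes,
    which become candidate lessons in an action plan put forward for approval."""
    counts = Counter(o["theme"] for o in obs)
    return [theme for theme, count in counts.items() if count >= threshold]

print(local_analysis(observations, "local"))   # the single local observation
print(state_analysis(observations))            # ['communications']
```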


What are the outputs of the KM workstream?

Organisations need a Knowledge workstream as well as a Product/Project workstream. But what are the knowledge outputs?

I have blogged several times about the KM workstream you need in your organisation; the knowledge factory that runs alongside the product factory or the project factory.  But what are the outputs or  products of the knowledge factory?
The outputs of the product factory are clear – they are designed and manufactured products being sold to customers. The outputs of the project factory are also clear – the project deliverables which the internal or external client has ordered and paid for. 
We can look at the products of the KM workstream in a similar way. The clients and customers for these are knowledge workers in the organisation who need knowledge to do their work better; to deliver better projects and better products. It is they who define what knowledge is needed. Generally this knowledge comes in four forms:
  • Standard practices which experience has shown are the required way to work. These might be design standards, product standards, standard operating procedures, norms, standard templates, algorithms and so on. These are mandatory, they must be followed, and have been endorsed by senior technical management.
  • Best practices and best designs which lessons and experience have shown are currently the best way to work in a particular setting or context. These are advisory, they should be followed, and they have been endorsed by the community of practice as the current best approach.
  • Good practices and good options which lessons from one or two projects have shown to be a successful way to work. These might be examples of successful bids, plans, templates or designs, and they have been endorsed by the community of practice as “good examples” which might be copied in similar circumstances, but which are not yet robust enough to be recognised as “the best”. 
  • More generic accumulated knowledge about specific tasks, materials, suppliers, customers, legal regimes, concepts etc.
The project/product workstream also creates outputs which act as inputs to the knowledge workstream; these are the knowledge deliverables, the lessons which capture hindsight, and the useful items which can be stored as good practices and good options. The link between lessons and best practices is described here, and shows how the two workstreams operate together to gather and deliver knowledge to optimise results. 
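
One way to picture the first three output types (a sketch of my own, not a standard KM schema) is as a single knowledge asset record whose endorsement level tells the knowledge worker how much weight to give it:

```python
from dataclasses import dataclass
from enum import Enum

class Endorsement(Enum):
    STANDARD = "mandatory; endorsed by senior technical management"
    BEST_PRACTICE = "advisory; endorsed by the community of practice as the current best"
    GOOD_PRACTICE = "example; endorsed as a good option, not yet proven to be the best"

@dataclass
class KnowledgeAsset:
    topic: str
    content: str
    endorsement: Endorsement

    def guidance(self) -> str:
        # How much weight the knowledge worker should give this asset.
        if self.endorsement is Endorsement.STANDARD:
            return "Must be followed."
        if self.endorsement is Endorsement.BEST_PRACTICE:
            return "Should be followed in this context."
        return "May be copied in similar circumstances."

asset = KnowledgeAsset("bids", "Example of a successful bid package", Endorsement.GOOD_PRACTICE)
print(asset.guidance())   # May be copied in similar circumstances.
```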


The "One Year After" knowledge capture event.

Many of us are used to holding knowledge capture events at the end of a project.  There is also merit in repeating this exercise one year (or more) later.

Imagine a project that designs and builds something – a factory, for example, or a toll bridge, or a block of student accommodation. Typically such a project may capture lessons throughout the project lifetime, using After Action Reviews to capture project-specific lessons for immediate re-use, and may then capture end-of-project lessons using a Retrospect, looking back over the life of the project to identify knowledge which can be passed on to future projects. This end-of-project review tends to look at the efficiency of the practices used during the project, and how these may be improved going forward. 
The review asks “Was the project done right, and how can future projects be done better?” However, what the review often does not cover is “Was the right project done?”
At the end of the project the factory is not yet operational, the bridge has only just opened to traffic, and you have just cut the ribbon on the student accommodation block. You do not yet know how well the outcome of the project will perform in practice. 

This is where the One-Year operational lessons review comes in.

You hold this review after a year or more of operation. 
  • You look at factory throughput, and whether the lines are designed well, how they are being used, how effective the start-up process was, whether there are any bottlenecks in dispatch and access, and even whether the factory is in the correct location. 
  • You look at traffic over the bridge – is it at expected levels? Is it overused or underused? Is it bringing in the expected level of tolls? Does the bridge relieve congestion or cause congestion somewhere else? Does the road over the bridge have enough lanes?  Does it ice up in winter?
  • You look at usage of the student accommodation. Is it being used as expected? Are the kitchens big enough? Are there enough bike racks? Where is the wear and tear on the corridors? Where are accidents happening? What do the neighbours think?
In this review you are looking not at whether the project was done right, but whether it was the right project (or at least the right design). The One Year operational learning review will generate really useful lessons to help you improve your design, and your choice of projects, in future. 

Don’t stop collecting lessons at the end of the project, collect more once you have seen the results of a year or more of operations.

Contact Knoco for help in designing your lesson learned program.


Why storing project files is not the same as storing project knowledge

There is often an assumption that storing project files equates to managing knowledge on behalf of future projects. This is wrong, and here’s why.

For example, this video from the USACE Knowledge Management program says “if you digitise your paper files, throw in some metadata tagging, and use our search appliance, finding what you need for your [future] project is easy”. (I have added the word [future], as this was proposed as a solution for the next project rather than for the current project anticipating things in advance.)

However there is a major flaw with just digitising, tagging and filing the project documents and assuming that this transfers knowledge, and the flaw is that the project may have been doing things wrong, and almost certainly could have done things better with hindsight. Capturing the files will capture the mistakes, but will not capture the hindsight, which is where the learning and the knowledge resides.

It is that hindsight you need to capture, not the files themselves.

  • Don’t capture the bid package presented to the client, capture what you should have bid, the price you should have quoted, and the package you should have used. All of these things should come from the post-bid win/loss review.
  • Don’t capture the proposed project budget, capture the actual budget, where the cost overruns were, and how you would avoid these next time. This should come from the post-project lessons review.
  • Don’t capture the project resource plan, capture the resource plan you should have had, and the resourcing you would recommend to future projects of this type. This also should come from the post-project lessons review.
  • Don’t capture the planned product design, capture the as-built design, where the adjustments were made, and why they were made. (See my own experience of working from stored plans rather than the as-built design, which cost me £500 and ten dead trees).
  • And so on. You can no doubt think of other examples.
Capturing the hindsight is extra work, and requires analysis and reflection through Knowledge Management processes such as After Action Review and Retrospect. These processes need to be scheduled within the project plan, and need to focus on questions such as:
  • What have we learned?
  • What would we repeat?
  • What would we do differently?
  • What advice and guidance, with the benefit of hindsight, would we give to future projects?
These are tough questions, focused on deriving hindsight (as in the blog picture above). Deriving hindsight is not easy, which is why these Knowledge Management processes need to be given time, space, and skilled facilitation. However they add huge value to future projects by capturing the lessons of hindsight.  Merely filing and tagging the project files is far easier, but will capture none of the hindsight and so none of the knowledge.
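
To make the contrast concrete, here is a minimal sketch (the field names are mine, purely illustrative) of what a hindsight record holds compared with a filed document: the file stores what was done, while the record stores what was learned and what future projects should do instead.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProjectFile:
    # What filing and tagging preserves: the artefact as it was, mistakes included.
    name: str
    content: str
    tags: List[str] = field(default_factory=list)

@dataclass
class HindsightRecord:
    # What a facilitated After Action Review or Retrospect captures instead.
    what_we_learned: str
    what_we_would_repeat: str
    what_we_would_do_differently: str
    advice_for_future_projects: str
    related_files: List[str] = field(default_factory=list)  # links back to the as-built artefacts

record = HindsightRecord(
    what_we_learned="The as-built design differed from the stored plans",
    what_we_would_repeat="Regular design reviews during construction",
    what_we_would_do_differently="Update the design file whenever a change is made on site",
    advice_for_future_projects="Work from the as-built design, not the original plans",
    related_files=["as-built-design.pdf"],
)
```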

Capturing documents from previous projects and repeating what they did will cause you to repeat their mistakes. Better to capture their hindsight, so it can be turned into foresight for future projects. 


Army definitions in Lesson Learning

The Army talk about building up lessons through Observations and Insights. But what do these terms mean?

Lesson learning is one area where Industry can learn from the Military. Military lesson learning can be literally a matter of life and death, so lesson learning is well developed and well understood in military organisations.

The Military see a progression in the extraction and development of lessons – from Observations to Insights to Lessons – and we see a similar progression within the questioning process in After Action Reviews and Retrospects.

On Slide 7 of this interesting presentation, given by Geoff Cooper, a senior analyst at the Australian Centre for Army Lessons Learned, at the recent 8th International Lessons Learned Conference, we have a set of definitions for these terms, which are very useful.

They read as follows (my additions in brackets)

Observation. The basic building block [for learning] from a discrete perspective. 

  • Many are subjective in nature, but provide unique insights into human experience.
  • Need to contain sufficient context to allow correct interpretation and understanding.
  • Offer recommendations from the source
  • [they should be] Categorised to speed retrieval and analysis

Insight. The conclusion drawn from an identified pattern of observations pertaining to a common experience or theme.

  • Link differing perspectives and observations, where they exist.
  • Indicate recommendations, not direct actions.
  • Link solid data to assist decision-making processes.
  • As insights relay trends, they can be measured.

Lesson. Incorporates an insight, but adds specific action and the appropriate technical authority.  

Lesson Learned. When a desired behaviour or effect is sustained, preferably without external influence.
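
These definitions describe a clear progression, which could be modelled roughly as follows (a sketch of my own, not any actual Army data model): an observation carries context and a category, an insight is a conclusion drawn from a pattern of observations, a lesson adds a specific action and a technical authority, and a lesson is only “learned” once the changed behaviour is sustained.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    text: str
    context: str              # enough context to allow correct interpretation
    category: str             # categorised to speed retrieval and analysis
    recommendation: str = ""  # recommendations offered by the source

@dataclass
class Insight:
    theme: str
    observations: List[Observation]  # the identified pattern of observations
    conclusion: str
    recommendation: str              # indicates recommendations, not direct actions

@dataclass
class Lesson:
    insight: Insight
    action: str                # the specific action to be taken
    technical_authority: str   # the appropriate authority who owns the change

@dataclass
class LessonLearned:
    lesson: Lesson
    behaviour_sustained: bool = False  # "learned" only when the desired behaviour sticks
```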

What Geoff is describing is a typical military approach to lesson-learning, where a lessons team collects many observations from Army personnel, performs analysis, and identifies the Insight and Lesson. As I pointed out in this post, this is different from the typical Engineering Project approach, where the project team compare observations, derive their own insight, and draft their own lesson.

The difference between the two approaches depends on the scale of the exercise. In the military model there can be hundreds of people who contribute observations, while in a project, it’s usually a much smaller project team (in which case it makes sense to collect the observations and insights through discussion). If you are using the military model, these definitions will be very useful.


How the BBC learned from their Olympic coverage

Here is a case study of one organisation – the BBC – learning from experience. 

The Olympics was a massive event, on a scale that is unprecedented in peacetime. It’s the biggest project a country will ever undertake, other than a war. I have already blogged about the Olympic Games KM program, but it’s not just the Games organisers that need Knowledge Management, it’s everyone involved, especially those involved in new or development areas.

One such area was Digital Broadcasting.  The London Olympics were the Digital Olympics, with more HD broadcasts, web feeds, twitter feeds etc than any other Games before. And to deliver the Digital Games, the country’s main broadcaster, the BBC, needed to develop and apply a raft of new technologies.

This post, from the BBC Internet Blog, shows how they used lesson-learning, in a structured and planned way, to ensure these products were delivered on time and to specification, and also to ensure that subsequent exercises will learn from this one.

I quote the relevant section

Lessons Learned 
 We captured the lessons from the programme as we went along, from end of sprint retrospectives and the rich data captured in our information systems above. At the end of the Olympics the project managers facilitated workshops to capture additional successes and improvement opportunities and share these with their colleagues.  

From these on-line surveys and interviews with stakeholders, over 300 lessons were captured in our project register. The key lessons touched on above were the importance of organising and planning the work amongst self-directed, multi-disciplinary teams, with a layer of information and communication support provided by the management team. The ability to prioritise the scope and deliver it incrementally with frequent opportunities to test at scale and in a live environment, contributed to the success of a once-in-a-lifetime sporting event for the BBC’s on-line audiences. 

 The experience and lessons learned in delivering this exciting programme will be carried forward by the team members into their next projects, while the environment and process limitations identified, will drive improvements in technology provision and uptake of best practices.

We can see in-project learning (end of sprint retrospectives) and post-project learning (at the end of the Olympics  – workshops to capture additional successes and improvement opportunities) – both activities built into the work program.

We worked with the BBC “live and learn” team about 10 years ago to introduce some of these learning principles, and they have subsequently been MAKE award finalists for many years. This blog shows that KM and learning practices are still alive and thriving at the BBC.


5 reasons why organisations don’t learn lessons.

If lesson learning is so simple, why do organisations so often fail to learn the big lessons?

We seem to be able to learn the little lessons, like improving small aspects of projects, but the big lessons seem to be relearned time and time again. Why is this?

Some of the answers to this question are explored in the article “Lessons We Don’t Learn: A Study of the Lessons of Disasters, Why We Repeat Them, and How We Can Learn Them” by Amy Donahue and Robert Tuohy. In this article they look at lessons from some of the major US emergency response exercises, and find that many of them are repeated time and again.

In particular, repeated lessons are found in the areas of

  • Failed communications
  • Uncoordinated leadership
  • Weak planning
  • Resourcing constraints
  • Poor public relations

In fact these lessons come up so often that staff in disaster response exercises can almost tell you in advance what is going to fail.  People know that these issues will cause problems, but nobody is fixing them.  
You could draw up a similar list for commercial projects and find many of the same headings, with the possible addition of issues such as
  • Scoping and scope control
  • Subcontracting
  • Pricing
Donahue and Tuohy explore why it is so difficult to learn about these big issues, and come out with the following factors:
  1. Lack of motivation to fix the issues. As Donahue and Tuohy explain, 

“Individual citizens rarely see their emergency response systems in action. They generally assume the systems will work well when called upon. Yet citizens are confronted every day by other problems they want government to fix – failing schools, blighted communities, and high fuel prices. Politicians tend to respond to these more immediately pressing demands, deferring investments in emergency preparedness until a major event re-awakens public concern. As one incident commander put it, “Change decisions are driven by politics and scrutiny, not rational analysis.” 

In addition, they identify the sub-issues of a lack of ability to sustain commitment, a lack of a shared vision on what to do about the lessons, and a lack of willingness of one federal or local body to learn from others.

All of these issues are also seen in commercial organisations. There is a reluctance to make big fixes if it’s not what you are being rewarded for, a reluctance to learn from other parts of the organisation, and difficulties in deciding which actions are valid.

  2. An ineffective lessons capture process. Donahue and Tuohy identify the following points:

“While some (AAR) reports are very comprehensive and useful, lessons reporting processes are, on the whole, ad hoc. There is no universally accepted approach to the development or content of reports… often several reports come out of any given incident… agencies or disciplines write their own without consulting each other. These reports differ and even conflict … there is no independent validation mechanism to establish whether findings and lessons are “right” … concern about attribution and retribution is a severe constraint on candour in lessons reporting …  the level of detail required to make a lesson meaningful and actionable is lost … meaning is also diluted by the lack of a common terminology … AARs typically focus on what went wrong, but chiefs want to know what they can do that is right. Reports tend to detail things that didn’t work, without necessarily proposing solutions. … those preparing the reports need to understand not only what happened, but also why it happened and what corrective action would have improved the circumstances. Reports of this depth and quality are relatively rare … many opportunities to learn smaller but valuable lessons are foregone (and) there is no mechanism by which these smaller lessons can be easily reported and widely shared”. 

That’s quite a list, and we can see these issues in industry as well. Lesson learning crucially needs consistency, validation, candour, a common terminology and sufficient detail to make each lesson actionable, all of which are identified here as missing.

  3. An ineffective lessons dissemination process. Donahue and Tuohy make the following points:

“The value of even well-crafted reports is often undermined because they are not distributed effectively. Most dissemination is informal, and as a result development and adoption of new practices is haphazard. Generally, responders must actively seek reports in order to obtain them. … There is no trusted, accessible facility or institution that provides lessons learned information to first responders broadly, although some disciplines do have lessons repositories. (The Wildland Fire Lessons Learned Center and the Center for Army Lessons Learned are two prominent examples.)”

In fact, the Wildland Fire lessons center and the Center for Army Lessons Learned represent good practice (not just in technology, but in resourcing and role as well) and are examples that industry can learn from. However the issue here is not just dissemination of lessons, but synthesis of knowledge from multiple lessons – something the emergency services generally do not do.

  4. An ineffective process for embedding change. Donahue and Tuohy address this under the heading of “learning and teaching”.

“Most learning and change processes lack a formal, rigorous, systematic methodology. Simplistically, the lesson learning and change process iterates through the following steps: Identify the lesson > recognize the causal process > devise a new operational process > practice the new process > embed/institutionalize and sustain the new process.  It is apparent in practice that there are weaknesses at each of these steps….

The emergency response disciplines lack a common operating doctrine…. Agencies tend to consider individual incidents and particular lessons in isolation, rather than as systems or broad patterns of behavior. … Agencies that do get to the point of practicing a new process are lulled into a false sense that they have now corrected the problem. But when another stressful event happens, it turns out this new process is not as firmly embedded as the agency thought … Old habits seem “safer,” even though past experience has shown they do not work. 

Follow-up is inadequate … Lessons are not clearly linked to corrective actions, then to training objectives, then to performance metrics, so it is difficult for organizations to notice that they have not really learned until the next incident hits and they get surprised”.

This is the issue of lesson management, which represents Stage 2 of lesson learning maturity. Many organisations, such as the ones involved in emergency response, are stuck at Stage 1. Lesson management involves tracking and supporting lessons through the whole lifecycle, from identification through to validated and embedded change.

There really is little point spending time collecting lessons if these lessons are then not managed through to resolution.

  5. A lack of dedicated resources. Donahue and Tuohy again –

“Commitment to learning is wasted if resources are not available to support the process. Unfortunately, funds available to sustain corrective action, training, and exercise programs are even leaner than those available for staff and equipment”.

Lesson-learning and lesson management need to be resourced. Roles are needed such as those seen in the US Army and the RCAF, or in Shell, to support the process.  Under-resourcing lesson learning is a major reason why lesson learning so often fails.

Conclusions.

Donahue and Tuohy have given us some sobering reading, and provided many reasons why lesson learning is not working for disaster response. Perhaps the underlying causes are a failure to treat lesson learning as a system rather than as a product (i.e. a report with documented lessons), and a failure to treat lesson learning with the urgency and importance it deserves.

Make no mistake, many commercial organisations are falling into the same pitfalls that Donahue and Tuohy describe.

If learning lessons is important (and it usually is), then it needs proper attention, not lip service.


The 11 steps of FEMA’s lesson capture process

The US Federal Emergency Management Agency (FEMA) has a pretty good process for capturing and distributing lessons. Here are the 11 steps.

Every Emergency Services organisation pays close attention to Lesson-Learning (see, for example, the approach taken by the Wildland Fire Service). They know that effective attention to learning from lessons can save lives and property when the next emergency hits.

The lesson learning system at FEMA was described in an appendix to a 2011 audit document  and showed the following 11 steps in the process for moving from activity to distributed lessons and best practices.  Please note that I have not been able to find a more recent description of the process, which may have changed in the intervening 7 years.

FEMA Remedial Action Management Program Lessons Learned and Best Practices Process

  1. Team Leader (e.g., Federal Coordinating Officer) schedules after-action review
  2. After-action review facilitator is appointed
  3. Lesson Learned/Best Practice Data Collection Forms are distributed to personnel
  4. Facilitator reviews completed forms
  5. Facilitator conducts after-action review
  6. Facilitator reviews and organizes lessons learned and best practices identified in after-action review
  7. Facilitator enters lessons learned and best practices into the program’s database
  8. Facilitator Supervisor reviews lessons learned and best practices
  9. Facilitator Supervisor forwards lessons learned and best practices to Program Manager
  10. Program Manager reviews lessons learned and best practices
  11. Program Manager distributes lessons learned and best practices to Remedial Action Managers
This is a pretty good process.
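
Read as a workflow, the 11 steps form a simple hand-off chain between four roles. The sketch below merely encodes that sequence for illustration; it is not FEMA software, and the appendix does not say who appoints the facilitator (assigned here to the Team Leader).

```python
# The 11 steps grouped by the role that performs them (an illustration, not FEMA software).
WORKFLOW = [
    ("Team Leader",            "schedule after-action review"),
    ("Team Leader",            "appoint after-action review facilitator"),  # assumption: appendix does not say who appoints
    ("Facilitator",            "distribute data collection forms"),
    ("Facilitator",            "review completed forms"),
    ("Facilitator",            "conduct after-action review"),
    ("Facilitator",            "review and organise lessons learned and best practices"),
    ("Facilitator",            "enter lessons learned and best practices into the database"),
    ("Facilitator Supervisor", "review lessons learned and best practices"),
    ("Facilitator Supervisor", "forward to Program Manager"),
    ("Program Manager",        "review lessons learned and best practices"),
    ("Program Manager",        "distribute to Remedial Action Managers"),
]

def next_step(steps_completed: int):
    """Return the (role, action) for the next outstanding step, or None once all 11 are done."""
    return WORKFLOW[steps_completed] if steps_completed < len(WORKFLOW) else None

print(next_step(0))   # ('Team Leader', 'schedule after-action review')
```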
However despite this good process, the audit showed many issues, including 
  • a lack of a common understanding of what a good lesson looks like; the examples shown are mainly historical statements rather than lessons, and this example from the FEMA archives has the unhelpful lesson “Learned that some of the information is already available information is available”
  • a lack of consistent application of the after action review process (in which I would include not getting to root cause, and not identifying the remedial action),
  • a lack of use of facilitators from outside the region to provide objectivity, 
  • limited distribution of the lesson output (which has now been fixed, I believe), and 
  • loss of their lessons database when the server crashed (which has also been fixed by moving FEMA lessons to the Homeland Security Digital Library).

So even a good process like the one described above can be undermined by a lack of governance, a lack of trained resources, and poor technology.

