Adaptive Learning Isn’t New

“…in just a matter of a few short days, people knew what they did when they came in…”

Now we have better digital tools to configure and deploy Instruction than we did back in the day. We can pre-test much more easily.

But many of us were very conscious of the need to avoid training people on things they already knew from prior education and experience – or didn’t need due to their specific job assignments. The late Geary A. Rummler, PhD, was one of them.

Rummler addressed this in 1981 at a 1-day workshop at Motorola – that I attended – one week before my official start date.

This video is 44:28 minutes in length. For his comment relevant to this post, see it at about the 20:50-minute mark.

He said: “…in just a matter of a few short days, people knew what they did when they came in…”

Handout from that 1981 session:

https://eppicinc.files.wordpress.com/2017/06/1981-mtec-g-a-rummler-session.pdf

###

Go To the Source or Get Informed By the Expert Sources?

A long time ago – back in the early 1980s – I concluded that even if I did site tours, interviews, and document reviews, I still wouldn’t have a nuanced enough understanding of the Performance I was trying to affect.

The TQM World calls those site visits Gemba Walks.

Wikipedia

Genba is a Japanese term meaning “the actual place”. Japanese detectives call the crime scene genba, and Japanese TV reporters may refer to themselves as reporting from genba. In business, genba refers to the place where value is created; in manufacturing the genba is the factory floor.

The gemba walk, much like management by walking around (MBWA), is an activity that takes management to the front lines to look for waste and opportunities to practice genba kaizen, or practical shop floor improvement. An important difference with MBWA is that Gemba Walks are not done randomly, but with a clear goal and often frequency and structure.

Many of my projects – designing Curriculum Architecture Designs addressing complex and long-cycle performance (3 to 5 years for new product development in the auto industry, for example) – weren’t really conducive to going to take a look-see, then interviewing top performers about what I had witnessed, and then reading whatever documents the client or my interviewees provided.

Back in 1979, I started using a Facilitated Group Process (FGP) in my Development efforts – and I soon started using that for Analysis and Design efforts. I wrote about that first experience – here.

Now – it’s ALL ABOUT THE DATA. However one acquires or develops it, what matters is the Data’s Accuracy, Completeness, and Appropriateness – and its Credibility with my project’s clients and stakeholders – so it shouldn’t matter how it was obtained.

But if time is of the essence – and when isn’t it? – then a Facilitated Group Process is the way to go IMX.

I can accomplish more in one 3-day meeting with the right people than I could in 3 weeks or 3 months using the traditional methods of observations, interviews, and document reviews.

I’ve written about this in many books – but here is my 2020 mini-book on my approach:

The Facilitated Group Process in L&D: Magnifying Proven Practices in performance-based Instructional Development Projects

Do you use some form of Group Process when you do Analysis – Design – Development of Instructional Content?

So, I would rather Facilitate the right sources than go to the source – or sources – as one Gemba might not be sufficient to truly understand the performance and the performance context, which might vary tremendously across sites.

###

The Focus of Analysis for Performance-Impacting L&D

There are other aspects to L&D/Instructional Analysis – but if you don’t uncover these four – you’re unlikely to Impact Performance back on the job for your Learners.

There are many approaches to Instructional/L&D Analysis. Find one that works for you.

Outcomes

Get specific when you discuss Outcomes. These may be derived by determining the stakeholders for the Outputs and Processes/Tasks and defining their Requirements (which may be a mix of Needs and Wants).

Outputs

People are on the payroll to produce Outputs – and those Outputs become Inputs downstream.

Tasks

Tasks may be performed by Machines and/or People. From an L&D perspective, we are concerned with the overt Behavioral Tasks, and the covert Cognitive Tasks. We need to help people learn what to do, and how to think about what they do.

Enablers

We also need to define the Enablers of the Processes – the Process Itself, including the Tasks and Outputs, the Inputs, and the Performance Context(s), and all of the Environmental Enablers and Human Enablers required.

See my mini-book on Instructional Analysis: The Facilitated Group Process in L&D: Magnifying Proven Practices in performance-based Instructional Development Projects

See all 30+ of my books – here.

###

When You Cannot Afford It All

1- Assess the Risks and Rewards at Stake & 2- Assess What the Performance Context Allows or Demands & 3- Respond with Performance Guides and/or Learning Experiences & 4- Assign Production

There’s (again) a lot of attention being paid to User Generated Content – and my advice is to slow down and approach what you produce (if anything) and who gets the assignment much more carefully.

You are, after all, investing Shareholder Equity for a Return.

And I’m thinking both First Costs and Returns – and Life Cycle Costs and Returns. You should too.

1- Assess the Risks & Rewards at Stake for Conformance and Non-Conformance

Risks and Rewards are two sides of the same coin, IMO. Use the approach and language of Risk Assessment used elsewhere in your Enterprise, and/or the preferences of your Leadership. DO NOT simply use what I share here.

Either you are starting to consider an L&D effort to deal with a Performance Problem or an Opportunity – or there are New Hires who need basic and advanced development in order to Perform back on the job.

But not everything Performance-wise has the same level of Risks and Rewards at Stake. And I think that the final assessment of Risks and/or Rewards belongs to the clients and stakeholders – so you must engage them from the start or certainly before you finish such an Assessment.

The TQM – Total Quality Management movement of the late 70s and early 80s taught me to consider the COC and the CONC – the Costs of Conformance (the I in ROI) and the Costs of Non-Conformance (the R in ROI).

And I would always suggest that you start with the CONC (the R) – or what Tom Gilbert called the PIP – Performance Improvement Potential. Lead your pitch or report with, “Here’s the value of the situation ‘as is’ – and if we do nothing about it.” That’s either a huge issue, a BIG DEAL, or small potatoes, as we in the USA might call it.

Then after establishing the Value of the Potential Return – I’d address the Investment costs to address it – as well as an estimate of how much of the R we might be able to affect.
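As a worked sketch of that math – the CONC/COC framing is from the post, but the dollar figures, the 60% estimate, and the variable names below are hypothetical, for illustration only:

```python
# Hypothetical figures, for illustration only.
conc = 1_200_000          # Costs of Non-Conformance: annual value of the "as is" gap (the R)
coc = 150_000             # Costs of Conformance: the investment to address it (the I)
fraction_affected = 0.60  # estimate of how much of the R we can actually affect

expected_return = conc * fraction_affected
roi = (expected_return - coc) / coc
print(f"Potential Return (R): ${conc:,}")
print(f"Expected Return at 60%: ${expected_return:,.0f}")
print(f"Investment (I): ${coc:,}")
print(f"ROI: {roi:.1f}x")
```

Leading with the R, as suggested above, means the first two lines of that pitch land before the Investment ever comes up.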

That helps you point to where this Problem or Opportunity exists on the left side of my graphic below.

Where are you on the Risk/Reward Continuum?

2- Assess What the Performance Context Allows or Demands

Next, your Assessment or Analysis should have established what the Performance Context demands or allows, back on the job for the Learners who are Performers.

Does the Performance Context always allow for a Referenced Performance Response – where Performers can look up (reference) their response to the situation they find themselves in? Then your situation can be addressed by the column on the left in my graphic above, and you can provide the Learner/Performers with a Performance Guide.

What I am calling Performance Guides has been known previously as Guidance, Job Aids, EPSS (Electronic Performance Support Systems), Informal SOPs (Standard Operating Procedures), Performance Aids, Quick-Reference Guides, Performance Support, and WorkFlow Learning – and perhaps other names/labels that I have missed over the past 5 decades.

Look at that graphic above again now, please. And the two columns on the left and right.

Or, can they Reference their Response most of the time – but not all of the time? If that’s the case – the situation then calls for a Memorized Performance Response.

And if the Performance Context demands a Memorized Performance Response all of the time, then you must provide a Learning Experience. And if the job itself doesn’t provide enough applications and reinforcing and corrective feedback, then you’ll probably need to provide Spaced Learning mechanisms that keep the Memorized Performance Response memorized and at the ready whenever the Performance Context demands it.
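The branching described above can be sketched as a tiny decision function – the function name, labels, and structure are my reading of the post, not an official taxonomy:

```python
# A sketch of the Referenced-vs-Memorized decision logic. Labels are mine.

def respond(reference_allowed: str, job_reinforces: bool) -> list[str]:
    """reference_allowed: 'always', 'mostly', or 'never'.
    job_reinforces: does the job itself provide enough applications
    and reinforcing/corrective feedback to keep responses memorized?"""
    if reference_allowed == "always":
        # A Referenced Performance Response suffices.
        return ["Performance Guide"]
    # 'mostly' or 'never': a Memorized Performance Response is required.
    plan = ["Learning Experience"]
    if reference_allowed == "mostly":
        # Performers can still look it up most of the time.
        plan.insert(0, "Performance Guide")
    if not job_reinforces:
        plan.append("Spaced Learning mechanisms")
    return plan

print(respond("always", True))   # ['Performance Guide']
print(respond("never", False))   # ['Learning Experience', 'Spaced Learning mechanisms']
```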

3- Respond with Performance Guides and/or Learning Experiences

Sometimes you can get by with a Standalone Performance Guide.

Other times you might need to embed that Performance Guide into a Learning Experience to help ensure that the Learner/Performer really knows how to use it – and will be able to consistently demonstrate their Performance Competence, back on the job.

And other times, you will need to provide a Learning Experience that ensures the Memorized Performance Response required in the Performance Context is indeed memorized – and again, if the job itself doesn’t provide enough applications and reinforcing and corrective feedback, you’ll probably need Spaced Learning mechanisms to keep that Response memorized and at the ready whenever it’s demanded.

I addressed Performance Guides in this book from 2020: Push-Pull Performance Enablement & Guidance Systems: A performance-based Twist on Knowledge Management Systems

4- Assign Production of Performance Guides and/or Learning Experiences To L&D Professionals or Users

Who should produce these two types of outputs, Performance Guides and/or Learning Experiences? L&D Professionals who know and employ the Learning Sciences via their Analysis, Design, and Development efforts? Or Users who know how to do the job?

As always, it depends. And again, let’s consider the Risks and Rewards at Stake.

If the Risks & Rewards at Stake are High, I’d have the L&D Professionals develop and Test-Test-Test the Content.

If the Risks & Rewards at Stake are Medium, I’d have the Master Performers (a.k.a.: Top Performers) develop and then Test-Test-Test their Content.

If the Risks & Rewards at Stake are Low, I’d leave the Learning to Informal (Trial & Error) means, and/or Social Learning means.
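Those three rules of thumb amount to a simple lookup – the tier names follow the post, but the wording of each assignment and the function name are mine:

```python
# Sketch of the risk-tier assignment heuristic described above.

def assign_production(stakes: str) -> str:
    """stakes: 'High', 'Medium', or 'Low' Risks & Rewards at Stake."""
    assignments = {
        "High":   "L&D Professionals develop, then Test-Test-Test",
        "Medium": "Master Performers develop, then Test-Test-Test",
        "Low":    "Informal (Trial & Error) and/or Social Learning",
    }
    return assignments[stakes]

print(assign_production("High"))
```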

My worry about User Generated Content is that research by Richard E. Clark, EdD, professor emeritus at the University of Southern California, and others shows that experts (and all of us, really) automate most of our knowledge.

It becomes unconscious or nonconscious – and unavailable to us from our sources. That’s why Single-Source approaches to L&D development are quite problematic.

Those who “can do the work” cannot fully “describe what they know and do when they do the work.”

Experts – we typically call them SMEs (Subject Matter Experts), but I prefer Master Performers (MPs) and Other Subject Matter Experts (OSMEs) – will miss close to 70% of what a novice needs to know in order to perform the cognitive tasks – the covert thinking decisions in their WorkFlow that we cannot observe or measure.

And we cannot observe what they are thinking – and they cannot tell us, fully.

That’s an issue, IMO. A huge issue. The kind of issue where a huge red flag should be flown and cover everyone’s field of view to stop them from proceeding on their merry way to content development.

And those Experts will also miss up to 50% when describing their behavioral tasks – the overt, doing tasks that we can observe and measure.

The good news, according to Dr. Clark, is that each Expert has automated a different 70% or 50% – and if we engage up to 5 Experts, we’ll get closer to 85% completeness.
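A back-of-the-envelope way to see where a “~85% from 5 Experts” figure could come from – assuming (my assumption, not Dr. Clark’s model) that each Expert can articulate an independent, random ~30% of the cognitive tasks and ~50% of the behavioral tasks:

```python
def coverage(articulation_rate: float, num_experts: int) -> float:
    """Expected fraction of tasks captured by pooling independent experts."""
    miss_rate = 1.0 - articulation_rate
    # A task is missed only if every expert misses it.
    return 1.0 - miss_rate ** num_experts

for n in range(1, 6):
    print(f"{n} expert(s): cognitive ~{coverage(0.30, n):.0%}, "
          f"behavioral ~{coverage(0.50, n):.0%}")
# With 5 experts: cognitive ~83%, behavioral ~97%
```

Under those independence assumptions, cognitive-task coverage climbs from ~30% with one Expert to ~83% with five – close to the figure cited above.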

I don’t know about you – but I have always striven to have my Instructional Content – including both Performance Guides and Learning Experiences – be as Accurate, Complete, and Appropriate – as possible.

And I saw early in my career, during Developmental Testing and Pilot Testing, that my Instructional Content would become more and more complete – from the feedback provided by those participating in those testing efforts.

So if the Risks and/or Rewards at Stake are High – then I wouldn’t want to put out Instructional Content that was Incomplete.

And if the Risks and/or Rewards at Stake are Medium – then I wouldn’t mind putting out Instructional Content that was Incomplete and could be fixed over time through usage and feedback systems from the Users and Others.

My 2021 book addresses how I have been dealing with Cognitive Task Analysis – ThoughtFlow Analysis: The 3 Ds of ThoughtFlow Analysis: Task Analysis for Instructional Development

Circling Back to Assessing Risks & Rewards

One Key Issue for you and your clients and stakeholders is defining what constitutes High, Medium, and Low Risks and Rewards.

In my view, those are strictly Client and Stakeholder decisions. I wouldn’t even want to hazard an initial guess or position on that.

There will almost always be information and insights unavailable to L&D folks that Leadership is privy to – and that would push a particular Performance to one of the three segments: High, Medium, or Low.

And what might seem High to us might be seen as Low if we’d only known that that part of the business is being sold off soon, or that the current processes and technology will soon be displaced by newer ones. But those are closely guarded secrets that our Leadership doesn’t want out in the marketplace yet, where our competitors might react more quickly to minimize whatever advantage we might gain from those changes – or where we might cause concerns for our customers.

Assessing Risks and Rewards is the province of Leadership – and our Clients and Stakeholders.

And that’s why I believe we should be more formally engaged with our leadership – which is something I addressed in this book from 2022: Aligning & Architecting performance-based Learning & Development: Target High Stakes Performance for Improvement and Greater Returns on the Investments of Shareholder Equity

###

Video: Performance Analysis Webinar

For ATDCFL – 2021-01-13

This video is 60:59 minutes in length.

This is my 60-minute webinar for the Central Florida Chapter of ATD – Association for Talent Development – delivered on January 13, 2021 on my approach to Performance Analysis.

This followed up on a webinar I had done for the chapter 3 months earlier on Curriculum Architecture Design.

Performance Analysis is one of four analysis methods I use in ISD efforts: 1- Target Audience Analysis, 2- Performance Analysis, 3- Knowledge/Skill Analysis, 4- Existing Content Assessment (for its ReUse potential).

That earlier webinar from October 2020 is on YouTube here: https://youtu.be/FNCXxXCA0WY

My 2020 book: Conducting performance-based Instructional Analysis: In Every Phase of an Instructional Development Effort

###

Perform an Adequate Analysis Effort

Produce Outputs that are Worthy in that they meet the Stakeholders’ Requirements.

The downstream Customer, for example, requires Analysis data that will inform their Design & Development efforts.

THAT then depends on your Design & Development Processes.

But there may be other Stakeholders as well.

An Analysis effort that doesn’t uncover these basics – plus who is in the Target Audience and what they might already know, or not – is not adequate.

Check out my 2022 mini-book on this: Performance-Based Instructional Analysis: Magnifying Proven Practices in performance-based Instructional Development Projects.

###

Video – Guy Speaking on Curriculum Architecture Design at Eli Lilly in 1995

This video is 122:39 minutes in length.

I’ve done 76 of these projects as a consultant since 1982 – the last one in 2019 for Sales representatives.

Bonus Video: 2020 Webinar for ATD Central Florida:

performance-based Curriculum Architecture Design via a Facilitated Group Process

This video is 58:35 minutes in length.

My 2022 book on Curriculum Architecture Design (now also known as Instructional Architecture): Performance-Based Instructional Architecture: Magnifying Proven Practices in performance-based Instructional Development Projects

###