By: Sue Ebbers, Ph.D.
Originally published on October 13, 2020
Updated on September 15, 2023
You likely possess a wealth of knowledge about your organization because you work in it every day. Anyone involved in this competency-modeling process must fully understand where your organization fits within its industry sector, and must also grasp how your workforce interacts with outside stakeholders. The research and interviews completed along the way support a successful gap analysis, so that findings can be synthesized into targeted improvement. This is part 2 of a 3-part blog article series that further discusses how to achieve workforce alignment to results.
When an outside performance improvement consultant is hired, they sometimes begin with little to no knowledge of the client's products, organization, retail partners, and industry segment. However, through extensive research with subject matter experts (SMEs), members of the project team quickly become experts in their own right so they can help deliver the desired results.
Consider first: how many SMEs will need to be interviewed and involved in this process? You certainly want a diverse and representative body of experts so that your research isn't biased. It is also helpful to see where each of them sits within the five stages of readiness for change, because that can illuminate the broader company culture you face.
In the example discussed in part 1 of this series, 10 expert sales associates within the client's North American operations were identified to serve as SMEs. Basic research was then completed on processes, terminology, and other important industry information from each of their perspectives.
The underlying goal of the research and subsequent interviews is to get a clear picture of each SME’s journey to becoming an expert sales associate. It should include a thorough understanding of:
what the SMEs wish they could have learned at the beginning of their careers to ramp up more quickly,
the dynamics of the sales office that impeded or supported their effectiveness,
customer characteristics, and
the role that training did or did not play in each of them becoming so effective in their sales careers.
How To Conduct The Initial Interviews
Key to maximizing understanding of any skill set is starting with the high performer.
You listen to the perspective and experience of an SME on how they perform a specific set of competencies, as well as how they started their career.
You then compare expert-level performance to a beginner’s thinking and performance.
Research requires due diligence as you work thoughtfully and thoroughly to explore all the nooks and crannies of a skill set. You may even unearth unexpected, yet highly relevant, skill areas that yield substantive insights. Veering off-script can sometimes be a useful tactic for 'mining' surprising gems of valuable knowledge that would otherwise be missed, and it can lead to transformative results.
How To Analyze The Interviews
SME interviews are typically exhaustive, running about 90 minutes each, and will yield valuable data. Color-code each set of notes by interviewee, then use qualitative analysis to break each interview down into a number of relevant categories, grouping comments from all interviewees within those categories.
By taking this approach, you will find that a number of categories emerge. In the project discussed here, several distinct categories emerged once the interviews were analyzed.
By iteratively refining these extensive lists, you are able to illuminate a clear picture of the various skills, sub-skills, prerequisite skills, related knowledge and attitudes that a sales associate must have to do their job effectively, whether their level is considered beginning, intermediate, or expert.
Although it might be a slower process than a standard quantitative analysis, the effort often yields a surprising number of hidden consistencies in the workforce due to the comprehensive approach and detail captured. The resulting draft list of competencies may even launch a second phase of research in the field.
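If the coded notes are captured digitally rather than on paper, the grouping step itself can be scripted. The minimal Python sketch below shows one way to tally coded comments by category; the category labels, interviewee names, and comments are invented placeholders, not data from this project.

```python
from collections import defaultdict

# Hypothetical coded interview comments: (interviewee, category, comment).
# Category labels and quotes below are invented for illustration only.
coded_comments = [
    ("SME 1", "customer relationships", "I learned to ask about the customer's history first."),
    ("SME 2", "product knowledge", "Knowing the repair manual cold shortened my sales calls."),
    ("SME 1", "product knowledge", "I wish I had learned the product line earlier in my career."),
    ("SME 3", "customer relationships", "Follow-up calls after delivery built most of my repeat business."),
]

# Group every comment under its category, keeping track of who said it,
# so recurring themes across interviewees become visible.
categories = defaultdict(list)
for interviewee, category, comment in coded_comments:
    categories[category].append((interviewee, comment))

for category, comments in sorted(categories.items()):
    distinct_interviewees = len({name for name, _ in comments})
    print(f"{category}: {len(comments)} comments from {distinct_interviewees} interviewees")
    for interviewee, comment in comments:
        print(f"  - {interviewee}: {comment}")
```

A spreadsheet can serve the same purpose; the point is simply that once comments are tagged consistently, grouping and counting them by category becomes mechanical, leaving your effort for the iterative refinement itself.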
How To Perform Deep Research, Field Interviews And Observations To Inform The Gap Analysis
Tested theories develop after a hypothesis has been supported multiple times. Similarly, a set of competencies culled from a small sample of SMEs must be tested before it is applied universally across a population. For this example project, Change By Design set the draft lists aside and dove deeply into research, using the acquired terminology to uncover any discrepancies. Leaving no stone unturned, the research included:
Reading trade journals, related websites, and chat rooms to better understand the client's industry as a larger, interconnected whole.
Reading content within the company’s public website and focusing on associated essential services.
Reading the entire repair manual for all company products to gauge the depth of knowledge necessary for each level of sales associate.
Observing sales experts performing their roles at seven different regional locations throughout North America.
Interviewing and observing more than 25 employees in different job classes during location visits to understand their impact on the participating sales force.
Engaging in think-aloud protocols with sales associates at various levels of expertise to obtain a detailed understanding of the cognitive processes associated with all necessary skills at each of three levels of sales associate (beginner, intermediate, expert).
A second round of qualitative analysis of all notes from the research, field observations, and interviews yielded a complete list of all sales associate tasks, knowledge, and attitudes at the beginner, intermediate, and expert levels. The next step was to validate the competency model in two stages:
First with all sales associates within North American sales offices; and
Second through an expert panel focus group.
How To Validate The Model And Prepare For The Gap Analysis
Once you have gathered the top-level perspective from experts, it's important to canvass a wider population to validate your model. In this example, three surveys were prepared through a synthesis of the research to validate findings across the broader sales associate community throughout North America. Evidence-based surveys like these can also be used to improve customer satisfaction, but in this case each survey represented sales associate skills at the beginner, intermediate, or expert level. Respondents were asked to:
Identify whether each task listed was one they engaged in during their sales work; and
State whether each task correctly aligned with a listed competency and competency definition.
Each survey respondent was also asked to rate each Knowledge (K), Skill (S), or Attitude (A) item by level of difficulty (1-5), level of importance (1-5), and frequency of use (1-5). These items were collectively denoted as KSAs.
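As a rough illustration of how such ratings might be tallied ahead of the review that follows, here is a minimal Python sketch; the KSA names, rating values, and field names are hypothetical, not drawn from the actual surveys.

```python
from statistics import mean

# Hypothetical survey rows: one respondent's ratings for one KSA, on the
# 1-5 scales described above, plus whether they perform the task at all.
responses = [
    {"ksa": "Explains warranty terms (K)", "difficulty": 2, "importance": 5, "frequency": 4, "performs_task": True},
    {"ksa": "Explains warranty terms (K)", "difficulty": 3, "importance": 4, "frequency": 5, "performs_task": True},
    {"ksa": "Builds repair quotes (S)", "difficulty": 4, "importance": 5, "frequency": 3, "performs_task": True},
    {"ksa": "Builds repair quotes (S)", "difficulty": 4, "importance": 4, "frequency": 3, "performs_task": False},
]

# Aggregate the ratings per KSA so each item can be checked against the draft model.
by_ksa = {}
for row in responses:
    by_ksa.setdefault(row["ksa"], []).append(row)

for ksa, rows in by_ksa.items():
    performed = sum(r["performs_task"] for r in rows) / len(rows)
    print(f"{ksa}: difficulty {mean(r['difficulty'] for r in rows):.1f}, "
          f"importance {mean(r['importance'] for r in rows):.1f}, "
          f"frequency {mean(r['frequency'] for r in rows):.1f}, "
          f"performed by {performed:.0%} of respondents")
```

A summary like this makes it easy to spot KSAs that few respondents actually perform, or that they rate very differently from the SMEs, which are exactly the items to flag for the expert panel.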
Through this process, you will find out whether the post-survey analysis strongly supports your earlier findings. But that is not the end of it; there is one more necessary step. Because this model will serve as the basis for the eventual design and development of different, extensive curricula, you should now convene an expert panel review over a handful of days that considers:
all domains and their definitions,
all competencies and their definitions, and
all KSAs and their descriptions.
The result will be a fully validated competency model. It is similar to, but more comprehensive than, a job-task analysis. Most importantly, it provides stable groundwork for the second of three major tasks in your project: conducting an audit review of all existing sales associate training to determine which KSAs within the competency model are covered, and whether they are adequately covered. This audit review, known as a "Gap Analysis," enables you to determine recommendations for improvement.
How To Complete A Gap Analysis
There is an ethical responsibility to deliver what is required to support an organization's effective functioning, in alignment with key performance indicators, so that damaging assumptions are avoided. Through skillful application of best-practice instructional design theories, models, and strategies, the resulting training can and should positively impact organizational results. Therefore, any dollars spent on the design and development of quality instruction must actually build the necessary skills, along with associated knowledge and attitudes, that learners can carry into their day-to-day work activities.
For organizations with existing curricula, it is critical that you determine the viability of current training products in terms of effective learning, learning transfer, and organizational impact. Otherwise, the potential impact of newly introduced learning solutions may be partially lessened, or even fully negated, by learner confusion. Effective, competency-focused training principles should always be applied to avoid unforeseen problems down the line that reduce performance.
Any training that falls short of achieving the organization's necessary learning goals should be eliminated from a competency-based curriculum and replaced with well-designed courses that meet these criteria. Otherwise, you may be introducing preventable challenges to workforce alignment.
For example, Change By Design clients in the energy sector have realized great value by investing in a series of extremely well-designed, quality-focused ISO-standard courses, because the cost of poor quality was so high. There is always an ROI consideration: the cost of transforming an organization's training program must be carefully compared to the benefits gained from a well-designed set of courses aligned to job requirements. There is also the risk of anti-discrimination litigation, which is inherent for many large companies whose workforces include staff who require training accessibility accommodations.
These requirements are found within the competency model and are in alignment with desired organizational outcomes. Courses may be delivered through any number of media, including classroom or face-to-face, eLearning on a learning management system (LMS), virtual Instructor-Led Training (vILT), blended, and even simulation-based and virtual reality formats.
For this project, Change By Design spent considerable time evaluating the company's learning portfolio for sales associates, which included eLearnings, videos, and instructor-led (face-to-face) training. The examination identified the learning objectives for each piece of training and evaluated them against eight criteria.
For each course or module you review, be sure to provide a summary recommendation and an explanation of whether the company should:
Keep the course ‘as is.’
Make modifications to the course.
Eliminate the course from the list of offerings.
After the audit against the eight criteria, a client is able to recognize how many of their trainings can be utilized 'as is' to meet their learning goals, and then arrange to address what's missing. In this project, Change By Design recommended that the client's existing several-day Instructor-Led Training (ILT) be delivered as a series of eLearnings prior to an ILT program focused on consolidating the associates' skill set through an extensive table-top simulation. We also highlighted numerous existing eLearnings that failed entirely to meet the client's requirements. Leadership could readily see the gaps within the current learning curriculum, pursue re-alignment with the competency model, and then pursue cost-effective solutions.
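To make the recommendation step concrete, here is a minimal Python sketch of how audit results could be mapped onto the keep/modify/eliminate decision. The eight criteria are not listed in this article, so the pass counts, thresholds, and course names below are invented for illustration only.

```python
# Minimal sketch: turn the number of audit criteria a course meets into a
# summary recommendation. Thresholds here are hypothetical, not a standard.
def recommend(criteria_met: int, total_criteria: int = 8) -> str:
    """Return a keep / modify / eliminate recommendation for one course."""
    ratio = criteria_met / total_criteria
    if ratio >= 0.9:
        return "Keep the course 'as is'"
    if ratio >= 0.5:
        return "Make modifications to the course"
    return "Eliminate the course from the list of offerings"

# Hypothetical audit results: course name -> criteria met (out of 8).
audit_results = {
    "Intro to Product Line (eLearning)": 8,
    "Advanced Sales Conversations (ILT)": 5,
    "Legacy Warranty Module (video)": 2,
}

for course, met in audit_results.items():
    print(f"{course}: meets {met}/8 criteria -> {recommend(met)}")
```

In practice the recommendation also rests on the written explanation for each course; the scoring simply makes the pattern across the whole portfolio easy for leadership to see at a glance.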
Read part 3 of this blog series, which details how to design curriculum architecture for multiple levels of learners, as well as a summary conclusion of this case study.
Click here to re-read part 1 of this blog series (introduction)
Consider subscribing to our email newsletter today to automatically receive the next issue in your inbox, including our most recent authored articles.