


Question

The article "Evaluation Blunders and Missteps to Avoid" discusses four of the most common training evaluation mistakes. Which mistake would you discuss in depth with a colleague who was having issues implementing an evaluation plan for a large training program? Write one page or as much as you can.

The four levels model is the most common training evaluation approach, but that doesn't mean it has been implemented correctly over time. On the contrary, misconceptions and misapplication have reduced the effectiveness of this simple model. Here is a summary of the most common training evaluation mistakes.

Are you making them?

Mistake #1: Addressing evaluation requirements after program launch

Many training professionals mistakenly design, develop, and deliver a training program, and only then start to think about how they will evaluate its effectiveness. Using this approach nearly guarantees that there will be little or no value to report.

We received a phone call from a consultant a few years back. He was quite proud to tell us about the multimillion-dollar leadership development program he had created for a large corporation. He worked with the company to define its needs before developing the three-year program, which was nearing the end of the first year. He was contacting us to find out if we wished to join the project as evaluation consultants, because they had data from the first year of program participation.

We asked just a few questions to verify our suspicion; they had not pinpointed the specific company metrics they hoped this large investment would positively impact. Of even more concern, they had not identified exactly what the managers involved in the program should be doing to influence those metrics, nor had they prepped senior managers to coach and monitor performance. Ultimately, they had created a "nice-to-have" program containing a laundry list of development activities targeted to nothing in particular.

We had no choice but to tell this well-meaning consultant that there was little we could do to help them other than recommend that, as quickly as possible, they create an effective program plan that includes metrics and performance standards, and see if there is anything they can salvage from the current misguided program.

To avoid this pitfall, programs should begin with a focus on the Level 4 results you need to accomplish. This automatically focuses efforts on what is most important. Conversely, if you follow the common, old-school approach to planning and implementing your training program, by first thinking about how you will evaluate Level 1 (reaction), then Level 2 (learning), then Level 3 (behavior), it's easy to see why few people get to Level 4 in this fashion.

Set yourself apart from and ahead of the crowd by using the four levels upside down; start every project by first considering the key company metrics you plan to influence and articulate how this will contribute to the Level 4 result of your organization. Then think about what really needs to occur on the job to produce good results (Level 3). Consider next what training or other support is required for workers to perform well on the job (Level 2). And finally, consider what type of training will be conducive to imparting the required skills successfully (Level 1).

Mistake #2: Spending the majority of evaluation resources on Levels 1 and 2

The 2016 ATD report Evaluating Learning: Getting to Measurements That Matter polled 199 learning professionals who revealed that they invest nearly 70 percent of their training evaluation resources in Levels 1 and 2. Sadly, this statistic did not improve from ATD's previous report in 2009. This old-school approach of spending evaluation resources heavily on Levels 1 and 2 leaves few resources for the more important job of ensuring training effectiveness at Levels 3 and 4.

Generally speaking, Level 3 is the most important level to not only evaluate, but also invest in for any important program. Without on-the-job application, training has no hope of contributing to organizational results and, therefore, is of little value to the organization. If your program is important enough to have a Level 3 plan, then it is also important enough to have evaluation of Level 4 results.

Level 1 is the least important level. Of course you want to know that the training was well received, but consider how much of a resource investment it is worth to gather this data. The investment should be quite small. Focus on formative methods that occur during the program itself, and only formally evaluate the few key items you plan to analyze and use.

The evaluation of Level 2 is important to ensure that participants leave a training program prepared with the required knowledge and skill. However, proper Level 2 evaluation can be built right into the design of a program and should, therefore, not become an evaluation resource priority.

Now that you've saved resources for Levels 3 and 4, what level of effort and resource investment can you expect to devote to each? Level 4 is actually the simplest and least resource intensive to evaluate. If something is a true Level 4 result, it is important enough that someone in the organization is already measuring and monitoring it, and it is simply a matter of obtaining the data. What is more difficult is to find the connection between training, on-the-job performance, and organizational results. In many evaluation plans, the missing link is Level 3.

A strong Level 3 plan recommended for important initiatives will be resource intensive; however, evaluating Level 3 is not as expensive as some would think. When tools and systems are constructed at the same time as the program itself, and ultimately viewed as part of the program, this simply reallocates the resources from instructional design to performance support.
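To make the "upside down" planning order concrete before turning to the next mistake, here is a minimal sketch of how a program team might capture such a plan as data, starting from Level 4 and working down to Level 1. The metric, behaviors, and checks are illustrative assumptions, not details from the article.

```python
# Hypothetical backwards-planned evaluation plan: Level 4 is defined first,
# and Levels 3, 2, and 1 are derived from it (all names are made up).
evaluation_plan = {
    "level_4_results": {
        "metric": "customer retention rate",          # assumed company metric
        "target": "improve from 82% to 86% in 12 months",
        "data_source": "existing quarterly retention report",
    },
    "level_3_behavior": {
        "critical_behaviors": [
            "managers hold monthly coaching conversations",
            "reps follow the new service-recovery checklist",
        ],
        "evaluation": ["manager observation checklist", "90-day pulse survey"],
    },
    "level_2_learning": {
        "objectives": ["demonstrate the service-recovery steps in a role play"],
        "evaluation": "skills check built into the final session",
    },
    "level_1_reaction": {
        "evaluation": "a few formative questions asked during the program",
    },
}


def check_plan(plan: dict) -> list:
    """Flag the planning gaps the article warns about (no metric, no behaviors)."""
    issues = []
    if not plan.get("level_4_results", {}).get("metric"):
        issues.append("No Level 4 company metric identified.")
    if not plan.get("level_3_behavior", {}).get("critical_behaviors"):
        issues.append("No on-the-job behaviors defined to influence the metric.")
    return issues


if __name__ == "__main__":
    for message in check_plan(evaluation_plan) or ["Plan starts with the end in mind."]:
        print(message)
```

Reading the plan top to bottom mirrors the planning conversation the authors recommend: if the Level 4 entry cannot be filled in, the lower levels are targeted to nothing in particular.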

Mistake #3: Relying solely on standardized surveys

Some believe in the existence of a miracle survey that will give you all of the training evaluation data you need. Don't buy it. For mission-critical programs, it is important to employ multiple evaluation methods and tools to create a credible chain of evidence showing that training improved job performance and contributed measurably to organizational results. For less important programs, you will want to be equally careful about selecting the few evaluation items you require.

Surveys, particularly those administered and tabulated electronically, are a wonderfully efficient means of gathering data. However, response rates tend to be low, and there is a limit to the types of information that can be gathered. It is so easy to disseminate these surveys that they are often launched after every program, no matter how large or small. The questions are not customized to the program or the need for data, and people quickly pick up on the garbage in, garbage out cycle. This creates survey fatigue and makes it less likely that you will gather meaningful data for any program.

For mission-critical programs in particular, gather both quantitative (numeric) and qualitative (descriptive) data. Open-ended survey questions can gather qualitative data to some degree, but adding another evaluation method provides better data. For example, a post-program survey could be administered and results analyzed. If a particular trend is identified, a sampling of program participants could be interviewed and asked open-ended questions on a specific topic.
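As a rough illustration of pairing the two kinds of data, the sketch below assumes post-program survey responses stored as simple records; it tabulates the numeric ratings, flags a question that trends low, and draws a small sample of respondents to invite for open-ended follow-up interviews. The question names, threshold, and sample size are made-up assumptions.

```python
import random
from statistics import mean

# Hypothetical post-program survey responses (1-5 ratings per question).
responses = [
    {"participant": "p01", "applies_to_my_job": 2, "content_clear": 4},
    {"participant": "p02", "applies_to_my_job": 3, "content_clear": 5},
    {"participant": "p03", "applies_to_my_job": 2, "content_clear": 4},
    {"participant": "p04", "applies_to_my_job": 3, "content_clear": 5},
]

TREND_THRESHOLD = 3.5   # assumed cutoff for "worth a closer look"
INTERVIEW_SAMPLE = 2    # assumed number of follow-up interviews

# Quantitative pass: average each rated question across participants.
questions = [key for key in responses[0] if key != "participant"]
averages = {q: mean(r[q] for r in responses) for q in questions}

# Flag low-trending questions, then sample participants for qualitative follow-up.
for question, avg in averages.items():
    if avg < TREND_THRESHOLD:
        invitees = random.sample([r["participant"] for r in responses], INTERVIEW_SAMPLE)
        print(f"'{question}' averaged {avg:.1f}; interview {invitees} about this topic.")
```

The follow-up interview questions would then be written around the specific trend rather than reused verbatim, which keeps the qualitative pass targeted and avoids adding to survey fatigue.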

Depending on the rigor required by your stakeholders, you may be able to obtain good interview data by simply calling or briefly visiting training participants and asking them a question. Don't be too intimidated to integrate this human element into your program evaluation data.

An often-overlooked source of evaluation data is formative data. Build touch points into your training programs for facilitators to solicit feedback, and ask your facilitators for their feedback via a survey or interview after the program.

Mistake #4: Not using collected data

Have you ever inherited an office that has a precariously tall stack of papers in one corner, perhaps next to a stuffed-full file cabinet? Wendy did, and upon closer inspection, she saw that it was years and years of old training evaluation forms.


The Kirkpatrick Model

Level 4: Results—The degree to which targeted program outcomes occur and contribute to the organization’s highest- level result

Level 3: Behavior—The degree to which participants apply what they learned during training when they are back on the job

Level 2: Learning—The degree to which participants acquire the intended knowledge, skills, attitude, confidence, and commitment based on their participation in the training

Level 1: Reaction—The degree to which participants find the training favorable, engaging, and relevant to their jobs


Besides a poor document retention system, the bigger problem this indicated was that the evaluations had not been properly analyzed, and findings were not appropriately integrated into program enhancements and performance support efforts. When Wendy asked around the department, multiple individuals commented that there was never time or resources to tabulate the evaluations, so a quick flip through them by the facilitator or the boss was all that ever really happened. "Someday" someone would enter the data into some type of system so that it could be quantified and analyzed, but "someday" had not occurred for years.

When you survey a group of individuals, you are making an implicit agreement with them that you will act on their aggregated feedback. Continuing to disseminate surveys when the participants can clearly see that you are doing nothing with the data will quickly create the expectation that nothing ever will happen with their feedback, and they will stop giving it.

At that same organization, Wendy was asked to create an evaluation form to use after a week-long event in which new products and updates were launched to the sales and customer service team. She included questions about the program quality and content, the meeting facilities, and how the participants felt about selling the new products.

Wendy was present when the vice president of marketing, who organized the event, first reviewed the evaluations. His commentary went something like this:

“Joe said that the new product will not sell in his market because the color scheme doesn’t work with Midwestern homes. Maybe that’s why his sales are so low.”

“Sue complained about the food. We can’t make everyone happy.”

“A few people said it was too cold ... there’s nothing we can do about the temperature of a hotel ballroom.”

You get the point. He found a reason to dismiss every comment. Future meetings didn't change, nor did the questions on the evaluation form. No response to legitimate product concerns, such as inappropriateness of a product for a given market, was issued. The result? Each event received less and less feedback, or the infamous line down the side of the page to select "all fives." The vice president of marketing was satisfied with this; he assumed that meant that everyone was happy, or at least they were off his back.

Start with the end

You should be interested in truly learning from your program participants, and want to continually improve your programs to assist them in successfully performing in their jobs. Make sure you can and do review any evaluation data you receive, and make a point to show how it's being used.

If you start your programs with the end in mind and build meaningful evaluation into the plan, you have taken important first steps to creating and demonstrating the value of training to your organization or clients.

Explanation / Answer

I feel that mistake three, relying solely on standardized surveys, leads to garbage in, garbage out, which can destroy the purpose of the program.

A questionnaire for any survey is prepared based on the requirements to achieve a specific goal. It should be properly defined which data you have to gather and how to analyze that data. The problem with standardized data collection is that you may not find the exact information you need for your analysis. In a customized data collection process, the sequence of the questions can also be shaped to suit the program, whereas that is not the case with standardized methods.

Problems with standardized data collection:

1. Respondents may not be aware of the reasons for the survey. Respondents need to know what the data is being collected for; only then can they provide accurate data.

2. The chances of getting unclear or irrelevant data in this process are high.

3. When you have lots of unnecessary data, it is difficult to identify which data is useful and which is not.

4. Reliability: most of the time, the response depends on the framing of the questions. The way you ask a question can change the response, so the data you get from a standardized process may not be reliable.

5. Sensitivity: standardized surveys cannot discriminate between good programs and bad ones.

6. Conversion: converting the data into the form required for analysis is difficult in this process.