


Question

In his article, Bias Could Affect the Credibility of Your Results, what does Donald McCain mean by "the evaluation challenge is one of balance?" Discuss what you can do to address bias. Write 1 page or as much as you can.

The evaluation challenge is one of balance. You must do enough analysis to meet your own needs while also meeting the needs of your client. Too much evaluation is a waste of effort; not enough inhibits good decision making. This is why evaluation planning is so important. As part of the design process, you determine the initial business metric (the data you will track, such as number of sales, number of defects, or turnover rates), what evaluation level of information to gather, when to gather that information, and how it will be used.

But when it comes to evaluation, the original development of your research methods and instruments is subject to several types of bias. Although it is difficult to address bias, it is not impossible. First, recognize that bias exists and then take action to minimize it.

Sampling bias

The first type of bias is sampling bias. It is easy to send surveys or interview certain participants whom you know and like and who are favorably disposed to the program. There is also a tendency to send the information to recent participants, who are usually still enthusiastic about the course and the opportunity to implement their action plans. The realities of the environment have not dampened their spirits.

These practices can result in tainted data by introducing sampling bias. You should always conduct surveys or interviews with participants selected on a random basis.

Insufficient sample size

The second bias comes from not having a large enough sample. Sampling generally applies to Level 3 and 4 evaluations. You could draw your sample from the entire population; in this case, all the participants taking a course.

Alternatively, if a course has several deliveries, you could randomly select particular deliveries to evaluate. Depending on the audience size, you may want to randomly select participants from each delivery to be evaluated. For example, if you designed and delivered a sales training course that is delivered twice a month with an average class size of 24 participants, you could randomly select two or three of the 24 deliveries in a year. Then, you could randomly select participants from each delivery. Make sure you statistically determine the sample size required to provide reliable data, because a sample that is too small is not representative of the entire population or group.
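The two-stage random selection and the "statistically determine the sample size" advice above can be sketched in code. The article does not name a formula, so this sketch assumes Cochran's formula with a finite-population correction, a common choice for survey work; the population figures simply mirror the article's example (24 deliveries a year, 24 participants each) and are illustrative only.

```python
import math
import random

def required_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with a finite-population correction.

    z=1.96 corresponds to a 95% confidence level; p=0.5 is the most
    conservative assumption about response variability.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2           # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite-population correction

# Illustrative numbers: a course delivered twice a month (24 deliveries
# per year) with an average class size of 24 participants.
population = 24 * 24
n = required_sample_size(population)

# Two-stage random selection: pick deliveries first, then participants
# within each selected delivery (8 per delivery here, purely for illustration).
deliveries = random.sample(range(1, 25), k=3)
participants = {d: random.sample(range(1, 25), k=8) for d in deliveries}
```

With a 5% margin of error at 95% confidence, a population of 576 needs roughly 231 responses; the point of the correction term is that the required sample does not shrink proportionally as the population shrinks, which is why small convenience samples are rarely representative.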

Observation bias

The observation technique is not without bias problems. The more visible the observation process is, the less reliable the data.

Participants will perform differently if they know they are being observed, and an observer who is not trained or given proper instruments adds to the unreliability of the data. Therefore, conduct your observations in the least obtrusive manner possible while still getting the information you need. Provide training for the observers and use some sort of instrument, such as a checklist, to aid in the observation.

Bias in interviews and focus groups

Interviews and focus groups can provide high-quality information. To be most useful and to avoid bias, the interview design must ensure that:

• the sample is representative of the population

• the participants understand the questions

• the participants are willing participants (their participation is not mandatory)

• the interviewer is trained in interviewing techniques and knows how to record the information accurately

• there is a protocol for consistency in questioning

• there is a method to objectively evaluate the results of the interview.

Restriction of range or range error

Some respondents to a survey or questionnaire may commit the error of restriction of range. This occurs when the respondent, or rater, restricts her ratings to a small section of the rating scale. This could be positive (leniency) or negative (severity).

In some cases, this phenomenon is an unconscious bias on the part of the rater. In other cases, the rater may like (or dislike) going to training. If the rater was required to attend the training course, that could lead to a restriction of range on the negative side. These issues can be addressed when the instrument is being completed. If the evaluation is taking place with the participants present, you can have a brief discussion about this. Or you could include a brief discussion about such rater errors in the instruction section of the instrument.

Bias of central tendency

Some people hesitate to commit to either end of the scale and just indicate responses near the middle. This is called the bias of central tendency.

For example, if you have a rating scale of 1 to 5, some raters tend to use the middle value of 3. You can address this by developing a scale with no middle value (such as 1 to 4).

Emotional bias

This type of bias affects Level 1 evaluation to the greatest extent. It occurs when the participants allow their feelings (like or dislike) for the facilitator to bias their ratings. Liking and disliking are emotions that are directed toward an object or person. In this case, the object of the liking or disliking could be the program or the facilitator. If these emotions go unchecked, they can contaminate the ratings.

This type of bias is difficult to address.

One solution is to provide interim evaluations to allow participants to express themselves. Any overtly biased perspectives (positive or negative) could then be addressed during the training course.

Delivery bias

Delivery bias refers to a bias against online and virtual classroom delivery. There is a myth that the training provided through these delivery formats cannot be evaluated. However, this is not true.

Level 1 end-of-course surveys can be quickly tabulated, and Level 2 evaluations benefit from online knowledge tests, projects, and case studies. Course designers can build application exercises that provide feedback for transfer into the program. Online technology also can be used to assess on-the-job use through surveys and interviews or focus groups. Facilitators also should be able to analyze the impact and return on investment based on the metrics identified in the planning stage.

The communication plan

After taking steps to discern the types of bias that might be present, remember to acknowledge them in your communications. That way, you can build your credibility by taking an appropriate, conservative approach when you present the results of your evaluation. Biased information, or the failure to acknowledge sources of bias, can taint your results and call into question the credibility of the evaluation effort.

Who gets the results? How do you communicate the results to those interested parties? How much information should you communicate? These are all questions related to developing a communication plan. After you’ve completed the report, you need to make decisions regarding who gets what information and in what format. Let’s take a closer look at the components of the communication plan.

Audience. Identify the individual(s) to receive a communication. The people in your communication audience could include your client, your line management, participants, facilitators, and others.

Message. Determine what content needs to be included for each audience. Your client may just want an executive summary, but your manager may want the entire report. The message should answer the what question on the evaluation plan: what do you want to know from evaluating a course or program?

Vehicle. Determine how the message will be communicated. You may provide a written executive summary and presentation to your client. Your manager may want the complete report plus a briefing. Participants may only receive a summary. The information could be presented in person, in a written format as a report, or as an electronic document distributed by email. The vehicle needs to match the audience preference.
Desired result. Do you want a response to the material? What do you want the reader to do as a result of receiving the message? This should address the why question from your evaluation plan: why did you want to obtain these results?
Timing. The communication timing needs to align with when your client, program manager, or talent development manager needs to make decisions regarding the training course.

Frequency. How often are you going to communicate with the audience? Will you be making a report after each delivery? Will you be making quarterly reports? Clients may want an update with their quarterly planning sessions. Delivering information using technology can reduce the time and effort required to distribute the information.

Person responsible. The evaluators are generally responsible for doing the research and analysis and writing the report, while their managers may present the executive summary to the client. Others may be involved in developing media, editing, and so forth to support the communication.

The communication plan is not the end of the process. The reader often has questions that must be answered, which gives you the chance to develop those relationships further and demonstrate how the training initiative can add value to the client.

Quality and value

Evaluation requires some planning and diligence, but it allows you to make a difference in the quality and value of the learning experiences you provide for your clients as you enhance your credibility. Evaluation allows you to document the value that your programs provide for your organization.

Explanation / Answer

Evaluation is the assessment of the overall impact of an activity with regard to its effectiveness in achieving the outcome it was undertaken for. However, the outcome or result of many activities may be intangible and very difficult to analyse or assess rationally, and evaluation frequently runs into this obstacle because the outcome is not quantifiable. In addition, because evaluation is itself a complicated and lengthy process requiring large inputs of manpower, time, and money, organizations often either skip it or perform incomplete assessments that may be irrelevant. That kind of assessment is essentially ineffective in providing an accurate, rational picture of the activity's overall impact on improvement and outcomes. In his article, Bias Could Affect the Credibility of Your Results, Donald McCain states that "the evaluation challenge is one of balance." He means that you must do enough analysis to meet your own needs while also meeting the needs of your client, but no more than that: too much evaluation wastes effort, while too little inhibits good decision making. Achieving this balance requires planning, so that the business metrics the evaluation is based on are identified in advance for accuracy and relevance and provide the right foundation for the analytical process. The relevance and accuracy of the entire evaluation depend on the accuracy of the data and the efficacy with which the data are gathered and used. For the process to remain logical and rational, no bias can be allowed to compromise the data, or the whole exercise becomes ineffective. Therefore, all data must be monitored and controlled to minimize the impact of bias throughout the process.

One of the major areas where bias needs to be controlled is the sampling data, which is critical when deciding whether an action is relevant and should be undertaken. Surveys and interviews should not be sent only to known participants, nor should the desired response be hinted at. Doing so makes the base data inaccurate, so the very decision to initiate an action may be erroneous and cannot generate the desired results and outcome. Because accurate sampling data is so important, the sample must also be large enough to provide an accurate outcome by covering all likely instances of variability and clearly representing them within the data. For data that is based on observation, it is essential that collection be done by a trained expert who can understand visual and body-language cues and avoid bias.

Interviews and focus groups are a major source of high-quality data, provided that bias is avoided by ensuring that the sample is accurately representative, the questions are framed to be easily understandable, participation is voluntary, and the interviewer has the skills required to record information efficiently and extract what is relevant. The interview should also follow a rational methodology to maintain objectivity in both the questioning and the evaluation of the results. Restriction-of-range error is a phenomenon rooted in an unconscious psychological bias on the part of the rater; it is difficult to control, but it can be identified during analysis and adequate measures taken to negate it. The restriction can take different forms, with respondents unconsciously tending to favour one portion of the scale: positive, negative, or central. The central tendency can be addressed by keeping the scale even, with no middle value. Emotional bias is again difficult to address, so it too should be identified and negated during analysis of the data. Delivery bias results from the mode of delivery of the training, for example online or virtual delivery; it is easily addressed by educating participants on the advantages of such delivery formats.

Therefore, by taking adequate care during data collection and analysis, most bias can be avoided or eliminated, resulting in clean, accurate data that yields effective results on analysis.