By: Keaton Fletcher
A paper recently published in the International Journal of Selection and Assessment by a team of researchers including WSC Advisory Council Member Deborah Rupp focuses on an increasingly popular tool that organizations use to select individuals for hiring or promotion: assessment centers. Assessment centers require participants to engage in a range of prescribed tasks designed to elicit the competencies that will be needed on the job. Trained raters then evaluate each participant on these competencies based on the behavior shown during the tasks. Although resource intensive, assessment centers are useful tools for organizations: they have been shown to predict future job performance, and they have historically been viewed as a fairer, less biased method of evaluation than other tools organizations could use. However, some evidence (Dean, 2008) suggests that assessment centers may not be as free of bias as one might hope. Consistent findings point to a leniency effect (i.e., a willingness to provide higher scores) towards white individuals and women. There is also some evidence (Schmitt, 1993) that assessment centers may be susceptible to a similar-to-me effect, in which raters score participants who resemble them on demographic variables (e.g., race, gender) more favorably. Although combining ratings across multiple raters may help reduce the impact of these issues, Thornton, Rupp, Gibbons, and Vanhove (2019) argued that it is imperative to know whether these biases are present even prior to aggregating across raters.
Using data from 189 police officers participating in an assessment center for promotion, Thornton and colleagues (2019) explored how the leniency and similar-to-me effects might appear in the real world. In this particular assessment center, participants completed three different tasks before three separate pairs of judges. The judges were advanced police officers who had received training on the assessment center and were given materials designed to minimize potential bias. The results of the study are generally encouraging: where bias was present at all, it was minimal. The authors did find some evidence of a leniency effect toward white participants across all tasks, but findings regarding leniency toward women and similar-to-me effects were inconsistent. Certainly this does not mean that all assessment centers are free of bias, but it should provide some hope that, given appropriate design, rater training, and materials, an assessment center may exhibit minimal bias. By minimizing bias in assessment centers, organizations can increase the diversity of their internal candidate pool at each level of the organization, creating the potential for succession planning that facilitates organizational diversity from the executive level down.