
Hire Conscientious People


I found, at Half Price Books, a great little gem of a book (and for only a few dollars!) titled The Truth About Managing People by Stephen P. Robbins. In it, Professor Robbins shared a very useful and applicable tip for hiring people: When in doubt, hire conscientious people!

The APA Dictionary of Psychology (VandenBos, 2007) defines conscientiousness as “the tendency to be organized, responsible, and hardworking, construed as one end of a dimension of individual differences (conscientiousness vs. lack of direction) in the Big Five personality model.”

According to Robbins (2008), findings from numerous research studies reveal that “only conscientiousness is related to job performance” (p. 22).

“Conscientiousness predicts job performance across a broad spectrum of jobs—from professionals to police, salespeople, and semi-skilled workers. Individuals who score high in conscientiousness are dependable, reliable, careful, thorough, able to plan, organized, hardworking, persistent, and achievement-oriented. And these attributes tend to lead to higher job performance in most occupations” (Robbins, 2008, p. 22).

Of course, this does not mean that you should ignore other characteristics or that other characteristics aren’t relevant for certain jobs. It’s also not very surprising that individuals low in emotional stability typically don’t get hired or, when they are hired, usually don’t last very long in their jobs (Robbins, 2008).

Written By: Steve Nguyen, Ph.D.
Leadership Advisor & Talent Development Consultant

References

Robbins, S. P. (2008). The truth about managing people (2nd ed.). Upper Saddle River, NJ: FT Press.

VandenBos, G. R. (Ed.). (2007). APA dictionary of psychology. Washington, DC: American Psychological Association.

Psychopathology, Assessments of Personality, and I-O Psychology

In the latest issue of Industrial and Organizational Psychology: Perspectives on Science and Practice, one of the focal articles examined maladaptive personality at work. In the article, Nigel Guenole (2014) discussed the DSM-5’s newest changes to the personality disorder diagnosis. He presented a model of maladaptive traits, along with objections to inventories measuring maladaptive personality. Under the section titled “Important Considerations in the Assessment of Maladaptive Personality at Work,” Guenole listed five barriers to explain why I-O psychologists have been reluctant to examine the maladaptive trait model and the corresponding changes in the newest DSM-5.

I will very briefly list the five barriers and then add one important concern I have that was not mentioned on the list.

  1. Legal Concerns – “concerns that use of maladaptive inventories might infringe rights protected by law” (p. 91).
  2. Social Responsibility Concerns – “concern of the social impact of the use of maladaptive personality as a prehire screen” (p. 93).
  3. Small Validities – “personality tests show low validities generally and are not predictive of performance” (p. 91).
  4. Construct Redundancy and Lack of Incremental Validity – “the new taxonomic model of personality pathology is redundant if measures of the Big Five are already used in assessment and would therefore have no incremental validity” (p. 91).
  5. Maladaptive Personality Inventories Are Easily Faked – there is concern about faking on maladaptive inventories.

Guenole (2014) ended the article by stating that “industrial psychologists need to be faster in their response to recent developments in clinical psychology to develop a full picture of personality at work” (p. 94).

While these five concerns may be valid, a major concern I have (as a former mental health counselor), and one that I did not see mentioned on the list, is a potential violation of the American Psychological Association’s Ethics Code, specifically APA Code 2.01, Boundaries of Competence.

The APA Code of Ethics states that psychologists should provide services only in areas in which they are competent (based on their education, training, experience, etc.). If they do not possess that level of competence, they should seek out additional education, training, or supervision to become competent, or they should refer the clients (individuals or businesses) to another professional who is competent.

APA Code 2.01, Boundaries of Competence, states that psychologists are to “provide services, teach, and conduct research with populations and in areas only within the boundaries of their competence, based on their education, training, supervised experience, consultation, study, or professional experience” (APA Ethical Code, 2002, 2.01[a]). In addition, when called upon to provide services that are new or beyond their level of competence, they are to “undertake relevant education, training, supervised experience, consultation, or study” (APA Ethical Code, 2002, 2.01[c]).

Here is an example of an ethical situation in which an I-O psychologist might find him/herself:

Summary: An I-O psychologist (not trained to administer and interpret a personality test) hired a clinical psychologist (who was trained) to administer and interpret a personality test. However, for financial reasons, the services of the clinical psychologist were discontinued, and the I-O psychologist continued administering and interpreting the personality assessments, beyond the boundaries of his training and competence.

Ethical Issue: Performing assessments (or services) for which one has not received training and which are beyond his/her level of professional competence.

APA Code: APA Code 2.01, Boundaries of Competence, states that psychologists are to “provide services, teach, and conduct research with populations and in areas only within the boundaries of their competence, based on their education, training, supervised experience, consultation, study, or professional experience” (APA Ethical Code, 2002, 2.01[a]). In addition, when called upon to provide services that are new or beyond their level of competence, they are to “undertake relevant education, training, supervised experience, consultation, or study” (APA Ethical Code, 2002, 2.01[c]).

Resolution: To avoid this ethical dilemma, I-O psychologists should get training in the administration and interpretation of the personality assessment(s). A professional does not need to be a clinical psychologist to administer personality assessments. However, one does need appropriate training to ensure that he/she is competent in administering and interpreting these assessments (APA Ethical Code, 2002, 2.01[c]). Examples of such training include taking a graduate-level assessment course or being trained by a mentor who is competent and who regularly administers and interprets assessments.

One Final Comment: Even with the appropriate training to ensure competency in administering and interpreting personality assessments, when it comes to assessing psychopathology and mental health issues, it may be wise for I-O psychologists to refer clients who need such services to counseling and clinical psychologists. Psychologists in those areas have a firm grasp of the DSM-5, are much better trained and experienced in assessing and addressing psychopathology and mental illness, and are better equipped to provide counseling and therapy.

I have shared this before in discussing coaching and mental illness, but it is certainly applicable here in our discussion about psychopathology, assessments of personality, and whether it makes sense for I-O psychologists to jump in as well. I really like the following quote, so I’ll leave the reader with this:

“Any diagnosis, treatment, ways to help or exploration of underlying issues is the province of mental health specialists and is best avoided” (Buckley, 2010, p. 395).

Written By: Steve Nguyen, Ph.D.
Leadership and Talent Consultant

References

American Psychological Association. (2002). Ethical principles of psychologists and code of conduct. American Psychologist, 57, 1060-1073. Also available: http://www.apa.org/ethics/code/index.aspx

Buckley, A. (2010). Coaching and Mental Health. In E. Cox, T. Bachkirova, & D. Clutterbuck (Eds.), The complete handbook of coaching (pp.394-404). Thousand Oaks, CA: Sage.

Guenole, N. (2014). Maladaptive personality at work: Exploring the darkness. Industrial and Organizational Psychology: Perspectives on Science and Practice, 7(1), 85-97.

Assessments with Only Face Validity


I visit LinkedIn often and enjoy reading and participating in some of the discussions in the industrial-organizational psychology groups. The other day, a question came up regarding whether an instrument/assessment/tool that has only “face validity,” without other types of validity, is valuable to clients in a business setting.

People often bring up face validity, but “face validity relates more to what a test appears to measure to the person being tested than to what the test actually measures” (Cohen & Swerdlik, 2009, p. 174). This distinction is very important, as you’ll see.

I am not an expert in psychometrics, but I believe there are some important things to consider (beyond the face validity or “face value”) when evaluating an assessment, whether it’s for business, educational, or personal use. These things are: content, criterion, and construct validity; reliability; and norming.

VALIDITY – Is the test measuring what it says it measures?

“Face validity relates more to what a test appears to measure to the person being tested than to what the test actually measures” (Cohen & Swerdlik, 2009, p. 174). A test that seems, on the face of it, to measure what it claims to measure has good “face validity” in the eyes and mind of the testtaker/respondent. In other words, if I believe that a test I’m taking looks legitimate, it will give me confidence in the test and help keep me motivated as I’m taking it. In terms of selling a test, a test/assessment with good face validity helps convince potential buyers (e.g., supervisors, HR staff, executives) to “buy in.”

However, what many do not realize is that even if a test lacks face validity, it can still be relevant and useful, although without good face validity it might be poorly received by testtakers. “Ultimately, face validity may be more a matter of public relations than psychometric soundness, but it seems important nonetheless” (Cohen & Swerdlik, 2009, p. 176).

Content validity concerns whether the content of a test is a representative sample of the domain it claims to cover. “For an employment test to be content-valid, its content must be a representative sample of the job-related skills required for employment” (Cohen & Swerdlik, 2009, p. 176). Content validity has to do with whether or not the test “covers all the bases.” For instance, a test of American history that has only questions (items) about the Civil War has inadequate content validity because the questions would not be representative of the entire subject of American history (i.e., the Civil War was a significant but small part of the entire history of the United States) (Vogt & Johnson, 2011). Another example of a test that is not content-valid would be a depression inventory that only asks questions about feelings of sadness. Again, this illustrates inadequate content validity because there are other aspects of depression that need to be considered (e.g., energy level, ability to concentrate, weight gain/loss) (Barber, Korbanka, Stradleigh, & Nixon, 2003).

Criterion-related validity is the ability of a test to make accurate predictions. For example, the degree to which a student’s SAT score predicts his or her college grades is an indication of the SAT’s criterion-related validity (Vogt & Johnson, 2011).
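To make this concrete, here is a minimal Python sketch of how a criterion-related validity coefficient is typically estimated: correlate test scores (the predictor) with a later criterion measure. The numbers below are invented purely for illustration; they are not real SAT or GPA data.

```python
# Criterion-related validity estimated as the Pearson correlation
# between a predictor (test scores) and a criterion (later outcomes).
# All numbers are invented for illustration only.
from statistics import correlation  # available in Python 3.10+

sat_scores = [1050, 1190, 1300, 1420, 980, 1250, 1370, 1100]  # predictor
college_gpas = [2.8, 3.1, 3.4, 3.7, 2.5, 3.2, 3.6, 2.9]       # criterion

r = correlation(sat_scores, college_gpas)  # the validity coefficient
print(f"Criterion-related validity (Pearson r) = {r:.2f}")
```

The closer the coefficient is to 1.0, the better the test predicts the criterion; real employment-test validity coefficients are far more modest than toy data like this suggests.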

Construct validity is the degree to which variables on a test accurately measure the construct. According to Cohen and Swerdlik (2009): “Construct validity is a judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a variable called a construct. A construct is an informed, scientific idea developed or hypothesized to describe or explain behavior” (p. 193). Examples of constructs are intelligence, job satisfaction, self-esteem, anxiety, etc. “The researcher investigating a test’s construct validity must formulate hypotheses about the expected behavior of high scorers and low scorers on the test. These hypotheses give rise to a tentative theory about the nature of the construct the test was designed to measure. If the test is a valid measure of the construct, then high scorers and low scorers will behave as predicted by the theory” (Cohen & Swerdlik, 2009, p. 193).
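One common way to test such hypotheses is a “known-groups” comparison: if the test truly measures the construct, groups expected to differ on the construct should also differ on the test. Here is a minimal Python sketch, with invented scores and hypothetical groups.

```python
# Known-groups sketch of construct validation: a group expected to be
# high on the construct (say, job satisfaction) should outscore a group
# expected to be low. All scores are invented for illustration only.
from math import sqrt
from statistics import mean, stdev

expected_high = [42, 45, 39, 47, 44, 41]  # hypothetical "high" group
expected_low = [28, 31, 25, 33, 29, 27]   # hypothetical "low" group

# Cohen's d as a simple effect-size check on the predicted difference
pooled_sd = sqrt((stdev(expected_high) ** 2 + stdev(expected_low) ** 2) / 2)
d = (mean(expected_high) - mean(expected_low)) / pooled_sd
print(f"Difference in the predicted direction, Cohen's d = {d:.2f}")
```

If high and low scorers do not differ as the theory predicts, that is evidence against the test’s construct validity.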

Dr. Wendell Williams, in his book Superselection: The Art and the Science of Employee Selection and Placement (2003), offered a nice breakdown of the different types of validity as they relate to determining whether test scores are related to job performance (p. 4):

  • Content validation: Does the content of the test resemble the content of the job?
    • Example: A test requiring an applicant to type a letter is considered content valid because passing the test demonstrates an applicant’s typing ability.
  • Criterion validation: Do higher test scores predict higher job performance?
    • Example: The test would satisfy the requirements for criterion-related validity if higher scores on that typing test were associated with better performance on the job.
  • Construct validation: Does the test measure deep-seated mental constructs that are associated with job performance?
    • Dr. Williams said construct validity in employee selection testing is difficult to identify or interpret and warned against relying on or using it. For instance, he offered the construct of attitude: “If you discovered that ‘attitude’ had something to do with keyboarding skills, you could give each applicant an attitude test. However, you would have the burden of proving that attitude (a construct) predicted typing ability” (Williams, 2003, p. 4).

RELIABILITY – Is the instrument consistent, stable, repeatable?

Reliability is the ability of a test or assessment to consistently measure the topic or construct under study at different times and across different populations (Hinton, Brownlow, McMurray, & Cozens, 2004). It is estimated in a number of different ways, including test-retest reliability, split-half reliability, alpha (internal consistency) reliability, and inter-rater (coder) reliability. For example, a bathroom scale that gives you a different reading of your weight each time you step on it is not reliable.
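As a concrete illustration of one of these indices, here is a short Python sketch of coefficient (Cronbach’s) alpha, computed as alpha = [k / (k − 1)] × (1 − Σ item variances / variance of total scores). The item responses are invented for illustration.

```python
# Cronbach's alpha (internal consistency) on a tiny invented data set:
# rows are respondents, columns are items on the same scale.
from statistics import pvariance

responses = [
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
]

k = len(responses[0])                                   # number of items
item_vars = [pvariance(item) for item in zip(*responses)]
total_var = pvariance([sum(row) for row in responses])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")                # nearer 1 = more consistent
```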

NORMING

“In a psychometric context, norms are the test performance data of a particular group of testtakers that are designed for use as a reference when evaluating or interpreting individual test scores” (Cohen & Swerdlik, 2009, p. 111).
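In practice, norms let you convert an individual’s raw score into a standing relative to the reference group. Here is a minimal Python sketch, assuming hypothetical norm-group parameters and a roughly normal distribution of scores in the norm group.

```python
# Converting a raw score to a norm-referenced standing. The norm-group
# mean and SD are invented placeholders; real norms come from a
# carefully chosen standardization sample.
from statistics import NormalDist

norm_mean, norm_sd = 100.0, 15.0   # hypothetical norm-group parameters
raw_score = 112.0

z = (raw_score - norm_mean) / norm_sd
percentile = NormalDist().cdf(z) * 100  # assumes roughly normal norms
print(f"z-score = {z:.2f}, percentile rank = {percentile:.0f}")
```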

Here’s a good example from Cohen and Swerdlik (2009) to illustrate this idea of norming and generalizability:

“For example, a test carefully normed on school-age children who reside within the Los Angeles school district may be relevant only to a lesser degree to school-age children who reside within the Dubuque, Iowa, school district. How many children in the standardization sample were English speaking? How many were of Hispanic origin? How does the elementary school curriculum in Los Angeles differ from the curriculum in Dubuque? These are the types of questions that must be raised before the Los Angeles norms are judged to be generalizable to the children of Dubuque” (p. 117).

When I worked for a school system overseas on a small Pacific island, the school psychologists would give students U.S.-developed educational assessments that were not normed on Pacific Island students. What they did was add a small “warning” statement to their reports to let everyone know. Of course, none of the students and almost none of the parents understood what this meant. What’s even more tragic is that I never saw these psychologists attempt to train educators in reading and interpreting the assessments, so the majority of the teachers didn’t understand it either.

It is very tempting to take an assessment, receive back a nice-looking report with beautiful graphs and professional-sounding words (often computer-generated), and think the assessment is “valid” or “good.”

As Wendell Williams, MBA, Ph.D. (an industrial psychologist who develops and validates tests), warns: we need to be extra careful to ensure that a test/assessment meets professional standards, or else we’re basically giving back worthless junk scores to people who will completely trust the results.

The bottom line: Make sure that, legally, ethically, and statistically, you use a test with good psychometric properties, beyond just a pretty face (i.e., face validity). A test with weak or inadequate content, criterion, and/or construct validity; poor reliability; and inadequate norming is useless.

References

Barber, A., Korbanka, J., Stradleigh, N., & Nixon, J. (2003). Research and statistics for the social sciences. Boston: Pearson.

Cohen, R. J., & Swerdlik, M. E. (2009). Psychological testing and assessment: An introduction to tests and measurement (7th ed.). New York, NY: McGraw-Hill.

Hinton, P. R., Brownlow, C., McMurray, I., & Cozens, B. (2004). SPSS explained. New York: Routledge.

Vogt, W. P., & Johnson, R. B. (2011). Dictionary of statistics & methodology: A nontechnical guide for the social sciences (4th ed.). Thousand Oaks, CA: Sage.

Williams, R. W. (2002, August 29). Separating Good Sense From Nonsense: Who Can You Believe? Retrieved from http://www.ere.net/2002/08/29/separating-good-sense-from-nonsense-who-can-you-believe/

Williams, R. W. (2003). Superselection: The art and the science of employee selection and placement. Acworth, Georgia: ScientificSelection Press.

Williams, R. W. (2005, October 6). Anatomy of a Test Vendor. Retrieved from http://www.ere.net/2005/10/06/anatomy-of-a-test-vendor/

Williams, R. W. (2005, November 3). Is This Test Validated for Your Industry? Retrieved from http://www.ere.net/2005/11/03/is-this-test-validated-for-your-industry/

Williams, R. W. (2007, May 18). Validating a Personality Test. Retrieved from http://www.ere.net/2007/05/18/validating-a-personality-test/

Williams, R. W. (2007, October 31). Good Test? Bad Test? Retrieved from http://www.ere.net/2007/10/31/good-test-bad-test/

Williams, R. W. (2008, December 10). Dissecting the DISC. Retrieved from http://www.ere.net/2008/12/10/dissecting-the-disc/

Williams, R. W. (2010, February 10). Promises, Promises: How to Identify a Bad Hiring Test (Part I of II). Retrieved from http://www.ere.net/2010/02/10/tests/

Williams, R. W. (2010, February 11). Promises, Promises: How to Identify a Bad Hiring Test (Part II of II). Retrieved from http://www.ere.net/2010/02/11/promises-promises-how-to-identify-a-bad-hiring-test-part-ii-of-ii/

Williams, R. W. (2010, June 24). Uncovering Test Secrets, Part 1. Retrieved from http://www.ere.net/2010/06/24/uncovering-test-secrets-part-1/

Williams, R. W. (2010, June 25). Uncovering Test Secrets, Part 2. Retrieved from http://www.ere.net/2010/06/25/uncovering-test-secrets-part-2/

Williams, R. W. (2012, February 1). Bad Tests and Fake Bird Seed. Retrieved from http://www.ere.net/2012/02/01/bad-tests-and-fake-bird-seed/