Can Sales Assessments Actually Predict On-the-Job Success?

I received a call from the HR VP of a large company that was very interested in using our assessments to select new salespeople.  The company was interested, but the HR VP was out to prove how much he knew about assessments.  He was more interested in n's, r's, alphas, deltas, standard deviations and construct validity.  Unless you're a PhD or a statistician, you probably won't know what those terms mean, and unless you're one of the few who do, you don't need to.

Stathead was hell-bent on learning about the technical side of the assessment: how it works, how it was created, how it was validated and its impact on protected minorities.  While this is important, it can also be very misleading.  As you will read below, a test can meet all of those criteria and still not help with selection at all!

Sadly for his company, which was in desperate need of this tool, he didn't care at all about the most important element of the assessment – how its predictive ability made it different from all the assessments he knew about, the personality and behavioral styles assessments that are so widely available.  Unlike nearly all of the other assessments on the market, our assessment actually predicts, with tremendous accuracy, who will succeed in a sales position at your company.  I'll explain.

Take any personality or behavioral styles assessment and you will likely find that its individual findings, like "John gets along well with people", are quite accurate.  But while the individual findings in most of these assessments are accurate, almost none of these assessments were created for the purpose of assessing sales candidates or evaluating sales forces.  Rather, they are usually assessments that were adapted for this use.  Adapted means they measure some of the traits that successful salespeople have, like Drive or Extroversion.  But the traits they measure don't predict sales success.  They are, as I said, just some of the traits that successful salespeople have, and it's important to note that unsuccessful salespeople usually have those traits too.

Our assessment also has accurate individual findings, like "John doesn't ask enough questions", but notice that these are sales-specific findings rather than personality traits.  So difference number one is that while the other assessments provide accurate traits, ours provides accurate sales findings.

Our assessment is different not only because it was built SPECIFICALLY for sales use, but because of its predictive nature.  It actually predicts, with accuracy, whether a sales candidate will succeed in the specific sales role that you need filled.  Other assessments present a collection of findings that you must interpret and draw a conclusion from before guessing whether John will succeed.  Our assessment actually predicts whether John will succeed, not just in sales, but selling your specific product or service at your company, in your industry and in the particular role you have in mind.  It predicts whether John will succeed selling into your marketplace, calling on your decision makers, against your competition, with all of its particular challenges.  That's an enormous difference.  That's why it's so accurate.

When we evaluate a sales force, one of the things we predict with accuracy is growth potential – whether each of your salespeople will improve and by how much.

Complicating matters is the issue of validation – the demonstration, or proof, that an assessment is consistent and reliable in its findings.  This is where all the n's, r's, alphas and standard deviations come into play.  Validation is important because an assessment can't be used in the workplace unless it has been validated.

There are several ways to validate an assessment.  They range from methods as simple as face validity, through content validity, criterion-related validity and construct validity, all the way up to predictive validity, and more.

On the simple end of the spectrum, Face Validity means that, at face value, the questions in the assessment seem to test for what the assessment claims to be testing for.  Geez!

Content Validity is the degree to which a test is a representative sample of the content of whatever objectives or specifications the test was originally designed to measure.  More Geez!

And yes, there are assessments on the market that don’t prove any more than that!

Criterion Validity focuses on how well the test correlates with one or more well-respected outside measures of the same objectives or specifications (like a manager's review).

Construct Validity is the experimental demonstration that a test is measuring the construct (attribute) it claims to be measuring.

Finally, Predictive Validity is the degree of correlation between the scores on a test and some other measure that the test is designed to predict (like on-the-job performance).  This is the most time-consuming and costly form of validation, and it is the method we use.  In the case of the predictive validity of our sales candidate assessment, one year after salespeople were assessed, 92% of those we recommended who were also hired were performing in the top half of their sales organizations.
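For readers who want to see what that correlation actually looks like, predictive validity is typically summarized as a Pearson correlation coefficient (the "r" Stathead kept asking about) between assessment scores and a later performance measure.  The sketch below uses entirely hypothetical scores and sales figures, not data from our assessment, just to show how the number is computed:

```python
# A minimal sketch of how predictive validity is quantified: the Pearson
# correlation (r) between assessment scores and a later performance
# measure. All numbers below are hypothetical, for illustration only.

from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance numerator: how the two measures move together.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Denominator: the spread of each measure on its own.
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical candidate assessment scores and their first-year sales ($K):
scores = [55, 62, 70, 74, 81, 88, 90]
sales = [310, 340, 420, 400, 480, 530, 560]

r = pearson_r(scores, sales)
print(f"predictive validity r = {r:.2f}")  # r near +1.0 = strong prediction
```

An r near zero would mean the test tells you nothing about future performance, which is exactly the trap with assessments validated only for face or content validity: they can pass those checks and still have no predictive power.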

That’s quite a profound difference from an assessment that says “John gets along well with people”.

I know this was a long post but I thought it was important to get this information out.

© Copyright 2007 Objective Management Group, Inc.