Using Bio-Data for Selection
Looking for love in all the wrong places

Some of you might have heard or read about Google and its bio-data applicant screening process. As cited in a recent New York Times article, its basic approach is supposed to be simple:

  • Survey current employees on a variety of characteristics and traits, including teamwork, biographical information, past experiences, and accomplishments.

  • Statistically determine which of these many traits differentiates employee performance.

  • Develop an online survey to administer an intensive bio-data questionnaire to applicants.

  • Score applicant responses based on the number of performance indicators each candidate possesses.

All in all, this is supposed to be a more "scientific" approach to hiring. Don't rush to the bio-data survey solution just yet. At the end of this article, you can decide for yourself whether this sounds like a good process (and be thankful you were not the one who convinced management bio-data was a good idea).
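As a sketch, the scoring step described above boils down to counting matches against a list of statistically flagged items. The indicator names and applicant answers below are invented purely for illustration:

```python
# Hypothetical sketch of the bio-data scoring step: count how many
# statistically flagged "performance indicators" appear in each
# applicant's answers. Indicator names and responses are invented.

# Items that (hypothetically) differentiated high performers in the employee survey
PERFORMANCE_INDICATORS = {"led_a_team", "started_a_business", "tutored_peers"}

def score_applicant(responses: set[str]) -> int:
    """Return how many performance indicators an applicant's bio-data contains."""
    return len(responses & PERFORMANCE_INDICATORS)

applicants = {
    "A": {"led_a_team", "tutored_peers", "plays_guitar"},
    "B": {"plays_guitar"},
}
scores = {name: score_applicant(r) for name, r in applicants.items()}
print(scores)  # applicant A matches two indicators, B matches none
```

Note that the hard part is everything this sketch takes for granted: which items belong on the indicator list, and whether they actually cause performance.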

Bio-Data?

Biographical data or "bio-data" surveys are well-researched in the literature. They work on the same principle as behavioral interviewing: what was done in the past predicts what will be done in the future. The big differences between bio-data questionnaires and behavioral interviews are that a good behavioral interview is backed by a thorough job analysis, interviewers can ask follow-up and clarification questions, and multiple interviewers coordinate the information.

A bio-data form depends entirely on the people who created the items, the scoring algorithm, trained analysts who look for trends, and the specific position. More about these later.

Both bio-data questionnaires and behavioral interviews are self-reported information subject to applicant creativity and being in touch with reality. In general, they both have about the same degree of predictive accuracy. Let's peel back the bio-data onion.

What's the Root Cause?

Our pet whimpers. Dr. Dolittle is on vacation, so we don't know when it started, where it hurts, or whether there are other symptoms. We just know Fluffy is in pain.

Low employee performance is similar. We can evaluate employee satisfaction, voluntary turnover, training success, or terminations. But these are all end results. They don't actually tell us the root cause of the performance problem.

Traditionally, performance problems can be traced back to bad management (e.g., incompetent managers, conflicting goals), unpleasant working conditions (e.g., wages, benefits, environment, insufficient resources), lack of training, and/or poor job skills. There are many reasons why people under-perform.

Finding out the root of the problem is the most important part of developing a pre-hire test. If we don't know the root cause of low performance, or the root cause of high performance, any hiring solution will be half-baked because it won't address the issue.

What's Performance?

How do we measure employees' performance? Is it measured by whether a company is growing fast? By whether it is profitable? There are too many environmental factors for us to assume that employees are the only growth factor.

How about performance appraisals? We all know performance appraisals suffer from the "no one here is perfect," "everyone here is perfect," or "forced-rank" syndrome. In addition, performance appraisals tend to be part fact and part management opinion. Basically, we can never really know what performance appraisals measure. Two managers may rate the same employee differently.

Let's say we have objective productivity data available such as units per hour, mistakes per 1,000, cold calls per month, cross-selling revenue, or customer service surveys. These are better indicators of performance because they are less likely to be affected by other factors, but we still have to account for things that might influence them.

Did the machinery malfunction or was it newly renovated? Were mistakes suddenly calculated on a different basis? Was the territory newly acquired or was there a company promotional campaign?

You get the picture. Accurately defining performance and controlling for outside factors is absolutely critical. Otherwise, you run the risk of measuring garbage.

What Can Current Employees Tell Us?

Assuming that performance data and root-cause data are under control, let's look at current employees.

Current employees are a great deal alike. That is, they are all "good enough" to stay hired. The differences between high- and low-performing current employees (assuming we are exceptionally clear on the definition of performance) are generally very small. So small, in fact, that performance differences might be due to pure chance (now, wouldn't that mess up the recipe for success?). Applicants, on the other hand, are very different.

In addition to the applicant-employee difference, not all jobs have the same skill requirements. Does it come as any surprise that singing in the glee club may have nothing to do with administrative skills? I know sales managers who only hire salespeople who played sports in high school (the poor man's bio-data test). Fifty percent consistently fail within the first year. That's no better than chance. Did the sports bio-data question work? You do the math.

Pick up any good book on bio-data and you'll see that trustworthy bio-data scores are exquisitely sensitive to positions. In other words, salespeople, first-line managers, and administrative support all might have completely different bio-data profiles associated with job performance (there's that p-word, again).

High performers are usually specialized beasts who do not conform to any norm. They are usually so good that they operate on automatic pilot; or they cut corners to achieve their goals. I recall a marketing manager who stole his prior employer's product secrets and used them to reduce development time. Now there's a good high-performance role model. Right?

You may think that you should figure out what your corporate culture is, and then examine whether applicants fit that. But companies are not static. They start as small enterprises founded by highly motivated entrepreneurial folks who dine on the vending machine goodies, shave in the bathrooms, and sleep on cots.

After a while, the free-wheeling entrepreneurial environment changes into a bureaucracy, then it changes again, and so forth. Anyone who recalls the rise in dot-com businesses, or remembers how big business fares when leadership changes, knows that today's culture-fit may not last.

I once worked for a company that hired smart, highly motivated people for plant start-ups. Two years later, the plant management complained they had "all leaders and no followers!" Be careful what you measure. You just might get it.

Statistical Sense and Nonsense

Statistics are dumb…but useful. They can tell you whether two numbers are correlated, but they cannot tell you whether one number causes the other.

This is really important if you want to develop a test that predicts job performance.

I can statistically show that blue eyes and blond hair are correlated, but we all know that blue eyes do not cause blond hair. Jan Lethen, a statistics professor at Texas A&M, cites other correlations as examples of statistical nonsense: shark attacks with ice cream sales; skirt lengths with stock prices; and cavities with vocabulary size.

When a broad sample of items is given to a broad sample of people and statistically analyzed, some correlations are inevitable. But if the items do not cause the behavior, they are bogus: they end up screening out qualified people and screening in unqualified ones.

Another problem is sample size. Statistics describe general trends between two groups, each of which must follow a similar bell curve. Bell curves need a bare minimum of about 25 people; they really work when the numbers get closer to 250.

Remember our employee-applicant difference discussion? The employee-skills bell curve would be shaped more like a finger. An applicant bell-curve would be shaped more like a ripe pimple. Comparing data with different bell-curve shapes can add major error to the numbers.

When Does Bio-Data Work?

Bio-data questionnaires provide the best results when the following criteria are met:

  • A tight-knit group of similar jobs.

  • A tight-knit definition of job performance.

  • A skilled analyst interviews multiple people looking for causal bio-data items.

  • Bio-data items are administered to a large number of current employees and analyzed for performance differentiation.

  • The test is given to a large number of applicants who are hired regardless of their scores.

  • After a period of adjustment, bio-data scores and job performance are statistically compared.

So, I ask: just how predictive does the Google approach sound to you?

By the way, I don't want to publish wrong-headed information. And I know how reporters distort facts to make a good story. So I welcome anyone from Google to post a clarification (or Todd Carlisle to address these scientific questions when he speaks at ERE's San Diego conference in April).

----------
This article originally appeared on ERE.net:
http://www.ere.net/articles/db/E5CB2DBBC6EF42B9938F0CFD846005A4.asp
----------