WORKSPAN
REWARDING READS

Sound HR Management Utilizes Both Analytics and Practitioners

[Image: © Portra / iStock]

"Rewarding Reads” is a space for articles and personal essays meant to be thought-provoking and informative for human resources professionals, from sharing the “human” perspectives on workplace issues to book reviews of business titles we find inspiring. Have an essay or blog post to share? Contact us at workspan@worldatwork.org.

We have all read at least one “____ For Dummies” book. When faced with a daunting technological challenge, we frantically seek basic survival information. The reader is not dumb, just a little short on the knowledge and skill required to confront whatever the challenge is.


The literature currently suggests that anyone not doing analytics is hopelessly incapable of making business decisions. Yet even compiling the latest turnover report is an exercise in analytics. Doing analytics should be a means to understanding, not an end in itself. Analytics is one of four primary sources of evidence that can inform decisions, along with scientifically sound research, practitioner knowledge and stakeholder perceptions. Some degree of analytical competence is a requirement for professionals today, but one need not be a data scientist to be competent.

While attending a seminar given by a highly qualified data scientist, I noted that he was a big fan of the Random Forest technique used in analytical work. Because of my education and experience, I am cautious about exposing my total ignorance of a topic, so I did a stealthy Google check on my phone. The definition I found was:

“Random Forests are an ensemble learning method for classification, regression and other tasks that operate by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes or mean/average prediction of the individual trees.”

Despite understanding and using regression analysis, I found myself no better off than before reading that definition. So I assumed I would never need to do a Random Forest analysis and checked my email while waiting for a topic I knew something about.

What Is Necessary?

One of the challenges associated with using analytics is deciding what is needed. If I am evaluating the effectiveness of a recruiting and selection process, I certainly would like to know how it has worked. A simple correlation test measuring the relationship between the criteria used in selection and both the retention and the performance of hires would provide evidence useful in evaluating the selection model. If certain criteria correlated highly with the desired outcomes, that would be evidence that using those criteria was helping the organization make good decisions. I could run a single-factor test on each criterion to see whether it correlated with positive outcomes, or I could enter all the criteria into a multiple-factor regression model to see which of them contributed the most. These are simple statistical tests that can be done with readily available software.
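As a minimal sketch of what those single-factor tests might look like, the following uses readily available Python libraries; the column names and figures are hypothetical, standing in for real HRIS data.

```python
# Sketch: correlate each selection criterion with each outcome, one pair
# at a time. All data below is invented for illustration.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "interview_score": [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.8, 3.1],
    "test_score":      [62, 78, 55, 85, 74, 50, 90, 60],
    "retained_24mo":   [0, 1, 0, 1, 1, 0, 1, 0],   # 1 = still employed
    "perf_rating":     [2.9, 4.0, 2.7, 4.4, 3.8, 2.4, 4.7, 3.0],
})

for criterion in ["interview_score", "test_score"]:
    for outcome in ["retained_24mo", "perf_rating"]:
        r, p = pearsonr(df[criterion], df[outcome])
        print(f"{criterion} vs {outcome}: r = {r:.2f} (p = {p:.3f})")
```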

Being able to predict the performance a candidate for employment is likely to exhibit is almost always valuable, and there is a lot of research on that topic. Intelligence (G) has been shown to correlate with performance, but the correlation is not strong. Conscientiousness has also been shown to correlate with performance, but again the correlation is not strong. Using both factors in a multiple-factor test makes the correlation much stronger. This is understandable, since some very smart people don't achieve much because they are not conscientious, and some moderately intelligent people are successful because of their persistence.
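A hedged sketch of that comparison, on fabricated data, shows the pattern: each factor alone explains a modest share of performance variance, while the two together explain substantially more.

```python
# Sketch: compare each predictor alone with both together, using R^2
# from an ordinary linear regression. Data is simulated for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
g = rng.normal(size=n)        # intelligence (standardized)
consc = rng.normal(size=n)    # conscientiousness (standardized)
perf = 0.5 * g + 0.5 * consc + rng.normal(scale=0.7, size=n)

for name, X in [("G alone", g.reshape(-1, 1)),
                ("Conscientiousness alone", consc.reshape(-1, 1)),
                ("Both factors", np.column_stack([g, consc]))]:
    r2 = LinearRegression().fit(X, perf).score(X, perf)
    print(f"{name}: R^2 = {r2:.2f}")
# Each factor alone explains a modest share of variance; the two
# together explain noticeably more.
```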

A more nuanced prediction can be made by considering the nature of the role the candidate would play. If a high threshold of native intelligence is required (e.g., research scientist), G may be more impactful than conscientiousness in selection, although both are of course desirable. If the job requires a high level of persistence but less advanced intelligence (e.g., sales representative), conscientiousness becomes relatively more important as a predictor.

In addition to discovering the best predictors, one would want to develop a selection process that is appropriate. For example, if a candidate will have to work closely with a variety of current employees, it may be prudent to involve those people in the screening process. Even if that takes more time, and even if seemingly qualified candidates are lost along the way, the success/failure rate at each stage can be evaluated. By focusing on where candidates were lost, corrective action can be taken.
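As a rough illustration of the stage-level measurement this implies, the sketch below uses hypothetical stage names and candidate counts; the point is simply to compute where candidates drop out.

```python
# Sketch: pass rates between consecutive selection stages.
# Stage names and counts are invented for illustration.
stages = [
    ("Applications received", 400),
    ("Passed screening",      180),
    ("Passed skills test",     90),
    ("Passed panel interview", 30),
    ("Offers accepted",        18),
]

for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
    print(f"{prev_name} -> {name}: {n / prev_n:.0%} advanced "
          f"({prev_n - n} lost)")
```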

A simple model like the one below would require measurements at each of the stages and the results could provide insights into what might be done to lessen the losses.

[Figure: a simple multi-stage selection model, with measurements taken at each stage]
Business knowledge is helpful in defining the issues one wishes to address using analytics. It allows one to decide how sophisticated the tools must be to get at the information needed. If the organization has just implemented a new compensation plan and wants to know the level of acceptance by employees, it can conduct an employee attitude survey. Just asking “What is your reaction to the plan?” may be enough, with a twenty-point response scale ranging from “hate it” to “love it.” When compiling the results, it is common to report a simple average (mean) to reflect employee views. But if a more nuanced indication is needed, such as how many hated it, how many loved it and how many were less extreme in their views, the data can be arrayed in a frequency distribution (a histogram), such as the one below.

[Figure: histogram of employee responses on the twenty-point scale]
It is apparent that much information is lost by reporting only the single average. There appears to be a bimodal distribution of responses, which requires further examination. If the positive responses (the ones to the right of the average) come more often from older, longer-service employees, the cause of that should be evaluated.

Perhaps a service award program was replaced with an incentive plan that rewards only those who perform well. At this point, technology is of less use and the knowledge of an HR practitioner needs to take over. The reasons for the pattern need to be identified to determine whether they indicate a problem with the new program. Yet for further examination to be done, there must be a way to determine who was positive and who was negative; otherwise, the data only establishes that there are polar views. Even though anonymity might increase response rates, it is still possible to build measures into the survey that support this analysis without identifying individuals (e.g., age ranges and seniority ranges).
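A minimal sketch of both ideas, the frequency distribution and the anonymity-preserving bands, on simulated survey data; the column names, bands and clusters are invented.

```python
# Sketch: the mean hides a bimodal pattern that a banded frequency
# distribution reveals. All data is simulated for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
responses = pd.DataFrame({
    # Two clusters on the 20-point "hate it".."love it" scale.
    "reaction": np.concatenate([rng.integers(2, 7, 60),     # unhappy
                                rng.integers(15, 20, 60)]), # happy
    "seniority_band": ["0-9 yrs"] * 60 + ["10+ yrs"] * 60,
})

print("Overall mean:", round(responses["reaction"].mean(), 1))  # hides both camps

# Frequency distribution in 4-point bins, broken out by seniority band.
bins = [1, 5, 9, 13, 17, 21]
table = (responses.groupby("seniority_band")["reaction"]
         .apply(lambda s: pd.cut(s, bins=bins, right=False)
                .value_counts().sort_index()))
print(table)
```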

Anticipation of how the data will be used and what information is required needs to guide the survey design. This will be facilitated by having a practitioner involved with the design, since a data scientist might not consider the need to do more in-depth analysis.

The planning phase of analytics is where practitioner knowledge is needed to guide the selection and use of analytical tools. One of the greatest dangers in searching databases for correlations is that many will be found, and some will be meaningless or will lead you astray. When designing a research study, good practice demands that a hypothesis (or several) be created beforehand and that the study be structured to test it. Identifying business issues and framing hypotheses is the responsibility of the practitioner, since those specializing in analytics may not possess the necessary knowledge.
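The danger is easy to demonstrate. In the sketch below, forty columns of pure random noise still yield dozens of "statistically significant" pairwise correlations, which is exactly why hypotheses should come first.

```python
# Sketch: with enough variables, "significant" correlations appear in
# pure noise. 40 random metrics -> 780 pairs -> ~5% pass by chance.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
data = rng.normal(size=(100, 40))   # 100 records, 40 random "metrics"

spurious, pairs = 0, 0
for i in range(40):
    for j in range(i + 1, 40):
        pairs += 1
        _, p = pearsonr(data[:, i], data[:, j])
        if p < 0.05:
            spurious += 1
print(f"{spurious} of {pairs} pairs 'significant' at p < .05 "
      f"despite being pure noise")
```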

It’s important to distinguish between correlation and causation. Just because grade point average in school correlates with retention of new hires does not mean the grade point average causes the retention. It may be that both are caused by something else, such as conscientiousness. To establish that A causes B, three conditions must be met: 1) there must be a high correlation; 2) changes in A must precede changes in B; and 3) there must be no other feasible causes. This is why studies done at a single point in time are limited in what can be concluded from their results. Without longitudinal measurements, the “must precede” requirement cannot be tested.
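With longitudinal data, the "must precede" condition can at least be probed by comparing lagged correlations. The sketch below uses simulated series in which A genuinely leads B by one period; real data would of course be messier.

```python
# Sketch: compare same-period and lagged correlations as a rough check
# of temporal precedence. Series are simulated: B follows A by one period.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
a = rng.normal(size=120)
b = np.roll(a, 1) + rng.normal(scale=0.5, size=120)  # B(t) = A(t-1) + noise
a, b = a[1:], b[1:]                                   # drop the wrapped value

r_same, _ = pearsonr(a, b)
r_a_leads, _ = pearsonr(a[:-1], b[1:])   # A at t vs B at t+1
r_b_leads, _ = pearsonr(b[:-1], a[1:])   # B at t vs A at t+1
print(f"same period r={r_same:.2f}, A leads r={r_a_leads:.2f}, "
      f"B leads r={r_b_leads:.2f}")
# A high "A leads" correlation with a low "B leads" correlation is
# consistent with (though not proof of) A preceding B.
```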

There have been claims, based on correlation analysis, that high pay for executives causes improved organizational performance. But it could equally be argued that financially successful organizations can afford higher pay, which reverses the direction of causation. For decades, researchers were unable to find strong correlations between employee satisfaction and productivity. When they reversed the premise, they found that improved performance was correlated with increased satisfaction. Too many claims in the practitioner literature fall into the trap of ignoring causal direction.

A common use of analytics is determining whether an intervention produces positive results. A group incentive plan is usually installed with the intention of motivating collaborative behavior and focusing people on unit performance. Analysis should be aimed at measuring whether the desired results occurred. The chart below tracks performance for the periods after the installation of the plan.

[Figure: performance by period after plan installation, rising steadily]
Although performance increases steadily after the plan installation, the chart tells us nothing about whether the plan made a difference. Realizing this, data on performance before plan installation was gathered; it is shown in the chart below.

[Figure: performance by period before and after plan installation, showing the upswing began before the plan]
This additional information indicates that the plan had no effect, since performance was already on the upswing. On the other hand, if the pattern were like the one in the display below, a totally different conclusion would be reached.

[Figure: performance by period, flat before plan installation and rising afterward]
The plan designer could now be confident that the plan at least contributed to improvement, even if other things might have had a positive impact. It would be helpful to create a way to establish how much of the difference was attributable to installing the plan. If the business climate changed significantly (e.g., herd immunity was reached due to widespread vaccine inoculations ending the pandemic lockdowns), that might have been at least part of the reason for improved performance.
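One simple way to quantify the contribution, sketched below on fabricated figures, is to fit the pre-installation trend and measure how far post-installation results exceed its extrapolation.

```python
# Sketch: fit a linear trend to pre-installation periods, extrapolate it,
# and measure the post-installation "lift" above that trend.
# Performance figures are invented for illustration.
import numpy as np

periods = np.arange(12)
installed_at = 6
perf = np.array([70, 71, 73, 74, 76, 77,      # pre-installation
                 82, 85, 87, 90, 93, 95])     # post-installation

slope, intercept = np.polyfit(periods[:installed_at], perf[:installed_at], 1)
expected = slope * periods[installed_at:] + intercept

lift = perf[installed_at:] - expected
print("Average lift over the pre-plan trend:", round(lift.mean(), 1))
# A sustained positive lift suggests the plan contributed; a lift near
# zero suggests performance was simply continuing its prior trajectory.
```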

How Refined Does Measurement Need to Be?

One pain-measurement scale I have always thought simplistic is “on a scale from 1 to 10, how intense is your pain?” If my answer were a 10, I would probably be physically unable to respond to the question; if it were a 1, I would be too embarrassed to admit I was still seeking relief. The scale is an extension of a satisfaction index developed decades ago by a researcher who showed that relatively simple scales can suffice.

[Figure: simple graphic satisfaction scale]
A more “sophisticated” scale is the Likert scale shown below.

[Figure: Likert response scale]
The type of scale used should be determined by the need for precision and the ability to truly differentiate between levels. Overly precise scales frustrate respondents, who cannot differentiate between adjacent values. I once consulted with a state government whose general clerk job family had nine levels. When I used the job descriptions to fill in a matrix defining the nature of work, degree of autonomy, impact and required qualifications, it became evident that it was impossible to differentiate across that many levels, so we revised the definitions and ended up with three. Pretending individual judgments can be made that precisely does not make it so. Those scales were akin to measuring a cloud with a micrometer.

But there is also danger in using an overly simple measure. Everyone tracks turnover, but is 29% turnover in the IT function a problem? One requirement for making that determination is deciding whether turnover is affecting performance. McDonald’s may survive 200% annual turnover among hourly employees, but if restaurant managers turned over at that rate, alarm bells would be going off. Is zero turnover good news or bad news? And what type of turnover? The chart below breaks the turnover statistics down by type. Turnover that is determined to be dysfunctional is a concern, while little sleep is lost when the organization terminates someone for sustained poor performance. This analysis shows that of the 29% total turnover, 10% is dysfunctional. That is the number that warrants management attention and a decision about what, if anything, should be done.

[Figure: turnover statistics broken down by type]
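A minimal sketch of that breakdown, using hypothetical categories and a headcount chosen to mirror the 29%/10% example in the text:

```python
# Sketch: break total turnover down by type. Categories and figures
# are invented; 145 exits against 500 heads gives the 29% total.
headcount = 500
exits = {
    "dysfunctional (regretted, avoidable)": 50,
    "functional (poor performers exited)":  45,
    "unavoidable (retirement, relocation)": 50,
}

total_rate = sum(exits.values()) / headcount
print(f"Total turnover: {total_rate:.0%}")
for category, n in exits.items():
    print(f"  {category}: {n / headcount:.0%}")
```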
Once the strength of a correlation is determined, it must be decided whether it is strong enough for the purpose at hand. When measuring the correlation between relative internal values (measured using job evaluation) and external values (measured using market data), I often find the correlation to be .90 to .95. That could be viewed as high, but cynics might point out that it is poor for 5–10% of the jobs. Another measurement is the organization’s current pay posture relative to market. If an organization is paying around market levels (plus or minus 10%), management can view that as sufficiently close; random error makes more precise fixes unrealistic. Telling executive management that the organization is 4.6% below market suggests the measure is more precise than the approximations that go into survey reporting.

An experienced practitioner knows that matching the organization’s jobs to the benchmark jobs in a survey requires individual judgment, and perfect equivalence is rare. So “close enough” can support the conclusion that the organization is “within the range of competitive market levels.”
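Both measurements are simple to compute. The sketch below uses invented job evaluation points, market rates and pay figures purely to show the mechanics.

```python
# Sketch: correlation between internal values (job evaluation points)
# and market rates, plus overall pay posture vs market. Data is invented.
import numpy as np
from scipy.stats import pearsonr

job_eval_points = np.array([220, 310, 400, 480, 560, 640, 720])
market_rate     = np.array([42, 55, 68, 80, 95, 108, 124]) * 1000.0
actual_pay      = market_rate * np.array([0.93, 0.97, 1.01, 0.95,
                                          0.99, 0.96, 0.98])

r, _ = pearsonr(job_eval_points, market_rate)
print(f"Internal vs external value correlation: r = {r:.2f}")

posture = (actual_pay / market_rate - 1).mean()
print(f"Average posture vs market: {posture:+.1%}")
# Given survey matching error, anything within roughly +/-10% is arguably
# "around market"; a decimal place overstates the underlying precision.
```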

Finding Common Ground

I have made peace with the reality that I do not know how to use the Random Forest technique and may never learn. Practitioners should forgive themselves for not being able to talk shop with decision scientists on every topic. They must, however, be able to define business issues and formulate hypotheses for the quants to test, leaving the quants to practice their craft. They must set parameters, such as the level of precision needed. And it is often useful to conduct a dialogue with those doing the analysis so they understand how the work is relevant to real-world issues and why those issues are important to address. Running numbers as a form of recreation may work for some, but it does not hold up under scrutiny when the work must be justified by its value to the organization.

Practitioners need to increase their understanding of things such as what research tells us and what types of analytics should be built into key business processes. Working together with decision scientists increases the probability that analytics will produce what is needed. If that means practitioners need to take some statistics and quantitative methods course work, the investment should be made.

And decision scientists shouldn’t be turned loose with all their best tools unless there is a laser focus on the outcomes needed. Investing in helping the quants understand the business will contribute to outcomes that are useful.

About the Author


Robert J. Greene is the CEO of Reward Systems Inc.

