WORKSPAN
VOICES IN THE PROFESSION

Characteristics of Sound Research

Research findings are one form of evidence for decision making, provided the research is sound. The practitioner literature often contains claims that “research shows…” but upon scrutiny, the findings turn out to be irrelevant to the practitioner’s world or lacking in validity. Using unsound research can produce bad decisions.

For research to be useful to practitioners, it must be valid both internally and externally. Internal validity is present when a research study is conducted based on the scientific method and follows sound principles. There is a great deal of lab research relating to workforce management topics that is internally valid, but those studies often lack external validity (generalizability). The finding that people seemingly will throw tennis balls at targets in a lab setting longer if they are not paid than if they are has been used to support the notion that extrinsic rewards can negatively impact intrinsic rewards. But does that finding hold true in the real world of work, where people do things they do not enjoy or find satisfying because they need to support themselves? Throwing tennis balls at targets bears little resemblance to work performed in industry and offering token rewards (or not) to lab subjects is not equivalent to the extrinsic rewards offered in the workplace.


So if generalizability is absent, practitioners must be cautious in using those findings to make crucial workforce management decisions. And the claim that a specific study “proves” a theory is wrong; all a study can do is support a theory. A theory remains accepted until it is shown to be wrong (falsified).
 

Leveling the Field

Because the work environment is such a complex context and people are complex subjects, it often is difficult to conduct field research. Performing studies in a tightly prescribed and controlled setting is easier. Academics often have students respond to questionnaires or participate in studies; these are examples of convenience samples. Although there may be learning when these studies are conducted, caution must be taken when assuming results in the field will resemble those produced in such a highly controlled environment.

Despite the challenges, there is a considerable amount of “field research” done. For example, HR practitioners often are charged with determining whether their organization’s pay structures and levels are competitive with relevant labor markets. Compensation surveys are the most common tool used to benchmark against other organizations. Having done hundreds of these surveys, I am certain of their value if they are done well. But caution must be taken when assessing the quality of the information received. For example, many surveys conducted during the past decade reported that competitive pay levels were going up at 2.5% to 3% each year, at a time when many organizations had frozen or even reduced pay. This result often was computed by dividing the average pay from organizations reporting in the current year by the average pay from organizations reporting in the prior year. Regrettably, this produces a statistical artifact caused by the sample changing from year to year.

As an adviser to large surveys, I have access to data that allow me to do another calculation that bases the rate of change in pay only on data from organizations reporting in both years. And the result in several recent years was 1% to 1.5% pay increases rather than 2.5% to 3%. Upon further analysis, I was able to determine that organizations freezing pay, reducing pay and/or cutting headcount were less likely to participate in the current survey than those that had raised pay. That seriously inflated the reported rate of change in surveys not controlling for sample change. Upon reflection, one can understand why an organization freezing or reducing pay would not spend the time to participate in the survey and pay for data it would not act on.
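The arithmetic behind this artifact is easy to demonstrate. The sketch below uses invented numbers (the organizations and pay figures are hypothetical, not from any actual survey): when a pay-freezing organization drops out of the current-year sample, the naive average-over-average calculation reports a much larger increase than a matched-sample calculation restricted to organizations reporting in both years.

```python
# Illustration (invented numbers): how a changing survey sample can
# inflate the reported year-over-year change in pay.

# Average pay reported by each organization, by year.
prior = {"A": 100_000, "B": 90_000, "C": 80_000}   # C froze pay...
current = {"A": 102_000, "B": 91_000}              # ...and did not report this year


def naive_change(prior, current):
    """Divide this year's average by last year's average,
    ignoring that different organizations reported each year."""
    avg = lambda d: sum(d.values()) / len(d)
    return avg(current) / avg(prior) - 1


def matched_change(prior, current):
    """Base the rate of change only on organizations reporting in BOTH years."""
    both = prior.keys() & current.keys()
    return sum(current[k] for k in both) / sum(prior[k] for k in both) - 1


print(f"naive:   {naive_change(prior, current):+.1%}")    # inflated
print(f"matched: {matched_change(prior, current):+.1%}")  # actual movement
```

Here the matched-sample figure is about +1.6%, while the naive calculation reports over +7%, purely because the lowest-paying organization left the sample.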
 

Mind the Gap

Those conducting surveys should also be concerned about whether the sample of responses received is representative of the population in question. For example, what can be assumed about the accuracy of an employee satisfaction survey if only 40% of the population responded? Even when a survey uses random sampling because the population is too large to survey fully, the critical question is: Do the responses from the 40% who responded reflect what the responses would have been if everyone surveyed had responded? A technique called nonresponse analysis can be used to determine whether those responding and those not responding differ significantly in meaningful ways. The benefits of a random sample are lost if the gap between the respondents and the population is significant.
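A simple first step in a nonresponse analysis is to compare the makeup of the respondents to the makeup of the full surveyed population on a characteristic known for everyone, such as tenure band. The sketch below uses invented counts and an arbitrary 5-percentage-point flag threshold, both chosen purely for illustration.

```python
# Sketch of a basic nonresponse check (invented numbers): compare the
# composition of respondents to the composition of everyone surveyed.

population = {"<5 yrs": 600, "5-15 yrs": 300, ">15 yrs": 100}   # all surveyed
respondents = {"<5 yrs": 120, "5-15 yrs": 160, ">15 yrs": 120}  # who answered


def shares(counts):
    """Convert raw counts to proportions of the total."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}


pop, resp = shares(population), shares(respondents)
for band in population:
    gap = resp[band] - pop[band]
    # Flag any band whose share among respondents differs from its share
    # of the population by more than 5 percentage points (arbitrary cutoff).
    flag = "  <-- over/under-represented" if abs(gap) > 0.05 else ""
    print(f"{band:>9}: population {pop[band]:.0%}, respondents {resp[band]:.0%}{flag}")
```

In this invented case, long-service employees make up 10% of the population but 30% of the respondents, so taking the responses at face value would skew the results toward their views.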


Employee attitude surveys are widely used. An example would be a survey attempting to find out how much value employees place on a benefits package. If a large percentage of older/longer-service employees responded while younger people did not, the results may be misleading. And averaging the opinions on the individual programs may result in the loss of useful intelligence. Younger employees may be more concerned about parental leave benefits than older employees, while older employees may place more value on the 401(k) retirement program. So the aggregated data might show a higher value placed on the 401(k) program than on parental leave, even though this result is valid only for a particular segment of the employee population. Careful planning must go into survey design and sample selection if the results are to be useful.
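The way an overall average can hide segment differences is easy to show with numbers. The ratings and headcounts below are invented for illustration: younger employees rate parental leave highly, older employees rate the 401(k) highly, and because older employees dominate the respondent pool, the aggregate average buries the younger segment's preference.

```python
# Invented ratings (1-5 scale) showing aggregation masking a segment preference.

ratings = {
    # segment: (respondent count, parental-leave rating, 401(k) rating)
    "younger": (100, 4.5, 2.5),
    "older":   (400, 2.0, 4.5),
}


def overall(ratings, idx):
    """Headcount-weighted average rating; idx 0 = parental leave, 1 = 401(k)."""
    n_total = sum(n for n, *_ in ratings.values())
    return sum(n * vals[idx] for n, *vals in ratings.values()) / n_total


print(f"parental leave overall: {overall(ratings, 0):.1f}")  # low, despite younger segment's 4.5
print(f"401(k) overall:         {overall(ratings, 1):.1f}")  # high
```

The aggregate shows the 401(k) far ahead (4.1 vs. 2.5), yet for the younger segment the ordering is reversed, which is exactly the intelligence lost by averaging. Reporting results by segment preserves it.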
 

Context, Context, Context

Benchmarking is commonly used to discover which practices appear to be effective in other organizations. One caveat with benchmarking: What works for one organization may not work for another, no matter how much they seem to be alike based on surface-level characteristics. Considerable research evidence shows that context has a major impact on how a strategy or program will work. Culture, internal and external realities and other contextual factors count. What works is what fits the context.

So noting that a number of organizations are adopting (or considering) the “new best thing” widely reported in the popular literature can cause sweaty palms and an urge to adopt it immediately. But a thinking practitioner will consider how likely it is to work in his or her organization. The conclusion should be the product of serious reflection on context. Even similar-sized organizations offering similar products to similar markets can have significantly different contexts. Overlooking those differences can result in misguided decisions.

This is not to say that new approaches should not be carefully assessed. Continuous environmental scanning is often the best way to identify what competitors are doing. One should reflect on why something worked in a competitor organization and whether crossing contexts is likely to significantly alter the results. Even when all the articles in trade publications report successes, one might ask whether the organizations experiencing failures are publicizing their mistakes. This reality is one reason so many fads take hold. The practitioner literature is biased toward success stories and therefore should be viewed with skepticism.

Evidence-based management is becoming widely accepted and practiced. But evidence can mislead as well as inform. Research is a valuable form of evidence. However, if it lacks internal and external validity or is poorly designed and executed, it may lead to erroneous conclusions and bad decisions.



Robert Greene

Robert J. Greene, Ph.D., CCP, CBP, GRP, SPHR, CPHRC, SHRM-SCP, is CEO of Rewards System Inc., a consulting principal in Pontifex and a faculty member in DePaul University’s master of business administration and master of science in HR degree programs. He was the first recipient of the Keystone Award. He can be reached at rewardssystems@sbcglobal.com.