A study often begins with a simple question. Researchers are motivated to find answers to the question and add to the overall knowledge on a topic.
However, once they publish their findings, you might hear other researchers say that they are sceptical of the results because they may be biased. What exactly are these researchers concerned about and why?
In a research sense, bias does not refer to an intentional attempt to mislead. Rather, it refers to flaws in a study's design, conduct or analysis that result in a systematic shift in the findings.
Bias can be introduced at any stage of research — from the initial stages when researchers are collecting data, to the analysis of results, to the publication of studies. Bias can also refer to things that happen before the study has started (for example, the construction of questions to include is often biased by previous research); or things that happen (or don't) when the study ends — such as publication bias (see below).
Here are some common forms of bias:
Suppose you want to examine what young people think about their risk of getting injured at work. Ideally, you would put this question to a random group of young workers. However, because young workers are hard to reach, you select only those who visit a young workers' safety website. Your selection would be biased. This group likely has better knowledge of workplace safety, or is more concerned about getting injured at work, precisely because its members have visited a website with safety information. Based on the responses from this group, you might conclude that young people think their risk of getting injured at work is high. But because of the selection bias, this finding might overstate how concerned young workers in general actually are.
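The selection effect described above can be sketched with a small simulation. Everything here is hypothetical: the 0-to-10 "perceived risk" scale, the population distribution and the assumption that safety-conscious workers are more likely to end up in the sample are illustrative choices, not figures from the article.

```python
import random

random.seed(42)

# Hypothetical population: each worker's perceived injury risk on a 0-10 scale.
population = [random.gauss(5.0, 1.5) for _ in range(100_000)]

# Selection bias: suppose workers who visit a safety website are already more
# safety-conscious, so a worker's chance of entering the sample rises with
# their perceived risk.
sample = [r for r in population if random.random() < min(1.0, max(0.0, r / 10))]

pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
print(f"population mean perceived risk: {pop_mean:.2f}")
print(f"biased sample mean:             {sample_mean:.2f}")  # systematically higher
```

Because inclusion depends on the very quantity being measured, the sample mean sits above the population mean no matter how large the sample gets; more data does not fix a biased selection rule.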
Often researchers are concerned about how conditions such as unemployment affect people over time. You might have a large, diverse sample of workers from the population at the start of your study. Let's say you want to see how stress levels are related to unemployment, so you survey these workers. However, over time those who are unemployed for a long period might move, perhaps to find work elsewhere. As a result, they might not be included in a follow-up measurement a year later. Because these workers are no longer in your study, their absence may distort your results. This is called attrition bias.
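A minimal sketch of attrition bias, under assumed numbers: the stress scores, the 20 per cent unemployment share and the rule that the most stressed unemployed workers are the most likely to drop out are all illustrative assumptions, not data from the article.

```python
import random

random.seed(1)

# Hypothetical cohort: a long-term-unemployment flag and a stress score.
workers = []
for _ in range(50_000):
    unemployed = random.random() < 0.2
    stress = random.gauss(70 if unemployed else 50, 10)
    workers.append((unemployed, stress))

# Attrition: assume the most stressed unemployed workers are the ones most
# likely to move away and be lost to the follow-up survey a year later.
def stays(unemployed, stress):
    drop = min(1.0, stress / 100) if unemployed else 0.1
    return random.random() > drop

followed = [(u, s) for (u, s) in workers if stays(u, s)]

def mean_stress(group, flag):
    scores = [s for u, s in group if u == flag]
    return sum(scores) / len(scores)

gap_baseline = mean_stress(workers, True) - mean_stress(workers, False)
gap_followup = mean_stress(followed, True) - mean_stress(followed, False)
print(f"unemployed-vs-employed stress gap at baseline:  {gap_baseline:.1f}")
print(f"unemployed-vs-employed stress gap at follow-up: {gap_followup:.1f}")
```

Because dropout is related to the outcome being studied, the follow-up survey understates the stress gap: the people who would have pushed it up are exactly the ones who left the study.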
Sometimes it's difficult for researchers to measure what they plan to. They might use a proxy, or substitute, for what they really want to measure. For instance, it might be difficult for researchers to go into a company and ask to measure workplace injuries, so they might use the company's lost-time work injury claims as a proxy for workplace injuries. In this situation, researchers end up with a less accurate measure that may distort their results. This is sometimes called information or measurement bias.
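The proxy problem can be sketched as follows. The firm counts, injury ranges and claim rates are invented for illustration; the only point carried over from the text is that lost-time claims capture just a portion of true injuries, and that portion may vary from firm to firm.

```python
import random

random.seed(3)

# Hypothetical firms: true injury counts versus lost-time claims as a proxy.
# Assume only injuries serious enough to cause lost time generate a claim,
# and that this fraction varies by firm.
firms = []
for _ in range(1000):
    injuries = random.randint(5, 50)
    claim_rate = random.uniform(0.2, 0.6)  # assumed share of injuries with lost time
    claims = sum(random.random() < claim_rate for _ in range(injuries))
    firms.append((injuries, claims))

total_injuries = sum(i for i, _ in firms)
total_claims = sum(c for _, c in firms)
print(f"true injuries: {total_injuries}, claims recorded: {total_claims}")
# The proxy systematically undercounts, and by a firm-varying amount,
# so firms cannot even be compared on a consistent scale.
```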
Researchers may conduct an analysis that does not consider or adjust for another potential explanation for the findings. One example would be an analysis of young workers' injury risk that does not account for how long they've worked or for the hazards in their workplace. Inexperience in general, or high-hazard working conditions, can also affect the risk of injury. This failure to account for an alternative explanation is known as confounding.
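The young-worker example can be sketched with simulated data in which, by construction, injury risk is driven entirely by short tenure rather than by age. All the probabilities below are invented for illustration.

```python
import random

random.seed(7)

# Hypothetical data: injury risk depends only on inexperience (short tenure).
# Young workers simply tend to have shorter tenure.
records = []
for _ in range(100_000):
    young = random.random() < 0.5
    short_tenure = random.random() < (0.8 if young else 0.2)
    p_injury = 0.10 if short_tenure else 0.02  # tenure is the real driver
    records.append((young, short_tenure, random.random() < p_injury))

def injury_rate(rows):
    return sum(injured for *_, injured in rows) / len(rows)

young_rows = [r for r in records if r[0]]
older_rows = [r for r in records if not r[0]]

# Unadjusted comparison: young workers look far riskier.
print(f"young: {injury_rate(young_rows):.3f}, older: {injury_rate(older_rows):.3f}")

# Adjusted comparison within tenure groups: the age difference disappears.
for tenure in (True, False):
    y = injury_rate([r for r in young_rows if r[1] == tenure])
    o = injury_rate([r for r in older_rows if r[1] == tenure])
    print(f"short_tenure={tenure}: young {y:.3f} vs older {o:.3f}")
```

The unadjusted comparison wrongly attributes the risk to age; stratifying by tenure (a simple form of adjusting for the confounder) makes the spurious age effect vanish.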
Publication bias occurs when researchers submit only those studies whose results they think are likely to be published in scientific journals. It can also occur when journal editors accept or reject articles based on the direction or strength of the findings. For instance, a study that shows an intervention works might be selected over a study that shows it has no effect.
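One way to see the consequence is a simulation of many small studies of an intervention whose true effect is zero. The study count, sample sizes and the "publish only clearly positive results" rule are all assumptions made for illustration.

```python
import random
import statistics

random.seed(5)

# Hypothetical illustration: 500 small trials of an intervention whose true
# effect is zero, where only "impressive-looking" results get published.
true_effect = 0.0
published, all_effects = [], []
for _ in range(500):
    # each study estimates the effect from 30 noisy observations
    estimate = statistics.mean(random.gauss(true_effect, 1.0) for _ in range(30))
    all_effects.append(estimate)
    if estimate > 0.3:  # assume journals accept only clearly "positive" findings
        published.append(estimate)

print(f"mean effect, all studies:       {statistics.mean(all_effects):+.2f}")
print(f"mean effect, published studies: {statistics.mean(published):+.2f}")
```

A reader who sees only the published studies would conclude the intervention works, even though the full set of studies averages out to no effect at all.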
Bias can occur in almost any study. Researchers first try to limit the possibility of bias. However, sometimes this is not possible, so the researcher's job is to understand and report how any bias they encountered might affect their results.
Source: At Work, Issue 49, Summer 2007: Institute for Work & Health, Toronto