Many of you may have heard the term meta-analysis, either on this blog or elsewhere on the web. Because of the amount of data available and the power of modern analysis software, these analyses are done fairly frequently across many fields. So what is a meta-analysis? Essentially, it is a statistical way of combining a number of different studies that look at the same question in order to estimate what the real effects are. Let me use an example.

If you are looking at two studies (or two articles reporting on studies) that address the same question but come to different results, how can you determine which is the most valid? There are a few general rules of thumb. If one of them comes from a more notable research institution, it may be more trustworthy, because such schools typically have stricter institutional controls. Another important factor is the sample's size and makeup. If the sample is entirely college students, there are reasons you might not trust that finding as much as one from a more diverse sample. Likewise, if one study had 50 people and the other had 500, you might trust the larger one more.

It may seem obvious, but why do we actually trust the more diverse or larger studies? A study that finds effects even with a diverse sample suggests that the effect is likely to be more prevalent. Diversity always adds some amount of variation to human-subjects research. In a study I am running, we are using computers, and we found it was a good idea to limit the age of participants because some older participants had much more trouble, since they were not as familiar with computers as the younger participants. So reducing the diversity of the sample can let researchers narrow in on the results they are interested in. Sample size, meanwhile, affects the likelihood of finding an effect at all: as the sample size increases, a number called the standard error decreases in the analyses. This means the analyses can become more confident about the effect each variable has.
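To make the sample-size point concrete, here is a minimal sketch (my own illustration, not from any study mentioned here) of how the standard error of a mean shrinks as the sample grows, using the textbook formula SE = sd / sqrt(n):

```python
import math

def standard_error(sd: float, n: int) -> float:
    """Standard error of the mean: sd divided by the square root of n."""
    return sd / math.sqrt(n)

# Same population standard deviation, two hypothetical sample sizes.
sd = 10.0
print(standard_error(sd, 50))   # the 50-person study: larger SE
print(standard_error(sd, 500))  # 10x the sample: SE shrinks by sqrt(10)
```

With ten times the participants, the standard error is about 3.2 times smaller, which is why the 500-person study supports more confident conclusions than the 50-person one.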

A meta-analysis, then, is a tool that lets researchers combine multiple studies. Through that process, the sample size gets bigger, which allows us to be more confident, and, because of the aggregation, the sample also becomes more diverse, since the studies will have used different kinds of people and possibly different methods in carrying out the experiment. Meta-analyses can be done incorrectly and can be misleading, but a good rule of thumb is to trust a meta-analysis on a topic more than any single study.
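As a rough sketch of the combining step, here is one common approach (a fixed-effect, inverse-variance-weighted pool; the numbers are made up for illustration). Each study's estimate is weighted by how precise it is, so larger studies count for more, and the pooled estimate ends up more precise than any single study:

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance weighted mean of study effects, and its standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]          # precise studies get big weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))               # smaller than any single study's SE
    return pooled, pooled_se

# Two hypothetical studies of the same question.
effects = [0.30, 0.10]   # effect-size estimates
ses     = [0.20, 0.05]   # a small, noisy study vs. a large, precise one
pooled, pooled_se = fixed_effect_pool(effects, ses)
print(pooled, pooled_se)  # the pooled estimate sits much closer to the precise study
```

This is only one flavor of meta-analysis (random-effects models, which allow the true effect to vary between studies, are often more appropriate), but it captures the core idea of the aggregation the post describes.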

Extra Fun Facts: File-Drawer Effect

Doing a meta-analysis of course adds some difficulties for the researcher, who must try to 'wash out' the potential added noise (a term for unintended variance) from the analysis. There are many possible problems, such as the 'file-drawer effect'. It is well known that a lot of the work scientists do never gets published, and a big factor in this is non-significant effects. If you run an experiment and do not find what you are looking for, you may assume that you did something wrong. One professor of mine mentioned that he ran one study more than three times, never quite finding the effects he was interested in, so he never published any of those studies. [Later on he did a small meta-analysis of just these experiments and found a small effect that he was only able to see when adding together all of the data he had collected.]

There are two main reasons for the file-drawer effect. Researchers may be embarrassed, or may not see the value in proclaiming to the world that they found nothing (significant effects are 8 times more likely to be submitted), and academic journals are hesitant to publish articles without significant effects for the primary variables of interest (non-significant results are 7 times less likely to be published). There are some legitimate reasons for this hesitancy. A study can fail to find effects for many different reasons (no actual effect, poor design, too small a sample, inappropriate analysis, etc.), but there are fewer conditions under which a study will find effects when there are none. Still, if you did a meta-analysis using only the data that were published, the estimated effect may come out larger than it is in reality. It is like trying to determine the average grade for a class while only including students who scored above a certain grade or attended every session: you will get an average that is likely to differ from the true average. There are various, sometimes complex, ways that researchers try to deal with these problems, but it is always a concern.
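A quick toy simulation (my own example, analogous to the class-average analogy above) shows how averaging only the "published" results inflates the estimate. We simulate many small studies of a modest true effect, then pretend only the clearly positive ones make it out of the file drawer:

```python
import random

random.seed(1)
true_effect = 0.1

# Simulate 200 small studies, each reporting a noisy estimate of the effect.
estimates = [random.gauss(true_effect, 0.3) for _ in range(200)]

# Pretend only clearly "positive" results get published.
published = [e for e in estimates if e > 0.2]

all_mean = sum(estimates) / len(estimates)
pub_mean = sum(published) / len(published)
print(round(all_mean, 2), round(pub_mean, 2))  # the published-only mean is inflated
```

The mean over all studies lands near the true effect of 0.1, while the published-only mean is pushed well above it, which is exactly the over-representation problem a naive meta-analysis of the published literature inherits.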

* This post draws heavily on Field & Gillett (2010), "How To Do a Meta-Analysis".
