Friday, March 12, 2010

Attenuation of Effect Sizes

So here's what I wanted to present at BYU last week, but we hadn't finished our analysis yet. At AERA we're presenting a new meta-analysis about the quality of the research done in PBL. Quick rundown--we're still looking at student learning outcomes, comparing PBL with traditional learning. We coded each study for research design, the degree to which it reported the validity and reliability of its measures, and the threats to internal validity it contained.

The stand-out finding is reliability:

When studies report no reliability information for their measures, the mean effect size is .20--a small effect favoring PBL that is pretty close to the overall mean from the past several meta-analyses. When they engage in strong reliability reporting (meaning something along the lines of a Cronbach's alpha computed for their actual sample, rather than falling back on data from someone else's study), effect sizes jump to .47, a medium effect.
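
Since "reliability for your actual sample" can sound abstract, here's a minimal sketch of Cronbach's alpha in Python (NumPy only; the scores are made up for illustration, not from any study we coded). The point is that it takes nothing but your own respondents' item scores--no borrowing a coefficient from someone else's sample.

    import numpy as np

    def cronbachs_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical data: 5 respondents on a 4-item scale.
    scores = np.array([[3, 4, 3, 4],
                       [5, 5, 4, 5],
                       [2, 2, 3, 2],
                       [4, 4, 4, 3],
                       [3, 3, 2, 3]])
    print(round(cronbachs_alpha(scores), 2))  # 0.92 for this toy sample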

Truly randomized designs also show larger effect sizes favoring PBL over traditional learning.

[Figure: effect sizes by reliability reporting and research design]

The consistent trend seems to be that we are hamstringing the PBL literature base with weak research designs and little attention to measurement. When we pay attention to those things--and presumably reduce measurement error and a priori group differences--PBL shows improved student outcomes, almost double what we find as the norm.
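
To put a rough number on the attenuation in the title: under the classical correction-for-attenuation model, unreliability in the outcome measure shrinks an observed standardized mean difference by roughly the square root of the measure's reliability. A back-of-the-envelope sketch (the reliability values here are made up, not coded from our studies):

    import math

    def observed_d(true_d, reliability):
        """Expected observed effect size after attenuation by an
        unreliable outcome measure (classical true-score model)."""
        return true_d * math.sqrt(reliability)

    true_d = 0.47  # the effect among studies with strong reliability reporting
    for r_yy in (0.9, 0.6, 0.3):  # hypothetical outcome reliabilities
        print(f"reliability {r_yy:.1f} -> expected observed d = {observed_d(true_d, r_yy):.2f}")

Even a respectable reliability of .6 knocks nearly a quarter off the observed effect (.47 down to about .36), so sloppy measurement alone can bury a real difference.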

Figure design shamelessly stolen from Brett Shelton.
