Monday, November 15, 2010

3k Washingtons, the price of open access

We've been asking about the sustainability of open access for quite some time. Springer has the answer: if I pay a mere $3,000, my latest article (co-authored with Dr. Gulfidan Can) can be made available for non-commercial use via a Creative Commons license (with commercial use rights reserved by Springer).

They do allow for institutional repository or personal web site postings of pre-print versions of the article . . . 12 months after it comes out.

You are working yourself out of having a respected journal. You will lose to publications that make articles available to everyone, because those articles will get more citations. You need a new business model. Think services, think access to full data, think high-resolution graphics, partner articles with your statistics textbooks (a link that says: want to know more about Cronbach's alpha? or see other forms of reliability?) or qualitative analyses (want to know how to do a comparative case study?). Think advertising for statistical or qualitative analysis packages. You can live off your current g-index for a little while, and find some solace in the fact that academia moves at a glacial pace, but the days of your current business model are numbered and the time to make changes is now. I don't think that adding a $3,000 check box for Creative Commons is going to cut it, although I respect that you are trying.

You want to draw a line in the sand for contributions via copyediting and formatting--terrific. Let us have our pre-print manuscripts from day 1. Or earlier, for that matter. Our acceptance notification for this article came in July. We're going on 4 months of possible citations flushed away, and the clock is still running. You want an edge on your competition--provide links to our pre-prints before the issue comes out. How about counting citations for articles still in press as a way of pumping up the old g-index?

First of all, call the $3k what it is: an opportunity cost--what you think you will lose in revenue from this one article because libraries may cancel their subscription to the journal as a whole if they can get the article for free. The actual cost of making this article open access, as opposed to available only to subscribers, is negligible. In addition to providing services, you could also lower your costs. Get it out of your head that you are in the business of printing journals. You are in the business of publishing journals. Drop the hard copy, kill the infrastructure, stop renting that warehouse, kill all your shipping costs, stop giving your open access competitors such a cost advantage.

Print-based journals are uniquely positioned to dominate this landscape: they have the best editors, the best reviewers, and (for now) the best reputations. They need to leverage that in a way that puts them on top of where academic research is headed: open access.

Friday, March 12, 2010

Attenuation of Effect Sizes

So here's what I wanted to present at BYU last week, but we hadn't finished our analysis yet. At AERA we're presenting a new meta-analysis about the quality of the research done in PBL. Quick rundown--still looking at student learning outcomes comparing PBL with traditional learning. We coded for research design, the degree to which each study reported the validity and reliability of its measures, and the internal threats to validity present in each study.

The stand-out finding is reliability:

When studies report no reliability information on their measures, effect sizes are .20--a small effect favoring PBL that is pretty close to the overall mean for the past several meta-analyses done. When they engage in strong reliability reporting (meaning something along the lines of a Cronbach's alpha computed for their actual sample, rather than falling back on data from someone else's study), effect sizes jump to .47, a medium effect.
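For anyone who wants to compute reliability on their own sample rather than borrowing someone else's figure, Cronbach's alpha is simple to calculate. A minimal sketch in Python (the function name and the toy item scores are mine, purely illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents answering 3 items on a 1-5 scale
scores = [[4, 5, 4],
          [2, 3, 2],
          [5, 5, 4],
          [3, 3, 3],
          [4, 4, 5]]
print(round(cronbach_alpha(scores), 3))  # -> 0.913
```

The point is that this takes a few lines against your own data; there is little excuse for reporting nothing.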

Truly randomized designs also show larger effect sizes favoring PBL over traditional learning.

The consistent trend seems to be that we are hamstringing the PBL literature base with weak research designs and little attention to measurement. When we pay attention to those things, and presumably reduce measurement error and a priori group differences, PBL shows improved student outcomes--almost double what we find as a norm.
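That pattern is what classical measurement theory predicts: unreliable measures attenuate observed effects, roughly by a factor of the square root of the reliability. A quick sketch of the Spearman-style disattenuation formula (the function name and the numbers are illustrative assumptions, not values from our coding):

```python
import math

def disattenuate(d_observed, reliability):
    """Estimate the effect size free of measurement error: the observed
    effect is attenuated by sqrt(reliability), so divide to correct."""
    return d_observed / math.sqrt(reliability)

# If an outcome measure has reliability .70, an observed d of .20
# understates the error-free effect:
print(round(disattenuate(0.20, 0.70), 3))  # -> 0.239
```

This is one reason weak reliability reporting and small observed effects tend to travel together in a literature base.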

Figure design shamelessly stolen from Brett Shelton.