Not reproducible?


I have written before about the approval process for certain drugs.  (Here’s but one example.)  These are drugs that deal with ailments that are not subject to perfect clinical testing.  Drugs that deal with psychological ailments have more lenient standards for approval than other drugs.  No, not that they may be approved if they are unsafe; rather, the proof of improvement in the condition is more lax.

I start this post with that because there is a controversy over the reproducibility of the results of certain publications.  And, if one is not meticulous in reading (and certainly given the reporting on those publications), the wrong impression may result.

Yes, these studies were published in prestigious journals.  Science is one of the two pre-eminent American basic science journals; Nature is the British (European) journal that was also mentioned, and an equally prestigious place to find one’s article in print.

The issue is 21 experiments detailed in these two journals between 2010 and 2015.  Of these, only the results from 13 could be duplicated; the other 8 failed replication, putting the findings of those publications in question.  And, even when reproducibility was obtained (in those 13 instances), the improvements or effects were aggrandized in the original publications.

These results were published in Nature Human Behaviour (a Nature publication) by Drs. C.F. Camerer, A. Dreber, F. Holzmeister, T-H Ho, J. Huber, M. Johannesson, M. Kirchler, G. Nave, B.A. Nosek, T. Pfeiffer, A. Altmejd, N. Buttrick, T. Chan, Y. Chen, E. Forsell, A. Gampa, E. Heikensten, L. Hummer, T. Imai, S. Isaksson, D. Manfredi, J. Rose, E-J Wagenmakers, & H. Wu.  These 24 authors from institutions around the world were behind the two-year effort, “Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015”.

This study will fuel the demagogues who are against the truth of climate change, vaccine safety, and species extinction, claiming this proves those studies must be wrong, too.  Except these 21 studies in question all involve what folks like to call “social sciences”.  (It is also why I have always been against that term; it confuses too many people into thinking these studies are totally scientific in nature.)

Moreover, these failures were problems with experimental design (which is, as I have often seen, a true issue with ‘social science’ research) and statistical analyses.

Furthermore, when these analysts (Camerer et al.) asked some 200 well-regarded peers to guesstimate which studies would replicate and to what extent the results would match those reported, the guesstimates and the actual results were strongly correlated.  The peers predicted a replication rate within 1.5% of the observed rate of 61.9%.  Obviously, trained peers have wonderful BS detectors.
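The arithmetic behind that 61.9% figure is just the counts quoted above (13 of 21 studies replicated); a quick sketch confirms the numbers hang together:

```python
# Sanity-check the replication figures quoted in the post.
total = 21        # experiments examined (Nature/Science, 2010-2015)
replicated = 13   # results that could be duplicated
failed = total - replicated

rate = replicated / total
print(f"Failed replication: {failed}")            # 8
print(f"Observed replication rate: {rate:.1%}")   # 61.9%
```
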

[Figure: 200 peers figure it out.  From https://www.nature.com/articles/s41562-018-0399-z]

The study only went as far as 2015, because that was the time by which journals like Science decided to impose more reproducibility standards on published articles and to require data sharing so that others could analyze the findings themselves.  Nature is now demanding more information about experimental design and analysis.  (Editors have also stated that some studies fail to replicate because the protocol may not have been as detailed as necessary, and the sample populations or collections may have been different.  Both of these are, indeed, factual.)

But everyone should understand that the “social sciences”, which involve human interactions and reactions, are always more subject to bias (even if not so intended) than pure science or math, or even engineering.

Roy A. Ackerman, Ph.D., E.A.

 


2 thoughts on “Not reproducible?”

  1. Speaking as someone who majored in a “social science” (and yes, I always wondered why they were called “science” — I get the social part), I agree it causes a lot of confusion. You just can’t use the “scientific method” in the same way. However, I wonder how susceptible these trained peers are to fake news?
