Category: Reproducibility

Could gambling save science?

U.S. economist Robin Hanson posed this question in the title of an article published in 1995. In it he suggested replacing the classic review process with a market-based alternative: instead of peer review, bets would decide which projects get supported or which scientific questions are prioritized. In these so-called “prediction markets”, individuals stake “bets” on a particular result or outcome. The more people trade on the marketplace, the more precise the predicted outcome becomes, because the price aggregates the information of all participants. The prediction market thus taps the intelligence of the swarm. We know this from sports betting and election forecasting. But in science? It sounds totally crazy, but it isn’t, and it is currently making its entry into various branches of science. How does it work, and what does it have going for it? Continue reading
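To make the aggregation mechanism concrete, here is a minimal, hypothetical sketch of an automated prediction market using a logarithmic market scoring rule (a market scoring rule of the kind Hanson later proposed, not something from the 1995 article). The liquidity parameter b and the example trade are illustrative assumptions, not taken from the text.

```python
import math

def lmsr_prices(shares, b=100.0):
    """Current market probabilities under a logarithmic market scoring rule (LMSR)."""
    # The price of each outcome is the softmax of outstanding shares / b:
    # the more bets an outcome attracts, the higher its price (= consensus probability).
    exps = [math.exp(q / b) for q in shares]
    total = sum(exps)
    return [e / total for e in exps]

def lmsr_cost(shares, b=100.0):
    """Cost function C(q) = b * ln(sum_i exp(q_i / b)); a trade costs C(new) - C(old)."""
    return b * math.log(sum(math.exp(q / b) for q in shares))

# Hypothetical market on a single claim, e.g. "this key experiment will replicate" (yes/no).
shares = [0.0, 0.0]                      # no bets yet
print(lmsr_prices(shares))               # [0.5, 0.5] -- maximal uncertainty

# A trader who believes the claim buys 30 'yes' shares.
cost_before = lmsr_cost(shares)
shares[0] += 30
print("price of the trade:", round(lmsr_cost(shares) - cost_before, 2))
print(lmsr_prices(shares))               # 'yes' price rises above 0.5
```

The market price is simply a running, money-weighted summary of what the participants collectively believe; as more informed traders bet, it converges toward their aggregate probability estimate.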

Your Lab is Closer to the Bedside Than You Think

 

With a half-page article about him and his study, a previously unknown Israeli radiologist made it into the New York Times (NYT 2009). Dr. Yehonatan Turner presented computed tomography scans (CTs) to radiologists and asked them to make a diagnosis. The catch: along with the CT, a current portrait photograph of the patient was presented to the physicians. Remember, radiologists very often do not see their patients; they make their diagnosis in a dark room, staring at a screen. Dr. Turner used a smart cross-over design in his study: he first showed the CT together with a portrait photograph of the patient to one group of radiologists. Three months later the same group had to make a diagnosis using the same CT, but without the photo. Another group of radiologists was first given only the CT and then, three months later, the CT with the photo. A further control group examined only the CTs, as in routine practice. The hypothesis: when a radiologist is exposed to the individual patient, and not only to an anatomical finding on a scan, she will be more conscious of her own responsibility; hence the reading will be more thorough and the diagnosis more accurate. And this is in fact what he found. The radiologists reported that they had more empathy with the patient and that they “felt like doctors”. And they spotted more irregularities and pathological findings when they had the CT and photo in front of them than when they were only looking at the CT (Turner and Hadas-Halpern 2008).

So how about showing researchers in basic and preclinical biomedicine photos of patients who have the disease they are currently investigating in a model? Continue reading

Predators in the (paper) forest

It struck at the end of July. A ‘scandal’ in science shook the Republic. Research by NDR (Norddeutscher Rundfunk), WDR (Westdeutscher Rundfunk) and the Süddeutsche Zeitung revealed that German scientists are involved in a “worldwide scandal”. More than 5,000 scientists at German universities, institutes and federal authorities had, with public funds, published their work with online pseudoscientific publishing houses that do not comply with the basic rules for assuring scientific quality. The public, and not just a few scientists, heard for the first time about “predatory publishing houses” and “predatory journals”.

Predatory publishing houses, whose presentation in phishing mails is quite professional, offer scientists Open Access (OA) publication of their scientific studies at a cost, implying that the papers will be peer reviewed. No peer review is carried out, and the articles are published on the websites of these “publishing houses”, which, however, are not indexed in the usual databases such as PubMed. Every scientist in Germany finds several such invitations per day in his or her inbox. If you are a scientist and receive none, you should be worried. Continue reading

No scientific progress without non-reproducibility?

Slides of my talk at the FENS Satellite Event ‘How reproducible are your data?’ at Freie Universität Berlin, 6 July 2018

 

  1. Let’s get this out of the way: Reproducibility is a cornerstone of science: Bacon, Boyle, Popper, Rheinberger
  2. A ‘lexicon’ of reproducibility: Goodman et al.
  3. What do we mean by ‘reproducible’? Open Science collaboration, Psychology replication
  4. Reproducible – non reproducible – A false dichotomy: Sizeless science, almost as bad as ‘significant vs non-significant’
  5. The emptiness of failed replication? How informative is non-replication?
  6. Hidden moderators – Contextual sensitivity – Tacit knowledge
  7. “Standardization fallacy”: Low external validity, poor reproducibility
  8. The stigma of non-replication (‘incompetence’) – The stigma of the replicator (‘boring science’)
  9. How likely is strict replication?
  10. Non-reproducibility must occur at the scientific frontier: Low base rate (prior probability), low-hanging fruit already picked: Many false positives – non-reproducibility (see the worked example after the reading list below)
  11. Confirmation – weeding out the false positives of exploration
  12. Reward the replicators and the replicated – fund replications. Do not stigmatize non-replication, or the replicators.
  13. Resolving the tension: The Siamese Twins of discovery & replication
  14. Conclusion: No scientific progress without non-reproducibility: Essential non-reproducibility vs. detrimental non-reproducibility
  15. Further reading

Open Science Collaboration, Psychology Replication. Science. 2015;349(6251):aac4716

Goodman et al. Sci Transl Med. 2016;8:341ps12.

https://dirnagl.com/2018/05/16/can-non-replication-be-a-sin/

https://dirnagl.com/2017/04/13/how-original-are-your-scientific-hypotheses-really
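Point 10 above can be illustrated with a little arithmetic: if only a small fraction of the hypotheses tested at the frontier are true (a low prior probability), then even well-powered, well-conducted studies yield many false positives, and a certain rate of non-replication is unavoidable. Here is a minimal sketch; the prior, power and alpha values are illustrative assumptions, not figures from the talk.

```python
def positive_predictive_value(prior, power=0.8, alpha=0.05):
    """Probability that a 'significant' finding is actually true.

    PPV = (power * prior) / (power * prior + alpha * (1 - prior))
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# The further out on the frontier, the lower the prior probability that a hypothesis is true.
for prior in (0.5, 0.1, 0.01):
    print(f"prior = {prior:4}: PPV = {positive_predictive_value(prior):.2f}")
# prior =  0.5: PPV = 0.94
# prior =  0.1: PPV = 0.64
# prior = 0.01: PPV = 0.14
```

Even at 80% power and the conventional alpha of 0.05, a low base rate of true hypotheses means that many ‘positive’ frontier findings will not replicate, not because anyone was sloppy but as a statistical necessity.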

When you come to a fork in the road: Take it

It is for good reason that researchers are the object of envy. When not stuck with bothersome tasks such as grant applications, reviews, or preparing lectures, they actually get paid for pursuing their wildest ideas! To boldly go where no human has gone before! We poke about in the scientific literature and carry out pilot experiments that, surprisingly, almost always succeed. Then we do a series of carefully planned and costly experiments. Sometimes they turn out well, often not, but they do lead us into the unknown. This is how ideas become hypotheses; one hypothesis leads to those that follow, and, lo and behold, we confirm them! In the end, sometimes only after several years and considerable wear and tear on personnel and material, we manage to weave a “story” out of them (see also). Through a complex chain of results the story closes with a “happy ending”, perhaps in the form of a new biological mechanism, but at least as a little piece to fit into the puzzle, and it is always presented to the world by means of a publication. Sometimes even in one of the top journals. Continue reading