Don’t ask what your Experiment can do for You: Ask what You can do for your Experiment!
I was planning to highlight physics as a veritable role model, as a champion of publication culture and team science from which we in the life sciences could learn so much. And then this: the Nobel Prize in physics went to Rainer Weiss, Barry Barish and Kip Thorne for the "experimental evidence" for the gravitational waves predicted in 1916 by Albert Einstein. Published in a paper with more than 1,000 authors!
Once again the Nobel Prize is being criticized: that it always goes to "old white men" at American universities, or that good old Mr. Nobel actually stipulated that only one person per field be honored, and only for a discovery made in the past year. Oh, well… I find it more distressing that the Nobel Prize is once again perpetuating an utterly antiquated image of science: the lone research geniuses, of whom there are so few, or more precisely a maximum of three per field (medicine, chemistry, physics), have "achieved the greatest benefit for humankind". The prize is awarded with a spectacle that would do honor to the Eurovision Song Contest or the Oscars. It doesn't surprise me that the public receives this enthusiastically. This cartoonish image of science has been around since Albert Einstein at the latest. And from Newton up to World War II, before the industrialization and professionalization of research, this image may even have been justified. What disturbs me is that the scientific community itself partakes so eagerly in this anachronism. You will ask why the Fool is getting so worked up again: it's harmless at worst, you say, and the winners are almost always worthy of the prize? And surely a bit of PR can do no harm in these post-factual times, in which anti-vaccination activists and climate-change deniers are on the rise? Continue reading
“Next we….” – The history and perils of scientific storytelling
We scientists are pretty smart. We pose hypotheses and then confirm them in a series of logically connected experiments. Desired results follow in quick succession; our certainty grows with every step. Almost unfailingly the results are statistically significant, sometimes at the 5% level, sometimes with a p-value trailing a whole string of zeros. Some of our experiments are independent of each other; some are dependent, because they use the same material, e.g. for molecular biology and histology. Now we turn, tired but happy, to the job of illustrating and writing up our results. Not only was our initial hypothesis confirmed; our luck was all the lovelier when we saw that the chain of significant p-values remained unbroken. That is comparable to buying several lottery tickets which, one after the other, all turn out to be winners. If we then manage to convince the reviewers, our work will be printed just as it is. Continue reading
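The lottery analogy can be made quantitative. A minimal sketch, assuming every hypothesis in the chain is actually true and every experiment has 80% statistical power (an optimistic figure for preclinical work; both numbers are illustrative assumptions, not taken from the post):

```python
# Probability that ALL k independent experiments in a study reach
# significance, given that every hypothesis is true and each experiment
# has the same power. Even then, an unbroken chain quickly becomes unlikely.
power = 0.8  # assumed power per experiment (illustrative)

for k in (1, 3, 5, 8):
    p_unbroken_chain = power ** k
    print(f"{k} experiments, all significant: {p_unbroken_chain:.3f}")
```

With five experiments the chance of an unbroken chain is already below one in three, and with lower, more realistic power it collapses much faster. A paper in which every experiment "worked" is therefore more likely a curated story than a faithful protocol of what happened.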
Excellence vs. Soundness: If we want breakthrough science, we need to fund ‘normal science’
I recently read "Excellence R Us": University research and the fetishisation of excellence, by Samuel Moore, Cameron Neylon, Martin Paul Eve, Daniel Paul O'Donnell & Damian Pattinson. This excellent (!) article, together with Germany's upcoming third round of the 'Excellence Strategy', prompted the following remarks on 'Excellence'.
So much has already been written on excellence, (in)famously in German in Richard Münch's 500-page tome on the "academic elite". In it he characterized the concept of excellence as a social construct for the distribution of research funds and derided the rhetorical bubbles that have grown up around the term. He castigated every holy cow in the German scientific landscape, including the Deutsche Forschungsgemeinschaft (DFG, Germany's leading funding organization). Upon its publication in 2007, shortly after the German Excellence Initiative was launched for the first time, Münch's book filled the representatives of what he disparaged as 'cartels, monopolies and oligarchies' with indignation, and a mighty flurry rustled through the feuilletons of the republic.
Today, on the eve of the third round of the Excellence Initiative (now: Excellence Strategy), only the old-timers remember that, which is precisely why I am going to tackle the topic once more, and fundamentally: because I believe there is a direct connection between the much-touted crisis of the (life) sciences and the excellence rhetoric. Continue reading
Start your own funding organization!
It's noon on Saturday, and the sun is shining. I am evaluating nine applications (each approximately 50 pages) submitted to a call by a German science ministry. Fortunately, last weekend I was already able to finish evaluating four applications (each approximately 60 pages) for an international foundation. Just to relax, I am working on a proposal of my own for the Deutsche Forschungsgemeinschaft (DFG) and on one for the European Union. I have lost track of how many article reviews I have agreed to do but not yet delivered. But tomorrow is Sunday; I can still get a few done. Does this agenda ring a bell? Are you one of those average scientists who, according to various independent statistics, spend 40% of their work time reviewing papers or grant proposals? No problem, because there are 24 hours in every day, and then there are still the nights, too, in which to do your research.
I don't want to complain, though, but rather make a suggestion for how to win back time for research. Interested? Careful, it is only for those with nerves of steel. I want to break a lance for a radically different approach and whet your appetite for this idea: we allot research money not per proposal, but give it to all researchers as basic support, with the tiny modification that a fraction of the funds received must be passed on to other researchers. You think that sounds completely crazy, like something from the NFG (the 'North Korean Research Community')?
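One round of such a scheme is easy to simulate. The sketch below is illustrative only: the number of scientists, the grant size, the mandatory pass-on fraction, and the choice of five recipients per donor are all my assumptions, and random selection merely stands in for the donor's scientific judgment about whose work deserves support:

```python
import random

random.seed(42)

N = 1000          # scientists in the system (assumed)
BASE = 100_000    # identical basic grant per scientist, arbitrary units
FRACTION = 0.5    # share that must be passed on to colleagues (assumed)

# Everyone keeps (1 - FRACTION) of the basic grant...
received = [BASE * (1 - FRACTION)] * N

for donor in range(N):
    # ...and distributes the mandatory fraction among five colleagues.
    # Random choice here is a placeholder for peer judgment.
    recipients = random.sample([i for i in range(N) if i != donor], 5)
    for r in recipients:
        received[r] += BASE * FRACTION / 5

print(f"total budget unchanged: {sum(received):,.0f}")
print(f"best-funded: {max(received):,.0f}, worst-funded: {min(received):,.0f}")
```

Two properties fall out immediately: the total budget is conserved (every unit given away arrives somewhere), and nobody ends up below the retained half of the basic grant, while scientists whose work colleagues value accumulate more. No application, no 50-page proposal, no review weekend.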
For risks and side effects consult your librarian
Scarcely noticed by the scientific community in Germany, an astounding development is taking place: the alliance of German scientific organizations with the German Rectors' Conference (the DEAL consortium) is flexing its muscles at the publishing houses. We are witnessing the beginning of the end of the current business model of scientific publishing: an exodus from institutional library subscriptions to journals towards open access (OA) to the scientific literature for everyone, financed by a one-time article processing charge (APC). The motive for this move is convincing: knowledge financed by society must be freely accessible to society, and the costs of accessing scientific publications have risen immensely, increasing every year by over 5% and all but devouring the last resources of the universities.
The big publishing houses are merrily pocketing fantastic returns on research that is financed by taxes and produced, curated, formatted, and peer reviewed by us. These profit margins run at a hefty 20 to 40%, which would hardly be attainable in any other line of business. At the bottom of this whole thing is a bizarre swap: with our tax money we buy back our own product, scientific knowledge in manuscript form, after having handed it over to the publishers free of charge. It gets even wilder: the publishing houses give us back our product on loan only, with limited access and without any rights over the articles. The taxpayer, having paid for it all, cannot access it; not only Joe Blow the taxpayer is left standing in the cold, but with him practicing physicians and clinicians, and scientists outside the universities. Continue reading
How original are your scientific hypotheses really?
Have you ever wondered what percentage of your scientific hypotheses are actually correct? I do not mean the rate of statistically significant results you get when you dive into new experiments. I mean the rate of hypotheses that are confirmed by others, or that postulated a drug which then actually proved effective in other labs or even in patients. Nowadays, unfortunately, only very few studies are independently replicated (more on that later), and even long-established therapies are often withdrawn from the market as ineffective or even harmful. One can only hope to approximate this rate of "success", and that is exactly what I will now attempt to do. You may wonder why I am posing this apparently esoteric question: it is because knowing approximately what percentage of hypotheses actually prove to be correct would have wide-reaching consequences for evaluating research results, your own as well as those of others. The question has an astonishingly direct relevance to the discussion of the current crisis in biomedical science. Indeed, a ghost is haunting biomedical research!
Continue reading
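The connection between the prior rate of correct hypotheses and the trustworthiness of a significant result can be made explicit with the standard positive-predictive-value reasoning (as in Ioannidis's "Why most published research findings are false"). The power and alpha values below are illustrative assumptions, not estimates from the post:

```python
# Positive predictive value (PPV): the probability that a statistically
# significant result reflects a true hypothesis, given the prior
# probability that hypotheses in the field are correct.
def ppv(prior, power=0.8, alpha=0.05):
    """PPV of a significant finding; power and alpha are assumptions."""
    true_positives = prior * power          # true hypotheses that test significant
    false_positives = (1 - prior) * alpha   # false hypotheses that test significant
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:>5.0%}: PPV = {ppv(prior):.2f}")
```

If only one in ten of your hypotheses is correct to begin with, roughly a third of your significant results are false alarms even at 80% power; with very bold hypotheses (one in a hundred correct), the overwhelming majority of "positive" findings are wrong. This is why the apparently esoteric question above matters so much.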
And now for something completely different: Sequential designs in preclinical research
Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical biomedicine almost exclusively use classical block designs. The aim of our recent article in PLOS Biology is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional fixed designs and are increasingly used in clinical trials. Using simulated data, we demonstrate that group sequential designs can improve the efficiency of experimental studies even when sample sizes are very small, as is currently prevalent in preclinical biomedicine. We argue that these savings should be reinvested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness of this research domain.
PLoS Biol. 2017 Mar 10;15(3):e2001307. doi: 10.1371/journal.pbio.2001307.
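A toy version of the idea, not the authors' actual simulation: look at the data after each batch of animals and stop as soon as a boundary is crossed. The constant stopping boundary, batch size, effect size, and known unit variance below are all illustrative placeholders (real designs use O'Brien-Fleming or Pocock boundaries to control the overall error rate):

```python
import random
import statistics

random.seed(1)

def run_experiment(effect, batch=5, max_looks=4, z_stop=2.5):
    """One study with an interim analysis after each batch per group.
    Unit variance is assumed known, so a simple z-statistic suffices."""
    treat, ctrl = [], []
    for look in range(1, max_looks + 1):
        treat += [random.gauss(effect, 1) for _ in range(batch)]
        ctrl += [random.gauss(0.0, 1) for _ in range(batch)]
        n = batch * look                      # animals per group so far
        se = (2 / n) ** 0.5                   # SE of the mean difference
        z = (statistics.mean(treat) - statistics.mean(ctrl)) / se
        if abs(z) > z_stop:                   # boundary crossed: stop early
            return 2 * n                      # total animals used
    return 2 * batch * max_looks              # ran to the planned maximum

# Average animals used when a real effect exists, vs. a fixed design of 40
sizes = [run_experiment(effect=1.0) for _ in range(500)]
print(f"mean animals per study: {statistics.mean(sizes):.1f} "
      f"(fixed design would always use 40)")
```

When the treatment works, many studies stop after the first or second look, so the average study uses noticeably fewer than the 40 animals a fixed design would commit up front; those saved animals are exactly the resource the article argues should be reinvested in larger, better-powered experiments.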
Lost access to Elsevier journals?
The negotiation of a Germany-wide agreement with Elsevier and other publishers for online access to journals has stalled, as Elsevier did not agree to the terms proposed by a consortium of many German universities. Meanwhile, many of these universities have cancelled their Elsevier subscriptions. Scientists trying to access Elsevier journals now need to keep their credit cards on standby to read (their own) articles! Frustrating as this may be, I find it good news that the scientists' struggle for open access to their work is coming to a head. Many researchers are grasping the business model behind scientific publishing for the first time! They naively thought 'open access' meant that a simple click from a computer inside their university's intranet gives them free access to any paper they want. They might now join the still small band of 'activists' fighting for novel publishing models. Two of those activists, Romain Brette and Björn Brembs, have recently provided very useful resources for concerned researchers (So Your Institute Went Cold Turkey On Publisher X. What Now?), as well as a vision of the post-journal world and thoughtful suggestions on how we can 'help move science to the post-journal world'.
Could a neuroscientist understand a computer?
… is the title of a recent article by Jonas and Kording, published in PLOS Computational Biology and featured in The Economist. The Economist summarizes their findings by stating that 'testing the methods of neuroscience on computer chips suggests they are wanting', and on the magazine's cover labels neuroscience's toolkit 'faulty'.
Jonas and Kording took a simple microchip (one used in 'prehistoric' game computers like the Atari) and asked whether it could be 'understood' by applying the same approaches used by the large-scale human brain projects. These multi-billion consortia work under the premise that the human brain works like a supercomputer; after all, doesn't it process information and use electrical currents? So if you understood the wiring diagram (the 'connectome') and the firing of electrical signals through it, you would be able to model its working principles. All you need is lots of data and heavy computing. Jonas and Kording applied this approach to the game chip and checked whether it allowed them to understand how the chip works. Since we already know how it works (it was engineered in the first place), we can test how far the approach takes us. They even threw in 'interventions', very similar to how modern neuroscience started, when neurologists like Paul Broca used structural lesions (e.g. after infarction) in their patients' brains to make inferences about the functions of specific brain regions. So what happens to Donkey Kong if you tinker with a few transistors on the chip, and what does that tell you about their function? If you follow Jonas and Kording: not much. They conclude that current analytic approaches in neuroscience 'may fall short of producing meaningful understanding of neural systems, regardless of the amount of data'.
So the methods used by the BRAIN Initiative or the Human Brain Project may be wanting. But what if it is even worse, and their basic tenet ('the human brain is a computer') is wrong, so that the hype around those projects is not only methodologically but also conceptually flawed? In a recent post on AEON, Robert Epstein argues that 'your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer.' Click here to follow his argument for why it is silly to believe that brains must be information processors just because computers are information processors.
A Laboratory Critical Incident and Error Reporting System for Experimental Biomedicine
Errare humanum est: to err is human. Biomedical research, a human enterprise, is no exception in this regard. Ever more sophisticated methodologies probing how complex organisms function in health and disease invite errors on all levels, from designing experiments and studies to collecting data and reporting results. The stakes are high, in terms of resources spent and professional rewards to be gained by individuals.
Recent concerns about the reliability and reproducibility of biomedical research have focused on weaknesses in planning, conducting, analysing, and reporting research. Clearly, the discussion revolves around factors which negatively impact the quality of research, and which may be remedied by structured measures to improve it. However, the potential contribution of errors to the disappointingly low reproducibility and predictiveness of biomedical research, and how scientists deal with these errors, has not yet been considered.
In a PLOS Biology article which appeared this week we propose the implementation of a simple and effective method to enhance the quality of basic and preclinical academic research: critical incident reporting (CIR). CIR has become standard in clinical medicine but, to our knowledge, has never been implemented in the context of academic basic research. We provide a simple, free, open-source software tool for implementing a CIR system in research groups, laboratories, or large institutions (LabCIRS). LabCIRS was developed, tested, and implemented in our multidisciplinary and multiprofessional neuroscience research department. It is accepted by all members of the department, has led to the emergence of a mature error culture, and has made the laboratory a safer and more communicative environment. Initial concerns that such a measure might create a "surveillance culture" that would stifle scientific creativity turned out to be unfounded.
A demo version and source code of LabCIRS can be found via the supplement of the article.