With a half-page article written about him and his study, an Israeli radiologist unknown until then made it into the New York Times (NYT 2009). Dr. Yehonatan Turner presented computed tomography scans (CTs) to radiologists and asked them to make a diagnosis. The catch: along with the CT, a current portrait photograph of the patient was presented to the physicians. Remember, radiologists very often do not see their patients; they make their diagnosis in a dark room, staring at a screen. In his study, Dr. Turner used a clever cross-over design: he first showed the CT together with a portrait photograph of the patient to one group of radiologists. Three months later, the same group had to make a diagnosis using the same CT, but without the photo. Another group of radiologists was first given only the CT and then, three months later, the CT with photo. A further control group examined only the CTs, as in routine practice. The hypothesis: when a radiologist is exposed to the individual patient, and not only to an anatomical finding on a scan, she will be more conscious of her own responsibility; hence her findings will be more thorough and her diagnosis more accurate. And in fact, this is what he found. The radiologists reported that they had more empathy with the patient, and that they “felt like doctors”. And they spotted more irregularities and pathological findings when they had the CT and photo in front of them than when they were looking at the CT alone (Turner and Hadas-Halpern 2008).
So how about showing researchers in basic and preclinical biomedicine photos of patients with the disease they are currently investigating in a model of the disease?
A photo mounted beside the computer, the PCR cycler, the microscope, or the rat cage? A photo of a stroke patient for the stroke researcher, of a diabetic patient for the diabetes researcher. A bit like the scare photos on cigarette packs, only bigger. After all, there is hardly a scientific article, even by the most hard-boiled basic researcher, that doesn’t start with “Disease X is the most common cause of Y…” or “Worldwide, X people suffer from…”, and whose discussion doesn’t end with “…our results could provide the basis for a more effective treatment of…”. In biomedical research there is almost always a reference to the clinical importance of the researcher’s own work. Were we to believe the publications and the applications to research foundations, improving the treatment of the disease and easing its burden on patients are among the main motivating factors for the author’s or applicant’s work. And that, even though few researchers have any direct contact with the people who have the disease, unless by chance it affects friends, acquaintances or family members. Exceptions, of course, are the doctors in university clinics who frequently switch from the ward to the laboratory after their clinical shift. Basic researchers thus have little contact with patients, while the clinician-scientists often have deficits in scientific methodology.
Could it be that biomedical science would be sounder and more robust if it were clear to researchers that their own work can have direct consequences for patients? That this is not just a matter of a publication in a prestigious journal – or a few lines in a CV? Rigor, robustness, reproducibility and predictiveness are currently not a focus of reviewers and journals, and rarely have a major impact on the decision to publish a preclinical study. Whether the claims of clinical impact are founded on robust experimentation of high internal and external validity is rarely considered. Why? Because the clinical impact of preclinical research is completely decoupled from the clinical realm; it does not happen under the eyes of those who work with mice and cells in culture. Nor do those conducting clinical trials regularly show up in laboratories, or talk to those working there.
Think of a spectacular finding in a mouse model that is labelled a possibly important new mechanism in a disease like cancer – like the ones we read about every week in Nature Medicine, Science Translational Medicine, JCI, etc. Admittedly, the group size in the mouse experiment, with n=8, might have been rather small; the outcome was not assessed in a blinded fashion; and a few times the results were not what the investigators had hoped for – but hey, that was due to the wrong antibodies, so those findings were not included in the publication. No worries: with p<0.05, the results were statistically significant and properly published, with peer review and everything. The patient seems far, far away here, despite the reference to him in the article’s introduction and the passage in the discussion projecting a novel treatment for a disease that has remained incurable up to the present. The translational mantra at its best…
But perhaps such findings constitute the basis for further studies, thus contributing to a portfolio of studies, all of which may have some weaknesses in design and analysis, but which cumulatively trigger the progression to a first-in-man study! In addition, only very few researchers are aware that their mouse findings may provide the direct basis for so-called “compassionate use”. Compassionate use refers to investigational drugs or biologics given to patients outside a clinical trial setting for treatment purposes. Doctors are allowed to use such experimental therapeutics if no comparable or satisfactory alternative therapy is available to treat the disease or condition. Compassionate use is the last resort of desperate doctors treating severely ill patients for whom all guideline therapies have been tried without much success. Preclinical researchers take note: it is often grounded in only a few well-published cell culture or mouse experiments! Believe it or not, most clinicians trust and rely on the rigor and robustness of the experimental literature and the “stories” we tell in it (see LJ 1/2018).
What I want to say is that basic and preclinical research have an ethical dimension that many researchers are not aware of. Ethics in preclinical research is usually discussed in the context of animal experimentation: is causing animals pain justified in the name of healing human diseases? Or one thinks of fraud, theft of ideas, plagiarism etc. as “unethical” research practices. But there is another ethical dimension, one emanating from the mediate or immediate consequences that our scientific activity has for the patient – because our basic research impacts humans. This impact could be positive, for example when we discover a mechanism of a disease that at some point leads to an effective therapy. That’s what we all hope for! Or our work could have an indirect negative impact on patients, which may happen more often than we realize! Such negative effects could come from errors made in the lab, from biases that are not properly controlled, from false positives and overestimated effect sizes due to small samples, from selective data use, or from reporting only positive findings. We may end up with a high-level publication, but resources may have been wasted, and patients could potentially be harmed – without us ever noticing, because many steps follow the basic and preclinical research and disconnect it from our lab. And we should not count on regulatory authorities and ethics commissions as a safety net: they do not seem to care about the quality of the preclinical evidence, or about efficacy in disease models, as Daniel Strech’s group found in a recent study (Wieschowski et al. 2018).
Could a patient photo posted in the laboratory be of help here? Of course not. The effect, if it existed at all, would wear off after the third time around. And its very existence is questionable: radiologists from Ottawa tried to replicate the Israeli study, which had been reported in the New York Times but published only as an abstract. The Canadian researchers pre-specified their hypothesis and used very clean methods, but they were unable to replicate the beneficial effect of exposing radiologists to a patient photo. Their study was properly published in 2015 in a reputable journal (Ryan et al. 2015). Such a scenario is typical: spectacular preliminary findings make it into leading newspapers, although only a presentation and an abstract follow. When a replication with high methodological rigor then fails, this is no longer newsworthy. The abstract with its spectacular finding is cited six times more frequently than the full article reporting the failed replication!
Bottom line: don’t forget the patient, even if you work with cells and mice! To those who would like to learn more about this issue, I warmly recommend our recent article (Yarborough et al. 2018).
A German version of this post has been published as part of my monthly column in the Laborjournal: http://www.laborjournal-archiv.de/epaper/LJ_18_10/28/index.html
Yarborough M, Bredenoord A, D’Abramo F, Joyce NC, Kimmelman J, Ogbogu U, Sena E, Strech D, Dirnagl U. The bench is closer to the bedside than we think: Uncovering the ethical ties between preclinical researchers in translational neuroscience and patients in clinical trials. PLoS Biol. 2018 Jun 6;16(6):e2006343. doi: 10.1371/journal.pbio.2006343.
Yarborough M, Dirnagl U. Preclinical research: Meet patients to sharpen up research. Nature. 2017 Nov 16;551(7680):300. doi: 10.1038/d41586-017-06024-2.
New York Times, April 6, 2009: Radiologist Adds a Human Touch. https://archive.nytimes.com/www.nytimes.com/2009/04/07/health/07pati.html
Turner Y, Hadas-Halpern I. The effects of including a patient’s photograph to the radiographic examination. Presented at: Radiological Society of North America Scientific Assembly and Annual Meeting; Oak Brook, Illinois; 2008.
Ryan J, Khanda GE, Hibbert R, Duigenan S, Tunis A, Fasih N, MacDonald B, El-Khoudary M, Kielar A, McInnes M, Virmani V, Ramamurthy N, Kolenko N, Sheikh A. Is a picture worth a thousand words? The effect of viewing patient photographs on radiologist interpretation of CT studies. J Am Coll Radiol. 2015 Jan;12(1):104-7. doi: 10.1016/j.jacr.2014.09.028.
Wieschowski S, Chin WWL, Federico C, Sievers S, Kimmelman J, Strech D. Preclinical efficacy studies in investigator brochures: Do they enable risk-benefit assessment? PLoS Biol. 2018 Apr 5;16(4):e2004879. doi: 10.1371/journal.pbio.2004879.