Of Mice, Macaques and Men

Tuberculosis kills far more than a million people worldwide every year. The situation is particularly dire in southern Africa, eastern Europe and Central Asia. There is no truly effective vaccine against tuberculosis (TB). In countries with a high incidence, live vaccination is carried out with the attenuated vaccine strain Bacillus Calmette-Guérin (BCG), but BCG gives very little protection against tuberculosis of the lungs, and in any case its protective effect is highly variable and unpredictable. For years, a worldwide search has been going on for a better TB vaccine.

Recently, the British Medical Journal published an investigation raising serious charges against researchers and their universities: conflicts of interest, animal experiments of questionable quality, selective use of data, deception of funders and ethics commissions, all the way up to endangerment of study participants. There was also a whistle-blower… who had to pack his bags. It all happened in Oxford, at one of the most prestigious vaccine research institutes on earth, and the study on humans was carried out on infants from the most destitute strata of the population. Let’s take a closer look at this explosive mix, for we have much to learn from it about

  • the ethical dimension of preclinical research and the dire consequences that low quality in animal experiments and selective reporting can have;
  • the important role of systematic reviews of preclinical research, and finally also about
  • the selective availability (or outright non-availability) and scrutiny of preclinical evidence when commissions and authorities decide on clinical studies.

It started, as any spectacular story should, with a top-flight publication: a phase I study published in Nature Medicine. The authors from the Jenner Institute in Oxford reported that the unsatisfactory immune response to the conventional BCG vaccine could be significantly enhanced by a booster vaccination with another antigen of the tuberculosis bacterium (Ag85A), expressed by a modified vaccinia virus Ankara; hence the construct’s name, MVA85A. This was a breakthrough finding, and a more effective tuberculosis vaccine seemed within reach. Animal experiments were carried out on various species, from mice and cattle up to primates (macaques). The results, insofar as they were published, nourished hopes of a new era in TB prophylaxis. Oxford University signed a contract with a biotech firm to develop and market the new vaccine; the university and members of the research team were shareholders. Funders, from the Wellcome Trust to the Bill and Melinda Gates Foundation, proved generous: over £40 million in research grants in total. Now it was time to show safety and efficacy in humans, and logically a region with a high TB incidence was chosen, in South Africa. The study was carried out on 2,900 infants in an area where 2-3% of all children develop manifest tuberculosis, and it was performed with all the required permissions and in accordance with all the rules of the trade: approval by all relevant bodies including ethics commissions, randomization, blinding, etc. But: it was “negative”! MVA85A did not reduce the TB rate of the vaccinated children.

Unfortunately, “negative” (actually: neutral) clinical studies are quite common. But here the disappointment was particularly great and had dramatic consequences. The results from the animal studies (four species, including primates!) had been exceptionally promising. The Bill and Melinda Gates Foundation, the world’s largest funder in the field of infectious diseases in developing countries, decided to withdraw from this form of translational research … because animal experiments are so obviously not predictive for humans!

But is that really true? A high-quality systematic review of the animal-experimental evidence on which the study in South Africa was based came to a disturbing conclusion. Its authors found that the quality of the various studies was low (lack of randomization, blinding, etc.) and the sample sizes exceedingly small. The meta-analysis found no indication that MVA85A was effective in animals. Worse still: in primates it looked as if the booster vaccination might even be harmful. The surprising conclusion: bench-to-bedside translation had worked after all: no effect in animals, none in humans! The authors of the meta-analysis thus posed the obvious question: how could the study ever reach the stage of being carried out on infants, exposing them to potential risks when a benefit had not been convincingly demonstrated?

What we did learn from the BMJ investigation, however, was that there had been doubts about the animal studies right from the start. It appears that negative findings, showing a higher number of deaths among the macaques that received the booster vaccination, had been suppressed. An observant virologist who had done research in the same facility and in a similar field had noticed this and notified the university; several investigation committees followed, and none could find any problems. The one who did encounter problems was the whistle-blower: he was notified by the university that he would no longer be permitted to perform research on the premises of the institute. Unfortunately, the BMJ investigation does not shed much more light on these dealings, the likes of which are not uncommon. It did establish, however, that only the positive results from the animal experiments had been presented to the ethics commission and the regulatory authorities in South Africa.

This leaves us with several pressing questions. Does this kind of thing happen often? How high is the quality of preclinical research in other fields? How is quality assured? How frequently are negative or neutral data left unpublished or even actively concealed? The available literature that has taken up these questions points to big problems. Published preclinical study results are almost exclusively positive; average group sizes are as a rule under 10; and measures for preventing bias (e.g. blinding, randomization) often go unreported. So how good is the preclinical evidence before studies on humans are performed? And at the level of approval procedures (ethics commissions, FDA/EMA), are measures taken to ensure that all available evidence is fed into the decision process?

There is already a robust answer to the last question; the corresponding manuscript is under review. The group of Daniel Strech at Hannover Medical School has systematically combed through a large number of ethics applications for phase I or II clinical studies at three German universities, looking at whether the applications contain information on the preclinical evidence for the proposed study in humans. The results were sobering. The overwhelming majority of the applications cite no published studies at all on the preclinical efficacy of the study drug. And where reference is made to preclinical data, no mention is made of measures taken to prevent bias, nor of proper sample size calculation. Moreover, the results presented are almost exclusively positive, even when neutral or negative results are available in the literature. How can such bodies then carry out an informed risk-benefit evaluation?

The Oxford case exposed in the British Medical Journal is, we hope, an extreme one, but we must fear that clinical studies elsewhere are also being performed on shaky preclinical evidence. I suspect that a relevant reason for the difficulties in translating results from animal experiments to humans is that preclinical evidence is selectively reported and rests on qualitatively shaky ground. Ethics commissions and regulatory authorities should ensure that the totality of the evidence is made available to them, that it is of high quality, and that it comes in a form that enables a well-founded judgement. Some clinical studies that turned out to be disappointments would then probably not be carried out, and study participants would not be exposed to unnecessary risks.

 

A German version of this post has been published as part of my monthly column in Laborjournal: http://www.laborjournal-archiv.de/epaper/LJ_18_03/28/index.html


Further reading:

Researchers were disappointed when a clinical trial of a new tuberculosis vaccine failed to show benefit, but should it have gone ahead if animal studies had already raised doubts, and what does it mean for future research? Deborah Cohen investigates:

Cohen D. Oxford vaccine study highlights pick and mix approach to preclinical research. BMJ. 2018 Jan 10;360:j5845.

 

We must review how we use animal data to underpin clinical trials in humans:

Macleod M. Learning lessons from MVA85A, a failed booster vaccine for BCG. BMJ. 2018 Jan 10;360:k66. http://www.bmj.com/content/360/bmj.k66.long

 

Responsible bodies should demand high quality reporting and systematic reviews of animal studies:

Ritskes-Hoitinga M, Wever K. Improving the conduct, reporting, and appraisal of animal research. BMJ. 2018 Jan 10;360:j4935.

 

A systematic review of the animal data does not provide evidence to support efficacy of MVA85A as a BCG booster:

Kashangura R, Sena ES, Young T, Garner P. Effects of MVA85A vaccine on tuberculosis challenge in animals: systematic review. Int J Epidemiol. 2015 Dec;44(6):1970-81.
