Animal Experiments: In A Mousetrap

22 February 2016

Year after year, researchers put a great deal of time and money into animal experiments. Yet the results cannot simply be extrapolated to humans – and in some cases they cannot even be replicated by other laboratories. Resistance is now growing.

A lot of work for nothing: the explanatory power of animal experiments is controversial. In 2006, Daniel Hackam of the University of Western Ontario examined more than 70 animal studies [Paywall]. All the papers had one thing in common: they reported successful therapies, for instance for Alzheimer’s or Parkinson’s disease. Only a third of these innovations could later be transferred to humans.

Many approaches did not work in Homo sapiens – or were never even tested, often for safety reasons. These failures were due not only to the oft-cited differences in the genome, but also to methodological errors. Animal models of human disease imitate only a few central aspects of a condition, never the whole clinical picture. At the same time, researchers embellish many papers by “forgetting” some animals.

Mice missing

This aspect is now being addressed by Professor Dr. Ulrich Dirnagl of the Charité – Universitätsmedizin Berlin. Together with colleagues, he critically examined 100 publications in stroke and cancer research. The researchers found 316 animal experiments on infarct volume and 206 on the therapeutic shrinkage of tumours. In numerous papers, the exact number of animals was not reported. With an average of only eight test animals, the groups were also too small to yield statistically meaningful results. On top of that, animals “disappeared” in a dubious manner, which has massive implications for validity. Did colleagues consciously exclude them, for instance because of serious side effects of the treatment?
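To get a feel for why groups of roughly eight animals are statistically so small, consider a minimal Monte Carlo sketch. The effect size, variance and significance level below are purely illustrative assumptions, not values taken from the papers Dirnagl examined:

```python
import numpy as np
from scipy import stats

# Illustrative assumptions only (not from the cited studies):
# a moderate treatment effect of 0.8 standard deviations, 8 animals per group.
rng = np.random.default_rng(0)
n_per_group, effect_size, n_sim, alpha = 8, 0.8, 20_000, 0.05

hits = 0
for _ in range(n_sim):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(effect_size, 1.0, n_per_group)
    _, p = stats.ttest_ind(treated, control)  # two-sided t-test per experiment
    hits += p < alpha

print(f"Estimated power with n=8 per group: {hits / n_sim:.2f}")
# Typically around 0.3, i.e. most true effects of this size go undetected.
```

Under these assumptions, roughly two out of three genuinely effective treatments would fail to reach significance – and the experiments that do reach it tend to overestimate the effect.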

Ulrich Dirnagl found no answer, but reaches a damning verdict by way of computer simulations. That animals drop out over the course of a study is normal; it does, however, have to be documented by the researchers. If researchers selectively exclude animals that do not confirm their hypothesis, this leads to false-positive results. The cause is usually not fraud but “bias” – the unconscious influence of researchers trying to verify their hypotheses, Dirnagl adds.
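Dirnagl’s point can be made concrete with a short simulation of this kind. The sketch below is a hypothetical illustration, not his actual code, and all numbers are assumptions: two identical groups with no true treatment effect are analysed twice, once honestly and once after quietly dropping the two most “inconvenient” animals from each group.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group, n_sim, alpha, n_dropped = 10, 20_000, 0.05, 2

honest_fp = biased_fp = 0
for _ in range(n_sim):
    # No true effect: both groups come from the same distribution.
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(0.0, 1.0, n_per_group)

    # Honest analysis: every animal is kept.
    _, p = stats.ttest_ind(treated, control)
    honest_fp += p < alpha

    # Biased analysis: quietly drop the treated animals that respond least
    # and the control animals that respond most.
    treated_b = np.sort(treated)[n_dropped:]
    control_b = np.sort(control)[:-n_dropped]
    _, p_b = stats.ttest_ind(treated_b, control_b)
    biased_fp += p_b < alpha

print(f"False positives, all animals kept:    {honest_fp / n_sim:.3f}")
print(f"False positives, 2 dropped per group: {biased_fp / n_sim:.3f}")
# The biased rate ends up several times higher than the nominal 5%.
```

Even this mild, undocumented exclusion turns a correctly calibrated test into one that “discovers” effects that do not exist.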

Caught in a vicious circle

Working groups that build their projects on such methodologically weak publications perpetuate the problem – a vicious circle arises that even affects manufacturers. Staff at the US pharmaceutical company Amgen were able to reproduce just six of 53 papers on cancer therapy. This pessimistic assessment was recently backed up by John P.A. Ioannidis of Stanford, who reviewed 441 publications indexed in PubMed between 2000 and 2014, focusing primarily on transparency and reproducibility.

In most papers, Ioannidis criticises incomplete experimental protocols and missing raw data – in the online age, web resources could easily host such supplementary information. Details of possible financial conflicts of interest appeared in only a few cases. Ulrich Dirnagl speaks of a substantial waste of resources in the scientific enterprise, one that ultimately affects all taxpayers. Solutions are now needed.

People on the chip

For a long time now, researchers have been demanding that binding standards, as practised today in clinical trials, be transferred to basic research. These include accurate descriptions of methods as well as criteria for the exclusion of experimental animals. To obtain results that are more relevant to human medicine, experimental designs should also be reconsidered. In the laboratory, for instance, human skin can be grown in cell culture; the hope is that such cells allow more reliable pharmaceutical and chemical testing than laboratory animals do. In addition, scientists are trying to simulate physiological processes with computer models – accepting the limitations inherent in any such virtual system.

Organ-on-a-chip systems simulate processes under realistic, reproducible conditions. Using lung-on-a-chip models [Paywall], researchers not only test drugs but also examine inflammation and infection. Heart-on-a-chip systems [Paywall] are built around cardiomyocytes. Other research groups focus on nephrons (kidney-on-a-chip) or explanted small blood vessels (artery-on-a-chip [Paywall]).

Patrick Guye and Ron Weiss of the Massachusetts Institute of Technology (MIT) found that complex tissue forms on chips through the self-organisation of induced pluripotent stem cells. “Human-on-a-chip” [Paywall] is one vision, aiming to simulate various physiological processes in parallel. Despite all the hype, there is a catch: the new technologies do not compensate for the known methodological weaknesses. Quality standards such as those urged by experts in the CONSORT Statement (Consolidated Standards of Reporting Trials) are a step in the right direction.
