How trustworthy are randomized clinical trials?
A veterinary journal recently announced it has retracted a published study after its authors discovered new data that would change their findings.1,2 At the American College of Veterinary Internal Medicine (ACVIM) Forum 2024 in Minneapolis, Minnesota, Brad Hanna, DVM, MSc, PhD, used multiple clinical trial examples to demonstrate how to assess their trustworthiness and accuracy.
Hanna explained the tips he gives veterinary students and veterinarians on how to identify the key markers of a trustworthy randomized clinical trial (RCT).4
Hanna kicked off his lecture by telling attendees that when he left veterinary school, he trusted RCTs wholeheartedly. He assumed that because they were published pieces of research, the researchers had the advanced knowledge needed for the study and everything was laid out for him, ready to be used for his patients. However, as he gained experience with clinical trials and entered the veterinary profession, he began to learn that the assumptions he had made about RCTs at the start of his career were not always correct.
“I assumed randomized controlled trials were done by people with advanced training, the way experts would do them, and that they were done correctly. I assumed that peer reviewers also had advanced knowledge in clinical trials, and they had caught any errors that may have been made in a clinical trial before it was published,” shared Hanna.
“I also assumed that to be able to detect an error in a clinical trial would require advanced training that I didn't have. And so, if I tried it, a nonexpert like me would have performed a subpar analysis anyway, and it wouldn't have been a good use of my time. I thought this was very rational and sensible,” he continued.
Hanna told the room that veterinary schools might not fully prepare students to analyze RCTs for accuracy. He shared that at the Ontario Veterinary College, University of Guelph in Canada, he would give students a 1-page RCT and ask them to identify 3 strengths and 3 weaknesses in the trial. He would also ask how they felt while doing the task; among the top words they used to describe it were confused, frustrated, overwhelmed, and surprised. Showing students what they do and do not know can help them better prepare to analyze studies on their own, or spark the mindset to not blindly trust what they read in a title or conclusion.
Checking RCTs does not have to be time-consuming and difficult. Hanna offered multiple ways veterinary professionals can check whether a study is dependable because, to him, triage is better than trusting.
He started by telling attendees that reviewing the participants of a study is part of that triage. The conclusion of a study should make clear the population it applies to. A treatment investigated in an RCT may have been tested in only 1 population, which provides data for that group but not much for others. He used the example of a study that used healthy, young male beagles: although the results may be helpful for those working with that niche group, they might not apply to females, other breeds, or dogs in different age groups.
“So the simple idea is that when you're looking at clinical trials, check to see if the conclusions are aimed at a population that is similar to the participants in the study. Here's [what else] to check: is there a reasonably clear description of the animals in the study to begin with? And are the conclusions aimed at a similar population?” said Hanna.
Another way to help gauge the accuracy of an RCT is through the primary outcome. Did the authors of the study identify the main measure of treatment success? Did they base their conclusion on it? Hanna explained that a common error in reading RCTs is not realizing that the stated false-positive risk applies to each measure individually. So if a study measures 2 things, each carrying a 5% risk, the overall risk of the study producing a false positive result is roughly 10%.
He provided an example to attendees from a feline cardiology paper that was published in 2022. “[The trial] says the primary endpoint was time to cardiac death. So they've identified a primary outcome. This is going to be the measurement that tells us whether this treatment is effective or not. The primary outcome did not differ significantly between groups, and they were very clear on that. They emphasized it in the abstract and the body of the paper, but they did then look at another variable, which we would refer to as a secondary measurement, and concluded that the treatment appears to be effective in doing something else.
“The problem is that increases the risk. Now you're looking at multiple measures and you're willing to draw conclusions on any of them, and they actually measured more than 2 things, they measured 5 things, so really, the risk that they're going to see a difference is more like 25%. It just increases the uncertainty,” explained Hanna.
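The arithmetic behind this warning can be sketched in a few lines. The figures Hanna quotes (10% for 2 measures, 25% for 5) are the simple additive approximation; for independent tests at a 5% significance level, the exact chance of at least one false positive is slightly lower, as this illustrative snippet (not from the talk) shows:

```python
def familywise_risk(n_outcomes: int, alpha: float = 0.05) -> float:
    """Chance of at least one false positive across n independent
    tests, each run at significance level alpha."""
    return 1 - (1 - alpha) ** n_outcomes

# 1 outcome: 5%; 2 outcomes: ~9.8%; 5 outcomes: ~22.6%
for n in (1, 2, 5):
    print(f"{n} outcome(s): {familywise_risk(n):.1%}")
```

Either way, the more outcomes a trial measures without naming one as primary, the more likely a "significant" result appears by chance alone.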
When reading a clinical study, Hanna told attendees, they need to check whether the authors identified a primary outcome and whether the conclusions of the study are based on that primary outcome.
With the recent news of the JAVMA retraction bringing deeper review of clinical trials to the forefront of veterinary professionals' minds, it is important to know how to identify errors in RCTs. For Hanna, it's better to triage than trust.
“When I graduated from veterinary school, I would have seen a study in a major veterinary journal, with authors from major veterinary schools, and the conclusion that the treatment works. And I would think, well, there's something I can use in my patients. What I hope you've seen during my talk is that it isn't hard for a busy primary care practitioner to look at the study and say, well, for these reasons, this study doesn't look right. You need better evidence,” Hanna concluded.
Reference