
Monday, March 17, 2014

Issues with Comparative Effectiveness Research

Transcribed excerpt via Dragon, from the JAMA Online Network Reader.
Ethics, regulation, and comparative effectiveness research
Time for a change.  Published online March 13, 2014

Richard Platt, MD, MS, Nancy E. Kass, ScD, Deven McGraw, JD, LLM, MPH
The US healthcare system is poised to learn more about preventing, diagnosing, and treating illness than has ever been possible. This change is powered by the increasing commitment to comparative effectiveness research, increases in practice-based research, and the increasing availability of data arising from electronic health information systems to help patients, clinicians, and others understand who benefits from which treatments. Much can be learned by observing the outcomes of the varied decisions that clinicians and hospitals make. However, for many healthcare questions, it is important to intervene by systematically varying care, for instance by randomly selecting the order in which a new practice is introduced into different parts of the system or by randomly assigning different commonly used treatments to patients who are good candidates for all of the approaches. Indeed, random assignment would be important to ascribe causality to the change.

These strategies to improve clinical care are hampered by regulations that, in areas such as this, no longer match current needs. A more fundamental problem is the entrenched view that research, including evaluation of treatments already approved and widely administered to patients, automatically creates higher risks than ordinary care. For example, a comparison study of marketed agents for routine bathing and decolonization of intensive care unit patients that randomized hospitals to one or another regimen required substantial review and evaluation by an institutional review board (IRB) to determine whether and how to obtain consent from patients, how to address the possibility that a prisoner might be admitted during the course of the study, and whether participating hospitals needed a pre-existing relationship with one another.

Another example involves significant debate about appropriate ethical oversight for a proposed study randomizing patients to morning or nighttime dosing of antihypertensive medications. Clinical guidelines generally do not address timing of medication administration, although hypotheses exist for potential benefits of both nighttime and morning dosing. The study alters neither the medication prescribed nor the frequency of administration.

The so-called protections being debated reflect a view that informed consent is required for most activities labeled clinical research. This is true even of systematic investigations of aspects of patients' care that are never deliberated with or communicated to patients but instead are routinely decided at the level of the hospital or other administrative unit. This difference in norms discourages formal evaluation of routine care changes and becomes particularly striking when contrasted with hospitals' authority to change system-level care without any evaluation, transparency, or patient consultation — changing, for example, the ratio of nurses to patients and similar administrative decisions that could have profound effects on patient outcomes. The irony deepens when consent requirements become barriers to even low-risk studies intended to identify strategies to protect patients. An increasingly germane question for policy and for ethics is what level of oversight of comparative effectiveness studies is necessary. Admittedly, defining low risk can be complicated, and as patients become increasingly involved in their care, it will be important to solicit their views...

New and Reformed Regulations
In the longer term, innovative health systems will need a more holistic approach to oversight of both systems-level and individual-level interventions. Such oversight should be developed in consultation with patients, clinicians, researchers, and bioethicists. Indeed, patient participation in oversight is critical. Core to its principles will be transparency to all stakeholders about the continuous learning to which the system is committed, the ways in which data will be used for research and for all other purposes, and the particular learning activities proposed.

Furthermore, one example of a broader oversight approach "rejects the assumption that clinical research and clinical practice are fundamentally different enterprises," and suggests that all stakeholders share a moral obligation to contribute to learning and improvement of care. This ethical framework suggests that regulatory oversight should be refocused on specific features of learning activities rather than their motivation. Risk, both physical and informational, would be key, as would knowing whether an intervention involves treatments in common practice or the level of evidence for their use, whether the type of intervention is (or should be) typically discussed with patients and whether analyses require new data collection. Such activities should be conducted with full transparency to patients, with commitments to share findings broadly. As an important aspect of this transparency, regardless of whether investigators are required to obtain formal informed consent from study participants, patients should be informed that their health information is being used for research, as well as for several other purposes.

Harmonized regulation
Regulations of all Health and Human Services agencies should be harmonized to the greatest [sic] extent possible. New regulation should reward more robust engagement of patients in the research process, improve the degree to which uses of data beyond treatment are made transparent to the public, and broaden situations in which informed consent is not required or could be waived, including for systems-level evaluations of practices that healthcare institutions might otherwise introduce without any evaluation. Regulations should further specify circumstances under which waivers of consent, community consent, or streamlined consent procedures would be acceptable for randomized comparative effectiveness studies. Characteristics of such studies might be ones for which all interventions are in common use; eligibility is limited to patients who are good candidates for either approach; adverse effects, mortality, and other features that might engage meaningful patient preferences or values are similar between compared approaches; and privacy risks are minimal. Regulations should not impose barriers to interorganizational collaboration or to widely disseminating important findings. Opportunities to improve outcomes through systematic evaluation of care are substantial; research policies should enable this learning, not overprotect against it.
Credible CER will not be easy to do, notwithstanding the exponentially increasing Health IT-enabled floods of accessible data. Myriad policy questions loom, as the authors note. Moreover, many critics dismiss its potential out of hand, arguing that anything short of formal double-blind clinical trials will be inadequate to advance the state of medical science, and that noisy, error-riddled, apples-and-oranges trainloads of Big Data will do little more than muck things up further. Methodologically, there will in fact be issues relating to "moving-target" study cohorts, stratification issues, and the unavoidable confounding problems that come with "uncontrolled" studies.

Relatedly, if you want to go Full-Bore Naysayer more broadly, consider this from SBM.
Fighting Against Evidence
Posted by Steven Novella on January 29, 2014 (76 Comments)
For the past 17 years Edge magazine has put an interesting question to a group of people they consider to be smart public intellectuals. This year’s question is: What Scientific Idea is Ready for Retirement? Several of the answers display, in my opinion, a hostility toward science itself. Two in particular aim their sights at science in medicine, the first by Dean Ornish, who takes issue with large randomized controlled clinical trials, and the second by Gary Klein, who has a beef with evidence-based medicine...

Gary Klein
It’s rare to see a logical fallacy stated so overtly. Klein could not have crafted a better example of the Nirvana fallacy if he tried:

But we should only trust EBM if the science behind best practices is infallible and comprehensive, and that’s certainly not the case. Medical science is not infallible. Practitioners shouldn’t believe a published study just because it meets the criteria of randomized controlled trial design. Too many of these studies cannot be replicated. Sometimes the researcher got lucky and the experiments that failed to replicate the finding never got published or even submitted to a journal (the so-called publication bias). In rare cases the researcher has faked the results. Even when the results can be replicated they shouldn’t automatically be believed—conditions may have been set up in a way that misses the phenomenon of interest so a negative finding doesn’t necessarily rule out an effect.
Really – unless science is infallible and comprehensive, we should ditch it? Unless we have perfect knowledge of everything we should behave as if we know nothing?

This attitude is not new. It is common in the alternative world. It just usually isn’t stated so boldly.

Again, Klein points out legitimate problems with the institution of science in general, and evidence-based medicine in particular. Yes – there are biases, there are publication issues and failure to replicate. We spend a great deal of time on SBM pointing out and discussing all the various challenges to rigorous science.

Klein and others, however, want to throw the baby out with the bathwater – to ditch scientific evidence, rather than work toward improving it. All of the problems with science in medicine have potential solutions, and we are making progress...
Klein is the "Expert Intuition" guy. I've read his books, which are fine. But, I think his schtick has gone to his head.

Beyond even these critics are those advocating going headlong into "Personalized Medicine," that clinical trials and CER are so Last Century.

What do you think?


SBM is on a tear today.
A tale of quackademic medicine at the University of Arizona Cancer Center
Posted by David Gorski on March 17, 2014 (21 Comments)

Quackademic medicine.

I love that term, because it succinctly describes the infiltration of pseudoscientific medicine into medical academia. As I’ve said many times, I wish I had been the one to coin the phrase, but I wasn’t. To the best of my ability to determine, I first picked it up from Dr. R. W. Donnell back in 2008 and haven’t been able to find an earlier use of the term. As much as I try to give credit where credit is due, I have, however, appropriated the term “quackademic medicine” (not to mention its variants, like “quackademia”), used it, and tried my best to popularize it among supporters of science-based medicine. Indeed, one of my earliest posts on this blog was about how quackery has infiltrated the hallowed halls of medical academia, complete with links to medical schools that have “integrative medicine” programs and even medical schools that promoted the purely magic-based medical modalities known as reiki and homeopathy. It’s been a recurrent topic on this blog ever since, leading to a number of posts on the unethical clinical trials of treatments with zero or minimal pre-trial plausibility, the degradation of the scientific basis of medicine, and the acceptance of magical thinking as a means of treating patients in all too many medical centers...
I got my full lifetime dose of quackery back when my late daughter was dying of cancer. The scams persist, accelerated by the ever-easier dissemination of bullshit on the internet.

Meaningful Use achievement continues to thwart hospital IT departments and care team professionals, and most feel they haven’t gotten everything out of it yet. A lack of resources continues to beleaguer the meaningful use process, and now priorities are shifting to the upcoming challenges of ICD-10 transition and what to do with all that [sic] data.
 From the Stoltenberg Consulting HIT Industry Outlook Survey (pdf)

Yeah, ICD-10. Gonna be interesting. Recall my post ICD-10: W6142XA, Struck by turkey, initial encounter.

More to come...
