
Friday, November 30, 2007

In the name of scientific humility, here is another excellent article from yesterday's issue of the NEJM:

The New England Journal of Medicine
Perspective
Volume 357:2219-2221  November 29, 2007  Number 22

In Defense of Pharmacoepidemiology — Embracing the Yin and Yang of Drug Research
Jerry Avorn, M.D.


The past decade has not been kind to observational studies of medications. The damage began in 1998 with the publication of the Heart and Estrogen–Progestin Replacement Study, a randomized controlled trial showing that hormone replacement increased the risk of cardiac events among postmenopausal women with heart disease. Like many physicians, I had been teaching the gospel that estrogen use prevented heart disease — an idea based on observational studies [1] showing that postmenopausal women who regularly took estrogen were less likely to have heart disease than apparently similar women who did not take hormones. It now appeared that this had been a misleading conclusion that may have led to drug-induced disease in millions of patients. A heavier blow came in 2002 with publication of an even larger randomized trial from the Women's Health Initiative (WHI), demonstrating that women without preexisting heart disease who were given estrogen also had an increase in cardiac events — along with expected increases in breast cancer, thrombophlebitis, stroke, and pulmonary emboli. Other WHI trial data refuted the conclusions of observational studies that estrogen users were less likely than nonusers to develop dementia, depression, or incontinence.

More disconnects followed. Epidemiologic findings that dementia or cancer was less likely to develop in statin users than in nonusers were not borne out by clinical trials [2]; other observational studies suggesting that patients treated with statins really had higher cancer rates didn't pan out either. Next, clinicians, policymakers, and families were alarmed by the contention — based on a limited number of cases and so far unsubstantiated — that children given stimulants for attention-deficit disorder are at increased risk for potentially fatal heart disease. In January 2007, the BMJ ran an article arguing that, as its headline stated, "Observational studies should carry a health warning," on the grounds that analyses in such studies can produce unreliable findings. In September, the New York Times Magazine published a controversial cover story suggesting that epidemiologic studies of drugs and diet often arrive at bogus conclusions. At the Food and Drug Administration (FDA), drug-epidemiology activities have long been relegated to second-class status, resulting in a failure to identify many important drug risks in a timely manner [3].

So is this the end of the line for such research? Not by a long shot. Although the traditional means of assessing drugs, the randomized, controlled trial, is the rightfully enshrined standard for determining efficacy, such studies often have important drawbacks (see box): limited size, making it difficult to detect uncommon adverse events; short duration, even for drugs designed to be taken for a lifetime; frequent reliance on placebos as the comparator, limiting clinical relevance; termination after a surrogate end point is achieved, without measurement of real clinical effects; and underrepresentation of patients with complex health problems, especially the elderly. Some of these limitations result from lax study protocols that are proposed by manufacturers and are too readily accepted by the FDA. But others are inherent in the randomized, controlled study design. No trial could ever be large enough to gather enough data to quantify the risks of all uncommon side effects, and studies lasting long enough to document all clinical outcomes would be impractical if they required many years of follow-up before approval could be granted. In addition, since the FDA generally doesn't require head-to-head comparisons of similar drugs, preapproval trials are unlikely to provide the comparative data that physicians, patients, and payers need. Improvement in study designs could address some of these problems, but many are hardwired into the nature of randomized, controlled trials (see box).

[Box: Strengths and Weaknesses of Randomized Controlled Trials and Observational Studies of Medications.]


Fortunately, much of the information we need can be found through careful epidemiologic analysis of data being routinely amassed through the computerization of nearly all health care transactions, from prescriptions to hospitalizations to laboratory tests — a wealth of readily mobilized insights about real-world drug use and outcomes in millions of patients who have been followed for years. Data can also be collected through interviews, registries, and other methods that can be rigorous, practical, and cost-effective. New FDA regulations and funding that took effect in October [4] may give postmarketing drug surveillance a more central role both inside and outside the agency.
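As one concrete illustration of how such routinely collected records can be mobilized, the sketch below assembles a small new-user cohort from two toy claims tables (prescription fills and hospitalizations) and links each patient to a subsequent outcome. The table layouts, column names, washout window, and outcome definition are all assumptions invented for illustration, not a description of any real database or of the author's own work.

```python
# Minimal sketch: building a new-user cohort from hypothetical claims tables.
# Column names, washout window, and follow-up rules are illustrative assumptions.
import pandas as pd

# Prescription fills: one row per dispensing.
rx = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3],
    "drug": ["statin", "statin", "statin", "statin", "statin"],
    "fill_date": pd.to_datetime(
        ["2006-01-10", "2006-04-12", "2006-03-01", "2005-11-20", "2006-02-15"]),
})

# Hospitalizations: one row per admission, with a discharge diagnosis.
hosp = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "admit_date": pd.to_datetime(["2006-06-01", "2006-02-01", "2006-09-30"]),
    "diagnosis": ["MI", "fracture", "MI"],
})

# "New user": each patient's first fill, required to fall after the start of
# observation plus a washout period (assume data begin 2005-06-01, 180-day washout).
first_fill = rx.groupby("patient_id", as_index=False)["fill_date"].min()
first_fill = first_fill.rename(columns={"fill_date": "index_date"})
data_start = pd.Timestamp("2005-06-01")
cohort = first_fill[first_fill["index_date"] >= data_start + pd.Timedelta(days=180)]

# Link outcomes: flag hospitalizations for the outcome of interest ("MI")
# that occur after the index date.
events = cohort.merge(hosp, on="patient_id", how="left")
events["outcome"] = (
    (events["diagnosis"] == "MI") & (events["admit_date"] > events["index_date"])
).astype(int)
print(events[["patient_id", "index_date", "outcome"]])
```

In a real study the same joins would run over millions of rows, with far more careful definitions of exposure, washout, and outcome, but the mechanics of mobilizing transaction data look much like this.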

But will that merely generate more misleading findings of illusory benefits and phantom risks? Not if we do things right. Pharmacoepidemiology is still in its adolescence, with all the characteristics that implies: expansive energy, huge potential, limited experience, a sense of infallibility, accident-proneness, and occasionally impaired judgment. Many of us who work in this area recognize the need to advance the discipline's methodologic sophistication to prevent the sort of glib conclusions that have bedeviled the field; that arcane work is making important strides [5]. We are learning key lessons: that without randomization, physicians and patients select therapies in ways that can introduce substantial confounding; that the trendy approach of propensity-score analysis, in which researchers try to statistically align the characteristics of users and nonusers of a drug, often is not powerful enough to eliminate such confounding; that the best observational research mimics the randomized, controlled trial by studying new users of a drug in comparison with new users of another medication, not just analyzing the "survivors" who have been filling their prescriptions consistently and have probably had the most success with their treatment regimens; that the presence of a diagnostic code on a billing form does not necessarily correspond to the presence of a given disease; that occasional mailed surveys are not an adequate means of assessing actual medication exposure; and that subtle misdefinition of clinical outcomes or periods of drug use can introduce substantial bias. Many of the gaffes of errant pharmacoepidemiologic studies can be traced to such methodologic lapses, which we are gradually learning how to prevent.
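To make the propensity-score idea concrete, here is a minimal sketch of one standard textbook approach, not a reproduction of any particular study: a logistic regression estimates each patient's probability of receiving the drug from measured covariates, and outcomes are then compared using inverse-probability-of-treatment weights. The simulated data and variable names are assumptions made for illustration, and, as the article cautions, this kind of adjustment can only balance covariates that were actually measured.

```python
# Minimal propensity-score sketch on simulated data.
# Treatment assignment depends on age and comorbidity, so a crude comparison of
# users and nonusers is confounded; weighting by the inverse of the estimated
# propensity score attempts to remove that (measured) confounding.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(65, 10, n)
comorbidity = rng.poisson(2, n)

# Sicker, older patients are more likely to receive the drug (confounding by indication).
p_treat = 1 / (1 + np.exp(-(-6 + 0.07 * age + 0.3 * comorbidity)))
treated = rng.binomial(1, p_treat)

# True model: the drug has no effect; outcome risk rises with age and comorbidity.
p_outcome = 1 / (1 + np.exp(-(-8 + 0.08 * age + 0.4 * comorbidity)))
outcome = rng.binomial(1, p_outcome)

# Crude (confounded) risk difference between users and nonusers.
crude = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Propensity score: probability of treatment given the measured covariates.
X = np.column_stack([age, comorbidity])
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Inverse-probability-of-treatment weights: 1/ps for treated, 1/(1-ps) for untreated.
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
weighted_treated = np.average(outcome[treated == 1], weights=w[treated == 1])
weighted_untreated = np.average(outcome[treated == 0], weights=w[treated == 0])
adjusted = weighted_treated - weighted_untreated

print(f"crude risk difference:    {crude:+.3f}")
print(f"adjusted risk difference: {adjusted:+.3f}  (true effect is 0)")
```

Running the sketch shows the crude contrast inflated by confounding by indication while the weighted contrast sits near the true null; any confounder left out of the model would leave the adjusted estimate biased as well, which is exactly the limitation the article describes.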

When done correctly, epidemiologic studies of drug effects can be both more conceptually demanding and more powerful than the average randomized, controlled trial, especially in assessing drug safety. Undertaking a randomized trial requires considerable resources, planning, human-studies approvals, and time; the design standards for such trials are well established, and these studies are increasingly subject to registration and public scrutiny. But anyone with a few hundred thousand dollars can buy access to a database and perform an observational study, analyzing the data in myriad ways. Since such studies are not currently registered, there is no way of knowing how many analyses were attempted — enormous numbers of alternative designs can be run on a computer before a preferred result is selected for publication. This is particularly worrisome, because such research is often sponsored by stakeholders who may have billions of dollars riding on the outcome.
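That concern can be made concrete with a small simulation: when a drug has no effect at all, an analyst who is free to run many subgroup-and-outcome analyses on the same database and report only the most favorable one will find a "significant" association most of the time. The numbers below (40 analyses per database, 200 simulated databases) are arbitrary assumptions chosen only to illustrate the arithmetic.

```python
# Minimal sketch: why unregistered, repeatedly re-run analyses are worrisome.
# The drug has no effect in these simulated data, yet running many alternative
# analyses (subgroups x outcome definitions) and keeping the smallest p-value
# routinely yields a "significant" finding by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients = 2000
n_analyses = 40          # e.g., 8 subgroups x 5 outcome definitions
n_databases = 200        # repeat the whole exercise 200 times

false_positive_runs = 0
for _ in range(n_databases):
    treated = rng.integers(0, 2, n_patients).astype(bool)
    p_values = []
    for _ in range(n_analyses):
        # A continuous "outcome" that is unrelated to treatment.
        outcome = rng.normal(0, 1, n_patients)
        _, p = stats.ttest_ind(outcome[treated], outcome[~treated])
        p_values.append(p)
    if min(p_values) < 0.05:   # the one analysis a sponsor might choose to publish
        false_positive_runs += 1

print(f"runs with at least one 'significant' result: "
      f"{false_positive_runs / n_databases:.0%}")   # roughly 1 - 0.95**40, about 87%
```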

We forget how difficult it was to establish the rules of the road for conducting randomized trials. In terms of design theory and public policy, drug-epidemiology research is now where randomized trials were in the 1950s. We have much to learn about methods, transparency, and protecting the public's interest. But that work can be done, and we often have no other way of gathering vital insights.

A Zen garden in Kyoto, Japan, contains 15 large stones surrounded by gravel. Monks meditate on the fact that one cannot see all 15 rocks from any one point; no matter where you sit, at least 1 stone is blocked by another. Like one of these perspectives, randomized trials offer one kind of knowledge but prevent us from seeing other properties of a drug. Epidemiologic studies can help elucidate those properties but may introduce new blind spots.

The two approaches are a little like rocket ships and telescopes. Space flights got us to the moon and beyond, but astronomy — noninterventionally — revealed some of the basic secrets of the universe and made space flight possible. Is there one better way of knowing — aeronautics or astronomy, invasive or analytic method, yang or yin? This is a nonchoice: to understand everything we should know about a drug, we must do both kinds of research with rigor and with humility.

No potential conflict of interest relevant to this article was reported.


Source Information

Dr. Avorn is a professor of medicine at Harvard Medical School and chief of the Division of Pharmacoepidemiology and Pharmacoeconomics at Brigham and Women's Hospital — both in Boston.
