The End of Medical Miracles?

Americans have, at best, a love-hate relationship with the life-sciences industry—the term for the sector of the economy that produces pharmaceuticals, biologics (like vaccines), and medical devices. These days, the mere mention of a pharmaceutical manufacturer seems to elicit gut-level hostility. Journalists, operating from a bias against industry that goes back at least as far as the work of Upton Sinclair in the early years of the 20th century, treat companies from AstraZeneca to Wyeth as rapacious factories billowing forth nothing but profit. At the same time, Americans are adamant about access to the newest cures and therapies and expect new ones to emerge for their every ailment—cures and therapies that result primarily from work done by these very same companies, whose profits make possible the research behind such breakthroughs.

Liberals and conservatives appear to agree on the need to unleash the possibilities of medical discovery for the benefit of all. But discovery cannot be ordered up at will. It takes approximately ten years and $1 billion to get a new product approved for use in the United States. Only one in every 10,000 newly discovered molecules will lead to a medication that wins approval from the Food and Drug Administration (FDA), and only three out of every ten new medications earn back their research-and-development costs. Approval success rates are low and may be getting lower—30.2 percent for biotech drugs and 21.5 percent for small-molecule pharmaceuticals.

It is the very nature of scientific discovery that makes this process so cumbersome. New developments do not appear as straight-line extrapolations; a dollar in research does not lead inexorably to a return of $1.50. Researchers will spend years in a specific area to no avail, while other areas will benefit from a happy concatenation of discoveries in a short period. It is impossible to tell which area will be fruitless; so many factors figure into the equation, including dumb luck. Alexander Fleming did not mean to leave his lab in such disarray that a stray mold would contaminate his bacterial cultures, yet that is how he discovered that the mold killed bacteria. Conversely, if effort and resources were all it took, we would have an HIV/AIDS vaccine by now; as it stands, that solution continues to elude the grasp of some of the most talented and heavily funded researchers.

Scientific discoveries are neither inevitable nor predictable. What is more, they are affected, especially in our time, by forces outside the laboratory—in particular, the actions of politicians and government bureaucracies. The past quarter-century has offered several meaningful object lessons in this regard. In the 1980s, for example, the Reagan administration undertook a number of actions, both general and specific, that had a positive effect on the pace of discovery. On the general front, low taxes and a preference for free trade helped create an economic climate favorable to private investment, including in the rapidly growing health-care sector. More specifically, the administration adopted new technology-transfer policies to promote joint ventures, supported passage of the Orphan Drug Act to encourage work on products with relatively small markets, and accelerated the use of certain clinical-trial data in order to hasten the approval of new products. All of these initiatives helped foster discovery.

That which the government gives, it can also take away. As the 1990s began, a set of ideas about health care and its affordability began to gain traction (it seems hard to believe, but the first election in which health care was a major issue was a Pennsylvania Senate race only eighteen years ago, in 1991). Americans began to fear that their health-care benefits were at risk; policymakers and intellectuals on both sides of the ideological divide began to fear that the health-care system was either too expensive or not comprehensive enough; and the conduct of private businesses in a field that now ate up nearly 14 percent of the nation's gross domestic product came under intense public scrutiny.

A leading critic of Big Pharma, Greg Critser, wrote in his 2005 book Generation Rx that President Clinton picked up on public discomfort with drug prices and "began hinting at price controls" during his first term in office. These hints had a real impact. As former FDA official Scott Gottlieb has written, "Shortly after President Bill Clinton unveiled his proposal for nationalizing the health-insurance market in the 1990s (with similar limits on access to medical care as in the [current] Obama plan), biotech venture capital fell by more than a third in a single year, and the value of biotech stocks fell 40 percent. It took three years for the 'Biocentury' stock index to recover. Not surprisingly, many companies went out of business."

The conduct of the businesses that had been responsible for almost every medical innovation from which Americans and the world had benefited for decades became intensely controversial in the 1990s. An odd inversion came into play. Precisely because the work these companies did was life-saving or life-enhancing, it was not deemed by a certain liberal mindset to be of special value and therefore worth the expense. Rather, medical treatment came to be considered a human right to which universal access was required without regard to cost. Because people needed these goods so much, involving the profit principle in them came to seem unscrupulous or greedy. What mattered most was equity; consumers of health care should not have to be subject to market forces.

And not only that. Since pharmaceuticals and biologics are powerful things that can do great harm if they are misused or misapplied, the companies that made them found themselves under assault for injuries they might have caused. It was little considered that the drugs had been approved for use by a federal agency that imposed the world's most rigorous standards and was often criticized for holding up promising treatments (especially for AIDS). Juries were convinced that companies had behaved with reckless disregard for the health of consumers and hit them with enormous punitive-damage awards.

The late 1990s also saw an unpredictable slowdown in the pace of medical discovery, following a fertile period in which new antihistamines, antidepressants, and gastric-acid reducers all came to market and improved the quality of life of millions in inestimable ways. A lull in innovation then set in, and that in turn gave opponents of the pharmaceutical industry a new target of opportunity. An oft-cited 1999 study by the National Institute for Health Care Management (NIHCM) claimed that the newest and costliest products were only offering "modest improvements on earlier therapies at considerably greater expense."

The NIHCM study opened fresh lines of attack. The first came from the managed-care industry, which used it as a means of arguing that drugs had simply grown too expensive. Managed care is extremely price-sensitive, and its business model is built on cutting costs; executives of the industry were well represented on the board of the institute that put out the report. They were, in effect, fighting with the pharmaceutical companies over who should get more of the consumer's health-care dollars.

The second came in response to the FDA's approval in 1997 of direct-to-consumer advertising of pharmaceuticals. The marketing explosion that followed gave people the sense that these companies were not doing life-saving work but were instead selling relative trivialities, like Viagra and Rogaine, on which they had advertising dollars to burn that would be better spent on lowering the cost of drugs. The third element of the mix was the rise of the Internet, which gave Americans a level of price transparency they had not had before regarding the cost differentials between drugs sold in the U.S. and those sold in Canada and other Western countries.

These three factors precipitated a full-bore campaign by public interest groups that bore remarkable fruit over the next several years. By February 2004, Time magazine was publishing a cover story on pharmaceutical pricing, noting that "the clamor for cheap Canadian imports is becoming a big issue." Marcia Angell, a fierce critic of the pharmaceutical industry and the FDA, wrote in the New York Review of Books in 2004 that, "In the past two years, we have started to see, for the first time, the beginnings of public resistance to rapacious pricing and other dubious practices of the pharmaceutical industry."

Harvard's Robert Blendon released a Kaiser Family Foundation poll in 2005 in which 70 percent of Americans said that "drug companies put profits ahead of people" and 59 percent said that "prescription drugs increase overall medical costs because they are so expensive." Overall, noted the foundation's president, Drew Altman, "Rightly or wrongly, drug companies are now the number one villain in the public's eye when it comes to rising health-care costs."

A cultural shift had taken place. Pharmaceutical manufacturers, once the leading lights of American industry, had become a collective national villain.

The life sciences are among the most regulated areas of our economy and are constantly subjected to significant policy upheaval from Washington. Because these products are so expensive to develop, such regulatory and policy whims tend to have a disproportionate impact on investment in the industry. Without investment, there is no research, and without research, there are no products.

According to Ken Kaitin of the Center for the Study of Drug Development at Tufts University, new drug approvals from the FDA are not keeping pace with rising research-and-development spending, which means that recent spending has not been producing results. This raises the question of how long such investments can be sustained if they do not provide a sufficient return for investors.

FDA approval is not the only hurdle for products making their way to market. Manufacturers and investors need to deal with the Department of Health and Human Services at four levels in order to get a product to market and reimbursed. Basic research begins at the National Institutes of Health (NIH), a $30-billion agency that often partners with the private sector on promising new areas of research and that has just received a $10-billion boost from the stimulus package. The FDA then handles approvals of products.

Once a product has been approved, someone must pay for it in order for the product to be used. The Centers for Medicare & Medicaid Services (CMS) determines which products will be paid for by Medicare. Because CMS is the largest single payer in the health-care system, its decisions often help to determine which products will eventually be covered by private insurance companies as well. Finally, the Agency for Healthcare Research and Quality is in the process of increasing its role in conducting post-market product evaluations. These bureaucratic and evaluative hurdles have injected far too much uncertainty into the process and have dried up investment capital for the industry as a whole.

The issue constantly plaguing the industry is cost. Biologic treatments can cost hundreds of thousands of dollars, which can lead both Medicare and other insurers to refuse to cover certain treatments (even though Medicare typically does not take cost into account in its coverage decisions). When they are available, these treatments can bankrupt individuals and hasten the impending insolvency of our Medicare trust funds. As a result, politicians on both the left and the right are examining various schemes for controlling costs.

One of the perennial ideas in this area is the re-importation of drugs from foreign countries. Most countries around the world impose price limits on pharmaceutical and other medical products. The United States does not do so, and as a result, prices for brand-name pharmaceuticals are higher in the United States than in other countries. Although there has been little appetite for the imposition of direct price controls in the United States, there have been some indirect efforts, including the re-importation of drugs from other countries, especially Canada.

This is a problematic notion. First, it is not clear that re-importation would save money; the states that have created their own importation programs have generated little interest. Second, there is the question of safety. We have an FDA precisely to guarantee the safety of the products sold in the United States, and Americans ordering drugs from abroad have no such guarantee of their provenance. Indeed, clever counterfeiters have been known to stamp "made in Canada" on products of questionable Third World origin, from places Americans would never think of buying.

But there is also a philosophical reason to avoid importation schemes. We have a market economy, and this system allows U.S. firms to put in the dollars for research and development that make innovative new products possible. Elizabeth Whelan of the American Council on Science and Health has observed that the U.S. "produces nearly 90 percent of the world's supply of new pharmaceuticals." The pricing structure in the United States, which allows American firms to recoup research costs, makes this industry the dominant player in the global medical-products market. Without it, innovation could grind to a halt, and future generations might not benefit from life-saving, life-extending cures just over the horizon.

And this loss is not just theoretical; it can be quantified. A study by the Task Force on Drug Importation convened by the Department of Health and Human Services found that the loss of profits caused by re-importation could mean between four and eighteen fewer new drugs per decade. There is no way to know which promising advances would be lost.

At the same time, as Sally Pipes of the Pacific Research Institute has shown, the American system also lets its consumers obtain many products far more cheaply than consumers in other nations. This is because our competitive system allows for low-cost generic drugs that drive down prices. Brand-name products are more expensive here, but generics, which are available after patent protections expire, are cheaper and more widely available in the United States than elsewhere.

Another factor that reduces the availability of research-and-development investment is the growth of lawsuits. According to a Pacific Research Institute study, "American companies suffer over $367 billion per year in lost product sales because spending on litigation curtails investment in research and development." A recent analysis by the Washington Post found that "courts have been flooded with product liability lawsuits in recent years, and statistics show about a third are against drug companies."

The pain reliever Vioxx alone has prompted over 27,000 lawsuits against Merck, its manufacturer. Merck spent $1 billion defending itself before reaching an almost $5 billion settlement. That figure is actually relatively small compared with the $21 billion Wyeth spent as a result of the recall of the diet-drug combination fen-phen. Most famously, perhaps, Dow Corning was forced to declare bankruptcy after being flooded with over 20,000 lawsuits and 400,000 claimants over its silicone breast implants, despite evidence that has since demonstrated definitively that the implants are not harmful.

The results are unmistakable. Dow Corning was forced to remain in bankruptcy for nine years. The uncertainties introduced by lawsuits and by changes in the policy environment have led firms to band together through mergers: Wyeth was recently taken over by Pfizer; Merck has merged with Schering-Plough; Roche is buying up Genentech. These mergers mean that, by definition, there will be fewer laboratories competing to develop new drugs.

The situation as it stands now is bad enough. But it would be made worse by the proposed elimination of an FDA policy called preemption, which holds that when federal law conflicts with state or local law on drug matters, federal law prevails. This is particularly important for the FDA, which sets federal safety standards: preemption makes those standards supreme, barring trial lawyers from suing manufacturers who have adhered to FDA guidelines merely because state law differs. A recent Supreme Court case, Wyeth v. Levine, opened up this issue by ruling that the FDA's approval of a medication does not protect the drug's maker from state-level lawsuits, thereby limiting preemption's scope.

The case concerned a musician who received an injection of the anti-nausea medication Phenergan. The medication's label warns that it can cause gangrene if it enters an artery, which is what happened here, and the musician had to have her arm amputated. She sued Wyeth, the drug's maker, and a jury awarded her $7.3 million, even though the FDA had approved the medication with a label warning of this very danger.

This decision presents manufacturers with increased vulnerability to litigation, and will likely encourage them to be far more cautious, and to seek new products that minimize risk rather than ones that maximize benefits. Such increased caution will lead drug makers "to pester the FDA with even more requests to augment safety warnings, reinforcing an existing tendency toward over-warning rather than under-warning," notes Jack Calfee of the American Enterprise Institute. The result of over-warning is likely to be fewer drugs approved with any kind of risk profile, which will also limit the scope of potential benefits.

The policy consequences of Wyeth v. Levine could be far-reaching. The Supreme Court had earlier ruled in favor of preemption, but that decision involved medical devices. By confining preemption to that area, the court may have created a situation in which the FDA adds years and billions of dollars to the cost of new drug approvals on one end but carries little weight as the definitive word on safety on the other.

All of this comes at a time when new drug approvals are far lower than they were a decade ago, and approvals of products in Phase III, the last level of assessment before products go to market, have declined in recent years—indicating that FDA officials may be nervous about being second-guessed. One of the Obama FDA's first actions has been a new initiative to review the safety and efficacy of 25 medical devices marketed before 1976, a curious act that will have the agency devote precious resources to reexamining old technologies rather than reviewing new ones.

These activities raise the troublesome possibility that the FDA will adopt the old motto of the cautious bureaucrat: "You won't be called to testify about the drug you didn't approve." But when innovation is squelched, who can testify for the life that would have been saved? FDA timidity is especially problematic and often detrimental to public health when it comes to risky new drugs for cancer or new antibiotics for increasingly resistant infections.

Then there is the looming shadow of health reform. One of the great requirements of a systemic overhaul is controlling health-care costs, which reached $2.5 trillion last year and are growing at triple the rate of inflation. It is clear that Congress and the administration will have to cut costs in order to come close to paying for an ambitious plan. How they do so could have a devastating impact on medical innovation.

Attempts to universalize our system and pay for it with cost controls that could stifle innovation contradict their own goal, which is, presumably, better health. They also embrace the notion that you can get something for nothing—namely, that you can get innovative new discoveries and better health outcomes without paying for those discoveries to come into being.

We forget the power of the single-celled organism. For most of man's existence on earth, the power of a single-celled organism to snuff out life was an accepted—and tragic—way of the world. Human beings could be wiped out in vast communicable plagues or simply by ingesting contaminated food or water. In the last century, the advent of the antibiotic changed all that. For millennia, the only cure for an infection in humans was hope. Today, antibiotic use is so common that public health officials struggle to get people not to overuse antibiotics and thereby diminish their effectiveness.

Just as there is danger in the way Americans take the power of the antibiotic for granted, so, too, one of the greatest threats to our health and continued welfare is that Americans today, and particularly their leaders, take for granted the power, potency, and progress flowing from life-saving medical innovations. In so doing, they may unknowingly prevent the kind of advance that could do as much for the welfare of the 21st century as the discovery of the antibiotic did, for the better, in the century just concluded.