The 21st Century Cures Act has been lauded as a bipartisan success. It’s actually the result of a long war on drug regulation.

Christmas came early for the pharmaceutical industry this year. Last week, the Senate followed the House in passing the 21st Century Cures Act. Though this bill has been lauded by liberals for providing much-needed funds for medical research, its real impact will be elsewhere. Whereas drug approval traditionally required the demonstration of real clinical benefit in a randomized clinical trial, under the Act drug firms will increasingly be able to rely on flimsier forms of evidence to win approval of their therapies (incremental steps in this direction, it is worth noting, have already occurred). The Act, by reconfiguring the drug regulatory process, lowers the standards for drug approval—a blessing for drug makers, but an ill omen for public health.

In the Senate, a grand total of five senators—including Bernie Sanders and Elizabeth Warren—voted against it. The media, meanwhile, has for the most part done a poor job dissecting its actual contents. As a result, few now realize how detrimental the act is likely to be for drug safety, or appreciate the mix of conservative ideology and pharmaceutical industry greed underlying the longstanding campaign that brought it to fruition.

The thinking behind the 21st Century Cures Act—and like-minded proposals—goes something like this: In the twenty-first century, the pharmaceutical industry—driven by the profit motive—continues to do a fine job innovating new therapies. Far too often, however, it is held back by risk-averse, slow-moving FDA bureaucrats with outdated standards for approval. “Modernize” the FDA—release the cures! Yet if the law did nothing other than weaken FDA standards, it might not have passed: Liberals understandably embraced the act’s new NIH funding, its mental health provisions, and its support for state anti-opioid programs. For Democrats, it also represented the sort of bipartisan “victory” that shows that all is not gridlock in Washington, after all.

Yet this thinking is flawed on multiple levels: “We need to remember,” as former editor-in-chief of the New England Journal of Medicine Marcia Angell wrote in her 2004 pharmaceutical exposé, The Truth About the Drug Companies, “that much of what we think we know about the pharmaceutical industry is mythology spun by the industry’s immense public relations apparatus.” First among these myths is the notion that the status quo of private sector drug research and development is the best of all worlds. On the contrary, as Angell put it, “me-too” drugs—lucrative, duplicative agents that do not improve on existing therapies—are in fact the “main business of the pharmaceutical industry.” We can’t rely on the profit motive to bring forth new cures when it’s just as easy for companies to make big profits by redesigning or tweaking drugs that already exist.

Second, the notion of a slow-moving, risk-averse FDA is wrong: If anything, the agency’s drug review process is sometimes too hasty, while its standards of evidence for approval are frequently too lax. Consider, for instance, two recent studies of new cancer drugs. The first—published a year ago in JAMA Internal Medicine by Chul Kim and Vinay Prasad—looked at cancer drugs approved by the FDA on the basis of “surrogate endpoints” between 2008 and 2012. “Endpoints” is a term for outcomes: Hard clinical endpoints refer to outcomes such as survival, where the benefit to the patient is unambiguous. Surrogate endpoints, by contrast, refer to metrics like the change in the size of a tumor on a CT scan. Though a shrinking tumor sounds like a good outcome, it is only meaningful if it translates into an improvement the patient actually experiences, like a longer life or a better life. Often, however, that’s not the case: New therapies can change numbers without improving our actual health. This is what Kim and Prasad found: Of the 36 drugs approved on the basis of surrogate endpoints, at least half had no demonstrated survival benefit.

Perhaps they had other benefits? Or perhaps not. In late November, Tracy Rupp and Diana Zuckerman in the same journal examined these 18 drugs, and found that not only did they not improve survival, but only one had evidence that it improved quality of life (the others lacked data or had no effect, negative effects, or mixed effects). Despite this lack of benefit for either the quantity or quality of life, they note, the FDA withdrew approval for only one drug. Those drugs that either didn’t improve or actually worsened quality of life continue to be sold at an average price of $87,922 per year. Not a bad return for a basically useless drug.

How has this state of affairs come about? At least in part because, as scholar Aaron Kesselheim and colleagues describe in a 2015 study in the British Medical Journal, a total of five new “designations” and one new pathway (“accelerated approval”) have been created since 1983 to lubricate the drug approval process. As they find in their study, as of 2014 some two thirds of drugs are now being reviewed through one or more of these expedited programs, which sometimes allow them to be approved more quickly, in some instances with skimpier evidence.

The 21st Century Cures Act will only take us further down this road. Indeed, as Trudy Lieberman has written at Health News Review, the bill is best seen as the “culmination of a 20-year drive by conservative think tanks and the drug industry that began during the Clinton Administration to ‘modernize’ the FDA.” PhRMA—the industry’s primary lobbying group—alone spent $24.7 million on Cures Act-related lobbying, according to data assembled by the Center for Responsive Politics and reported by Kaiser Health News. No less important, however, are the industry’s generous campaign contributions, which have helped construct a compliant and conducive political climate in Washington over the years.

The act reverses many of the protections that stemmed from the 1962 Kefauver–Harris Amendments, signed by President John F. Kennedy, which bolstered the FDA’s regulatory powers: These reforms meant the agency could require proof not just that a drug was safe, but that it actually worked, prior to approval.
