Removing Barriers to Medical Innovation
Enormous improvements in human health have taken place over the last half-century. Americans live, on average, a decade longer. The most prevalent cause of mortality in the United States — cardiovascular disease, which causes heart attacks and strokes — has decreased by over 50%. Cancer, the number-two cause, has become far more treatable, and cancer mortality rates have plummeted in recent years. A few decades ago, it was common to see individuals whom arthritis had deformed and condemned to immobility and chronic pain. Now, such victims are uncommon. Mental-hospital wards that used to house patients with intractable psychoses and depression either sit empty or have been repurposed. The outbreak of the HIV/AIDS epidemic in the 1980s elicited apocalyptic predictions that it would bankrupt the health-care system. But within two decades, many HIV-infected individuals were living normal, albeit medicated, lives.
The driving force behind these remarkable improvements is unquestionably medical innovation: new technologies including medications, medical devices, and improved medical procedures. But most people are unaware of who or what deserves our gratitude for these advances.
In a 2015 commentary in the New York Times about medical innovation, Nobel Laureate economist Joseph Stiglitz wrote, "As it is, most of the important innovations come out of our universities and research centers, like the National Institutes of Health, funded by government and foundations." If asked, most people would agree: The credit for medical innovation belongs to research performed at universities and in non-profit research institutes. And, for more than half a century, the American government has provided most of the funding for such research, creating what might usefully be termed a "government-academic biomedical complex."
Most people are aware that universities and government- and foundation-funded research centers do not actually manufacture innovative products. But they conclude, based on such statements from authorities like Stiglitz, that the private firms that do accomplish this task do so simply by exploiting the discoveries arising from universities and research centers. This conclusion, which is false, is one manifestation of what I have called "the pharmaphobia narrative": a set of ideas that demonizes the private medical-products industry, and that is promulgated by the principal beneficiaries of the relationship between the government and research institutions, such as university officials and medical-journal editors.
Powerful cultural and economic forces in universities and research institutes combine with the pharmaphobia narrative to make this government-academic biomedical complex not just relatively inefficient for achieving medical innovation but even counterproductive to it. It is time to reduce the power of this complex to dictate the direction of research and build a complementary partnership between academic institutions and private industry. A private-academic partnership would allocate resources and talent far more effectively, paving the way for further life-enhancing medical innovations.
The history of medical innovation can be divided into three eras. The first consists of the centuries during which knowledge potentially relevant for medical treatments slowly accumulated, through the study of human anatomy, the realization that blood circulates, and other insights. This information, however, did not stem the practice of useless and harmful bleeding and purging therapies, which were based on ancient theories and legends. The more recent adoption of effective measures such as antisepsis and anesthesia also had to overcome stubborn resistance from the medical profession.
The second era of medical innovations came in the 19th century with the establishment of universities and independent institutes in Europe, and subsequently in North America. Innovations originating from these institutions included a wide variety of revolutionary treatments: the identification and potential eradication of disease-spreading insects; effective treatments of infectious disease with antitoxins and antibiotics; vaccines to prevent diphtheria, pertussis, tetanus, and polio; anticoagulants ("blood thinners"); and vitamin and hormone-replacement therapies.
In this second innovation era, medical advances did in fact largely arise in universities and research institutes. Private industry's role was to provide manufacturing capabilities that enabled the discoveries to benefit patients, and it was the principal source of the chemical compounds that clinicians — often in academic settings — tested for therapeutic effects.
The essential contributions of science in general and medical innovation in particular to the Allied victory in World War II ushered in the third and current era of innovation. Franklin Roosevelt's science advisor, Vannevar Bush, called in 1945 for a sharply increased federal investment in research. Bush's entreaty, entitled "Science, the Endless Frontier," emphasized the importance of basic research unlinked to practical ends as the best route to innovation. The result of Bush's effort and impassioned lobbying by medical academics was that the federal government markedly increased its spending on the National Institutes of Health, such that it became the major source of medical-research funding in the United States. Its "intramural" facility in Bethesda, Maryland, emerged as the country's largest research institute. For decades, Congress allocated generous and growing funds to the NIH that enabled it to provide many research grants to universities. As a result, universities expanded their laboratory facilities and research faculties — and the government-academic biomedical complex, or GABC, was born.
Truth be told, however, in this era private industry, rather than the GABC, has progressively become the dominant source of new medical products. By 1990, corporate investment in medical research had exceeded that of the NIH, and it is now over twice as large. Three separate analyses concluded that 85% of the drugs approved by the FDA since 1988 arose solely from research and development performed within the industry.
An important fact to remember about medical innovation is that, unlike engineering advances informed by physics and chemistry, it relies on the far more uncertain underpinning of biology. Although insufficient information is one reason for this uncertainty, the dominant problem is the biological variability that evolution has programmed into us. Highly mutable micro-organisms dominate our world and use the components of our bodies to colonize and sicken us. If our responses were static and predictable, these invaders would have eliminated our species eons ago. The liability of this unpredictability is that drugs that seem promising in the laboratory or in inbred animals more often than not fail in clinical trials. As a result, the basic science discoveries championed by Vannevar Bush, though extremely valuable in some ways, are not reliable or straightforward guarantees of medical innovation. Regardless, the GABC continues to pour money into the pursuit of basic science, often in the name of medical innovation, for a number of cultural and economic reasons.
A CULTURE OBSTRUCTING INNOVATION
Given that the names of the majority of the institutes at NIH refer to specific diseases, one might assume that most of the research funded by that agency relates to finding treatments for those illnesses. But the vast majority of their research has little immediate relationship to diseases or treatments. The same is true of other research institutions as well: A 2003 survey of over 25,000 research articles published by academic investigators in prestigious basic biomedical science journals revealed that only 100 (0.4%) even mentioned a practical implication of the results presented. Notably, of those 100 published projects, only 20 resulted in clinical trials; this outcome was eight times more likely when one of the project participants had a relationship with private industry.
Several factors related to academic culture explain this disconnection. In the second medical-innovation era, prior to the growth of the GABC, investigators in medical schools and research institutes were predominantly physicians who addressed whole-organism physiology and causes of disease. Non-physician chemists importantly enabled these physician-researchers to identify potentially therapeutic chemicals, which became some of the treatments mentioned above, such as vitamins and hormones.
With an increase in government funding for research, however, came an influx of non-medical scientists into biological research. Technical advances such as sophisticated microscopes and other imaging devices, the ability to identify and characterize the body's component molecules (particularly genes), and techniques for modifying those components and observing the effects of such modifications provided ample opportunities for these researchers. Other methodological improvements enabled chemists, rather than identifying new chemicals, to screen existing ones for effects on what biologists considered potential therapeutic targets. The elegance, rationality, and clarity of these technical approaches, which favored studying single cells and their internal components, understandably seemed more intellectually appealing than the messier, more empirical work of studying whole animals, patients, or their internal tissues and organs. Accordingly, as medical-school faculties expanded thanks to the infusion of government funds, they gravitated toward this kind of reductionist science.
The trend toward reductionism has resulted in a common preoccupation with finding "molecular mechanisms" to explain biological phenomena. How something works has become as important as, or more important than, whether it works. Yet many of the most revolutionary medical innovations of the second innovation era came about without any understanding of their underpinnings at the time of their introduction — or even long afterwards. Vaccines, X-rays, ether anesthesia, antibiotics such as sulfonamides and penicillin, and aspirin as a treatment for pain and fever were discovered by chance, not molecular analysis.
The drift toward reductionism is related to another suboptimal trend in medical science: an obsession with novelty. Sociologists and historians of science, such as Robert Merton and Thomas Kuhn, have identified this tendency among scientists in general; they point out that what researchers are predominantly after is credit for the priority of their work. This impulse accounts for the historical importance of the scientific journal for the advance of science. Publication of new findings affords researchers the priority they seek. In addition, researchers attempt to impress influential peers with the novelty of their findings and their theoretical or experimental virtuosity in solving difficult scientific puzzles. Attesting to the accuracy of this contention, the protocols for reviewing NIH grant applications explicitly include "novelty" as a criterion for evaluation and ascribe high value to it in calculating priority scores that determine whether a grant will be awarded.
The rewards in universities, such as promotion and tenure, accrue from publication of findings that other researchers consider impressive. Academic culture renders influential peer recognition the dominant measure of success. The fact that the peer reviewers of papers submitted for publication are embedded in the academic culture (as are many of those who review research grant applications) has assured the ascendancy of reductionism. It also ensures that the measure of value of a researcher's work is not medical innovation, but publication in prestigious journals that, in turn, serve as the currency for obtaining research grant funding.
Although revered by academics as a quality filter, "peer review" of research papers submitted for publication (and of grants for research funding) is a flawed enterprise. As scientific journals found success in providing researchers the priority and credit they were looking for, the volume of submissions began to exceed the supply of journals' publication space. The practice of peer review — having selected experts render opinions regarding the quality of articles submitted to journals — was designed to solve that problem. Today, electronic publication has eliminated the space problem, but a prestige hierarchy of journals has replaced it with a false scarcity. Researchers covet attention in the most prestigious journals, and the high-profile journals sustain their elevated status by arbitrarily rejecting the majority of articles submitted to them. The monopoly power of these journals, fueled by researchers' vanity, allows indifferent editors to delay decisions about whether to publish research articles until dueling authors and reviewers come to a resolution. The referees of these disputes provide a quality of service that would be expected from the nature of the reviewers: anonymous, unpaid cronies or competitors of a paper's authors. As a result, research data can languish in obscurity for months or years while authors work their way down the prestige pecking order and finally obtain a place to publish.
Researchers in the private-industry sphere also undergo an evaluation process, though a quite different one. The evaluation of private industry's innovation is done by regulators such as the FDA: full-time paid analysts review, in real time, large amounts of original data submitted by the companies developing products. While regulatory oversight, investors' due diligence on innovation prospects, and academic peer review all tend toward risk aversion, the more immediate promise of societal returns from innovation makes investors' analysis less so. In general, investors' decisions are faster than journal and grant review, and, unlike journal submissions, aspirants for investment can approach many potential investors at once. Investors — putting their assets on the line — are more likely to make responsible decisions than "disinterested" parties with little or nothing to lose.
Attesting further to the inadequacy of publication to promote innovation, even results reported in high-profile medical journals frequently do not withstand the test of time and are not reproducible. The problem of reproducibility has gained attention in recent years, particularly after a prominent Dutch psychologist admitted in 2011 to fabricating data in multiple published studies. But even those who haven't used falsified data may be wrong. For example, when a group of 270 researchers attempted to reproduce the results of 100 studies published in prestigious psychology journals, they succeeded just 39 times. Sadly, this trend will likely continue. Such reproducibility studies have gained a great deal of attention recently, but academia still values novelty for novelty's sake, and purported "breakthroughs" garner far more attention than confirmation of earlier results.
Achieving innovation requires wanting to innovate more than trying to impress reviewers of research papers or grant applications. It involves trial-and-error efforts that academic-review committees dismiss as "fishing expeditions" and that violate the scholarly premium on "hypothesis-driven" studies. Success in academe also demands sticking to one's research "brand." By contrast, innovation usually requires shifting gears to employ different technologies and experimental approaches. Such inconsistency reliably leads grant-application reviewers to discount an applicant's qualifications.
Innovation also requires more team effort than the academic reward culture affords. Academics celebrate individual heroism, as evidenced by the celebrity status conferred by election to elite societies or the garnering of research prizes. The conventional limitation of the latter to a maximum of three recipients reveals that the main purpose of these awards is to serve as marketing for impressive science. Most scientific achievements, including practical advances, represent the efforts of many contributors, and determining who did what is often nearly impossible.
Sometimes, of course, such academic scientific research can contribute importantly to medical innovation. Indeed, nearly every medical innovation can be linked at least historically to some remote information unearthed by scientific inquiry — which is why basic research in universities or medical schools does indeed provide value. Knowledge concerning drug targets and mechanisms of action has sometimes led to the development of new medications involving those targets and actions, as well as improved efficacy and safety outcomes. A deep understanding of the immune system has informed the development of recent therapies that appear to be somewhat effective against previously untreatable cancers. Knowledge concerning the intricacies of blood coagulation has led to effective ways to prevent heart attacks and strokes. The introduction of widely used non-steroidal anti-inflammatory medications (or NSAIDs, like ibuprofen) resulted from an understanding of how aspirin works.
But such progress is rarely straightforward or predictable, let alone clean and elegant. Indeed, the stories underlying most drug-innovation projects reveal enormous randomness and the need to adapt to unexpected results. For example, researchers predicted that, based on their mechanisms of action, certain drugs might be useful for lowering high blood pressure. They weren't, but testing them unexpectedly led to the blockbuster erectile-dysfunction drugs. Such serendipity is more the rule than the exception.
The cultural aspects of device development are somewhat different from those that impact drug innovation. Device invention is more akin to private-sector product R&D in that it involves iterative, trial-and-error tinkering rather than hypothesis-driven experimentation. Device inventors tend to be not academic researchers but clinicians who identify innovation needs from hands-on experience. They are more set on solving problems than impressing peers.
One specific example of a device research success is the management of critically narrowed or obstructed arteries feeding blood to the heart. Strategies using drugs to minimize heart-muscle damage caused by reduced blood flow had failed, so medical innovators experimented with bypass surgical procedures, catheters, and stents that restore blood flow — treatments that have been strikingly successful. Although the GABC contributed to validating these technologies, private entrepreneurship was the engine that created them.
Another cultural impediment is that medical innovation requires diverse skill sets and capabilities rarely found outside of industry. These capabilities include drug formulation, manufacturing, quality control, procurement, market research, regulatory affairs, finance, toxicology, and many others, without which discoveries would never reach clinical use. The one exception, of course, is clinical trials; because only licensed physicians can treat patients, most clinical trials of drugs and medical devices historically took place in academic medical institutions. Recently, however, the economic pressures discussed below, which push these institutions to emphasize clinical services, have limited physician participation in clinical trials. Such testing now increasingly occurs in proprietary clinical-research organizations or private medical practices.
The most anti-innovative cultural phenomenon, however, is the pharmaphobia narrative. Its power in academic circles is pervasive; for instance, a 2012 survey published in Nature Biotechnology, which examined over 100 articles in high-profile medical journals, found that nearly 90% of them emphasized the risks of industry interactions with academics and physicians.
The pharmaphobia narrative has generated a battery of "conflict-of-interest" regulations predicated on the theory that industry has a corrupting influence on medical research, education, and practice. These regulations first appeared in the 1990s and initially concentrated on research in universities, based on the idea that industry sponsorship of research incentivizes — or "appears to incentivize" — researchers to misrepresent results to please their sponsors. In response, academic institutions erected bureaucracies to "manage" research relationships with corporations or forbid them if deemed inappropriate. A decade and a half later, this regulatory fervor had spread over the entirety of medicine, principally directed at sanitizing what the pharmaphobia narrative declared to be devious corporate marketing material and persuasion techniques that physicians are incapable of resisting.
Extensive analysis of the empirical basis of these theories reveals that they are baseless. Nevertheless, conflict-of-interest regulations are ubiquitous. Possibly the most extreme is the federal "Physician Payments Sunshine Act," in place since 2013, which requires manufacturers to report payments of $10 or more to physicians. Again, no empirical evidence supports any benefit from these disclosures, and the act made no provisions for obtaining such evidence. What is clear, however, is that the large sums companies must spend on this reporting divert scarce resources from research and development.
In the era prior to the expansion of the government-academic biomedical complex, the medical-school culture strongly valued research as a criterion for institutional prestige. Prevailing circumstances supported this attitude. During long hospital stays, patients did not receive the minute-to-minute monitoring and caregiver responses prevalent today. This relative leisure enabled physicians to take time away from patient care for research. Although medical care was relatively cheap at the time, revenues were sufficient to underwrite the time doctors devoted to such pursuits.
But circumstances changed. One outcome of the medical innovation of the past several decades has been the introduction of life-prolonging and life-improving technologies that have turned academic health centers into intensive service enterprises competing with one another for revenues and customers. Though these institutions tout their research activities, the days when research dominated them are long over; they have become large patient-care businesses, accruing ever more layers of financial management and requiring continuous physician attention to patients. While this has been great news for many patients, it has slowed the pace of medical innovation.
One particularly damaging consequence of federal research funding occurred as the GABC was forming and continues to be felt today. The influx of federal largesse encouraged universities to leverage their government-subsidized research activities on the fantasy that the funding would continue forever. This mindset in many respects resembles the adverse effects of aid to developing nations: not only does external gifting discourage responsible financial stewardship, but it also promotes exploitation by the management class that benefits from the gifts. One such gift in academe is the generous overhead payment — sometimes 50% or more of direct costs — attached to NIH grants, which academic administrators have successfully negotiated and which contributes to managerial salaries instead of research.
Further evidence of exploitation by academic management at many American universities is the offloading of faculty-salary obligations onto external grant funds. This practice is perverse on a number of levels. For one, it means that the largest share of these grant monies pays for senior faculty salaries rather than for research supplies, equipment, and ancillary personnel to assist with projects. Even worse, academic institutions have used the aforementioned overhead funding as collateral to borrow in order to build more research facilities. These practices persist even though, beginning in the 1980s, the flow of research money down the federal pipeline progressively slowed relative to the expanding programs its earlier generosity had encouraged. When a temporary uptick in appropriations in the late 1990s and early 2000s alleviated the slowdown in NIH funding growth, most universities made little or no effort to use the relief to ensure research-program sustainability through aggressive solicitation of private endowment funds. Instead, they used it to recruit faculty and build new facilities.
Since the mid-2000s, congressional appropriations for the NIH have essentially been flat, and have declined when adjusted for inflation — despite the fact that the NIH has bipartisan support and advocates in Congress. One reason for this seeming contradiction is that the NIH appropriation is discretionary and — like all other discretionary spending — has suffered from the expansion of entitlement allocations and their effect on the national debt. Another is that the public has a poor understanding of medical research and its economics. Voluntary health organizations that focus on specific diseases are more recognizable than the NIH, despite its far greater financial contributions to non-profit research. The relatively low visibility of the NIH and the obscurity of what it contributes blunt the political will to raise its funding priority.
For these reasons, the probability of an applicant obtaining a research grant from the NIH has gradually fallen below 20%. Academic researchers who in the past sustained their research programs by obtaining multiple NIH grants are no longer able to do so. A consequence of these financial constraints is that academic tenure in medical schools has become increasingly meaningless. Absent external grant support, even nominally tenured faculty members must leave research activities for other pursuits.
Another consequence of having too many grant applicants and not enough grant money has been a steady rise in the age of grantees. In 1980, 16% of awardees were 36 years of age or younger, compared to less than 1% over 65; by 2010, twice as many over-65 applicants received NIH funding as those 36 and under. This is in large part because established investigators have longer track records and more preliminary data than younger aspirants, which impresses risk-averse grant evaluators. The NIH has responded by explicitly favoring neophyte applicants. This social-engineering effort arguably undermines both the quality of funded research and the medical innovation the grants are meant to foster.
In addition, to conserve its financial assets, the NIH has limited the size of grant awards and shortened their duration from an average of five years to four or fewer. As a result, even if the cultural factors that discourage medical innovation in academic institutions could be overcome, potential innovators decline to take risks. It takes on average 12 years for a new drug to achieve FDA approval, and the pressing need to generate experimental results that impress grant-application reviewers at short intervals prevents academic researchers from making the long-term efforts innovation requires. The dollar limitations also preclude risk-taking. Meanwhile, the decline in government funding for academic medical research has coincided with an enormous increase in the expense of innovation: The latest estimate by economists Joseph DiMasi and Henry Grabowski finds that achieving FDA approval of a drug costs over $2.5 billion.
These circumstances invert the conventional wisdom that basic, government-funded academic scholarship addresses future-oriented knowledge advancement, whereas commercial pursuits are supposedly limited to achieving near-term profit. And yet the power of the GABC and the pharmaphobia narrative persist, standing in the way of medical innovation.
The standard response of the academic biomedical research community to the economic challenges has consistently been, "Give us more government money!" Instead, we need to find a way forward that takes advantage of the basic science being done at the academic level and harnesses the untapped resources of private industry to drive innovation.
A NEW PARTNERSHIP
Powerful and entrenched vested interests promote and defend the GABC. Even in today's financially strapped circumstances, a small cadre of the most successful biomedical researchers in universities is thriving and constitutes a powerful lobby for the status quo. Many of these academic luminaries serve on the boards of pharmaceutical corporations and profit as directors. Yet they do not oppose the pharmaphobia narrative.
Terence Kealey, former vice chancellor of the University of Buckingham, made a strong case in his 1996 book, The Economic Laws of Scientific Research, that the importance of government funding of research — especially basic research in universities — for innovation has been greatly exaggerated. Daniel Sarewitz, professor of science and society at Arizona State University's School for the Future of Innovation and Society, has advanced in a recent issue of The New Atlantis arguments similar to Kealey's that echo many of the points raised by this article.
But public funding of academic research does serve a useful purpose. Research has an intellectually enriching influence on the academy, and its existence there helps to recruit individuals who have the interest and aptitude for it. Universities educate and train even those researchers who end up working for private industry, so the government subsidy that contributes to that education and training helps to build innovation workforce capacity. And while the cultural attributes discussed above that underlie academic research might not be conducive to medical innovation per se, they represent the only metrics for evaluating the talent and potential of researchers in training.
Government subsidies also play another important role; basic medical research has difficulty obtaining patronage. It is far more complex and expensive now than in the past when amateurs performed it. It does occasionally attract the attention and largesse of wealthy individuals, but such donors are sufficiently uncommon to render government support necessary.
However, the academic research workforce should be downsized, enabling academic institutions to sustain a research effort with less dependence on ephemeral outside support. The resulting increase in financial stability could promote risk-taking and the longer-term projects required for innovation.
Only private industry has the assets necessary to further innovation significantly, and — in response to pharmaphobia — academic institutions woefully underutilize those resources. Adequate funding is the sole means of overcoming the economic impediments that create the distractions and insecurities plaguing medical innovation in universities. Therefore, an industry-academic biomedical complex devoted to innovation could complement, not replace, the GABC.
The challenges for such a strategy are daunting. It requires suppressing the now-entrenched pharmaphobia that contaminates academic culture. And industry should not simply be an ATM that doles out project grants like public charities do, and that university administrators exploit for their own purposes.
One approach would be to allow the researchers at pharmaceutical, biotechnology, and device companies to identify academic investigators who can help advance their projects under development. This grassroots strategy is preferable to the formal institutional alliances between companies and universities that have arisen in the past, imposed from the top down. Those relationships, in which firms have funded individuals or groups of selected researchers, have not proved durable, because the funded researchers tend to use the resources to further their own basic-science agendas.
These relationships have also impeded innovation by restricting researchers' ability to work with other industry partners. For example, an alliance between Harvard and the Swiss pharmaceutical company Sandoz funded research that eventually contributed to a highly effective treatment for certain malignancies. But a different Swiss company, Ciba-Geigy, had manufactured the drug responsible for that treatment, and proprietary restrictions prevented the Harvard researchers from testing it. Ironically, the two companies later merged to form Novartis, which went on to develop the treatment. Absent the exclusivity, this development might have occurred earlier.
Universities should also jettison the snobbish attitude that basic research is superior to practical application. Innovation activities should be valued as highly as elegant experiments, and should serve as legitimate criteria for academic promotion. Garnering industry partnerships, filing patent applications, licensing intellectual property, and achieving development milestones should carry the same weight as publication in top-tier scientific journals.
Abandoning the pharmaphobia narrative should facilitate free cooperation between industry and academic researchers. One problem such intermingling could help address is the fact, discussed above, that as much as half of the research results published by academic investigators are not reproducible. Of particular importance for innovation is the claim that companies have been unable to replicate outcomes purported to be relevant for drug development. While many factors can account for poor reproducibility, the cultural and economic shortcomings of academic research are major contributors. The cultural reason is that ensuring findings are reliable, by repeating them and ruling out factors that might invalidate them, interests academics far less than breaking new ground. The economic element is that underfunded university researchers lack the resources to attend to such details. Industry researchers, by contrast, must document that their results are reliable, and they have the financial backing to do so. Working side by side with the academics who discover potential drug candidates, company investigators could make sure they acquire the sometimes-tacit knowledge required to reproduce someone else's work.
Conversely, companies have formidable technological resources with which to advance biological research. Giving academic scientists access to them could not only facilitate discovery but also spare universities the need to invest scarce assets in acquiring such technologies. These resources include some of the sophisticated tools mentioned above, such as powerful microscopes, that have encouraged academic researchers to pursue reductionist studies rather than solutions to practical problems. That outcome does not invalidate the importance of these technologies; it means only that applying them to both basic research and innovation is preferable to limiting them to one or the other.
Another reason to encourage cooperation among academics, physicians, and industry is that many industry scientists either lack medical training or have little consistent exposure to real-world medical care. As a result, they may pursue medical-innovation opportunities that look attractive in theory but are not readily reducible to practice. Regular communication with seasoned clinicians can help them avoid such pitfalls.
Despite the immense health improvements medical innovation has delivered, disease and disability persist. For example, the increased longevity afforded by medical advances has perversely engendered a rising incidence of dementia. Addressing such challenges is one very good reason to understand clearly how medical innovation works, so that we can maximize it with the resources at hand. The facts indicate that the government-academic biomedical complex is no longer, if it ever was, the best approach to medical innovation, and our efforts must adapt. Such change, with its potential to improve human health, is a nonpartisan mandate.