Rethinking Research
What is research? What is science? What does it mean to say that scientific research has discovered something new, or proven something long hypothesized? What is "scientific consensus," and how much confidence should we place in it? Are scientific researchers motivated to push knowledge forward in whatever direction it may take? Or are they responding to very different incentives?
Terms like "science," "research," "discover," and "proof" pervade everyday speech. They play an enormous role in guiding both individual decision-making and public policy. Yet we rarely pause to consider what they mean and how we use them.
That oversight runs far deeper than casual conversation. Few if any of the key players conducting, funding, directing, or promoting scientific research in America ever revisit these fundamental questions. Fortunately, several factors make such a reassessment timely. Higher education, which plays an enormous role in defining and directing research, is an industry in crisis. Trust in experts and traditional gatekeeping institutions has plummeted. Laws and policies governing commercialization, patenting, and innovation have been on autopilot for decades, to the detriment of the goals they're supposed to serve. The internet, data science, and now artificial intelligence have democratized access to high-end research tools. And public disclosures are raising troubling questions about the relationships between federal agencies and the beneficiaries of their largesse — including researchers and universities.
These factors call for a major rethinking of research — how we conduct it, how we fund it, how we motivate its deployment, and how it serves American security, prosperity, health, and welfare.
ACADEMIC RESEARCH AND FUNDING
Research is an underappreciated component of higher-education reform. Until the Trump administration began targeting research funding, most discussions of the problems facing the nation's universities focused on either administrative practices or the humanities and the social sciences; the scientific, engineering, medical, and professional fields largely escaped attention.
That omission is troubling, because most of the money in higher education flows to the latter. The combined budgets of the preeminent government agencies funding the humanities and the arts, the National Endowment for the Humanities and the National Endowment for the Arts, are a bit over $400 million, less than 1% of total federal higher-education research funding. Social-science funding, though harder to define and pinpoint, is likely below 5%. The rest is dedicated to the hard sciences, medicine, and the like. No industry can be reformed if important revenue sources are insulated from scrutiny.
Universities are the gatekeepers of the nation's researchers. They monopolize credentialing. They train nearly all corporate scientists. Their professors and graduates dominate the editorial boards directing research publications and the committees allocating research spending. And as high-minded as their pursuit of knowledge may sound, these researchers operate like all other people: They respond to incentives.
Sometimes, these incentives are tied to the quality of their work — there is certainly profit and prestige to be gained from making a major scientific breakthrough. More often, however, the incentives embedded in the decision-making processes surrounding hiring, promotion, tenure, publication, funding, and professional esteem motivate academics far more than do institutional mission statements. These distorted internal incentives have corrupted the academy's quest for scientific truth.
In the typical academic department, the incentive structure operates as follows: A few recognized luminaries will announce important directions for their disciplines — directions that soon come to define orthodox thinking. Each entrant into the department must move a little bit further in the designated direction: The "little bit" contributes to a foundation on which others can build, the "further" enhances the reputation of those who built the foundation on which the new work stands, and the "designated direction" ensures that those in charge preserve their leadership.
As a result, research funding and professional success often go not to those engaged in an honest pursuit of the truth wherever it may lead, but to those most adept at promoting favored orthodoxies. Sometimes the chosen direction is correct, and science advances swiftly. Usually, however, a worthy initial inquiry yields a tangent that diverges from reality. Once disciplinary leaders commit to that tangent, each new "discovery" pushes the frontiers of knowledge further from anything that might be helpful or useful. As the discipline becomes increasingly detached from reality, faculties become increasingly insular, defensive, and intolerant of disapproved ideas. This incentive system, which I call "incremental outrageousness," is rampant at our nation's major academic institutions.
Though perhaps most obvious in the social sciences and the humanities, incremental outrageousness also infects science, engineering, medicine, and the law. A few famous examples illustrate as much. Lord Kelvin, the great 19th-century physicist, challenged Charles Darwin's theory of evolution and prevented it from becoming a consensus until well after Darwin's death. The Big Bang theory faced heavy opposition from establishment cosmologists, notably Fred Hoyle, who coined the theory's alliterative name. Albert Einstein, whose early work provoked significant pushback from prominent quarters of academia, fought hard against the philosophical implications of probabilistic quantum mechanics, observing that God "does not play dice" with the universe. And in 1969, John McCarthy and Patrick Hayes elevated a logic-based approach to AI while denigrating probability and statistics; two decades later, it was still difficult to publish articles about probability-based reasoning systems in the AI literature.
None of these examples of establishment resistance prevented superior scientific theories from achieving widespread acceptance — eventually. Yet they all resulted in massively misallocated time, effort, and resources.
Sooner or later, reality forces researchers who have pursued irrelevant or incorrect tangents to recalibrate. But when that happens, no one is held accountable for the wasted effort. Erstwhile opponents of changing direction simply claim that the recalibration validates rather than repudiates their thinking, arguing that the time and resources dedicated to their overturned orthodoxy provided critical information underpinning the new truth. Those who had long kept their fields on the now-debunked tangent remain comfortably in charge. Those who had been sidelined for having been correct too early are rarely rehabilitated.
If we want to rebuild America's universities into sources of national pride, we must see them for what they are: corporations that develop products and sell services. As unaccountable and inefficient corporations, their structural corruption is hardly surprising. The only way to compel corporations in any industry to change their internal incentive structures is to alter their external incentives. In other words, if we want to fix the way universities operate, we're going to have to change the ways they make their money — and that emphatically includes research-funding reform.
Tellingly, President Trump's conditioning of research funding on structural reform generated self-righteous outrage from those who benefit most from the status quo. With research-and-development expenditures for higher education topping $108 billion in 2023 — including $59.7 billion from the federal government and $5.4 billion from the states — the potential leverage is huge.
Of course, any targeting of research funding must proceed with extreme caution. American higher education is in many ways a failed industry, but it is also a strategically vital one. In today's innovation-driven world, an undersupply of quality research and a citizenry educated in falsehoods are deadly to both prosperity and national security. If we don't produce successive generations of well-informed, capable citizens committed to preserving our national leadership, adversaries who succeed at those tasks will surpass and defeat us. No country can flourish in the 21st century without topflight higher education — and that requires conducting cutting-edge research.
American higher education is screaming for wholesale reform. Our nation needs academic institutions that produce consistent, solid returns on our research investments; that train a workforce suited for a modern economy; and that promote the love of ideas, innovation, experimentation, and risk that have powered our remarkable 250-year national journey. For the most part, those are not the institutions we have.
Widespread reform will require bigger and bolder initiatives targeting research dollars. The starting point for such reform is a simple question: American taxpayers are both deeply invested in and deeply reliant on academic research. We deserve, and have every right to demand, top returns on that investment. Are we receiving such returns?
LINKING RESEARCH TO DEVELOPMENT
Academia can provide only half the answer to that question; the other half involves the innovation policies motivating how academic discoveries are deployed. Research reform thus implicates both sides of the technology-transfer bridge: education policy and innovation policy.
Innovation policy — the intersection of intellectual-property, competition, and trade law — is a critical if all-too-often overlooked component of American research. No assessment of the research status quo can be complete without considering the relationship between academic researchers and those who apply their discoveries.
To be justified, public research funding must benefit society. We use public funds to push the boundaries of science because we believe that deeper scientific understanding will enhance our safety, security, prosperity, health, and general welfare. If new discoveries don't lead to new products or services, the public is unlikely to appreciate the benefits of scientific advancement.
How can American educational, scientific, economic, and legal policies best motivate a smooth handoff from discovery to implementation? The last such inquiry fueled decades of innovation-driven prosperity. It began in 1978, when President Jimmy Carter directed his Commerce secretary, Juanita Kreps, to lead a domestic-policy review of industrial innovation. Participants noted from the outset that although the United States led the world in scientific research, its companies were lagging in commercialization. So they set off to find out what would motivate America's technology industries to expand in ways that enhanced both domestic employment opportunities and global leadership.
Much of what they discovered concerned innovation policy, specifically patents. Between 1980 and 1984, three major legislative changes, a reassessment of judicial jurisdiction, two lines of Supreme Court rulings, and a policy reversal at the Department of Justice altered the contours of the American patent system almost beyond recognition, powering an American economic resurgence that dominated the ensuing two decades.
The legislation most obviously related to academic research was the Bayh-Dole Act of 1980, which allowed universities to patent inventions developed in whole or in part through federal research grants, and to retain the commercial benefits of those patents. Other major legislative changes consolidated jurisdiction over patent appeals into the Federal Circuit — effectively charging this new court with injecting coherence, consistency, and predictability into patent law — and altered the relationship between branded drug companies, generic-drug producers, and the Food and Drug Administration. Meanwhile, the Supreme Court determined that laboratory-synthesized microorganisms and algorithmic software were entitled to patent protection — bringing biotechnology and software fully within the patent system. Finally, the Reagan Justice Department inverted five decades of thinking that had allowed antitrust law to shackle patent licensing.
The policy changes that took root in the early 1980s reinvigorated the patent system and contributed to decades of economic revival. The radical overhaul of incentives gave rise to university patenting programs, the industrial parks that now ring many campuses to facilitate collaboration, and the small-business start-up culture that built the internet (among other achievements). It accomplished all of these goals with relatively little fanfare, few formal overarching announcements, and surprisingly scant attention.
Yet this policy environment fell into disfavor with equally little notice. By the early 21st century, a chorus of voices was complaining that the patent system had become so strong that it invited rampant abuse. "Patent trolls" (technically "non-practicing entities") — law firms that bought sizeable collections of questionable patents, filed infringement suits in favorable jurisdictions, and held productive companies hostage to their outrageous demands — became reviled villains. Public opinion turned heavily against small patent holders seeking to challenge the products and practices of America's large, successful technology companies.
The matter surged into full public view in the early 2000s, when NTP, a non-practicing entity, threatened to shut down the then-popular BlackBerry network. In 2006, the Supreme Court restricted the ability of non-practicing entities to enjoin infringers. It was the first in a series of interventions that trimmed the availability of biotechnology patents, severely restricted patents on algorithms and business methods, and introduced a variety of other legal changes — all of which reduced the value of patenting to small or non-practicing entities. Congress jumped on board with the America Invents Act of 2011, easing challenges to patents that had already been issued while incorporating other changes favoring large, practicing entities over their smaller, non-practicing challengers.
Following these judicial and congressional interventions, powerful incumbents — particularly those in the technology sector — benefited from a legal and policy climate that hamstrung small businesses and start-ups. Two major economic shocks — the financial collapse of 2008 and the Covid-19 lockdowns of 2020 — further devastated smaller entities while strengthening their larger competitors.
Innovation became increasingly concentrated in the hands of large firms. Though both higher-education research budgets and university technology-transfer activity roughly doubled (in real terms) between 2000 and 2023, a 2017 survey of 187 institutions found that the top 15 universities accounted for 72% of total licensing revenues. Today, eight technology companies account for about a third of total S&P 500 market capitalization.
As in the early 2000s, a policy preference that has operated on autopilot for 20 years has overcorrected. The forces arrayed to undermine patent trolls weakened all non-practicing patentees, including university patenting and technology-transfer programs. It's time to turn the dial back toward small inventors and entities not positioned to practice their patents fully — like research universities.
Today's academic researchers play a far greater role in the American economy than they did 50 years ago, but little more than they did 20 years ago. Bayh-Dole unlocked incentives that helped focus academic research on innovations likely to benefit the public, and promoted commercial relationships between universities and businesses, from multi-nationals to start-ups. Innovation policy of the past two decades, however, has favored industry concentration and powerful incumbents rather than the small-team and individually directed basic research at which universities excel. Innovation-policy reform is thus at least as important to improving America's return on its research dollar as is higher-education reform.
THE DISRUPTION OF RESEARCH
In addition to the corruption of higher education and the sclerosis of a patent system attuned to entrenching incumbents, there is yet another element urging research reform: Advances in information technology have altered the economics of analysis and review. The consequences of this development will transform our assessment of which scientific results are considered "proven" or "accepted," and they will do so in ways that wrest gatekeeping from academic research scientists, much the way social networks upended mass media.
Throughout recorded history, information was scarce — and thus valuable. Today, information is abundant and ubiquitous. Anyone on the internet can access and broadcast massive data at close to zero marginal cost — necessarily revolutionizing each industry as it transforms from physical to informational.
Perhaps the first old-line industry to complete that transformation was music. The digitization of music in the 1990s caught record labels by surprise. Industry leaders awoke to discover that their products had become digital, transportable across the internet, and entirely uncontrollable. A business model resting on the capital-intensive processes of reproducing and distributing music imploded almost overnight as the costs of doing so dropped to near zero.
Predictably, the aggrieved, disempowered incumbents petitioned Congress for tougher copyright laws to replace the protection that technological advances had eroded. At the same time, they developed new business models, launched massive education and propaganda campaigns, and sought partners whose technological innovations would restore rather than remove their control. What unfolded was both a new legal regime and a new music industry, one that embodied a predictable pattern of opportunity, action, and reaction that is about to reshape the world of research.
The starting point of the information disruption is always digitization and quantification. Once something is digitized, it becomes subject to the economics of information, meaning the costs of reproduction and distribution plummet. The tremors of this economic shift are now shaking the research world. While field work and data generation remain relatively expensive, time-consuming, and resource-intensive (more so in some disciplines than in others), data analysis is not. The moment information gleaned from the physical world is converted into data and put online, anyone with basic skills and widely available software tools can conduct analyses rivaling those of traditional experts.
That new reality — demonstrated during the Covid-19 pandemic — threatens the incumbent research leadership. It eviscerates the long-standing practice whereby scientists submit their work to journals for consideration, editors select peer reviewers, and those reviewers determine the merit of the submitted work. What justified this process for determining which scientific discoveries are correct, valid, important, and accepted? Little more than a seemingly rhetorical question: Who but the experts in a field are qualified to assess work in that field? While that question should not be dismissed casually, it now has an answer: plenty of independent internet users. That answer heralds the most disruptive change to academic research in generations.
The Covid-19 era revealed many unfortunate truths about America's academic, bureaucratic, and research elites. One of them is that they're far less adept than advertised at analyzing their own data. While the Imperial College model predicting pandemic doom galvanized global shutdowns, it hardly required sophisticated epidemiological understanding to observe that its variables lacked precise definitions and that it was riddled with worst-case assumptions. Anyone with a basic understanding of mathematical modeling should have known at a glance that this model's predictions were as hysterical as they turned out to be.
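To see how much such assumptions matter, consider a toy simulation. The sketch below is emphatically not the Imperial College model; it is a bare-bones SIR model with invented parameters, offered only to show how dramatically a single assumed input swings a projection:

```python
# A toy SIR epidemic simulation: a bare-bones sketch, not the Imperial
# College model. Every parameter below is an invented assumption, included
# only to show how strongly projections depend on model inputs.

def sir_peak_infected(r0, days=365, infectious_period=10.0, population=67_000_000):
    """Euler-integrate a basic SIR model; return the peak number infected at once."""
    gamma = 1.0 / infectious_period       # recovery rate per day
    beta = r0 * gamma                     # transmission rate implied by R0
    s, i, r = population - 1.0, 1.0, 0.0  # susceptible, infected, recovered
    peak, dt = i, 1.0 / 24.0              # hourly steps for numerical stability
    for _ in range(days * 24):
        new_infections = beta * s * i / population * dt
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        peak = max(peak, i)
    return peak

for r0 in (2.0, 2.6, 3.9):                # alternative assumed reproduction numbers
    print(f"R0 = {r0}: projected peak infected ~ {sir_peak_infected(r0):,.0f}")
```

Moving the assumed reproduction number from 2.0 to 3.9 more than doubles the projected peak. A model that makes a worst-case choice at every such juncture will produce worst-case output.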
Those who noted the problem at the time, however, were loudly decried as science deniers who were largely (though not exclusively) speaking far outside their areas of expertise. That pattern repeated itself at every phase of the Covid-19 crisis: Official sources made pronouncements that the data could not justify — and, in many cases, that the data contravened. Internet users with basic data-science skills revealed the flaws. Officialdom immediately closed ranks against the dastardly deniers spreading dangerous misinformation to a gullible public. Public trust in the incumbent experts and their institutions plummeted.
At times, debunking the official story required little more than an understanding of Bayesian inference. Consider the discussion surrounding the origins of Covid-19. Upon hearing that a rare coronavirus had broken out near a laboratory experimenting with rare coronaviruses, anyone who understands basic Bayesian modeling would have constructed a simple differential diagnosis — a list of possible explanations arrayed by probability. Initially — that is, prior to the consideration of specific evidence — life experience would have guided those probabilities. Those with relevant scientific training would have begun with better-informed priors than those without, of course, but most adults would have appreciated that the proximity of the outbreak to the lab was either highly relevant or a stunning coincidence. The lab-leak theory should thus have topped every initial differential diagnosis and every list of subjective "prior probabilities."
Analysts would then have integrated evidence as it arrived, refining the probability assessments with each new discovery. As is always the case, far less evidence would have been required to confirm the hypothesis with the highest prior probability (i.e., the lab leak) than to disconfirm it. Yet the scientists decrying the lab-leak theory as misinformation merely argued that the virus's arrival was consistent with natural causes. The obvious retort ("So what? Still one hell of a coincidence!") lands the analysis precisely where Bayesian inference places it: with the lab leak as the most likely origin of the virus.
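The arithmetic behind that retort fits in a few lines. The sketch below applies Bayes' rule to two competing hypotheses; every number in it is invented for illustration and is not an estimate of any actual probability:

```python
# Bayes' rule for two competing hypotheses: a minimal sketch.
# Every number below is invented for illustration; none is an estimate
# of the actual probability of any Covid-19 origin hypothesis.

def posterior(prior_a, likelihood_a, likelihood_b):
    """Return P(A | evidence) for two mutually exclusive, exhaustive hypotheses."""
    prior_b = 1.0 - prior_a
    joint_a = prior_a * likelihood_a      # P(A) * P(evidence | A)
    joint_b = prior_b * likelihood_b      # P(B) * P(evidence | B)
    return joint_a / (joint_a + joint_b)

# Hypothetical prior: proximity of the outbreak to the lab makes
# hypothesis A (lab leak) more plausible at the outset than B (natural origin).
prior_lab = 0.7

# Evidence merely "consistent with" both hypotheses barely moves the posterior,
# because it is roughly as likely under one hypothesis as under the other.
print(posterior(prior_lab, likelihood_a=0.5, likelihood_b=0.6))   # ~0.66

# Only evidence far more likely under B than under A would overturn the prior.
print(posterior(prior_lab, likelihood_a=0.05, likelihood_b=0.9))  # ~0.11
```

Evidence that is merely consistent with both hypotheses barely moves the posterior; only evidence far more likely under the alternative can overturn a strong prior. That is the step the official analysis skipped.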
The trajectory of public analysis and reporting was almost entirely backward. Prominent members of the scientific establishment, with political backing, fed a media campaign rife with shocking statistical illiteracy. The obvious, most likely explanation — a lab leak — was immediately derided as a conspiracy theory. The scientific establishment helped the media present an inverted analysis suggesting the need for overwhelming evidence to support the obvious explanation while accepting meager statements of consistency to elevate what would have been a remarkable coincidence. Only years later could mainstream reporting conclude what should have been known at first glance — that meager "evidence" consistent with explanations other than the obvious is insufficient to overcome the lifetimes of experience embodied in a Bayesian prior probability.
The stunningly illiterate probabilistic analysis emanating from the pinnacle of the scientific establishment was but the tip of the iceberg. The Covid-19 lockdowns, which disrupted the entire socioeconomic fabric of modern life, sent shockwaves through the long-staid world of research. Social-media users without identifiable credentials or prestigious positions launched a "do your own research" movement to challenge every finding and prescription that appointed experts announced. Predictably, America's research elite did not take kindly to the competition. Harvard Kennedy School's Misinformation Review, for example, cast this threat to incumbent dominance as a dangerous conspiracy theory.
Granted, research tools are like all other tools: Users who don't know what they're doing can hurt themselves and others. But America's research elites seem to have forgotten that most of the students they've trained are not working as "professional" researchers. America possesses a vast and unharnessed pool of research talent more than capable of checking the work of their former professors.
The myriad pandemic-era examples drove home an important and elegant point: In the information age, the analysis and interpretation of results can — and likely should — be severed from the community that generates the data. A basic ability to construct both Bayesian and linear-regression models can go a long way. Comfort using the tools of data science, analytics, and AI can go a lot further. Such skills are distributed broadly throughout the general population.
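As a concrete illustration of how low the barrier now sits, fitting a trend line to published data no longer requires specialized software at all. The few lines below use nothing but the Python standard library, with made-up numbers standing in for a hypothetical paper's published data:

```python
# Fitting a trend line with nothing but the Python standard library:
# a minimal sketch. The data points are made up for illustration and
# stand in for numbers published alongside a hypothetical paper.
from statistics import linear_regression

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]    # e.g., a dosage or time variable
y = [2.1, 4.3, 5.9, 8.2, 9.8, 12.1]   # e.g., a measured response

fit = linear_regression(x, y)          # ordinary least squares (Python 3.10+)
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}")
```

Anyone who can run those few lines can check whether a reported trend actually appears in a paper's published data.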
Few things threaten gatekeepers as greatly as the broad distribution of skills among people capable of bypassing their gates. The overall superiority of homebrew research was thus an overdue, predictable consequence of life in the information age. So, too, was the response from the threatened incumbents, who lobbied to protect the monopolies that technology had eroded.
The establishment-championed and government-sponsored misinformation-prevention efforts of the past five years must be understood accordingly. As with all anti-blasphemy laws, the stated argument was that misinformation risks misleading the feeble-minded into harming themselves and society; the reality was that incumbents feared losing their monopoly over "truth." While those efforts appear to have failed — at least for the moment, and at least in the United States — the challenges are far from over. Incumbents accustomed to monopolizing science and proof will not relinquish ownership without a fight. Like the many beleaguered incumbents who preceded them, they'll soon discover that only two counterattacks are possible: They can seek increasingly draconian government protection, or they can harness the technologies that broke their monopolies and work with responsible new entrants to develop a superior set of standards.
In all likelihood, they will favor the former. Yet it's imperative that they opt for the latter. There's nothing sacred about existing standards for determining whether a hypothesis has been "proven" or whether a scientific result is "accepted." Peer approval is a convention, not gospel. Most areas of active research are de facto monopolies or cooperative oligopolies defined by an incentive system of incremental outrageousness, in which the primary objective of each research community is not discovering the truth, but bolstering the reputation, prestige, and perks of its leading researchers.
Those incentives need to change. While a few leaders of the scientific community may have been displaced, no leading institution or methodology has been replaced. The leadership of American scientific research is precisely where it has been for decades, promoting unchanged standards and embracing the same corrupting incentives. The only difference is that these powerful incumbents are now wounded, desperate to maintain their leadership, and unconcerned with how much their self-preservation efforts may harm American society or the cause of individual liberty.
Those of us eager to preserve both America's research leadership and the American ideals of free speech have a window of opportunity. We can seize this moment to establish new standards of scientific review, proof, and acceptance, availing ourselves of the shift in information economics, or we can wait until circumstances allow the disgruntled incumbents to launch their next attack.
FOCUS ON THE FUTURE
Justifying a timely rethinking of research is one thing; specifying what needs to be done and how to do it is another.
A good first step would be to begin reconceptualizing academic research, and the best way to do so is alongside eager allies. Powerful incumbents — in this case, America's most prestigious, well-funded universities — are notoriously resistant to change. They have reputations to protect, ongoing contracts tied to existing systems, and topflight personnel with a proven record of navigating those systems. Reforming the arrangement they have ridden to the top is disruptive — and, from their perspective, an unnecessary and terrifying risk.
The best universities at which to launch new ventures, therefore, are those with few legacy entanglements, enough infrastructure to expand quickly, little enough invested in reputation and branding to take calculated risks, and visionary leadership able to think outside the box. As a practical matter, this implicates institutions toward the bottom of the R1 Carnegie grouping or toward the top of the R2 grouping. Such institutions attract millions of research dollars, employ a growing research faculty, and often draw leaders from outside academia. While many top universities will transition successfully, these smaller, less-established players are now in a position to surge forward.
The next step concerns the role of the federal government in motivating higher-education reform. The most obvious leverage is financial. The $400 million in research funding that the Trump administration used to exact reform commitments from Columbia University represents just 0.67% of the $60 billion federal investment in academic research. If the federal government's scientific agencies adopted reformed standards for evaluating research, the entire academic world would follow. Both Congress and the executive should work to design and fund these new standards.
Additionally, a rethinking of American patent and innovation policy is in order. We must return to an environment that motivates small and non-practicing inventors to patent and license their innovations in collaboration with companies poised to commercialize them. Innovation policies focused on promoting new entrants and inventions by non-practicing entities would stimulate small-business growth and entrepreneurship at and around America's universities.
The relationship between patenting and research also suggests a path toward democratizing data analysis. Anyone who's ever seen a patent has likely come away with two general impressions: Patents are well organized, and they're unreadable. Unreadability is inevitable, because patents are simultaneously legal documents written for lawyers and engineering documents written for engineers. Patent organization, on the other hand, is standardized and excellent. Research publications should be rethought in a similarly standardized manner, making them more accessible to a generally educated audience.
After the necessary identifying information, published articles should open with an introduction explaining why the particular research project was worth pursuing. What questions did it raise? Why were they worth addressing? Why should anyone outside a narrow academic field care? A generally educated audience should be able to comprehend these explanations. A conclusion addressing that same audience should describe the findings.
In between, the author should detail the hypothesis, experiment, data generation, and data collection in language that is precise and detailed, albeit decipherable only by those with specialized training. Finally, the author should present the data in a format accessible to all readers with suitable analytic skills and software — invariably a small subset of those able to appreciate the introduction and conclusion, but a far larger audience than can appreciate the experiment's design and implementation.
That recalibration, with three different presentations for three different audiences, would pave the way for crowdsourced reviews. It would enable a hybrid model of scientific development that preserves a significant role for those with specialized training, opens part of the review to those who can analyze data qua data, and explains basic scientific inquiries and developments in language accessible to the public. Such a revolution in research review would reduce the scope and power of the incumbents but preserve an important leadership role for those who can make the transition.
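To make the proposal concrete, here is one hypothetical way such a standardized article might be structured; the field names below are invented for illustration, not an existing publishing standard:

```python
# A hypothetical sketch of the standardized article format proposed above,
# with each section tagged by its intended audience. The field names are
# invented for illustration; no such publishing standard currently exists.
from dataclasses import dataclass

@dataclass
class ResearchArticle:
    # Identifying information
    title: str
    authors: list[str]
    # For a generally educated audience
    introduction: str    # why the project was worth pursuing, and to whom it matters
    conclusion: str      # what was found, stated in plain language
    # For specialists with relevant training
    methods: str         # hypothesis, experiment, data generation and collection
    # For any reader with analytic skills and software
    dataset_url: str     # the underlying data, published in analyzable form
```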
It's far beyond time to let the shifting economics of information end American higher education as we know it. The key is changing internal incentives, with research and innovation reform proceeding in parallel. We must retire the incentives that motivate a small cadre of insiders to run these institutions for the primary benefit of those they've invited inside. In their place, we should introduce new incentives that reward and promote those working toward clear explanations, transparent reporting, commercialization, and societal welfare.
A PIVOTAL MOMENT
We have reached the rare moment in which overarching higher-education reform appears possible. Leaders in Washington and in a growing number of states recognize its importance. Academics who'd navigated the system successfully only to find themselves canceled for elevating common sense above political expediency are reuniting with former students and colleagues who'd been cast into the wilderness for challenging the orthodoxy. They're engaging committed outsiders from government and industry to propel reform forward. Just as reassessing and recalibrating the connection between academic research and commercial development in the 1980s changed the nature of the modern research university while generating new products, so, too, could today's reforms.
Several incentive-based improvements will encourage such positive results. Restricting research funding to institutions committed to liberal education rather than activism would improve civil society while elevating the broader culture. Adopting innovation policies focused on promoting non-practicing invention and entrants over incumbents would promote small-business growth and entrepreneurship around America's universities. New approaches to licensing and collaboration would generate university operating revenues untethered to public support. At the same time, democratizing data analysis and research review would weaken academia's de facto monopolization of research agendas and open multiple scientific fields to perspectives at odds with institutional orthodoxy.
All of these changes are possible with concerted thought, effort, and leadership. The American future depends on our getting them right.