A Politics of Knowledge

John O. McGinnis

Winter 2012

Over the past few decades, the rise of information technology has dramatically recast Americans' everyday experiences. Cell phones suddenly allowed us to stay in touch anytime, anywhere, only to be replaced by BlackBerrys, iPhones, and ever smarter devices. The internet and its search engines have given us instant access to unprecedented amounts of information, opening doors to a virtual reality that has become nearly as important to many of us as the tangible, physical world. From doctors accessing patient records on their laptops during examinations to university professors using meeting software to instruct students three time zones away, these new technologies are continuously transforming how we work, play, and live.

But we rarely pause to consider what such relentless technological progress might mean for our lives as citizens, and for the work of our government. At the core of the information revolution is an explosion of networked computing power, and the great promise of that revolution is therefore the promise of networked information processing. It is the promise not so much of doing more as of knowing more — of turning vast quantities of raw data and diffuse knowledge into manageable, usable, and focused expertise and understanding. The potential of such power for a government like ours is enormous, because the reach of our government constantly exceeds the grasp of our understanding. We lack a clear picture of what our government now does well and what it does poorly: Massive federal programs disburse billions of dollars with no real sense of whether they achieve their purposes; new programs are created based on vague and ill-informed projections of their future effects; and bureaucrats in Washington are expected to know far more than any small group of technocrats ever really could.

We have grown accustomed to the notion that this is simply how modern government works. But the technological developments of the past few decades now allow for a very different approach to public policy — for rooting social decision-making in greater knowledge about which policies have worked and which have not, for focused expertise applied in real time to judge the meaning of events as they occur, and for better projections and predictions built on a more dynamic analysis of data.

These three categories — assessments of past policies, analysis of current ones, and projections of proposed future ones — illustrate how the computing and networking power of modern information technologies can dramatically improve American self-government. In looking backward, greater computational capacity can allow for more empirical evaluation of policies and their results. In the present, new dispersed media can subject contending policy claims and proposals to instant analysis by an extraordinarily diverse array of experts. And looking ahead, novel means of aggregating judgments — like the internet betting pools known as prediction markets — offer enormous potential for projecting the consequences of proposed new laws and policies.

These examples point toward a model of governance in the information age that will sharply contrast with the model that dominates today — an outdated approach that traces its roots to the Progressive era. That older model, too, sought to improve government through the use of social information; the key difference, however, was that analysis of potential improvements was to come from the top, foisted upon the public by experts and bureaucrats. In our time, technology increasingly permits information to bubble up from below — channeled through more dispersed sources and filtered through more competitive mechanisms. By taking advantage of these new technologies, we can retain and improve on the best of the model we have — a politics that seeks to be informed by expertise and social-scientific knowledge — while shedding the arrogance and insularity that have led to so many of the troubles now plaguing American government.

Such an information-rich politics is an urgent necessity. The same technological acceleration that provides new mechanisms for creating social knowledge also generates a wide range of innovations, from nanotechnology to biotechnology to artificial intelligence. While some of these technologies may offer unparalleled benefits to mankind, they also may create catastrophic risks, such as rapid environmental degradation and new weapons of mass destruction. More subtly, technological acceleration can continuously disrupt society, overturning familiar ways of living and working. Such rapid transformation will require agile political and policy processes. And only a democracy capable of assimilating facts and reaching informed judgments rapidly is likely to be able to minimize the costs and maximize the benefits of the technology that is transforming our world at a torrid pace.

PROCESSING THE WORLD

The empirical social sciences, like the natural sciences, seek to marshal evidence in order to confidently explain causes and effects. But the natural and social sciences are separated by a crucial difference: In many cases, practitioners of the natural sciences have the advantage of the laboratory, in which they can create physical experiments that isolate particular variables, and can repeat those experiments again and again to test and refine their theories. In contrast, the phenomena of the social world are far more difficult to isolate and replicate in ways that can truly clarify causes and effects. This limitation poses serious problems for the application of the social sciences to public policy.

Imagine, for instance, that Massachusetts enacted statewide merit pay for teachers and Idaho did not. Policymakers interested in the effects of merit pay on student achievement might seek to compare the two states, relying on social-scientific data to help them decide whether to expand or curtail the use of merit pay. The problem, however, is that even if Massachusetts students achieved substantially better educational outcomes than Idaho students, researchers and lawmakers could not necessarily conclude that merit pay had been the reason for the difference. There might be other distinctions between the two states causing Massachusetts's higher scores — anything from dissimilarities in demography and wealth to differences in curriculum and school culture. This sort of situation illustrates the difficulties faced by social-scientific empiricism in establishing the causes of social phenomena, and especially in distinguishing causation from mere correlation.

Social scientists have found clever ways to address this problem, of course. In the example above, one option would be to increase the number of jurisdictions studied. If, for instance, a varied group of states or localities had instituted merit pay while a similarly varied group had not, a researcher could examine cases within both groups in order to correct for variables unrelated to merit pay. Yet while such methods can help distinguish correlation from causation, they do so not by isolating factors — as natural scientists often can — but by multiplying them, so as to create larger samples. The upshot is that greater accuracy requires social scientists to analyze truly immense quantities of data. The limits of data-gathering and analytical capacity have thus bounded the reach of the empirical social sciences and their use in public policy.
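To make that logic concrete, here is a minimal sketch, in Python, of the kind of multi-jurisdiction analysis described above: a regression of student outcomes on a merit-pay indicator alongside other state characteristics. Everything here is hypothetical; the data are simulated, the variable names are invented, and the "effect" of merit pay is built into the simulation purely for illustration.

    # A sketch of comparing many jurisdictions at once: regress simulated
    # student outcomes on a merit-pay indicator while controlling for other
    # observable differences among states. All numbers are illustrative.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_states = 50
    df = pd.DataFrame({
        "merit_pay": rng.integers(0, 2, n_states),      # 1 if the state adopted merit pay
        "median_income": rng.normal(55, 10, n_states),  # thousands of dollars
        "pct_urban": rng.uniform(20, 95, n_states),     # percent urban population
    })
    # Simulate test scores with a true merit-pay effect of +3 points.
    df["test_score"] = (200 + 3 * df["merit_pay"]
                        + 0.5 * df["median_income"]
                        + 0.1 * df["pct_urban"]
                        + rng.normal(0, 5, n_states))

    # With the confounders included as controls, the coefficient on
    # merit_pay estimates the policy's effect net of those differences.
    model = smf.ols("test_score ~ merit_pay + median_income + pct_urban",
                    data=df).fit()
    print(model.params["merit_pay"])  # should land near the true effect of 3

The design choice is the whole point: without the control variables, the merit-pay coefficient would absorb whatever differences in income and demography happen to coincide with adoption of the policy, which is precisely the Massachusetts-Idaho problem.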

But these are precisely the limits that the information age has begun to overcome. Computers possess ever-greater storage capacity (allowing more and more data to be collected and so making possible more precise measurements of circumstances and events), as well as ever-greater calculating power (permitting the construction of ever more complex equations and more repeated sampling of data). Networks of computers, and especially the internet, exponentially increase both of these capacities.

The result is that social scientists can now make use of immense amounts of experimental data to inform the work of policymakers and the choices of citizens. Recent years have offered some prominent examples of what such contributions might look like, but these are only the tip of the iceberg. For instance, as Michelle Mello and Kathryn Zeiler documented in a 2009 paper, evidence gathered through the meticulous mining of data has demonstrated the life-saving capacity of graduated driver's licenses: a process through which the state places initial restrictions on the conditions under which young people may drive, moving them from learner's permits to full licenses over the course of a year or two. As a result of this evidence, graduated licensing has been adopted in some form by all 50 states. To take another example, in 2008, the Department of Labor decided to give companies incentives to offer all employees retirement accounts unless workers opted out of them, instead of offering retirement accounts only to employees who opted in. The reason? Research showed that opt-out policies substantially increase employee retirement savings, a goal that federal policymakers certainly have an interest in advancing.

But such examples remain few and far between. As a general matter, governments at all levels have made appallingly little use of social science, even as the potential of such research has vastly increased. And policymakers have been especially hesitant to use this research to analyze the effectiveness of existing government programs. Last year, David Muhlhausen of the Heritage Foundation surveyed more than five decades of government social programs; in his examination, Muhlhausen found that, even though these hundreds of programs had spent trillions of dollars, the government had conducted only 13 rigorous empirical studies of their effectiveness.

This lack of reflection is surely caused, in part, by a reluctance to confront policy failures. Indeed, those few evaluations that have taken place have often proved popular programs to be ineffective. The Head Start early-education program is the most notable recent example: The Head Start Impact Study, carried out by academic researchers on behalf of the Department of Health and Human Services between 1997 and 2006, was one of those 13 serious social-scientific studies. Despite the tens of billions of dollars spent on the program over more than four decades, Head Start "has no demonstrable impact on [students'] academic, socio-emotional, or health status at the end of first grade," according to the Brookings Institution's Russ Whitehurst, who was on the HHS panel that designed the study.

But the dearth of social science in the policymaking process also results from the fact that the design of government policy today does not lend itself to social-scientific experimentation. Public programs are not crafted in ways that clearly define measurable criteria for success or that yield useful experimental data. And this design flaw cries out for correction. There is no reason that, given our growing ability to collect and analyze vast quantities of data, policymakers should not think of government programs as policy experiments — designing and administering such programs in ways that produce useful data and lend themselves to careful analysis, and providing dedicated funding to make sure this analysis is actually completed. Every law introducing a new public program, and every law reforming an existing one, should answer the question, "How will we tell if this works or not?"

There are a few simple steps that policymakers can take to answer that question, and to make the most of the growing abilities of empirical social scientists. First, Congress must do a better job of respecting federalism. Careful comparisons of different policies in 50 states (and many thousands of localities) can help illustrate what works and what doesn't — but that kind of analysis can succeed only if policies are permitted to differ from place to place. Put another way, policy experimentation requires decentralized government. And more decentralization allows for more experimentation: Indeed, just as the federal government should give more authority to the states, the states, too, should seek opportunities to devolve the authority to design programs to localities, comparing the effectiveness of different local programs when devising state-level solutions.

Second, Congress should systematically create experiments through legislation. The most obvious method is randomization — assigning different individuals or groups of individuals to different pilot programs at random before creating or reforming large public programs. Using this approach, government agencies could test different program designs, approaches to regulation, procurement policies, means of communicating with the public, or ways of preventing fraud and abuse before adopting them on a sweeping scale. Private businesses now often use such field experiments to guide their own decisions; government, with the help of social scientists, can do the same.
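For readers who want the mechanics spelled out, here is a minimal sketch of what such a randomized pilot looks like in code. The program variants, outcome measure, and effect size are all invented for illustration; the point is only the logic of random assignment and comparison.

    # A sketch of a randomized pilot: assign participants at random to two
    # hypothetical program variants, then compare average outcomes.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 1000
    # Random assignment: half of the participants to each pilot design.
    assignment = rng.permutation(np.repeat(["variant_a", "variant_b"], n // 2))

    # Simulated outcome (say, months until re-employment), with variant B
    # genuinely better by 0.8 months on average.
    outcome = np.where(assignment == "variant_a",
                       rng.normal(9.0, 2.5, n),
                       rng.normal(8.2, 2.5, n))

    a = outcome[assignment == "variant_a"]
    b = outcome[assignment == "variant_b"]
    t_stat, p_value = stats.ttest_ind(a, b)
    print(f"A: {a.mean():.2f} months, B: {b.mean():.2f} months, p = {p_value:.4f}")
    # Because assignment was random, a low p-value points to the program
    # design itself, not to differences among the participants.

Randomization is what earns the causal conclusion: since chance alone decides who gets which variant, any systematic difference in outcomes can be attributed to the programs rather than to the people enrolled in them.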

To be sure, such experiments will not always prove conclusively which policies should be pursued. Social phenomena often interact in complex ways, and the results of policy experimentation will need to be confirmed and refined. Nevertheless, even without providing immediate conclusive proof, such experimentation can change our views incrementally — showing what works and what fails, and allowing us to gradually understand why.

Congress and the president should also move to require the publication of all government data that do not undermine national security or personal privacy. Unfortunately, in last year's budget deal, Congress cut funding for the Obama administration's initiative for open-government programs (like the data.gov web site) from $35 million to $8 million. This is one budget cut that will actually end up costing the taxpayers more money in the long run: More accessible data about the operation of government is vital to understanding what works and what doesn't.

For its part, the Supreme Court can promote experimentation by embracing a jurisprudence of social discovery that reinforces federalism and permits the free flow of information. The Court should continue its revival of constitutional federalism, and should refrain from enlarging the scope of uniform federal rights imposed on the states through the elastic doctrine known as substantive due process. By constraining Congress and restraining itself from unnecessary intrusions into state autonomy, the Court can permit more experimentation and more opportunities for measuring the effects of state decisions — allowing us greater insight into subjects ranging from school choice to same-sex marriage.

Of course, empirical studies will not often directly change the minds of many ordinary citizens regarding controversial political and policy questions. But they can change policy simply by gradually changing expert opinions, which, over time, can exercise an influence over the views of policymakers, and even over the voting public.

In fact, the very notion of expert opinion is itself being transformed by the information revolution, requiring theories to be more thoroughly supported by data. New technologies are making it possible to bring expertise to bear in ways never before imagined — and we are only beginning to grasp some of the consequences of this new form of expert judgment.

EXPERT JOURNALISM

This new brand of expertise is perhaps nowhere more evident than in the emergence of novel forms of journalism. While the growing capability of the empirical social sciences allows us to better analyze the consequences of past decisions and policies, a media revolution is transforming the way policy debates occur in real time.

Most discussions of the "new media" of the information age focus on how those media — and especially their flagship format, the blog — allow more people than ever to have their voices heard. By breaking the monopoly of the traditional media on the means of reaching vast audiences, internet media can allow a diverse array of people to compete for meaningful numbers of readers, listeners, and viewers. This variety is generally celebrated as a way to bring more opinions to the surface — as if the traditional media brought objective analysis to bear while these new media offered a range of subjective views.

But the chief benefit of these new media is not simply the proliferation of viewpoints. Rather, it is precisely their potential to inject detailed factual analysis into our political and policy debates by increasing the availability of expert input. Blogs in particular can address issues at a level of specialization that the traditional media, aimed at very broad audiences, cannot sustain. In economics, law, education, energy, transportation, and a variety of other fields, we have seen the emergence of many specialized blogs that are published by experts and practitioners in those fields and read by other leading experts and practitioners (as well as by a larger public with an interest in those subjects). Within each field, these expert blogs often respond to one another, creating a networked conversation at an extraordinarily high level of sophistication. The best minds in these fields are essentially talking problems through with one another in public while the rest of us listen. And when the subject is a new law or policy proposed or enacted in Washington, the result is instant, intense, and detailed analysis and discussion among experts, available to any interested citizen — a public service of a sort barely imaginable before the advent of modern information technology.

This kind of online conversation can improve our society's (and government's) factual grasp of the policy world in three key ways. First, such specialized media provide incentives for greater accuracy, because they are likely to be both run and monitored by specialists. Most experts participating in such media — concerned about their reputations among their peers — will speak relatively cautiously, and will make efforts to be precise about factual claims. Furthermore, the very structure of the internet and web-based reporting — which includes links to relevant primary sources and other supporting material — promotes more factually grounded journalism. The nature of the medium pushes experts to make their sources clear, and so makes the foundations of disagreements among experts more apparent.

Second, as University of Tennessee law professor Glenn Reynolds (who runs Instapundit, one of the most popular blogs on the internet) has emphasized, such specialized media also feed into larger, more established media, providing the kernels for stories that reach the wider public. The beat of a major newspaper reporter these days no longer involves simply pounding the pavement and calling a few key sources: He must now go online to see what a network of plugged-in bloggers with expertise in the area he covers might be saying. Some major newspapers even host expert blogs themselves, which often improve on their reporters' work. The New York Times, for instance, operates a blog (called Economix) that brings the latest scholarship on economic policy to its readers' attention — and often argues for policies very much at odds with those advocated on the paper's editorial page.

Third, citizens are switching from television to the web to get their basic news about national and world affairs — a change that stands to improve deliberation and knowledge in our politics. Television emphasizes the personal, accentuating appearances and images. Web-based reporting and opinion, on the other hand, effectively mark a return to a text-based understanding of the world, which tends to encourage a more analytical eye. Though web journalism is increasing its use of videos and images, text still dominates — lending itself to a more policy-oriented evaluation of, for instance, candidates' positions on the issues rather than emotional connections to politicians' public personalities.

Some critics have expressed concerns that internet-based media actually reduce our democracy's ability to take account of new factual information; the danger, these critics argue, is that the new media cater to niches of political opinion, and have thus produced greater polarization among the public. But this view is not supported by the evidence. In fact, studies suggest that people who get their news online are exposed to a more diverse array of facts and viewpoints than those who get their news from television. Indeed, one recent study — conducted by Matthew Gentzkow and Jesse Shapiro of the University of Chicago — found that conservatives who get their news online ingest information that, on the whole, has an ideological slant equivalent to that of USA Today. Liberals who rely on web-based news, meanwhile, expose themselves to the rough equivalent of watching CNN. In neither case were the online offerings more one-sided than those that the survey's participants would have found off-line.

This informational jousting is all to the good. Just as more vigorous market competition improves consumer welfare by creating better products, more vigorous competition in ideas should improve public policy. Indeed, the advantages of dispersed media over more concentrated media are similar to those of democracy over oligarchy. Oligarchies might appear much more stable than democracies, because they involve less surface conflict. But that absence of conflict makes it harder to change course when facts change, or to build genuine, broad consensus around policies that will work. Far from a polarizing drag that has worsened our politics, then, the new media may offer a much-needed path to serious, factually informed debate.

In this arena, too, there are a few straightforward steps that policymakers might take both to promote the development of a more knowledge-based politics and to make the most of it. First, they should extend the legal protections afforded to journalists — most notably shield laws that allow for the protection of sources — to those working in new media, especially bloggers. Treating bloggers as less worthy of protection than traditional journalists represents a failure to understand the crucial role they now play in our political and policy debates.

Second, some advocates of campaign-finance restrictions suggest that blog postings assessing candidate statements or expressing support for (or opposition to) candidates near an election should be considered contributions to candidates, and therefore subject to regulation under campaign-finance law. Here again, it is a mistake to privilege the old media — whose coverage of candidates is not subject to such limitations — over the new. Restrictions on internet speech would undermine the ability of new media to hold our political system and policymakers to account. And they would curtail this ability precisely when the new media's facility with empirical data, and their swift applications of expertise, are most needed — around election time, when Americans are hungry for informed criticism of candidates' policies and platforms. More generally, we must be wary of campaign-finance "reform" that restricts voters' access to policy information generated by these debates. To be sure, campaign commercials are imperfect vehicles for conveying new policy understanding. In practice, however, canceled political advertisements are replaced not with policy seminars but with beer commercials — hardly an improvement.

One can thus hope that new information technologies will feed into one another, improving our grasp of the consequences of proposed policies. The rise of empiricism will provide stronger analysis for expert bloggers to put before the public. And this combination will help us better predict the future effects of policies, particularly with the help of another application of new information technologies: prediction markets.

BETTING ON THE FUTURE

Even as they improve our ability to judge past policies and current proposals, our rapidly advancing information technologies also offer the possibility of more accurately forecasting future events — with positive implications for public policy.

Prediction markets in particular can serve as a powerful tool for establishing expectations and projections regarding policy and politics (and other fields). As Michael Abramowicz explains in his excellent 2008 book Predictocracy, prediction markets function like commodity-futures markets, in which participants trade positions (or, in the parlance of commodities markets, "buy contracts") in the outcome of a particular event. A presidential election offers a simple example: The operator of a prediction market (like Intrade, the most popular online prediction-market company) will establish a market around the outcome "Barack Obama is re-elected in 2012." Futures contracts in that market carry a settlement value of $10: If Obama is in fact re-elected, people holding the contract will be able to trade it in for $10; if he is not, they will receive nothing. In advance of the event in question, contracts are priced through the interactions of buyers and sellers based on demand; the price of a contract at any given time is therefore taken to be a measure of the probability that the event will occur as judged by the market's participants. On November 6, 2011 — one year before the election — the price for the Obama re-election contract on Intrade closed at $5.00, meaning the market judged the likelihood of Obama's re-election to be an even 50%.

Because the profit or loss that a participant in the market experiences is a function of the difference between the price at which he bought the contract and the price at which he sells it, a prospective buyer who feels confident that Obama will win has a strong incentive to buy at the $5.00 price, since he can expect it to rise. (And if he buys a large number of contracts at a low price, he can make a lot of money if it rises.) An owner of contracts who now believes Obama will lose, meanwhile, has good reason to sell, since he can expect the price to fall further if Obama's fortunes plummet. In this way, the market rewards individual educated judgments, and then aggregates those judgments into an overall prediction of probability.
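The arithmetic here is simple enough to state in a few lines of code. The sketch below uses the Intrade-style numbers from the example above ($10 settlement, $5.00 price); the function names are ours, invented for illustration, not any market's actual interface.

    # Contract arithmetic for a prediction market with a $10 settlement.
    SETTLEMENT = 10.00  # payout per contract if the event occurs

    def implied_probability(price):
        """The market's judged probability: the price as a share of the payout."""
        return price / SETTLEMENT

    def profit(buy_price, sell_price, contracts=1):
        """A trader's gain or loss: the price change times the position size."""
        return (sell_price - buy_price) * contracts

    print(implied_probability(5.00))           # 0.5, i.e. an even 50% chance
    # A trader confident of re-election buys 100 contracts at $5.00; if the
    # price later rises to $7.00, he books a $200 gain.
    print(profit(5.00, 7.00, contracts=100))   # 200.0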

Such election markets have generally been more accurate than national opinion polls at predicting national elections, in part because their highly motivated participants are aware of poll results but also take account of other factors that could influence voters. In a thorough assessment of prediction-market accuracy in 2008, a group led by Joyce Berg of the University of Iowa found that, in predicting the results of that year's elections, prediction markets run by Iowa Electronic Markets proved more accurate than opinion polls 74% of the time. And in a 2010 study, Ian Saxon of the University of Nottingham compared the success of Intrade market values to that of opinion polls in predicting the winners of the 2004 and 2008 Democratic primaries. "In every time period considered," Saxon found, "market prices were a better predictor of the ultimate winner of both the 2004 and 2008 Democratic nomination contests."

Prediction markets are not entirely novel: In one form or another, they long pre-date the information-technology revolution. Formal markets for betting on the outcomes of presidential elections, for instance, began in the United States at least as early as the mid-19th century, and were common through the 1940s. As economists Paul Rhode and Koleman Strumpf showed in a 2003 paper, such markets were especially active around the turn of the 20th century, often forming in the lobby of the New York Stock Exchange building and involving many hundreds of participants. And they predicted election outcomes fairly well, at least in the month or so before those elections.

But the internet has allowed these markets to grow to a size unimaginable in those days (Intrade markets for election results, for instance, often involve tens of thousands of participants), harnessing an enormous amount of aggregated wisdom. Such markets can also be formed around a broad array of subjects and events, not just major elections — anything from the performance of the economy in a given month to who will win the Oscar for best actress. They can also establish markets around conditional events — that is, on the likelihood of one event occurring if another does. Before the 2008 election, for instance, Intrade allowed people to bet on the rise of American government debt conditional on Senator Obama's being elected and conditional on Senator McCain's being elected.

If this new tool of prediction were systematically directed to policy questions, it could well provide an extraordinary resource for both citizens and policymakers. This is especially true in formulating economic policies, whose outcomes depend heavily on human behavior that is very difficult to predict: how changing expectations will influence people's spending and saving, for instance, or how consumers will respond to various incentives and stimuli.

As an example, consider President Obama's proposal this fall to extend a partial payroll-tax holiday for an additional year. A prediction market could be created to forecast 2012 economic growth conditional on the payroll-tax holiday's being extended; another could be formed conditional on the extension's not being passed. Participants in one market could also be required to bet on both potential outcomes: Traders could, for example, be offered a market on whether economic growth would reach a certain level, with their options rising from, say, 1% to 4% in increments of a tenth of a percent; in considering those growth figures, traders would have to factor in the likelihood of a payroll-tax cut. Such a market would provide policymakers with a valuable indicator of public expectations regarding the effect of the tax holiday.
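One way to read such a bucketed market is sketched below. The closing prices are hypothetical (and the buckets coarser than the tenth-of-a-percent increments described above, for brevity), but the mechanics are general: each price, divided by the settlement value, behaves like a probability, and together the buckets imply an expected growth rate conditional on the tax holiday's passing.

    # A sketch of reading a bucketed conditional market. Each contract pays
    # $10 if 2012 growth lands in its bucket; prices are hypothetical.
    SETTLEMENT = 10.00
    prices = {  # growth bucket (%) -> hypothetical contract price ($)
        1.0: 0.50, 1.5: 1.25, 2.0: 3.00, 2.5: 2.75,
        3.0: 1.50, 3.5: 0.75, 4.0: 0.25,
    }

    probs = {g: p / SETTLEMENT for g, p in prices.items()}
    total = sum(probs.values())  # should be near 1.0 in a liquid market
    expected_growth = sum(g * pr for g, pr in probs.items()) / total
    print(f"implied expected growth: {expected_growth:.2f}%")  # 2.34%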

It is true, of course, that even accurate conditional markets would not necessarily settle policy disputes, because of the familiar problem of separating correlation from causation. Building on the example above, it might turn out that the market predicts higher long-term growth without the payroll-tax holiday than with it. In that event, some secondary phenomenon tied to the failure to enact the tax cut might be responsible for the expectation of additional economic growth. The successful enactment of the tax cut, for instance, might be seen as reflecting or even boosting President Obama's re-election chances; traders might well assume that four more years of Obama would ultimately produce lower economic growth than if his failure to enact a tax cut resulted in the election of a Republican opponent, yielding a president better equipped to revive the economy. Thus, while prediction markets may accurately foretell a policy outcome, they may not always indicate clearly why those effects will come to pass. Even so, the prediction market will still hold value — because it will force to the surface other factors (such as the public's confidence in the president) that may themselves be relevant in setting policy or in voting.

In fact, one way to address this problem of correlation and causation would be to create even more (and more specific) prediction markets, which would test for different conditional scenarios in an effort to arrive at a clearer sense of the most relevant factors influencing likely policy outcomes. Moreover, as Todd Henderson, Justin Wolfers, and Eric Zitzewitz suggested in a 2010 paper, empirical social scientists can take the information from prediction markets and conduct "event studies" — research measuring the effects of events on markets. For instance, if the prices in a market for predicting economic growth conditional on the absence of a payroll-tax cut varied depending on the latest poll of Obama's popularity, such an event study might suggest that it was the prospect of Obama's defeat that was driving the prediction of economic growth, not the failure of the payroll-tax cut. But if the polls had relatively little effect on those prices, the event study would suggest that Obama's popularity and the economic effect of keeping the payroll-tax rate the same were not correlated.
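At its simplest, an event study of the sort Henderson, Wolfers, and Zitzewitz describe reduces to a regression of market price changes on poll changes. The sketch below uses simulated data and an invented "true" sensitivity; the fitted coefficient is what a researcher would actually examine.

    # A sketch of an event study: do day-to-day changes in a conditional
    # growth market track changes in the president's poll numbers?
    import numpy as np

    rng = np.random.default_rng(7)
    days = 120
    poll_changes = rng.normal(0, 1.0, days)  # daily approval shifts (points)
    # Simulate a market that partly responds to the polls (true beta = 0.3).
    price_changes = 0.3 * poll_changes + rng.normal(0, 0.5, days)

    beta, intercept = np.polyfit(poll_changes, price_changes, 1)
    r = np.corrcoef(poll_changes, price_changes)[0, 1]
    print(f"sensitivity to polls: {beta:.2f}, correlation: {r:.2f}")
    # A sensitivity near zero would suggest the market's growth forecast is
    # not being driven by the president's electoral fortunes.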

This analysis underscores a more general point: Prediction markets benefit from empiricism, and empiricism benefits from prediction markets. Combined, they can be more than the sum of their parts. Researchers collect information about past policies, and bettors in prediction markets rely on this kind of information when wagering on future results. But those bettors also add dispersed information of their own, and in this way, prediction markets create more data — providing a basis for yet more empiricism. And by subjecting prediction markets to backward-looking empirical analysis, we can learn under what sorts of circumstances we should have a great deal of confidence in such markets and when we should have less — making prediction markets even more useful tools for mapping out the future.

Yet another benefit of prediction markets is their ability to constrain expert bias. In the 1970s, Paul Ehrlich — a leading critic of population growth — argued that population was outstripping the world's natural resources. In 1980, economist Julian Simon made a wager with Ehrlich that five key scarce commodities of Ehrlich's choice (he chose copper, chromium, nickel, tin, and tungsten) would decline in price by 1990, suggesting that fears of a coming era of scarcity were overblown. Simon won the bet. Prediction markets would institutionalize such challenges to experts, providing better incentives for dispassionate opinions on everything from the effects of climate change to the reliability of government revenue projections. Expressed another way, they would force experts to put their money where their mouths are. Thus markets could help us better know the social facts on which political policy must be based.

Despite their manifold advantages, however, prediction markets are not immune to critiques, the most serious of which involve concerns about manipulation. If prediction markets come to play some role in informing policymakers, detractors argue, won't people with an interest in the outcomes of policy debates try to sway those markets to their advantage? For instance, people who stand to gain from a president's stimulus package might bid up the conditional markets that show favorable economic indicators should the stimulus pass.

This danger is real, although it is far less grave than the danger of such special interests directly influencing the political process, bending lawmakers to their will with no transparency and little public recourse. Though there will always be strong incentives for people to manipulate policymaking to their benefit — a basic human trait that prediction markets cannot change — such markets at least create countervailing incentives for people who do not have a direct stake in a given policy outcome. By pegging financial rewards to predictive accuracy, these markets inevitably draw in people whose only interest is accurately projecting future events (and profiting from it).

Indeed, economist Robin Hanson has shown that people who try to manipulate prediction markets in fact create higher returns for those trying simply to make accurate predictions. Because manipulators consciously aim at the wrong price, they function as sheep who attract wolves interested solely in accuracy (and profits). Much as investors smell opportunity when they know a stock is dramatically over- or under-valued, participants in prediction markets will pounce when they notice that the price for a given outcome is seriously miscalculated and see the opportunity for a major windfall. Thus, in the same way that it is hard to manipulate financial markets, it is difficult to push prediction markets too far out of alignment. Though investors in political shares may have incentives to bid up their candidates' prices in the hope of skewing elections, such spikes in electoral prediction markets will presumably represent either manipulation or ill-founded exuberance. They are therefore likely to be short-lived, as counter-traders see opportunities for profit and bid the prices down.

Should prediction markets become more widely used, especially to test potential policy ideas, they could prove quite valuable in helping to strengthen the base of knowledge underlying government decision-making and, along with empiricism and dispersed media, in creating a political culture that focuses more on the consequences of policy. That culture, in turn, can encourage those in public office to take these markets — now often viewed as a source of entertainment or a subject of curiosity — seriously.

Those in public office can also take some simple steps to facilitate the use and reach of prediction markets. The first is straightforward: removing the legal impediments to the operation of such markets. In particular, Congress should exempt at least policy-oriented prediction markets from the Unlawful Internet Gambling Enforcement Act, a law that has generated a powerful chilling effect on the willingness of companies to service this market. Because of that law, prediction markets that operate in the United States today generally cannot permit bets using real money, which obviously limits their ability to attract participants and to provide them with incentives for accurate predictions.

A few prediction markets, like the Iowa Electronic Markets, have received special permission from the Commodity Futures Trading Commission to trade using real money as part of a recognized academic experiment. Others, like Newsfutures, use fake money or award prizes for accurate bets. Others respond by basing their operations outside the United States; Intrade, for instance, is based in Ireland — where internet gambling is legal — even though the site clearly caters to Americans (most of its markets involve American politics, policy, and culture). But there is no good reason to prohibit prediction markets from using real money in trades. Prediction markets are not going to become an arena for the criminality and addiction that so often accompany other kinds of betting, and lawmakers should surely be able to distinguish between prediction markets and internet gambling.

Second, in order to enable prediction markets to function better, government at all levels should increase the availability of detailed policy proposals in advance of votes. Before a congressional committee votes on proposed legislation, for instance, or before a vote is taken in Congress or the president signs a bill, the exact language at issue and all relevant amendments should be posted for all to see. Such a reform would of course build on a growing public demand for greater government transparency. Indeed, President Obama promised during his 2008 campaign to wait five days before signing any non-emergency bill, and the current House of Representatives has required that a bill be posted online three calendar days before a vote — but these pledges have proved elastic. In order for prediction markets to be as beneficial as possible, let alone to satisfy citizens' demand for more honest and open lawmaking, such requirements should be implemented — and, just as important, honored — throughout American government.

Prediction markets also need metrics for the results of public policy that are free from manipulation — so that outcomes can be clear, and so that winners and losers among the bettors can be determined. Because these indicators will also often be compiled by the government, the need for such data argues again for transparency and for making all government data public. Once more, the needs and products of empiricism, of the new media of dispersed expertise, and of prediction markets can build upon one another.

TOWARD A POLITICS OF KNOWLEDGE

These are but three examples of a growing trend toward an information-age politics of knowledge. It is a trend that stands to meet a pressing need in American public life. Despite the immense exertions of American government in recent decades, with massive public programs spending huge amounts of money to achieve social ends, we know little about how well such exertions really work and what we should expect of various alternatives.

New technologies offer the power to narrow that knowledge gap. We should not overestimate their potential, of course: Politics and policy will always be highly controversial, and competing claims — some far less responsible and well grounded than others — will always continue to fly. Bias, and even outright falsehoods, will always be part of the process. But by employing some of the tools made possible by information technology, policymakers and policy experts can at least alleviate some of our ignorance about what our government does — and thereby help us perform the task of self-government that much better. In this time of rapid technological and social transformation, we need that help more than ever. The capacity of a society for learning must match its capacity for change.

John O. McGinnis is the George C. Dix Professor in Constitutional Law at Northwestern University Law School. His book, Accelerating Democracy: Matching Governance to Technological Change, will be published in 2012.

