FROM ISSUE NUMBER 27 ~ SPRING 2016

The New Central Planners

OREN CASS

The economists of the International Monetary Fund want you to know that they can help with, as they titled a 2014 book, Getting Energy Prices Right. They accuse the world of providing trillions of dollars in annual fossil-fuel "subsidies," not because governments are paying such subsidies but because they are neglecting to impose needed taxes to account for environmental damage. Deference to the market, in this view, is itself a distortion, while reliance on a bureaucratically set price is efficient and proper. In 2013, IMF experts calculated 2011 "subsidies" totaling $2.0 trillion. In July 2015, they changed that estimate to $4.3 trillion. So much for "getting it right."

In Washington the following month, the Environmental Protection Agency released its "Clean Power Plan" to push the U.S. electricity sector toward higher-cost, lower-carbon energy sources. The plan, EPA announced, would avoid 300,000 missed workdays and school days each year. Even better, the plan would save the average family more than $80 on its annual utility bill.

Analyses like these are the counterfeit currency of the administrative state. To disguise the value judgments inherent in their regulatory agendas, economists and policymakers justify their proposals as hyper-rational, politically neutral responses to "market failures." To prove these proposals enhance economic efficiency and societal welfare, they produce "cost-benefit analyses." Free markets and price signals work better, they tell us, when we embrace their recommended taxes, subsidies, and mandates. Any rule that requires $1 million of equipment to reduce by 1% the risk of death for 100 people is just common sense; one requiring $1 billion of equipment to reduce by less than 1% the emission of carbon dioxide for one year is the only responsible choice.

All this may be true on classroom blackboards and in abstract computer models. But technocrats armed with sharpened pencils and boundless egos tend not to calculate reliably which market prices are "too low" or which resource re-allocations would produce benefits that exceed costs. The easy recourse to the language of market failure — the claim that somewhere some social cost is going unpaid — reflects a subjective policy preference when we do not know the cost's size or incidence, the extent to which it is unpaid, or what countervailing benefits might be lost through a correction. A cost-benefit analysis that summarizes how many "lives" a rule will "save," how many "jobs" will be "created," and how much these achievements are "worth," is little more than marketing propaganda when the terms are unhelpful abstractions and the estimates unreliable.

By asserting that their frameworks, tools, and data prove government action will enhance market efficiency, economists are engaging in a new form of central planning. It differs in degree from traditional command-and-control socialism, but not in kind. It is less absolute — the market economy provides a baseline until an intervention occurs. It is less totalitarian — plans are executed through rules and incentives that alter the behavior of market actors instead of through the direct assignment of resources. But it is rooted in the same conceit that technical expertise can outperform market forces and deserves deference where applied. It suffers from the same challenges of incomplete information, heterogeneous preferences, and subjective values. It relies on the same refusal to acknowledge the inherent tradeoffs that underlie the allocation of scarce resources. And, as a result, it also reduces democratic accountability, economic efficiency, and total welfare.

The alternative to technocratic planning is not post-modern, nihilistic resignation to the impossibility of evaluating policy. It is an administrative state designed around a recognition that market signals and political preferences provide a better guide than can bureaucratic analysis, that those signals and preferences vary locally, and that optimization requires constant recalibration. Many current efforts at regulatory reform focus on increasing the influence of cost-benefit analysis, but in fact we need to reduce it. Management within the executive, delegation from the legislature, and oversight by the judiciary should all assume that technocratic expertise lies only in designing the specific rules to implement when there is a political demand for intervention, not in determining when such interventions are appropriate.

Our current model of regulation resembles a game of darts, in which we hope to equip (or constrain, depending on one's point of view) the regulator with tools to ensure he strikes a hypothetical bull's-eye that maximizes welfare. But this produces a central-planning regime in which the policymakers deny the existence of tradeoffs, claiming they can achieve their regulatory goals while simultaneously improving on the market's economic outcomes. There is no such bull's-eye in the real world, and even if there were, the regulators would not hit it anyway.

We need to replace that metaphor with one of a high-wire act, in which the regulator must cross an ever-shifting tightrope with great care and constant adjustment, responding constantly to external feedback. Regulatory policy must tighten and loosen in response to changing economic conditions and societal preferences, not ratchet ever tighter with every new justification invented. A system that emphasizes trial and error, political accountability, and the management of tradeoffs will perform far better than one relying on a cadre of central planners who seem to believe the law of gravity no longer applies.

THE KNOWLEDGE PROBLEM

The new central planner justifies his regulation as the solution to a "market failure," where some characteristic of the actors in the market, the information available to them, or the resources at stake is producing an inefficient outcome. Most prominent among market failures is the "externality": a benefit or cost of an economic activity borne by a third party and thus not accounted for by the actors.

The theory is fine. If producing a ton of steel costs $50 but also releases $10 of pollution, society should want the manufacturer to behave as if the ton's cost were $60. A $10-per-ton tax would accomplish that. This idea can be formulated as a more general rule; as Nathaniel Keohane and Sheila Olmstead write in Markets and the Environment, the basic text of the Harvard Kennedy School's course in environmental economics: "A tax on pollution equal to the marginal damage at the socially efficient level of pollution will achieve the socially efficient outcome."

But what is the socially efficient level of pollution? A market economy traditionally relies upon a market price to facilitate an efficient level of production and consumption, yet that signal is precisely what the new central planner wishes to subvert. Even if he can determine that a marginal ton of pollution at the current level costs $10 (which he cannot, because of the challenges in cost-benefit analysis discussed below), he has considered only one external effect of the economic activity and has no idea of the magnitude or even the direction of its net social impact, nor whether the socially efficient level is higher or lower than the current one. Even if he could reliably determine the net impact of that factory's steel production for each marginal ton, he could not extend the conclusion to other locations or other points in time. Nor would his calculation identify the potentially counterintuitive and counterproductive distributional implications of his intervention.

The greatest obstacle to counteracting an externality is that neither the externality nor its "solution" acts in isolation. The case of fossil fuels provides an excellent example. Recall the IMF's conclusion that market prices for the fuels represented a multi-trillion dollar "subsidy" because negative externalities went uncorrected. But deep in that report, the Fund acknowledged that, at the "right" price, electricity and gasoline consumption in Asia would fall by about 25%. Any positive externalities of that energy use among predominantly impoverished communities — from general economic growth and associated public-health improvements, to the education of children whose schools need heat and light — would disappear. Depending on the relative value of those positive externalities compared to the negative ones emphasized by the IMF, the socially optimal level of fossil-fuel consumption may actually be higher, lower, or equal to the prevailing market level.

More generally, the same industrial activities that produce negative environmental externalities also provide employment, wages, and tax payments, offering a variety of social and public-health benefits to families and communities. Anne Case and Angus Deaton of Princeton University provide a prominent recent example, finding that increased substance abuse, liver disease, and suicide among non-Hispanic whites aged 45 to 54 had led to the equivalent of nearly 500,000 extra deaths between 1999 and 2013, affecting most dramatically those with a high-school education or less. The authors suggest that "economic insecurity" may be a driver. Indeed, it would be surprising if declining economic fortunes for blue-collar workers did not produce an uptick in such maladies, just as surely as an increase in pollution from industrial facilities might produce an uptick in other health problems.

One can add any favored or disfavored social phenomenon to the ledger. The IMF study includes the congestion, accidents, and road damage associated with driving petroleum-powered vehicles. Researchers in the United Kingdom have linked road-traffic noise to strokes and death, a finding that should interest residents protesting the construction of subsidized but noisy wind-turbine farms near their homes. If our ton of steel were manufactured at night, perhaps "light pollution" — which EPA administrator Gina McCarthy has declared to be "in our portfolio" — would come into play, to say nothing of the effect night shifts may have on family and social-capital formation. On the other hand, long-term unemployment — including of those who might otherwise work a night shift — has been linked to poor outcomes for the children of the unemployed.

The eager analyst may be tempted to begin quantifying all the steel ton's externalities, so that a careful calibration of taxes and subsidies to offset each will get the price just right. But such an exercise is ludicrous on its face, both because each individual quantification will be wrong and because unintended consequences will be ignored. If government were capable of it, central planning might actually work. Instead, it mandates children remain in bulkier car seats for more years, then discovers with dismay that families purchase larger and less fuel-efficient cars, then mandates automakers improve fuel-efficiency and promises it will save those families money as gas prices rise, then provides large subsidies for the purchase of electric vehicles that are too small for those families and out of their price range anyway, then watches gas prices plummet, then celebrates that gas prices are low while lamenting they are not higher.

The good news is that such a comprehensive analysis of all relevant externalities is unnecessary, because it amounts to asking a much simpler question: Is the steel factory, given all its social benefits and costs, helping or harming the community in which it is located? If the community sees the steel factory — or, more precisely, the factory's marginal unit of production — as beneficial, restricting its output through a tax is backward. If anything, the community would prefer a subsidy that expands production further. The carefully calculated cost of the associated pollution determines nothing.

The question is also exceedingly situation-specific. A struggling community with clean air and large numbers of unemployed blue-collar workers probably favors production increases — a colloquial way of observing that the net externalities are likely positive, an insight confirmed by the enormous tax breaks new facilities frequently attract. A booming metropolis struggling to reduce smog, meanwhile, may view the marginal cost of further pollution as exceeding the marginal benefit of an additional job. A coastal enclave of educated elites may care mostly about increased traffic and an influx of new workers and thus have no interest in any facility regardless of either job creation or pollution.

Such NIMBYism raises the further question of whose costs and benefits should even count and whether distributional impacts matter. If the coastal enclave's relevant population expands to include outlying working-class communities, its analysis would change. When the costs and benefits act at different scales, managing the balance of equities becomes more difficult still. In some situations the air pollution might spread downwind while jobs remain local, but in others there may be regional economic benefits while only the facility's immediate neighborhood suffers the stench. Particularly in the case of environmental regulation, concentrated costs will often fall on labor and capital within an industry, and on its customers through higher prices, while diffuse benefits will accrue to different groups entirely.

It is also critical to contemplate how changing property values affect who the ultimate beneficiaries will be. Where an externality causes a constant level of harm over time, the value of property in affected locations should be lower by the net present value of that harm (that is, all future harm discounted back to today). Indeed, in some situations economists actually use housing-price differentials as indicia of how much pollution costs. But like a property tax, the entire future burden of an externality will be borne by the owner of property at the moment the burden becomes known. Renters or future buyers pay less for their housing, so they should find themselves no worse off than had the externality never existed. Conversely, where the net externalities generated from an event — such as the opening of a plant — are positive, the entire benefit is captured immediately. Future residents will pay more to move into town.
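The capitalization logic can be made concrete with a stylized calculation: a constant annual harm, discounted as a perpetuity, lowers a property's value by the annual harm divided by the discount rate. The numbers below are hypothetical illustrations, not estimates drawn from any study:

```python
# Stylized sketch: a constant annual externality is capitalized into the
# property's price as the net present value of all future harm.
# All figures are hypothetical, chosen only for illustration.
annual_harm = 1_000     # yearly cost of the nuisance to a household ($)
discount_rate = 0.05    # assumed discount rate

# A perpetuity of constant harm has present value harm / rate.
price_discount = annual_harm / discount_rate
print(f"Property value lower by ${price_discount:,.0f}")  # $20,000
```

On these assumptions, a renter or later buyer pays $20,000 less for the property (or correspondingly lower rent) and is thus, in expectation, compensated in advance for the ongoing harm.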

The transmission of externalities through property values turns the concept of "environmental justice" on its head. Activists commonly claim that the distributional effects of regulation support their agenda, suggesting that low-income and minority communities suffer disproportionately from pollution and other externalities. But that reasoning should apply only to residents who owned property at the time externalities were introduced. A low-income family renting an apartment near the steel plant should pay lower rent than in the absence of the steel plant by precisely the amount of harm the steel plant causes them. Removing the steel plant's pollution would not benefit them, because the value of their apartment and thus the rent charged would increase accordingly. It should not be surprising if heavily polluting plants are found disproportionately in low-income communities; those residents are the ones most likely to live in the accordingly lower-cost housing. Without the presence of the pollution, they might very well be priced out.

Nor do indications point toward low-income communities viewing the marginal cost of pollution as exceeding the marginal benefit of additional industrial activity. To the contrary, as Politico reported perplexedly last September, "President Barack Obama's aggressive environmental agenda is running into a surprising source of opposition: Black elected leaders." The article quoted a letter to President Obama from Steve Benjamin, mayor of Columbia, South Carolina, and president of the African American Mayors Association, explaining that "mayors, county officials and governors still face the challenge of curtailing ozone while expanding the industrial production, construction, and infrastructure projects that create jobs and grow our tax base." In general, the widespread enthusiasm that accompanies the announcement of a new plant's construction, and the dismay that accompanies notice of an impending closure, suggest that current levels of environmental regulation are out of balance with society's needs.

The many complexities involved in the concept of "marginal damage at the socially efficient level of pollution" demand both great humility and a preference for heterogeneity in regulation. If the socially efficient level differs based on community circumstances, the regulatory regime should differ as well. If individuals differ in their own preferences, their freedom to choose among a heterogeneous set of communities should produce far greater welfare than would a single standard. And where a national policy might be necessary, or for whatever reason be chosen, those setting it should recognize their decision is fundamentally political and irreducible to technical factors.

When the new central planners isolate an externality and advocate a tax, subsidy, or other intervention as the efficient response, they are not engaged in objective economic analysis. Rather, they are making the same age-old assertion that "I like (or don't like) something and wish to use government power to create more (or less) of it." Economic terminology does not confer neutrality or entitle the claim to a presumption that society would in fact benefit.

COSTS AND BENEFITS

The new central planner translates a theoretical market failure into a tangible claim of welfare improvement through a cost-benefit analysis. The emphasis on these analyses began as a conservative response to unchecked regulatory excesses, though from the present vantage point it can seem hard to imagine a time when evaluating the costs and benefits of a regulation was novel. President Reagan's Executive Order 12291 initiated what then-professor Elena Kagan described in the Harvard Law Review as "a sea change" in the administrative state, requiring major rules to undergo a "regulatory impact analysis" including a cost-benefit analysis and "determination of the potential net benefits of the rule."

The approach of the National Transportation Safety Board in a conflict with the Federal Aviation Administration illustrates the devastatingly foolish mindset the order aimed to counteract, and the valuable role a cost-benefit analysis can play. As any parent knows, a child under two years old can fly without purchasing a ticket by sitting on an adult's lap. But in 1994, a DC-9 crashed while attempting to land in Charlotte, North Carolina. Thirty-seven of 52 passengers were killed, including a nine-month-old infant held by her mother. The NTSB concluded that "if the child had been properly restrained in a child restraint system, she might not have sustained fatal injuries," and thus issued a formal safety recommendation requiring that small children have their own seats and appropriate restraint systems.

In response, the FAA used the number of annual aviation accidents with survivors and the share of passengers under the age of two to estimate that the proposed regulation could save no more than five infant lives per decade. With more than 6 million such infants flying each year, and assuming a ticket price of $200 per person, the cost per life saved would exceed $2 billion. Further, as the FAA noted, the additional cost would lead some families to drive in situations where they otherwise would have flown, and deaths resulting from "diversions" to far more dangerous highways would cause many more infant deaths than the new rule could prevent. Amazingly, the interagency battle continued for 12 years before the NTSB, without changing its opinion, "reluctantly concluded that it cannot convince the FAA to take the action recommended" and closed the matter.
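The FAA's back-of-the-envelope arithmetic can be reproduced in a few lines, using the figures given above (the $200 fare and the five-lives-per-decade ceiling are the FAA's own assumptions):

```python
# Reproducing the FAA's cost-per-life-saved estimate from the passage.
infants_per_year = 6_000_000   # infants under two flying annually
ticket_price = 200             # FAA's assumed fare per newly required seat ($)
lives_per_decade = 5           # FAA's upper-bound estimate of lives saved

annual_cost = infants_per_year * ticket_price        # $1.2 billion per year
cost_per_life = annual_cost * 10 / lives_per_decade  # over a decade
print(f"${cost_per_life / 1e9:.1f} billion per life saved")  # $2.4 billion
```

And that figure counts only the ticket cost, before any of the additional highway deaths from families diverted to driving.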

It is not hard to see why a president might find it necessary to order that his agencies adopt an analytical framework, apply common sense, and report upward on their decision-making process. Cost-benefit analysis has successfully increased transparency within the bureaucracy, placed checks on the worst proposals, and forced consideration of unintended consequences and less-costly alternatives. As a management tool within the executive branch, it is critical.

But conservatives have been hoisted with their own petard. An entire regulatory-industrial complex has sprung forth from the bureaucracies, academic institutions, and private-sector consulting firms to construct analyses that will validate whatever regulation might be proposed. The executive branch peddles the result — now closer to a sales brochure than a deliberative document — to Congress, the courts, and the public in defense of its actions.

Perhaps the greatest flaw in cost-benefit analysis runs parallel to one that bedevils the identification of externalities: the subjectivity inherent in deciding what factors to include. Take the EPA's recent effort to tighten limits on atmospheric ozone levels. Its cost-benefit analysis identified between $19 billion and $38 billion in annual benefits as compared to $15 billion in annual costs. But the comparison is apples-to-oranges. The EPA cast its benefit net widely, counting whatever public-health benefits it could infer from past epidemiological studies and finding two-thirds of its total not from ozone reductions but rather from the "co-benefits" of incidental reductions in other pollutants. For example, missed workdays for mothers staying home to care for asthmatic children — avoided thanks to cleaner air — are valued at $75 each.

On the other side of the ledger, only the most direct compliance costs facing regulated facilities are considered. Even then, the analysis relies on the appearance of as-yet-uninvented technology (and its hypothetically declining cost) to make compliance plausible and affordable. In 2011, when President Obama (facing re-election) rejected the EPA's first attempt to tighten the ozone standard, he explained, "I have continued to underscore the importance of reducing regulatory burdens and regulatory uncertainty, particularly as our economy continues to recover." But the EPA's final 2014 analysis of cost excludes any consideration of how the policy might affect direct economic indicia like energy prices, employment levels, and economic growth, to say nothing of the broader socioeconomic impact of restricting industrial activity across broad swaths of the country. No consideration is given to the damage increased regulation and uncertainty does to the economy's dynamism, or the opportunity cost of firms never started and ideas never pursued.

Such things may seem unquantifiable, but that challenge is no different from the one faced in the translation from air-pollution levels to premature death to a dollar value. When the president of the United States says his policies are saving "tens of thousands of lives each year" and producing "hundreds of billions of dollars in benefits for the American people," one assumes the claim must have a solid basis. But few who see that particular sausage being made would be inclined to ever consume it again.

The claim of lives saved, for instance, does not represent actual lives actually saved. In fact, EPA offers no evidence of the relevant pollutants at the relevant levels ever causing a single death. Instead, it relies on epidemiological studies showing that deaths tend to increase slightly (on the order of 1% to 2% in the case of ozone) on days when atmospheric pollution concentrations are significantly higher. Thus the suggestion is that, by reducing those concentrations, each person faces some minutely smaller chance of dying. When EPA reports its rule will avoid 630 premature deaths from short-term exposure, it means only that each American's risk of death will be reduced by 0.0002%.

To determine the value of that risk reduction, EPA must then use the "Value of a Statistical Life," calculated in large part from wage-risk studies that examine the wage premium given workers in high-risk occupations. For instance, as one Harvard University study sponsored by the EPA found, male blue-collar workers in higher-risk industries earned an additional $0.32 per hour. This was nearly three times the equivalent premium for female blue-collar workers. For white-collar workers, there appeared to be no wage premium at all. Statistical analysis nevertheless translated these findings into a "value" of approximately $10 million per life, roughly the figure the EPA uses today; hence the billions of dollars in claimed benefits.

Table 5-20 on page 305 of the 575-page Regulatory Impact Analysis for the EPA's proposed ozone standard shows a "monetized ozone-only benefit" of $6.4 billion for the prevention of premature deaths. When combined with its estimated benefits from the coincidental reduction of other pollutants, the total benefits of $19 billion to $38 billion exceed the $15 billion in estimated compliance costs. But there is no $6.4 billion. There is only a statistical relationship between ozone levels and mortality showing a lower level could lower risk of death for some people on particular days by a thousandth of a percent, and a finding that certain blue-collar workers in certain higher-risk industries receive slightly higher wages.
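The two headline numbers in this passage are linked by simple arithmetic, sketched below. The population figure is my rough assumption; the avoided-death count and VSL are the EPA figures quoted above:

```python
# How the EPA's headline figures decompose, using numbers from the passage.
us_population = 320_000_000             # rough U.S. population (assumption)
premature_deaths_avoided = 630          # EPA estimate, short-term ozone exposure
value_of_statistical_life = 10_000_000  # approximate VSL used by EPA ($)

# Per-person risk reduction implied by 630 avoided statistical deaths:
risk_reduction_pct = premature_deaths_avoided / us_population * 100
print(f"{risk_reduction_pct:.4f}%")  # roughly 0.0002%

# Monetized benefit: statistical deaths times the VSL.
monetized_benefit = premature_deaths_avoided * value_of_statistical_life
print(f"${monetized_benefit / 1e9:.1f} billion")  # $6.3 billion
```

Multiplying a thousandth-of-a-percent risk change by a wage-premium-derived "value" is the entire distance between the epidemiological finding and the $6.4 billion line item.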

Perhaps that is the best possible translation of environmental harm into economic cost. The analysis does have some use, insofar as it provides a basis for comparing the relative harm of various pollutants or the relative cost-effectiveness of different mitigation strategies. But it does not follow that society should spend up to 6.4 billion actual dollars to achieve a 6.4 billion "dollar" benefit, or 15 billion actual dollars for the full 19 billion to 38 billion "dollars" of benefit in the EPA's analysis. Nor does it follow that an economist can confidently use these estimates to design a tax that will improve the market's efficiency.

Standard practice demands that analysts employ any means necessary to monetize benefits, but that no such respect be afforded costs. "We start by considering how economists think about the costs and benefits of environmental protection, and how those might be measured," write Keohane and Olmstead, as they introduce the Kennedy School's public officials of tomorrow to first principles. "While costs are relatively straightforward, benefits require considerable thought, as we shall see." Are not costs deserving of considerable thought as well?

Thanks to the past 40 years of environmental regulation, an extensive literature now documents its economic effects. For instance, a 2000 study in the Journal of Political Economy found counties in "nonattainment" with Clean Air Act standards (as many more would be under the EPA's new ozone rule) saw the construction of new plants in polluting industries decline by 26% to 45%. A 2001 NBER working paper found that, between 1972 and 1987, such counties lost 590,000 jobs and $75 billion in economic output. More broadly, a 2013 study in the Quarterly Journal of Economics found that, after the passage of the 1990 Clean Air Act amendments, "the average worker in a regulated sector experienced a total earnings loss equivalent to 20% of their preregulatory earnings." In Consequences of Long-Term Unemployment, the Urban Institute provides a helpful literature review of the many findings on how such economic outcomes lead to "declining human and social capital," "impacts on future labor market attachment," "impacts on physical and mental health," "effects on children and families," and "impacts on communities."

Beyond the concrete effects within affected industries, policymakers committed to giving benefits and costs equal attention could also examine the macroeconomic implications of increased regulation. The Mercatus Center, for instance, has recently launched RegData, a comprehensive database quantifying industry-specific federal regulations. The regulatory equivalent of studies measuring concentrations of pollution in the atmosphere, RegData allows analysts to identify relationships between regulation and outcomes like economic growth and productivity. One early study found that from 1997 to 2010, the least-regulated industries experienced productivity growth at twice the rate of the most-regulated industries.

The point is not that inclusion of these factors could make a cost-benefit analysis truly robust, but rather that abstractly constructed benefits deserve no special claim to respect. If one defends the Value of a Statistical Life as "better than nothing," one must equally defend the best available estimates for all the economic drag and social cost a regulation can impose. Consideration of such effects would at least provide a more complete picture and, as on the benefits side, offer a mechanism for comparing regulatory approaches. But the output in dollar terms would still not represent real dollars, nor even "dollars" equivalent to those derived from health effects. No equation answers the question of whether a fractional reduction in public health is worth a fractional increase in productivity growth.

Further, the challenges of cost-benefit analysis go beyond the technical. Correcting for the process's structural biases still leaves it controlled by bureaucrats who lack either the inclination or capacity to approach their task reasonably. When the new central planners account for economic effects, their models inexplicably conclude that more aggressive regulation will also produce superior economic outcomes. Tradeoffs, it seems, are for the non-expert.

Thus, former EPA administrator Lisa Jackson described the Waxman-Markey cap-and-trade proposal as "a jobs bill." Current administrator Gina McCarthy has argued that recent regulations are "not going to be a detriment to the economy. They are going to grow jobs, they are going to allow us to grow economy, but they are also going to be extremely beneficial to consumers. It's talking about saving the money." She has also claimed that the Clean Power Plan, aimed primarily at forcing utilities to replace lower-cost sources of electricity with higher-cost ones, will reduce consumers' electric bills. The Department of Energy, approvingly analyzing its own proposed energy-efficiency requirements for commercial refrigeration equipment, assumed that increased prices would produce no decline in purchases and further concluded the new requirements would increase both employment and wages. Experienced analysts have no difficulty getting the answer they want while obscuring critical assumptions behind layers of analysis that the public, other branches of government, and at times even their own politically accountable superiors cannot penetrate.

Of course, industry groups know this too and produce their own analyses showing how each proposed rule will lead to Armageddon. The National Association of Manufacturers commissioned a study by NERA Economic Consulting that found the ozone rule (assuming an especially strict standard) could cost $270 billion per year and eliminate 3 million jobs. But while industry and agency alike have preferred policy outcomes that their analyses are designed to support, outcomes settled long before any analysis began, the default assumption is that the government analysis is "neutral" while the industry analysis is "biased." The more rational view is that neither analysis answers the underlying policy question, which depends on a political prioritization of competing societal interests.

Only an extraordinarily incompetent bureaucracy would put forward a proposal so weak that it could not be shown attractive by its own cost-benefit analysis. It is hard to tell when the new central planners genuinely believe their analyses showing that regulations will lower prices and create jobs, and when they simply deploy those analyses as a means of advancing their policy agenda. Certainly, both phenomena have been prevalent in central-planning regimes throughout history. Neither leads to a rational, let alone economically efficient, regulatory regime.

THE SPREAD OF PLANNING

This discussion thus far has focused on environmental regulation, from which the new central planning emerged and from which the costliest regulations now emanate. But the phenomenon has metastasized. Environmental concerns delay and prevent infrastructure and housing development. Fuel-economy standards hold sway over investment and production decisions in the transportation industry; they are "filtered into the program plan of every domestic and international car manufacturer," according to Administrator McCarthy. The entire energy sector — traditional oil and gas producers, alternative energy technologies, and electric utilities — sees incentives dictated and production quantities guided from Washington. Meanwhile, a parallel public-health methodology links food consumption to all manner of ills, leading to mandates from menu labeling to trans-fat bans to soda-size limits to calls for sugar taxes.

Even the internet, typically a case study in the benefits of unconstrained innovation and market evolution, has become a target. Ars Technica describes how, when the FCC began its pursuit of "Net Neutrality" regulation, one think-tank "adjusted its work from climate change to the 'Net." The resulting report detailed a series of market failures and concluded that a combination of new regulations and subsidies was just what the internet needed. In the language of the new central planning: "A combination of policies that protect content providers and judiciously deploy government resources to augment private investment in physical infrastructure is the right mix to ensure that the Internet continues to grow and flourish, generating massive benefits for the American public."

A series of conceptual moves promises to further accelerate the spread of the new central planning. One prominent vector is climate change, for which the new central planners blame almost any economic activity or social behavior and on which they in turn blame any number of future problems. Even accepting the scientific consensus as a starting point, climate policy tends to evoke the most self-righteous declarations of certainty and the most irrational claims of benefit (see The Carbon-Tax Shell Game in this magazine's Summer 2015 issue). In fact, it produces some of the administrative state's most incoherent analyses. EPA's 2015 Climate Change Impacts and Risks Analysis project, for instance, warns that an increase of several degrees in average temperatures by 2100 could lead to $200 billion in annual cost from 12,000 additional extreme-temperature-related deaths. This requires assuming that the mortality rate in New York would be 50 times higher than the rate in Phoenix today, and the rate in Pittsburgh 75 times higher.

Another vector is "behavioral economics," which, as described by Ted Gayer of the Brookings Institution and W. Kip Viscusi of Vanderbilt University, "focuse[s] on identifying cognitive limitations and psychological biases that lead people to make choices that cause self-harm, thus suggesting another type of market failure that justifies government intervention." If hiding behind every decision made by a human in a market lurks a potential inefficiency just waiting for identification by a social scientist and correction by a regulator, then the opportunities for "efficiency-enhancing" interventions are truly limitless. In September 2015, President Obama issued an Executive Order encouraging all agencies to "apply behavioral science insights," "recruit behavioral science experts," and "strengthen agency relationships with the research community to better use empirical findings from the behavioral sciences."

And as the government asserts increasing responsibility for the well-being of its citizens, each intervention becomes itself a justification for further intervention. Thus, federal subsidies for health insurance establish a social interest in each individual's lifestyle choices, because worsening public health will cost taxpayers money. Income-based student-loan repayment programs will create "externalities" in every corner of the labor market.

Conventional wisdom holds that government regulation has evolved toward ever greater rationality, economic efficiency, and respect for market forces. Cass Sunstein, President Obama's first administrator of the White House Office of Information and Regulatory Affairs, wrote an entire book entitled The Cost-Benefit State. Yet the substance of regulation is charging headlong in the opposite direction, toward an ephemeral vision of rules free from trade-offs, endorsed every step of the way by purportedly free-market economists and buttressed with seemingly authoritative cost-benefit analysis.

TOWARD A RECOVERY OF SKEPTICISM

These days, many efforts to contain the administrative state demand more cost-benefit analysis, which suggests a serious misdiagnosis. The remedy for an over-reliance on and empowerment of the new central planners cannot be yet more central planning. Their approach does not fail because the software is not powerful enough or because the regulatory process has an insufficient accumulation of layers. Doubling down only entrenches the legitimacy of the new central planners and suggests that, with proper constraints, they could get it right. Progress will come when we restore our skepticism about the possibility that a bureaucracy can outperform a market in determining prices or allocating resources, and when we return such attempts at calculation to their proper place on the periphery of the policymaking process.

If the decision to regulate is fundamentally political rather than technical, then it should be under political rather than administrative control. The White House should control the volume of regulation each agency can produce, rather than invite any and all regulations that pass an impotent test. Congress should narrow its delegations of authority. And courts should scrutinize the validity of that authority, not the validity of the analysis under which rules are promulgated. The terms of the policy debate should change as well.

Beginning within the executive, cost-benefit analysis can still provide a vital internal management function, but it will work better if costs receive the same attention as benefits. "A core set of economic assumptions should be used in calculating benefits and costs," wrote Nobel laureate Kenneth Arrow and ten other economists in Science in 1996. "Key variables include the social discount rate, the value of reducing risks of premature death and accidents, and the values associated with other improvements in health." But like those economists, agencies have neglected to include costs among their core assumptions. OIRA should establish a statistical cost per job lost, a baseline demand elasticity, multipliers for translating compliance costs into generalized economic drag, and a direct relationship between the sheer volume of regulation and rates of investment and productivity growth.

The inclusion of such estimates will not make cost-benefit analysis "work"; output will remain subject to all the aforementioned flaws and provide no reliable basis for outside parties to assess the quality of regulation. It will, however, force greater transparency within the rulemaking process and some acknowledgment that tradeoffs exist. There will be fewer claims that regulations create jobs or growth.

The president, through OIRA, should also establish a regulatory budget for each agency that specifies the total volume of regulation (in pages, mandates, or costs) that may be promulgated. A given agency's budget could be negative, representing an order to actively deregulate. A president eager to aggressively regulate would of course be free to do so, but the budgets set might have some political salience or even become a standard campaign issue. The practice, once established, would be hard to eliminate entirely. And a president seeking to slow or reverse the regulatory build-up could use the tool to great effect.

Several existing proposals call for Congress to impose such a budget, but Congress has consistently proved itself incapable of fulfilling its existing fiscal-budgetary responsibilities. Repeating that drama each year on regulatory matters seems unlikely to help, and, as with cost-benefit analysis generally, a White House determined to regulate will not be constrained by an interbranch demand that it count costs carefully.

Congress could, however, play a critical role by making delegations of regulatory authority to an executive agency "single-serve," meaning they authorize the establishment of rules based on the agency's technical expertise, but only once. When Congress responds to popular demand for increased regulation, it appropriately defers to bureaucratic expertise and the efficiency of the rulemaking process in designing specific rules. But that expertise does not extend to a determination of when increased regulation is appropriate. Thus, a law on the books should not empower regulators to pursue new avenues of regulation decades in the future.

For instance, the Clean Air Act as passed in 1970 reflected a clear choice to increase environmental regulation at the expense of economic growth. Its amendments as passed in 1990 reflected a clear choice to strengthen regulation in certain areas. That the EPA had authority over the specifics in each instance was entirely proper. By contrast, the EPA's efforts to tighten ozone standards or regulate greenhouse gases in the 2010s bear no relationship to a public desire for increased regulation. EPA is acting on the basis of its claimed expertise in measuring externalities and cost-benefit tradeoffs, whereas Congress's enthusiasm or lack thereof is a far better indicator of whether the time is right for action.

Similarly, the Dodd-Frank Wall Street Reform and Consumer Protection Act reflected a clear public desire to impose increased regulation on Wall Street and expand consumer protection. The broad delegations of rulemaking authority therein (putting aside the constitutionality of the Consumer Financial Protection Bureau) were the appropriate mechanism for implementation. But when an agency decides 20 years hence that regulations need to be tightened further or extended into some new sphere, and uses its Dodd-Frank authority to do so, it will be operating outside its expertise. When the FCC commissioners use decades-old telecommunications statutes to impose regulation on the internet, they are likewise making a determination — that what the internet needs is their considered wisdom — that we should not trust them to make well.

This distinction between expertise in determining when to regulate and expertise in determining what regulation to design should extend to judicial review of agency action. Today, judges give enormous deference to agencies in determining the scope of their regulatory authority but focus intently on whether an adequate cost-benefit analysis has been conducted. This is precisely backward. No court will ever have the technical expertise to substantively audit an agency's analysis, and no minimally competent agency will ever find itself incapable of producing an analysis that defends its regulatory choices. Instead, the review serves only to catch the agency in procedural defects and produce remands after which the agency better defends its already-decided-upon rule. The end result is heightened legitimacy for an illegitimate mode of policy evaluation.

Instead, judges should aggressively scrutinize any agency's claim that it has authority to impose a new regulation at all. The Chevron doctrine's insistence on tremendous deference is based on some combination of the legal fiction that Congress wants the agency to interpret the extent of its own authority and an assumption that an agency's expertise leaves it best positioned to make such a determination. But if technical tools do not answer the question of when regulation is desirable, the agency is not the appropriate decision-maker — particularly given its likely institutional bias toward exercising power. Courts need not resuscitate a non-delegation doctrine, nor interfere with the significant latitude that agencies may need when implementing legislation, to adopt a strong presumption against an agency's later efforts to reinterpret or expand its own authority.

Finally, everyone engaged in policy debates has a responsibility to treat the claims of the new central planners with the skepticism they deserve. If words are to retain their meaning, a bureaucrat's calculation of a "price" can never become the reference point against which the market price is determined to represent a tax or subsidy. Assertions of market failure — and, even more so, "behavioral" failure — should be treated as qualitative arguments in favor of a policy, not as irrefutable proof that efficiency demands the adoption of the proposal. And the reporting of "saved lives" and benefits in the "billions of dollars" should be replaced with accurate descriptions of the minute reductions in risk across broad populations.

Central planning will never, through any amount of calculation, manage to determine the set of market interventions that will optimize societal welfare. But if we pay careful attention to how markets behave and what people say they want, we might over time succeed in ratcheting regulatory regimes up and down in ways that are responsive to society's needs. Seeking humbly to strike a balance among complex and shifting tradeoffs may lack the sex appeal of claiming superior and unquestionable expertise, but its track record is much better.

Oren Cass is a senior fellow at the Manhattan Institute. He served as domestic policy director for Mitt Romney's presidential campaign.