The Perils of Health-Care Federalism


The problems of American health care are bound up with its strengths. On one hand, our system yields the most major medical innovations, the widest availability of cutting-edge treatments, and the highest cancer survival rates in the world. On the other hand, lavish spending on such technological wizardry inflates costs and leaves millions unable to afford more basic coverage.

Over time, we would expect the costs of these new medical innovations and treatments to come down, much as they have in other industries transformed by new technologies, like telecommunications and consumer electronics. So why are health costs different? The competitive forces that drive down prices throughout most of our economy often fail to constrain the cost of health insurance. Given that so much care is provided on a fee-for-service basis to patients who have already surpassed their deductibles, providers seldom make cost effectiveness a high priority. Patients choosing the most expensive procedures are rarely responsible for paying for them. And as long as insurers are legally bound to pick up the tab, patients have little reason to pay attention to prices.

Addressing this problem has been a key goal of policy reformers on both sides of the political aisle. It has been at the center of a long-running debate over health care that has produced, among other attempted solutions, the Affordable Care Act. In that debate, some form of managed care — in which insurers drive bargains with providers and enforce rules to manage the volume of health services doctors employ — has been widely recognized as essential to controlling health costs. The Affordable Care Act, for instance, established Accountable Care Organizations, which build on earlier attempts to encourage managed-care networks that compete to provide health care more cost effectively.

But this latest form of managed care — and any other that might follow — is likely to fail unless reformers look to the states. For all the attention heaped on federal health-care policy, the debate has largely neglected the responsibility of state governments for inflating the cost of health insurance. Simply put, the effectiveness of managed care has long been undermined by the imposition of poorly designed state regulations and unfunded mandates. These have created a dysfunctional marketplace — one that makes it unprofitable to cover the people most likely to need care, and in which insurance becomes unaffordable for those seeking coverage. The resulting coverage gap too often ends up being filled by government programs, particularly Medicaid, that only add to the strain on our federal system and public budgets.

It is not hard to see why neither of our political parties is comfortable confronting this problem: Republicans are reluctant to intrude on the activities of states, while Democrats are inclined to prefer extending coverage mandates over deregulating insurance. But the bloated cost of American health care cannot be addressed unless would-be reformers on both sides acknowledge the troubles created by the states, and become more comfortable with federal pre-emption of costly state health-care regulations.


In recent years, pro-market health reformers have tended to view increased consumer responsibility as the path to cost control and improved quality of care. They have sought to limit the need for an insurance middleman to intrude in the payment of routine health expenses, and have worked to reduce the temptation to overspend inherent in third-party payment by incentivizing increased cost-sharing and by extending the tax advantages for the purchase of health insurance to health expenses paid out of pocket. The Medicare Modernization Act of 2003 significantly advanced this cause by enabling the creation of tax-exempt Health Savings Accounts.

But federal efforts to encourage consumer-driven health care have been undermined by legislation at the state level. Several large states — including New York, New Jersey, and Florida — mandate that certain types of benefits (like diagnostic and preventive care) be paid through insurance even when a patient's deductible has not been reached. Others, including Illinois, Ohio, and Missouri, limit the use of co-payments or deductibles more generally. And supposedly tax-free Health Savings Accounts established by federal law are nonetheless taxed at the state level in Alabama, California, and New Jersey — significantly diminishing their value and appeal.

But more pernicious, and in many cases less obvious, are state rules that impede the effectiveness of managed care, and therefore stand in the way of both liberal and conservative health reformers.

No one should view high-deductible insurance and greater cost-sharing as the end goals of health reform. After all, the nation's health-care expenses are generally concentrated among a small portion of the population consisting of people with major illnesses; indeed, in any particular year, fully 70% of health-care spending is consumed by only 10% of the population. The annual health-care spending of these individuals will substantially exceed any plausible deductible levels, and is therefore unlikely to be constrained much by greater cost-sharing.

Keeping health-insurance prices under control therefore requires other ways of limiting health spending by people who are insured and whose coverage costs well exceed their insurance plans' deductibles. This means that any meaningful health reform will need to include "utilization management": practices employed by insurers to prevent the overuse of medical services and to make sure health dollars are well spent.

We can see how important it is to limit the use of costly procedures by looking abroad. The United States spends roughly twice as much per capita on health care as most other developed nations. Yet American health spending has not always been an outlier. Until the early 1980s, American health-spending levels were in line with those of other Organization for Economic Cooperation and Development countries. Since 1980, however, the portion of gross domestic product that the U.S. spends on health care has risen by seven percentage points, whereas other OECD nations have increased health-care spending by an average of only two percentage points.

Although Medicare, Medicaid, and the State Children's Health Insurance Program have been substantially expanded over this period, the divergence between American and European health-spending levels owes as much to tightened utilization constraints on European health-care consumption as it does to policy changes in the United States. Starting in 1984, the rate of health-care spending growth in France slowed drastically as most French hospitals were forced to subsist with capped "global budgets" and sector-wide spending targets. Even though upward price trends continued unabated, the volume of services provided fell, holding down overall costs. Germany instituted a similar series of reforms a few years later, intended to limit the quantity of services available: Expenditure caps for physicians were established, along with financial penalties for over-prescribing pharmaceuticals. And there, too, spending growth slowed as utilization declined, even though prices continued to rise.

Why did the same thing not happen in the United States? One common explanation is that America's aversion to "socialized medicine" stood in the way. But this claim is a red herring. For one thing, the level of per-capita government spending on health care in the United States is now similar to that in Europe. More important, utilization constraints need not be one-size-fits-all budget caps and prohibitions on procedures the government deems not worth the cost. Private insurers would find such constraints just as valuable, since they would allow for cost control with minimal negative consequences for quality. So why don't they make use of them? The answer can be found in our state laws.

A simple but rarely acknowledged reality of America's struggle with rising health costs is that moves to constrain the use of costly procedures, which have helped keep European health-care spending relatively well controlled, have gradually been prohibited by state governments in the United States. Such legal constraints on utilization management have often left Americans facing an impossible choice between signing up for expensive "Cadillac plans" and going without health insurance altogether.

The fate of managed-care organizations best illustrates this tale. MCOs try to create incentives for cost effectiveness by managing networks of providers in which primary-care physicians act as gatekeepers. Such physicians may be rewarded for limiting the referral of patients to high-cost specialists, say, or incentivized to behave frugally when ordering diagnostic tests. MCOs also endeavor to control costs by checking the use of astronomically expensive technology.

Managed care has worked in America. One recent estimate by health economists David Cutler, Mark McClellan, and Joseph Newhouse found that managed care has been able to treat patients with newly diagnosed heart disease at 40% less cost than traditional indemnity plans, while achieving similar health outcomes. When the number of Americans insured in health-maintenance organizations (a type of MCO) soared from 41 million in 1992 to 83 million in 1997, the country witnessed a unique slowdown in the otherwise upward-spiraling price of health insurance. Nominal health spending, which had grown at an annual rate of between 10% and 13% for decades, saw its growth rate halved during the mid-1990s. Yet, in the late 1990s, lobbyists representing physicians and hospitals whose revenues had been squeezed stirred a popular backlash against managed care and pushed bills through state legislatures to break the mechanisms managed-care plans employed to control utilization and negotiate prices.

As with any purchaser, the ability of MCOs to demand good prices from physicians and hospitals depends upon their ability to threaten to walk away from a deal. Health plans have traditionally struck bargains by limiting membership in their networks to providers who agree to meet cost targets. But state legislation since the late 1990s has frequently undermined this bargaining leverage. "Any willing provider" laws in 22 states require health plans to reimburse equally all providers that comply with pre-set terms and conditions. These prevent managed-care insurers from driving hard price bargains with providers. Similarly, "freedom of choice" laws in 23 states limit the ability of MCOs to channel patients toward in-network physicians and hospitals that provide services at lower cost.

In addition to imposing such restrictions on "selective contracting," state laws also undermine utilization-management systems — with 30 states imposing bans on financial incentives for physicians to constrain the provision of costly health services and 47 states prohibiting MCOs from restricting physicians' recommendation of high-cost treatment options. Gatekeeping rules that require patients to obtain referrals from primary-care physicians before they are allowed to claim reimbursements for visits to costly specialists have similarly been targeted by state legislatures across the country.

Following this legislative backlash against managed care — which saw more than 1,000 bills sponsored and 56 laws passed in 1996 alone — health-care spending began to rise again, at an annual rate of 8% to 10%. A study by economist Michael Vita in 2001 found that health-insurance costs fell as managed care became more prevalent but increased precisely where selective-contracting laws were created; spending levels then rose significantly as laws constraining utilization management were tightened. More recently, research by Maxim Pinkovskiy examining variations in these anti-managed-care laws found that the regulatory backlash explained much of the disproportionate growth in U.S. health-care spending between 1993 and 2005.

Health-care providers have also sought to guarantee themselves higher incomes by pushing state legislation to mandate that coverage of a variety of non-essential treatments be incorporated into health insurance. No advertising campaign is as effective at generating revenue as a lobbying campaign to get states to mandate the purchase of one's products. As a result of such campaigns, more than 40 states now prohibit the sale of health insurance that does not pay for services like visits to chiropractors or psychologists. Fifteen states require all insurance policies to cover in-vitro fertilization treatments, which can cost from $50,000 to $200,000. Several states even require catastrophic insurance plans, meant to cover basic medical needs, to also pay for elective services such as hair transplants and acupuncture.

When such mandates are in place, price considerations are generally left out of purchasing decisions. It ought to be no surprise that spending soars when costs are deliberately hidden, and the spending increases caused by these state mandates are not trivial. Gail Jensen and Michael Morrisey have estimated that chemical-dependency treatment coverage increases health-insurance premiums by 9% on average; psychiatric-hospital-stay mandates increase premiums by 13%; and psychologist-visit mandates increase premium costs by 12%.

Instead of openly appropriating and honestly budgeting to subsidize mental-health care, for instance, states (and, more recently, the federal government) have mandated that it be packaged into the cost of purchasing health insurance. But these requirements — ostensibly meant to expand health benefits — may, perversely, be limiting access to health care. Frank Sloan and Christopher Conover have estimated that a fifth of those Americans without health-care coverage would purchase insurance were it not for the premium-price inflation caused by such laws.

Attempts to directly remedy the problem of the uninsured by adding even more regulations have been similarly counterproductive. Even before the passage of the Affordable Care Act, "guaranteed issue" regulations in 19 states required insurers to sell coverage to all comers, while "rating" requirements in 48 states limited to varying extents the prices at which they could do so. Such laws frequently obligate insurers to cover patients at prices substantially lower than the expected costs of treating them.

By preventing insurers from charging actuarially fair premiums for health coverage, rating regulations have tended to impose massive spikes in insurance prices on low-risk individuals. When New York state enacted community-rating requirements, premiums for men under the age of 30 soared by 170%. A 2008 study by Amanda Kowalski, William Congdon, and Mark Showalter found that, controlling for the terms of health insurance, the combination of guaranteed-issue and community-rating regulations increased premiums in New Jersey by 108% to 227% for individual and family policies.

As a rule, such unfunded coverage mandates replace a system in which insurers compete to offer products that people want at better prices with one that forces people to purchase goods at prices they don't want to pay. Unsurprisingly, this serves to establish an unstable and acrimonious dynamic in which insurers and patients compete to game the system at each other's expense.

Given restrictions on risk-based premiums, insurers may be forced to impose heavy-handed coverage denials to avoid being bankrupted by the residual pool of patients requiring catastrophically expensive treatments. Rather than expanding coverage, the combination of community-rating and guaranteed-issue laws has thus served to increase the number of people who are completely uninsured.

While legislators have frequently scapegoated insurers for failing to respond to unfunded mandates by broadening health-care provision without premium increases, in reality, it is the mandates themselves that have caused the trouble. Guaranteed-issue requirements have forced insurers to limit their coverage options, reduce the availability of high-value indemnity plans, and withdraw from the most heavily regulated state markets altogether. In the case of Kentucky, the legislature was even forced to roll back its regulations after most of its insurers fled the state.

The Affordable Care Act represents an attempt to salvage this model of health-care financing nationwide. Rather than reducing the state regulatory requirements that leave health insurance so unwieldy and costly, it uses federal power to compel everyone to participate in a broken system. Instead of making costs visible and transparent so that purchasers could avoid them, it seeks merely to soothe those most aggrieved over rising prices by dispensing subsidies.

It remains true that there may be a pool of seriously ill patients unable to find insurers to cover their expected health-care needs at affordable prices. But this ought to be cause for an honestly appropriated subsidy, targeted at the needy population, rather than an excuse to pile up regulations and taxes to push the rest of society into a single risk pool. Comprehensive mandates for the purchase of health insurance, as pursued by the Affordable Care Act, seem likely to prove perverse, inequitable, and difficult to enforce. But whatever happens to the Affordable Care Act, state laws make any real alternative exceedingly difficult to pull off.


Given the seemingly insatiable appetite interest groups and ideologues have for wrongheaded state legislation, why did a 2009 Gallup poll find 80% of Americans satisfied with the quality, and 61% satisfied with the cost, of their own health care? A key part of the answer is that the health-insurance plans of most Americans who receive coverage through their employers are not subject to state regulation.

This is due to the most inadvertent legislative success story of the past half-century: the Employee Retirement Income Security Act of 1974, known as ERISA. ERISA was primarily intended as a response to scandals involving private defined-benefit pension plans, which had left employees without their expected retirement incomes when their firms went bankrupt. Congress imposed safeguards to ensure that firms set aside the necessary funds for employee pensions, and some of these rules applied broadly to all employee benefit plans, including health-care benefits.

The bill was overshadowed by Watergate, passing the House by a vote of 407 to 2 and the Senate by 85 to 0 before being signed into law by President Ford a month after Richard Nixon resigned. And while the legislation sought to regulate employee benefit plans, including health-insurance plans, its significance with respect to health insurance was not appreciated at the time. Indeed, the 9,000-word Congressional Quarterly Almanac report on the bill did not even mention health care.

Critically, a provision pre-empting state law had been deliberately strengthened in the House-Senate conference committee that yielded the final bill. The AFL-CIO and other key unions had feared that state taxes might undermine nationally secured bargains over employee benefits that had been negotiated between unions and large employers. The unions thus got Congress to exempt from state regulation health plans provided by self-insured employers. While such plans are often managed by insurance companies, the financial risk is borne by the employer (which has generally meant that only larger employers can afford them). These employers, for their part, accepted the provision because it freed them from having to build separate employee benefit arrangements for every state in which they might operate.

ERISA's broad pre-emption clause has radically constrained states' capacity to impede the provision of affordable health insurance and has saved employer-provided health coverage from incremental strangulation by the states. Section 514 of ERISA declared: "[This act] shall supersede any and all State laws insofar as they may now or hereafter relate to any employee benefit plan" covered by the law. As interpreted by a series of unanimous Supreme Court opinions, this has been held to pre-empt any state laws that expressly refer to, direct the administration of, or significantly influence the provision of health-plan benefits. Today, around 140 million Americans receive health coverage through ERISA-governed plans.

ERISA has freed self-insured employer plans from the selective-contracting regulations that inhibit effective managed care, and self-insured plans thus account for much of the growth of managed care. Under state liability laws, managed-care organizations tend to be obligated to provide treatments without considering cost; under ERISA, by contrast, managed-care plans are legally responsible for acting in the interests of plan participants as a whole, not of individual beneficiaries, and can make coverage decisions accordingly. This allows insurers to give doctors strong incentives to constrain the utilization of cost-ineffective care.

By preventing costly extraneous benefits from being mandated, ERISA has also safeguarded health plans from being raided by state legislators seeking to fund projects without the costs appearing on the public-sector balance sheet. By inhibiting attempts by state governments to hide the costs of interventions in the marketplace with unfunded mandates and hidden taxes (of the sort that lead to premium increases that can then be blamed on insurers), ERISA for many years proved a stumbling block in the construction of socialized health-care systems. A refuge was therefore secured for the provision of cost-effective health care, and politicians have been impeded in their efforts to corral employees into government-directed health programs.

While the Affordable Care Act imposed coverage mandates on self-insured health plans, the success and popularity of ERISA plans shielded them politically from many of the legislation's more burdensome regulations. In fact, a desire to dodge these regulations appears to be pushing some firms with as few as ten employees to self-insure and provide health coverage under ERISA. State legislatures in New York, California, and Rhode Island have moved to limit "stop-loss insurance" that helps employers self-insure, fearing that more self-insurance might undermine the heavily regulated "exchanges" that form the centerpiece of the Affordable Care Act.

ERISA has therefore left the United States with a bifurcated health-care market. On one hand, coverage provided by self-insured ERISA plans has largely been exempted from state interference and functions well. On the other hand, individuals and employers too small to self-insure have found the market for health insurance increasingly costly and dysfunctional as the burden of state regulation has increased.

The fragility of the small-group and individual health-insurance markets has frequently been attributed to adverse selection: the fact that insurance is more appealing to the sick than to the healthy, which, in the context of small groups or single individuals, can make it difficult to effectively pool risk. This fragility has been said to point to a flaw inherent in a free insurance market. But risk pools have generally proved to be stable when allowed to function normally, and they have shown substantial vulnerability only when subjected to burdensome regulatory assaults on health-insurance underwriting. ERISA helps to show that our system can work when regulation is not allowed to frustrate cost control.


For those without access to employer-provided insurance, and especially to insurance exempted from expensive state mandates, health coverage has thus become increasingly unaffordable. Yet this has not led to the rollback of costly regulatory mandates. An unfortunate side effect of the scope and success of ERISA's pre-emption of state laws has been to weaken the political coalition against burdensome state regulation, since large employers can more easily avoid such regulations by self-insuring than by pushing for their repeal.

The easiest way to alleviate the worst effects of soaring health-insurance costs has therefore been to increase public spending. As a result, states have been able to push up the cost of insurance, knowing that Congress will ultimately help pay to fill coverage gaps. This accounts for the fastest-growing component of American health-care spending over the past several decades: Medicaid. Grasping the nature of Medicaid and its place in our system of federalism is essential to understanding the role that the states play in inhibiting the development of a rational health-financing system in America.

Medicaid epitomizes the current dysfunctional structure of American federalism. It involves state and federal governments in an ungainly interaction, seemingly designed to minimize accountability and fiscal responsibility. States are licensed to construct public health-care entitlements for which the federal government then covers 50% to 73% of the cost, depending on each state's per-capita income. This mechanism provides a minimum of one dollar of federal subsidy for every additional dollar the state spends on health care.
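The matching arithmetic can be made concrete with a short sketch (the function name and figures used here are illustrative, derived from the 50%-to-73% federal-share range described above):

```python
def federal_match(state_dollars: float, fmap: float) -> float:
    """Federal dollars drawn down when a state spends its own money on
    Medicaid, given the federal share (FMAP) of total program cost.

    Total spending = state share / (1 - fmap), of which the federal
    government pays the fmap fraction.
    """
    return state_dollars * fmap / (1.0 - fmap)

# At the 50% floor, each state dollar draws down one federal dollar.
print(round(federal_match(1.0, 0.50), 2))  # 1.0
# At a 73% match, each state dollar draws down roughly $2.70.
print(round(federal_match(1.0, 0.73), 2))  # 2.7
```

This is why the text above describes the mechanism as providing "a minimum of one dollar of federal subsidy for every additional dollar the state spends": the multiplier only grows as the matching rate rises above the 50% floor.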

Although higher matching rates are available to poorer states, the deal is so generous that most states go as far as they can in taking advantage of it — with the wealthiest states usually able to push the furthest. Medicaid spends almost twice as much per person under the poverty line in the wealthiest states as in the poorest states, with expenditures per poor person prior to the Affordable Care Act ranging from $2,014 in Nevada to $7,753 in Maine.

As a result, Medicaid has done much to supplant the coverage held by people who already had access to private insurance but has done little to reduce the number of people who are uninsured. From 1980 to 2011, Medicaid spending soared from $26 billion to $408 billion, while the proportion of the population that was uninsured rose from 12% to 15%. Jonathan Gruber and Kosali Simon have estimated that about 60% of the Medicaid-spending expansion over recent decades has served merely to replace private coverage that people already possessed. As Medicaid is incrementally expanded in some states in the coming years to cover individuals further up the income distribution under the Affordable Care Act, this crowd-out proportion will only become larger.

This would be a purely budgetary problem if Medicaid were a superior product to private insurance, but it is not. Medicaid reimbursements often fail to cover the cost of care, and studies have regularly found physicians much less willing to schedule appointments for people covered by Medicaid. As a result, far from relieving the burden on emergency rooms, Medicaid leaves its beneficiaries almost twice as likely to seek treatment in the ER as those who are uninsured or covered by private insurance.

At the same time, the absence of substantial cost-sharing — and a bizarre arrangement of policies in which physicians spend money on behalf of patients that is mostly paid for by the federal government on behalf of states — makes Medicaid a cash cow for many medical providers and a soft target for defrauders. Nursing facilities are subject to little oversight and are known to enjoy profit margins of up to 40%, while fraud is generally estimated to account for 10% of Medicaid spending. In 2003, one dentist claimed reimbursements from Medicaid for having performed 991 procedures in a single day. A desire to minimize program-administration costs makes subtler fraudulent endeavors reliably profitable. Consider that, in the aftermath of the federal stimulus bill in 2009, Mississippi's matching rate was temporarily pushed up to 83%. At such a rate, the state would have lost money if it had spent more than $1 million fighting to avoid disbursing $6 million in fraudulent payments.
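The Mississippi example rests on simple break-even arithmetic, which a brief sketch can verify (the function name is illustrative; the 83% rate and $6 million figure are taken from the passage above):

```python
def state_breakeven_enforcement(fraud_dollars: float, fmap: float) -> float:
    """Maximum a state can rationally spend blocking fraudulent Medicaid
    payments: it saves only its own (non-federal) share of each dollar,
    since the rest would have been paid by Washington anyway."""
    return fraud_dollars * (1.0 - fmap)

# At Mississippi's temporary 83% match, blocking $6M in fraudulent
# payments saves the state only about $1.02M of its own money, so any
# enforcement effort costing more than that is a net loss to the state.
print(round(state_breakeven_enforcement(6_000_000, 0.83), 2))
```

The incentive problem is structural: the higher the matching rate, the less of each fraudulent dollar the state itself loses, and the weaker its motive to police the program.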

At times it almost seems as if state governments are in on the fraud. By classifying various state spending programs (sometimes only loosely connected to health) under the Medicaid tent, states can shift most of the cost to the federal government. Other billing tricks — whereby medical providers reimburse states for purchasing their services at deliberately inflated prices so that states may in turn claim higher subsidies from the federal government — further enable the states to use Medicaid as a source of free money.

Even though Medicaid's federal structure is unjustifiable on grounds of equity or efficiency, it has proved appealing as part of a political strategy to fuel relentlessly greater public spending. Matching funds have made it easy for states to expand their Medicaid spending in good economic times and then shackled them in business-cycle downturns with large numbers of voters dependent on public entitlement budgets that cannot be readily cut. States thus make affordable health insurance difficult to achieve through their regulations and pay for the consequences with money from Washington.


The net effect of the current federal-state division of labor in the American health-care system has therefore been to undermine managed care and shift costs to federal taxpayers. It is tempting for members of Congress, when drafting statutes, to be permissive and leave significant discretion to the states. This allows them to claim credit for passage while leaving the complex and politically hazardous work of implementation to others. Yet allowing states free rein to intrude in markets has created a costly, confused, and dysfunctional mess, which has only strengthened the appeal of more heavy-handed intervention at the federal level. Rather than restraining this dynamic, the Affordable Care Act serves only to subsidize or demand further state regulatory intrusions.

As an abstract idea, the notion that allowing all the states to experiment would lead to the proliferation of the best policies is quite appealing. But voters from states that have become dependent on failing policies rarely push for cost-cutting retrenchments. Instead, giving states greater latitude just makes it easier for them to demand expanded federal subsidies. As a result, in the absence of broad pre-emption provisions in federal health reforms, bad state policies may be expected to crowd out good ones over time. And federal legislation is unlikely to make health care affordable if it reinforces costly state regulations, builds on them, and commits in advance to filling the gaps, as the Affordable Care Act does.

This lesson may not be easy for market-friendly reformers to accept. They are often inclined to view state laws much more favorably than federal ones. But in health care, federalism is fraught with peril. Federal pre-emption, meanwhile, can make meaningful reform a possibility. This need not preclude honestly funded state spending, but there is little justification for permitting states to impose unfunded mandates that hide costs, confuse accountability, and antagonize insurers and patients.

While the fragmentation of political power may at times be desirable, its multiplication is not. Federalism as it currently functions in American health policy multiplies the opportunities for interest groups to fleece the taxpayer. It has worked to undermine markets and harm consumers and citizens.

Federal health policy should abide by a principle of "one government at a time" to prevent health-care provision from being corroded by cost-shifting charades. As a practical matter, Congress should demand deregulatory pre-emption wherever federal dollars are involved, and insist upon the rollback of costly state regulations as the price of federal aid.

Chris Pope is a policy analyst in Washington, D.C.