What Do Experts Know?


President Barack Obama was widely hailed for promising in his inaugural address that he would "restore science to its rightful place." The pledge was a classic Obama formulation: a lighter-than-air craft designed to soar over the illusory tradeoffs and false dichotomies of ordinary life, and return to earth delivering all things to all people, including a tidy lump of coal for his predecessor. But it raised in passing the rather weighty question of what exactly the "rightful place" of science in our democracy might be.

The president has not since expanded on the meaning of his formulation, but the various contexts in which Obama and his supporters have raised the subject — stem-cell research, environmental policy, public health, and the like — make it clear that the real underlying issue is not natural science in particular, but expert knowledge more generally. Science was denied its rightful place, as they see it, by being denied the power to make exclusive and authoritative expert judgments that would be given effect by the government. In March, Obama signed a directive to executive branch agencies about "restoring scientific integrity to government decision making," intended, as he put it, "to ensure that in this new administration, we base our public policies on the soundest science; that we appoint scientific advisors based on their credentials and experience, not their politics or ideology; and that we are open and honest with the American people about the science behind our decisions." The president has similarly held up a model of detached expertise in health care, where he argues costs could best be contained by an "independent group of doctors and medical experts who are empowered to eliminate waste and inefficiency." And in numerous other areas of policy, he has articulated great faith in and deference to professional, objective, academic, and scientific experts.

Yet Obama's presidency has begun in the shadow of a massive failure of expertise: a financial crisis the scope and nature of which almost no professional economist anticipated, which few could comprehend in real time, and which left our complex system of expert economic analysis and management badly overwhelmed for quite some time. The president's trust in technical experts — and the juxtaposition of his confidence and their failure — opens wide the question of the rightful place of specialized expertise in our politics, and exposes the tension between expert judgment and public judgment that has featured prominently in the story of American democracy.


Like so much else in Obama's program for America, his view of expertise points in the direction not of something new but of something very old: the dream of technocracy. In this view, society should be ruled by accredited and academically certified experts — disinterested authorities who in turn direct an enveloping web of bureaucrats, engineers, civil servants, and other professionals and technicians employed by a vastly enlarged and all-embracing state apparatus. One could argue that such a dream is as old as the noocratic state conjured in Plato's Republic. But it is more properly understood as a child of modernity, the lofty dream of philosophes and utopians — such as the Marquis de Condorcet, Auguste Comte, and Lester Frank Ward — who believed that, given the intelligibility of the world and the limitless capacity of human reason, the methods of science should be extended to embrace all aspects of life, including the ordering and administration of social and political affairs.

The idea, in full or partial form, has had many exponents over the years, even in democratic America. It informed the shape and direction taken by the professions — medical, legal, scientific, scholarly — that established themselves as bastions of expertise in the last decades of the 19th century. It has held a particular attraction for those who dislike or distrust the activity of democratic politics, with its rough-and-tumble clash of competing interests, ideologies, and passions, and its susceptibility to corruption, dishonesty, and self-interestedness. How much better to decide thorny and divisive issues by reference to panels of accredited experts — wise men whose training and commitments embody the breadth and depth of disinterested intellect, rather than the various imperfections of the democratic process.

Much (though not all) of the many-faceted Progressive movement of the early 20th century reflected the same belief. Although the Progressives drew heavily on the nation's deep reservoirs of Protestant moral passion, they sought to blend such passions with an essentially technocratic ideal. In their view, objective social research — generated by scholars trained in the sciences of government and administration, and applied by uncorrupted public officials (including "city managers" who would be appointed rather than elected, and therefore rendered immune to political pressures) — could lead to ever-improving governance. Such a view had obvious applications in the regulation of giant corporations, one of the chief objects of Progressive concern. But it also applied on state and municipal levels.

Many of these ideas were incorporated into the New Deal, which experimented with a variety of policies aimed at increasing federal management of the economy and money supply, controlling prices, regulating agricultural production, and creating a huge array of complex social programs. Although the New Deal did not form anything like a coherent whole, it had a strong underlying technocratic sensibility — exemplified by President Franklin Roosevelt's use of a "brain trust" of expert advisors, and codified in the words of Donald Richberg, the second head of F. D. R.'s ill-fated National Recovery Administration: "A nationally planned economy is the only salvation of our present situation and the only hope for the future."

Of course, it was the Second World War, not the economic planning of the New Deal, that lifted America out of the Great Depression. Nevertheless, the so-called "liberal consensus" that emerged in the war's aftermath endorsed and continued the spirit of the New Deal. Weary of the class-conscious politics of the past and of the perfervid ideological politics that had inflamed the sensibilities of the country's recent wartime enemies, the post-war liberal consensus embraced an even cooler, more analytical, and more secular and skeptical style. Hence it was an even purer realization of the technocratic ideal, offering a dispassionate, pragmatic, experimental, flexible, and non-ideological approach to governance, and envisioning America's society and economy as a system of countervailing forces that could be kept in balance by intelligent, problem-solving experts operating out of a powerful national government. Daniel Bell limned the scene in 1960 with a not-entirely-approving description, captured by the title of his book The End of Ideology: On the Exhaustion of Political Ideas in the Fifties. A similar view was put forth — albeit in more upbeat terms — by President John Kennedy in his 1962 Yale commencement address, in which he declared:

What is at stake in our economic decisions today is not some grand warfare of rival ideologies which will sweep the country with passion, but the practical management of a modern economy. What we need is not labels and clichés but more basic discussion of the sophisticated and technical questions involved in keeping a great economic machinery moving ahead.

Kennedy concluded not with the inspirational uplift customary on such occasions, but rather by advising the graduates to take their part "in the solution of the problems that pour upon us, requiring the most sophisticated and technical judgment." He was ushering them into the ranks of the governing experts.


Of course, this strain of technocratic rationality in American thought and politics has been just one part of the story. There is also a long and colorful history of fervent resistance to it. President Andrew Jackson, for instance, was notoriously suspicious of all claims of expertise, which was one reason he so ardently defended the principle that public offices should be "rotated" among party supporters as a way of sustaining the nation's republican ideals. Effective performance in public office required no special intelligence or training, Jackson contended, so rotation in office would ensure that the federal government did not develop a parasitic class of corrupt civil servants who were set apart from the people and could make hay out of their monopolistic control of special knowledge. His sentiments mark an early expression of what would become the populist strain in American political culture.

It has always been easy to dismiss the outlook of Jackson and those who have followed in his footsteps. The historian Richard Hofstadter, himself an emblematic figure of the liberal consensus, found such attitudes to be alternately amusing and appalling, and in any case characteristic of an ingrained and lamentable "anti-intellectualism" in American life. But the populist critics have always had a solid point in their favor. It is, after all, no coincidence that Andrew Jackson still stands as one of the epochal figures in the democratization of American culture. His "age" saw a substantial expansion of political participation and extension of suffrage, and a broadening and quickening of general economic opportunity. His entire persona — a self-made soldier-hero and land speculator from the backcountry who climbed his way to the highest office in the land — was a dramatic affirmation of the worth of the common man. Jackson's example also testifies to the fact that democracy itself is necessarily at odds with expertise, and must insist that expertise be accountable to the populace, and to political and social considerations. Which is precisely why the question of the "rightful place" of expertise remains, or should remain, far more vexed than President Obama would suggest.

But critiques of the Progressive and liberal visions of expertise have not been limited to populist dissenters. Indeed, President Kennedy's claim has been undermined most significantly by the products of the very intellectual elite in which he placed his trust. In 1967, five years after Kennedy's remarks at Yale, Paul Goodman wrote of his dismay when, teaching a course on "professionalism" at the New School for Social Research, he found his students expressing open contempt for the steady succession of experts — architects, engineers, doctors, and the like — whom he brought to speak to the class. For all his cultural radicalism, Goodman also had a deep respect for the craftsman-like work of true experts and their high-minded dedication to professional ideals. But this view was not shared by Goodman's students. They sneered that the professions in modern America were nothing more than mechanisms of social control — tools for the consolidation of power and the co-optation of knowledge. Experts were not, in their view, dispassionate truth-seekers bound by the integrity of their subject matter and their professional oaths. They were liars and mystifiers, interested mainly in lining their own pockets and protecting their own perquisites.

This hostility turned out to be much more than an isolated phenomenon, and it came to have particular resonance, ironically, in the universities. It was backed by a surge of anti-professional scholarship, which found in the growth of several professions — psychiatry became an especially popular target, as did any field having to do with sexuality — a dishonest power grab that traduced America's democratic promise. The title of historian Burton Bledstein's 1976 book, The Culture of Professionalism: The Middle Class and the Development of Higher Education in America, deftly captured the emerging view, which placed the modern university at the center of a great cultural transformation by which the rising middle class established, and then defended, the turf from which it would pursue its class interests. Radical sociologist C. Wright Mills assailed the very notion that social science could or should be "value-free," a perspective that rendered impossible the neutrality that the technocratic ideal seemed to presume. The work of Michel Foucault, especially popular in literature departments and other evidence-free venues, took matters a step further, creating the neologism "power-knowledge" to indicate the ultimate inseparability of the two concepts.

Much in these critiques was vastly overblown and patently self-indulgent, and much has proved enduringly destructive to the academic enterprise, which withers and dies if it cannot presume at least the possibility of objective standards and civil, truth-seeking discourse. But there was also an important validity to them. The authority of experts took a beating in the past 40 years partly because the idea toward which such authority tended — the notion that expert knowledge could be amassed and exercised in an entirely disinterested way — was deeply mistaken, and deeply subversive of any meaningful conception of democracy. The suspicion behind this turn against expertise has endured, and has had interesting and unexpected ramifications. For example, it played a crucial role in the emergence in the mid-1960s of "public choice" economic theory — which insists that all considerations of public policy include the recognition that government itself contains interested actors, individual and institutional, a premise that rejects the very possibility of disinterestedness.

But as President Obama's remarks show, a more technocratic approach is making a comeback in our politics. And so today we find ourselves at a fitting moment to reflect anew on the uneasy status of expertise in a democratic society. The uneasiness is unavoidable: Expertise locates authority in specialized knowledge and professional certification, based on extensive education and exposure to rigorous standards of evidence operating in highly exclusive, self-correcting communities of peer review. Democracy, on the other hand, tends toward the widest possible affirmation of the fundamental competence of every man, simply as a function of his status as a human being and citizen, irrespective of education, social class, or material wealth. The two can coexist, of course, and may even serve as useful correctives to each other; the ideal of an equal-opportunity meritocracy, for example, seeks to honor them both. But neither can easily be fenced in, or contented with anything short of predominance. Their coexistence, in short, never comes without a measure of chronic mutual distrust.


Expertise, or something like it, has arguably always been a feature of human associations, since it is an outgrowth of the division of labor and of the distinctions of rank and function that are natural to even the most rudimentary human communities. One thinks, for example, of the social roles of priests, shamans, midwives, and the like, and the specialized knowledge that came with those roles. But expertise as we understand it today is a product of the dynamism of the modern world, and of the conditions that gave rise to a degree of complexity and social differentiation unprecedented in human history.

The engine of this transformation was the process Max Weber called "rationalization," the ever more pervasive tendency to subject every part of life to impersonal calculations of cost and benefit, rather than evaluating it by reference to custom, precedent, tradition, religion, or affective ties. It is precisely because of these disorienting conditions of modern society — and the manifold complexities that trail in their wake — that we have become so reliant upon expert knowledge and place such a high premium on it, "especially," in the words of historian Thomas Haskell, "when it comes stamped with the special authority of an organized community of practitioners who police each other's opinions and thereby create something approaching a consensus of the competent."

Such terms suggest a very high level of trust and respect. But Weber's own vision of modernity's "iron cage" of rationalization was no celebration; the pathos of his work arose precisely out of its unsparing depiction of the degraded condition to which unbridled rationalization seemed inexorably to be leading the human race. Where technocrats saw a future of sunlight and roses, Weber saw instead a "polar night of icy darkness," a condition of "mechanized petrification, embellished with a sort of convulsive self-importance," a world of "specialists without spirit, sensualists without heart," of humanity having become a "nullity" that deludedly imagines "that it has attained a level of civilization never before achieved." At the very least, Weber's terrifying vision ought to make us hesitant about endorsing a regime of experts, even if — especially if — it could be made to work as advertised. Of course its working as advertised seems terribly implausible without some radical alteration of human nature itself — a fact that is comforting, perhaps, but that further complicates the task of rightly understanding and deploying expertise.

To be sure, we could not make sensible judgments about highly technical issues — such as questions of structural engineering, nuclear power, epidemiological danger, the intricacies of the money supply, or climate alteration — without recurring to the counsel of experts. We rely on the statistics gathered by the Department of Labor to make vital political judgments about the rate of unemployment, and on those gathered by the Census Bureau to allocate federal resources in our complex country, and we are right to do so. Yet the fact remains that we are fools if we take these judgments and these data on simple faith, without considering their sources, their internal consistency, the extent of their resistance to political manipulation, and their adequacy to larger contexts of political or personal decision-making that may or may not be intelligible to the expert. No one should take the word "expert" as an open sesame.

This of course creates its own difficulties — leaving us aware of the existence of expert knowledge, but unsure of the reliability of the particular expert before us. Nowhere is this problem more glaringly visible than in the use of "expert witnesses" in courts of law. It goes without saying that there are instances in which the testimony of experts is essential — forensic reports or DNA analyses, for example — and must be regarded as reliable, as standards of accuracy are relatively objective and transparent. But there are other examples, such as the notorious forms of psychiatric expert testimony attendant upon the "abuse excuse," that may be less transparent and more vulnerable to common-sense skepticism and challenge. In cases where the parties have particularly deep pockets, there may also be hired-gun witnesses whose expertise is less firmly established than the predictability of their testimony, or dueling "experts" whose views cancel each other and throw the matter back upon the judge or jury. Such a judge or jury must then possess, in a very real sense, an expertise beyond expertise — the ability to weigh and rank claims of expertise against a standard that is impossible to describe, but indispensable to the effective and intelligent navigation of an expert-saturated judicial culture.

This in turn raises another consideration falling well outside Weber's iron cage: the situation in which the putative experts turn out not to be experts at all. And in this case we can find Exhibit A close at hand in the shamefaced profession of economics. No social-scientific field has been more sure of its scientific accuracy and predictive power; none has been filled with a greater sense of self-confidence and self-importance. Then came the global financial crisis, and a great and general humbling ensued, for the wizards of the economics world failed spectacularly to predict this calamity. "It's not just that they missed it," said Franklin Allen of the Wharton School; "they positively denied it would happen." Worse than that, the collapse seemed to have been furthered and fueled by some of their own blithe assumptions.

"Why," demanded an exasperated Queen Elizabeth of a British economist, "did no one see the crisis coming?" A panel of experts assembled by the British Academy offered the following answer:

Everyone seemed to be doing their own job properly on its own merit. And according to standard measures of success, they were often doing it well. The failure was to see how collectively this added up to a series of interconnected imbalances over which no single authority had jurisdiction. . . . [It] was principally a failure of the collective imagination of many bright people.

And therefore perhaps a failure of expertise itself, or at any rate of our methods of certifying it.

In seeking to understand how such a failure could occur given all those professional safeguards, we can learn a great deal from a remarkable — and admirably honest — confession by economist Robert Shiller, who had advised the Federal Reserve Bank of New York until 2004 but kept quiet about the dangerous housing bubble he thought he saw developing. "While I warned about the bubbles," Shiller said, "I did so very gently, and felt vulnerable expressing such quirky views. Deviating too far from consensus leaves one feeling potentially ostracized from the group, with the risk that one may be terminated."

In other words, the "communities of the competent" sometimes behave more like communities of the complacent, or communities of the conforming. What Shiller describes is a situation that was made intellectually and morally dangerous not by the fact of expert consensus itself, but by the illegitimate way in which the existing consensus was arrived at and preserved: by the fear that dissenting views could not be safely aired. The strong moral discipline that characterizes a rigorously truth-seeking scholarly community is exceedingly difficult to create, let alone to sustain. Yet without it, expert opinion is soon rendered useless. We see this scenario acted out again and again around important issues of great public moment, about which loud voices and lazy journalists declare that "all the experts are in agreement" and "the debate is over" and that those who dissent are "deniers." Nothing corrodes the spirit of science and truth-seeking more than this bullying dogmatism; the discipline of economics is now paying the price for it, as are the rest of us.

We may see similar prices paid in other fields. As Nicholas Wade of the New York Times commented astutely on the Shiller case:

If the brightest minds on Wall Street got suckered by group-think into believing house prices would never fall, what other policies founded on consensus wisdom could be waiting to come unraveled? Global warming, you say? You mean it might be harder to model climate change 20 years ahead than house prices 5 years ahead?

Clearly Wade is painting with a broad brush, but what he says is hardly implausible. If he is right, the entire edifice of expert knowledge is in danger of a damaging loss of credibility. The recurrent problem of "political correctness" in the universities ought to be seen in this larger context — not merely the unfairness of excluding important points of view, but the consequent damage that such exclusion inflicts upon the disciplines themselves.

Confirming that sense of academe's shaky foundations is a valuable new book, Michèle Lamont's How Professors Think. The work is a sociological study of what is arguably the central institution of academic life: the scholarly peer-review panel. Lamont is a careful and methodical researcher, and her study is a respectful effort to get inside the "black box" of a procedure that has been rendered far more mysterious than it needs to be. Again and again she documents the frivolous and extraneous factors that skew the peer-review process: panelists swayed by groupthink, applying very personal or political criteria to the evaluation of the work before them, making explicit or implicit deals with one another like horse-trading politicians, and recurring to old-boy networks and personal ties of various kinds. In the universe depicted by Lamont, the notion that there are or can be objective standards of excellence or importance seems chimerical.

This too suggests there are serious problems down the road for the very concept of expertise in many academic disciplines, not to mention the ability of those disciplines to command the respect of people outside them. If the choices and judgments of experts demonstrate only the absence of any enduring and non-arbitrary professional canons, then perhaps the disdain of Goodman's students was justified. Perhaps we are, at least in some disciplines, edging toward the old quip that an expert is "a man fifty miles from home with a briefcase."

Indeed, one of the most intriguing books of recent years, James Surowiecki's The Wisdom of Crowds, makes the surprising case that the combined judgments of ordinary people are sometimes superior to those of experts. Not every crowd is "wise," in Surowiecki's view, and he is by no means advocating that we be governed by the first 2,000 names in the Boston phone directory. But he does believe that groups possessing sufficient size, diversity, and independence to guard against tendencies toward groupthink, and, most important, a proper means of aggregating individual opinions into a sound group judgment, can often outperform the judgments of experts. Some of that strength comes from the ability of such networks to incorporate dissenting or outlier views, the sort of difficulty that appears to afflict economics and science. "Bubble" behavior, he argues, is precisely a function of group suggestibility and emulation, and thus a failure to meet the standard of "independence."

The book begins with a classic example: how, at a 1906 country fair near Plymouth, England, the polymath Francis Galton was amazed to see that a crowd of nearly 800 people guessed the weight of an ox to within a pound when their individual estimates were averaged. This is the kind of phenomenon the book describes again and again, going on to consider other cases of "decision markets," in which individuals who have different views but similar motives can be more reliable than experts, or "prediction markets," speculative or betting venues created to make verifiable predictions, such as the outcomes of sporting events or elections. Contrary to the usual disparaging views of "crowd behavior," Surowiecki repeatedly finds patterns of collective wisdom that turn the logic of expertise on its head.
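The statistical mechanism behind Galton's result can be sketched in a few lines of code. This is a toy simulation, not Galton's actual data: the numbers are invented, and it simply assumes what Surowiecki's argument requires — that the guesses are independent and scattered evenly around the truth, so that individual errors cancel when averaged.

```python
import random

random.seed(0)

# Hypothetical setup: a true ox weight and 800 independent guesses,
# each off by unbiased random noise (standard deviation of 80 lbs).
true_weight = 1200.0
guesses = [true_weight + random.gauss(0, 80) for _ in range(800)]

# The crowd's collective estimate is the simple average of all guesses.
crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - true_weight)

# Compare against how far off the typical individual guesser is.
individual_errors = sorted(abs(g - true_weight) for g in guesses)
median_individual_error = individual_errors[len(individual_errors) // 2]

# With independent, unbiased noise, the average's error shrinks roughly
# like sigma / sqrt(n), so the crowd beats the typical individual easily.
assert crowd_error < median_individual_error
print(f"crowd off by {crowd_error:.1f} lbs; "
      f"typical individual off by {median_individual_error:.1f} lbs")
```

The same sketch also shows why Surowiecki's "independence" condition matters: if the guessers copy one another, the noise terms become correlated, the cancellation disappears, and the average is no better than the herd it was drawn from.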

The internet, for instance, can be an incomparable tool for aggregating vast amounts of information drawn from highly disparate sources, and for doing so more or less spontaneously. One sees this in the operations of Google, in wikis, in highly interactive and information-gathering commercial sites, and in blogs. One example of the last-named can suffice: In the midst of the 2004 presidential campaign, when CBS News's Dan Rather presented the country with documents purporting to cast doubt upon President George W. Bush's service in the Texas Air National Guard, ordinary bloggers all over the country immediately got to work on the problem. They were not experts, for the most part (though some were possessed of useful bits of arcane knowledge, or knew people who were), and they did not know one another. Unlike the experienced staff of CBS News, they were not part of any certified "community of the competent." But they brought to the task a vast set of idiosyncratic skills and experiences that, when united and focused on a single problem, yielded astonishingly swift and conclusive results. They were quickly able, in an entirely undirected way, to assemble irrefutable evidence that the documents were forgeries. Similar scenarios have been repeated numerous times, making it exceedingly difficult for journalists and politicians to present sensational reports, give indiscreet speeches, or draft opaque legislation while avoiding scrutiny.

We may be seeing, in other words, a new model of democratic expertise, which works by rules very different from the model of modern scientific expertise. It is not a matter of declaring every man his own expert, since it is not a process that can work in isolation, but depends on aggregation and open venues to operate. And it depends as well upon the prior existence of various bodies of conventional expert knowledge, to be drawn upon in an ad hoc way by networks of individuals who are asking a particular question or seeking a particular answer.

This new approach has many shortcomings, to be sure, but also great strengths which we are only beginning to appreciate. And while not a substitute for professional expertise, it may prove to be an important check on its excesses. It points toward the possibility of a dialectical relationship between expertise and democracy, in which the authority of experts is not regarded as an ipse dixit to be accepted on faith by a layperson, or a conversation stopper, but rather one element in a back-and-forth process of mutual accommodation, in which popular ideas and expert knowledge learn to respect one another even as they contest one another's judgments. It bears a real resemblance to the pluralism described in James Madison's Federalist No. 10, in which the conflict between factions is presumed, even celebrated, as a source of correction and cohesion — so long as the sphere within which they operate is sufficiently commodious.


Clearly, given the immense and growing technological and cultural complexity of the world we inhabit, we will not be able to dispense with the authority of experts — nor should we. But our choices are not limited to technocratic rule and uninformed populism. Far preferable to either is an order that respects the claims of both democracy and expertise, without denying the tension between them.

But can we make useful generalizations about the "rightful place," and the limits, of expertise? Can we pinpoint the reasons why experts fail, and find means of guarding against such failures in the future? The example of the financial crisis offers clues in this regard. First of all, it reinforces the crucial importance of keeping diversity of opinion and perspective alive in the disciplinary communities — or, given the ideological monocultures that dominate so many of these communities, of introducing such diversity for the first time in memory. It shows us why the dominant paradigms and "normal science" of the day must be kept from exerting a tyrannical control over the community of experts. The lazy and dishonest argument so often heard regarding everything from academic appointments to Supreme Court nominations — that a candidate who has original views is "not in the mainstream" of the field — ought to be regarded as a canard unworthy of a genuinely serious community of expert knowledge, and a criterion that does not have the health of that community truly at heart.

Second, we need to be attentive to the larger context within which we seek and use expert knowledge. British economist Robert Skidelsky has blamed the failure of the economics profession to anticipate the financial crisis on its excessive claims for itself: an overweening pretension to scientific status, an abstract and overly mathematical account of human behavior, and, above all else, a claim to be able to predict the future. It failed to remember that not all of the universe is amenable to mathematization, that actors are not rational, that markets are not perfectly efficient, and that risk is ubiquitous and costly. Precisely because it prided itself on its prowess as a predictive "science," economics must now eat crow.

Its current intellectual limitations are, in Skidelsky's view, not beyond correction. They are traceable back to the training of economists in the universities — where broadly based study of the philosophical, moral, and historical dimensions of human existence is almost entirely neglected, in favor of sophisticated technical training that reduces the discipline's subject matter to "a mechanical world of interacting robots." Keynes's observation that "economics is a moral and not a natural science" needs to be relearned — or learned for the first time — by the economics professors of the world, Skidelsky argues. In a sense, they need to be reacquainted with a larger sphere, within which their ideas take on a more proper perspective.

Finally, it is vitally important that the contingency and vulnerability of even the most expert knowledge not be forgotten, and that we learn not to expect too much from experts — or get inordinately cross with them when we have expected too much and they fail us. It is not only their fault, but also our own, if we fail to exercise our own common sense, and forget that life is full of risks and complications, and of smart people who are incredibly short-sighted. As the old saying goes, some ideas are so wrong that only a very intelligent person could believe in them. Or to put it more charitably, expertise is inherently narrow, but our world is untameably broad. In a society awash in all manner of expert claims, we need to cultivate a judge's skill in evaluating them — to be as expert as we can in the evaluation of experts, and to accept responsibility for the chips as they fall.

In other words, experts should have a voice in our democracy, and a resounding and respected one — but only that. They cannot be allowed to speak ex cathedra merely on account of the letters after their names, or the sheepskins on their walls, or the professional associations to which they belong. They must also persuade, speak a public language, bring themselves up to the level of the democratic bar, and make their case patiently and respectfully, in a way that passes muster with their fellow citizens. An impenetrably technocratic top-down solution to social or economic problems that does not meet this standard, cannot explain itself clearly, and does not seek the informed consent of the governed is unacceptable — because it is finally a betrayal of the very idea of a democratic republic.

But the experts aren't the only ones deserving of blame. Citizens and policymakers too often yield undue authority to expertise, especially in difficult cases. Too many issues are referred to appointed blue-ribbon panels as a way of escaping the heavy lifting of actual politics. We cannot and should not deny the power of specialized knowledge in all spheres of inquiry — but neither should we exaggerate that power, allowing it to run our lives and colonize our imaginations. We should not forget that specialized knowledge is always a means, never an end, and that it is always partial. There are no technocratic experts in the art of living, only flawed individuals finding their own way. And that is as it should be; that is what it means to keep expertise in its rightful place. As Weber understood in the end, the bureaucratization of knowledge and the rationalization of the world lead to a barren knowledge that is not really knowledge, because it encompasses everything but the proper ends to which knowledge should be put. It views the world in splinters and fragments, and insensibly but surely robs human life of its own spontaneous energy, grace, and freedom.

We need to beware that possibility, and to be wary too of the promise that the application of expert knowledge will invariably improve our lives. It is not necessarily so. That is the inner meaning of G. K. Chesterton's contention that "if a thing is worth doing, it is worth doing badly." Worth doing badly, that is, because the most important thing of all is not that one does the thing well, but that one does it oneself. Those things most worth doing are ends in themselves, essential features of our humanity, to which the excessive application of self-conscious technique is liable to be the kiss of death. Even if it could be shown that the systematic application of expertise invariably led to superior outcomes, we should resist it. One may sing badly, worship badly, make love badly, eat and drink badly, and vote badly. But no one in his right mind wants to yield those functions to others more accomplished, or give thought to how an expert would rate his performance of them.

What, in fact, could such a judgment possibly mean? Whose life is it anyway? The danger today is greater than ever that our mania for improvement may steadily diminish the meaning of the very thing it seeks to improve. Faced with these considerations, expertise needs to lay down its heavy load and fall silent, since of such things it really has nothing to say. In so doing, it will begin to find its truly rightful place.

Wilfred M. McClay is currently the William E. Simon Distinguished Visiting Professor at the School of Public Policy at Pepperdine University. He holds the SunTrust Chair of Excellence in Humanities at the University of Tennessee at Chattanooga.