Recovering Real Expertise

Ronald W. Dworkin

Fall 2022

I am a board-certified anesthesiologist — that makes me an expert in my field. Many years ago, after returning to work from a long vacation, I discovered that I had lost my expertise for a few days. It was not that I had forgotten any scientific knowledge. My hard skills — dexterity with needles and tubes, for instance — also returned quickly. Yet I was still not my expert self. Something was missing.

Our current model of what it means to be an expert can neither grasp nor account for what that something was.

When the word "expert" first appeared in the 14th century, it was an adjective meaning "having experience." The following century, the term became a noun that referred to someone who had become wise through experience. In the early 19th century, the word appeared again, this time in a legal sense. This sort of expert was viewed as a specialist, as opposed to a generalist who has opinions but no knowledge of particular facts.

The 19th-century definition gave rise to the machine view of the expert that prevails today. In the 21st century, an "expert" is said to be someone who has reliable skills and knowledge of specialized facts. A machine has even more reliable skills and access to even more specialized facts. Training an expert has thus come to mean training a person to be almost as good as a machine, causing the expert's unique psychology — that "something" I was missing — to fade in relevance.

Our bias is understandable: We are entranced by the machine. We are entranced by its perfection — a perfection more exquisite and more enduring than our own. Unlike a human being, a machine is all sureness and no chance. Yet the machine analogy may have reached its limit in helping people become experts. Indeed, the limits of training via machine help reveal the limits of our model of the expert.


In 1964, a year after former philosophy instructor Stephen Abrahamson arrived at the University of Southern California (USC) to run its Division of Research and Medical Education, a man named Tullio Ronzoni walked into his office with an unusual request. Ronzoni was an engineer for rocket manufacturer Aerojet General. At the time, defense-department cutbacks were threatening Aerojet's bottom line. To drum up new business, Ronzoni asked Abrahamson if there was a way to use computers in a medical school.

Initially, Abrahamson hesitated. "I don't know much about engineering," he replied, "and I don't know what you're asking me." Ronzoni's response was that if they couldn't work something out, he would simply try again at UCLA. "That's not something that a guy from USC wants to hear," Abrahamson later recounted.

So Abrahamson came up with an idea. At the time, anesthesiology residents learned their craft by practicing on patients at the county hospital. Abrahamson suggested that a computer might be programmed with patient data that teachers could then manipulate, enabling residents to train for real-life problems without risk to real-life patients. Ronzoni approved of the idea, and Abrahamson arranged for a group of businessmen, engineers, and anesthesiologists to meet every Saturday over martinis to plan the project. One afternoon, a member of the group asked: Rather than training anesthesiology residents on computers, "why don't we build the whole body?" — in other words, a mannequin to stand in for a patient. The result was Sim One, a lifelike doll hooked up to a computer. Sim One was the first mannequin simulator.

Simulation technology is now commonplace. And it's not used exclusively by doctors; today's pilots, nuclear power-plant engineers, athletes, financial specialists, and military officers routinely train on simulators. Even hospital chaplains are candidates for simulation training. Although mannequin simulation technology (MST) is largely confined to health care, two teaching principles underlie all simulation technology. These two principles are also mainstays in the training of experts more generally.

The first principle is "deliberate practice," an idea popularized by psychology professor Anders Ericsson and later by writer Malcolm Gladwell in his book Outliers. Through deliberate practice, students learn and rehearse new skills for hours, gradually enlarging their comfort zone by mastering ever-more complex skills. The second principle is the "power of habit," named after Charles Duhigg's book of the same title. In applying this principle, students learn to connect certain cues — in medicine, perhaps a fast heart rate or a low oxygen level — with a specific response so that the latter becomes second nature. The simulator allows a person to practice on a problem at rising levels of complexity (deliberate practice) and with enough frequency that his response to the problem grows reflexive (the power of habit).

All this sounds like an ingenious way to train up-and-coming experts. And yet despite the excitement surrounding its introduction, MST has not proven effective at teaching or assessing expertise. No one in the field seems to know why. Tug on the mystery, and our entire understanding of what it means to be an expert starts to unravel.

Researchers first noticed the problem when evaluating MST's ability to improve student knowledge: They found that MST was no better than the conventional method of teaching students with textbooks. MST was only better than no teaching at all — hardly an accomplishment. Researchers then looked at whether MST-trained doctors generated better patient outcomes. To their surprise, they found that such doctors' outcomes were often no better than those of textbook-trained doctors.

Nor was MST very good at assessing expertise. Presumably, board-certified doctors are more expert than doctors without certification, who in turn are more expert than residents in training. Yet MST could not confirm this intuition. A study in anesthesiology found that residents and board-certified doctors exhibited similar intra-operative skills during simulation. Only very junior residents consistently performed worse.

How can this be? Dr. David Gaba, a professor of anesthesiology at Stanford University and one of the founders of the Society for Simulation in Healthcare, offers one possible explanation: He observes that some board-certified anesthesiologists work in low-risk settings and tend to lose their skills, while less-experienced residents who work in high-risk academic settings on complex cases improve their skills over time. In other words, the simulator works just fine; the problem is with the doctors being tested. Older board-certified doctors may not be as expert as we think they are because their skills have degraded.

If Gaba's theory is correct, the medical profession's entire method for confirming expertise may be flawed. A board-certified doctor carries the profession's imprimatur of being an expert, yet he may be no better at his job than a resident — or perhaps even worse. And in fact, large variations in performance exist among board-certified doctors. While many perform well on a simulation and demonstrate a minimum level of knowledge and skill, in some specialties, a full quarter of board-certified doctors perform at remedial levels.

None of this would matter if board certification led to better patient outcomes. Yet studies have shown that board certification among doctors does not necessarily lead to such outcomes. Studies of other fields reveal similar shortcomings: Licensed psychiatrists and psychologists are often no more effective at performing therapy than laypeople with minimal training. Licensed financial experts are often no better at picking stocks than novices. Whole systems for awarding the title "expert" may in fact be next to useless.

This truth gets little airtime, however, because it poses a serious challenge to our understanding of what it means to be an expert.


So just what does it mean to be an expert? We tend to extrapolate from the definition of a novice to define the term. Novices know little; they are unreliable, unpredictable, and often generate poor outcomes. We conclude from this that experts must be the opposite: Experts must know much, must be reliable and predictable, and must generate good outcomes. At the spectrum's far end is the machine — the most expert of all. Nothing, we presume, is more knowledgeable, reliable, predictable, or capable than a machine.

The model of the machine as supreme expert has dominated American thinking for over a century. Human beings by nature are prone to mistakes, so we have tried to remove the human element from the equation. In this way, we hope to make certain activities more "foolproof" — in other words, we hope to protect them from the fool.

First coined in 1902 when the assembly line came into being, "foolproof" captures the essence of what we expect from our experts. The machine is knowledgeable, reliable, predictable, and capable. The machine never comes to work tired or distracted; the machine is always composed. Even a fool can turn on the machine and the machine will work perfectly. True, an expert is better than the fool; he can be relied upon to add more value to the process than the fool. Still, the expert is not as good as the machine. And yet the more a person approximates the skill set and track record of the machine, the more expert we say that person is.

The automated external defibrillator (AED) arose out of such thinking. An AED diagnoses and treats dangerous heart rhythms on its own. It is available in public spaces where a person might go into cardiac arrest without a doctor or nurse around to treat him. Even a fool can apply the AED's pads and turn the machine on; the machine then decides when and how much to shock the person. An untrained sixth grader is only 20 seconds slower than an experienced doctor in treating a patient with an AED. The AED is what we call "foolproof."

MST works off the same principle. It assumes a continuum of expertise with a fool on one end, a machine on the other, and an expert somewhere in between. MST tries to make doctors and nurses more knowledgeable, more reliable, more predictable, and more likely to produce good outcomes — in other words, more like machines. During a simulated cardiac arrest, for example, a doctor practices compressing the mannequin's chest two inches every second, over and over again. The doctor practices how to recognize different heart rhythms by observing them over and over again. The doctor strives for machinelike predictability and perfection.

Deliberate practice is rooted in the same principle. Its goal is to make a student less like a fool and more like a machine. Psychologist Angela Duckworth, author of Grit, tells students to use deliberate, high-quality practice to persevere. In 2010, she applied her formula to the National Spelling Bee. She celebrated the fact that students who used her grit tactic were more likely to advance to the finals. Yet all this meant was that students with grit were more expert than the fools who lacked it — and the spelling machine was most expert of all. A spelling machine knows every word in a spelling bee. It is no coincidence that Duckworth sought vindication for her grit principle in an activity that a machine would always win if it were allowed to compete.

The power of habit also applies this principle. Through habit, students create mental loops below the level of consciousness in response to some cue. The loops are analogous to a computer's subroutines. A doctor routinely washes his hands after touching a patient not because he actively thinks about the latest infection-control article, but because he has developed a habit of doing so. A machine does not think, and neither does the expert who bases his expertise on the power of habit. Instead, the expert reflexively responds.

In all this we ignore the expert's feelings and attitudes. In some ways, we do so unintentionally: Simulations appear to be the most potent means of reproducing real-life situations, and simulations do not measure attitudes or feelings. But we also do so intentionally, as attitudes and feelings are the very human traits that keep people from attaining the perfection of foolproof machines. Why dwell on what drags us down?

None of the simulation studies described above tested for attitudes and feelings, let alone tried to control for them. Yet these variations in cognition may explain why the studies yielded such confusing results. Doctors who possessed certain attitudes and feelings vital to expertise were randomly mixed in with those who lacked such qualities, independent of whether they were board-certified or non-board-certified, old or young, experienced or inexperienced, skillful or amateur. Some residents in the studies may have enjoyed better outcomes than did well-established practitioners not because they had more knowledge, skill, or experience — more than likely, they did not — but because they possessed an expert's unique cognitive traits.

Evidence for this theory already exists. One study in anesthesiology showed that a medical director's instinctive ability to size up a resident is as valuable, if not more valuable, than objective measures like tests and simulations. How can this be? How can a human's assessment of expertise be more accurate than an objective measure?

When experts are trained by machines to be like machines and are assessed by machines for how machinelike they are, much is overlooked. A machine is not a living thing; it does not grow as living things grow or assume ever-varying forms in accordance with the complex laws of human character. An expert, on the other hand, is a living thing. Although a medical director may be imprecise in his estimate of a resident's knowledge base, he will have watched that resident as only a human being can watch another human being. Asking the director "would you let that resident put a family member to sleep?" conjures up in the director's mind a multitude of small memories, observations, and inferences about the resident — not just those concerning his knowledge base and skill, but also his cognitive traits — leading to a fairly accurate assessment of competency.

Dr. David Murray, director of the Clinical Simulation Center at Washington University, admits that experts "are more likely to differentiate quality than any other measure that we have." What this means is that a machine can judge another machine, but only an expert can judge another expert. Not surprisingly, Murray notes, the simulator is excellent at training for specific psychomotor skills — how to manage a shoulder dystocia during the delivery of a baby, for example. The procedure for doing so is clear and well defined; no special cognitive abilities are needed. The student trains over and over again using deliberate practice and eventually becomes adept at managing the problem.

On the other hand, being an expert involves over 20 different domains of cognitive ability, which are much harder to assess. Even if we could assess them, we must decide which domains are most important. Such a decision requires judgment — human judgment. Hence, it should not surprise us that a residency director's assessment of expertise is more accurate than a machine's.


The expert's mindset is as precarious as it is precious. When we possess expertise, we share in something that is a little beyond our understanding. It can fade just as quickly as it came, for reasons unknown. At such moments, it must be recrystallized. If one has already become an expert, recrystallization occurs more quickly. But the divine spark must still be struck — whether again, during the first few days after an expert's return from a long vacation, or for the first time, during the years spent training to become an expert.

The expert's mindset is not instinctive. It does not have the stability of the machine. It rests on human psychology, which does not lend itself to any permanent equilibrium. Expertise, like a great love, can be won, but even when won, it must continue to be wooed from time to time.

We are approaching the end of an era. For over a century, being an expert has been associated with having machinelike ability. Modern psychology has encouraged this definition. An example is deliberate practice's "10,000-hour rule." Although Gladwell, the writer who popularized the concept, admits that practicing an activity for 10,000 hours is no guarantee of success, his highlighting the rule inevitably made people think it was. This prejudice produces in aspiring experts a coarse texture of mind — a metallic style, an itch for the obvious and the emphatic. People who try to become experts adopt the machine analogy. They practice and practice, and hope that whatever else makes a person an expert just falls into place.

That "whatever else" is something quite different, something that is not easily named or even understood. Psychologists put it under the nebulous category of "cognitive ability." In the meantime, aspiring experts focus on what can be named and understood — the machinelike aspects of being an expert — while putting the "whatever else" on the back burner.

Becoming an expert involves more than knowledge, skill, and experience — the kinds of things that deliberate practice and the power of habit instill. Even fans of deliberate practice admit this. One study showed that such practice accounted for only 1% of variance in performance among experts; some unidentified cognitive variable explained the rest.

Ambiguity also surrounds other components of expertise. Experience, for example, has been shown to have little to do with being an expert. One study of 10,000 doctors found that additional professional experience had only a small benefit in terms of outcomes. In anesthesiology, studies have shown physician experience to be less vital than was once assumed. As for the relative importance of knowledge in being an expert, my own experience tells me that it plays a minimal role. Some book-smart doctors are also the worst doctors. Experts in other fields have observed an analogous phenomenon.

Questioning the prevailing model of what it means to be an expert is unpopular. People want the quickest and surest route to becoming experts, and deliberate practice and the power of habit seem to offer such a route. Many researchers, in turn, want to preserve their relevance in people's quest for expertise, and so they double down on the expert-as-machine model. Just recently, researchers floated the idea of building a machine to track a doctor's eye movements to check whether the doctor is maintaining focus while caring for patients — a machine to check the machine.

We need to update our model of the expert. Many professionals believe they must become more like machines to keep their jobs. Being more machinelike is how they think they add value. Indeed, the whole thrust of specialization and sub-specialization in many professions over the last century is based on the notion that the more circumscribed a task, the more easily an aspiring expert can master it and perform it reliably, predictably, and capably, as a machine does.

The expert is more than a fool, but he is also more than a machine. The expert operates on a different playing field than the machine. He performs in vital ways that the machine cannot. Human attitudes and feelings may prevent us from achieving perfect reliability and predictability, yet they are also an amazing source of creativity, wisdom, and judgment.

Sometimes experts must make decisions where no reliable or predictable answer is possible. At such moments, no machine can get the right answer every time, since the right answer is not the same every time. This is when the expert is often better and more correct than the machine — though he is not always right, for no human being is always right. In moments when the expert is better than the machine, the expert experiences some doubt. Doubt has traditionally been viewed as the great weakness of human beings, preventing them from achieving machinelike perfection. In reality, in situations where no reliable and predictable answer exists, doubt plays to the expert's strength.

Wherever there is doubt, experts thrive. Experts try to find the right way forward, and they have a better than even chance of succeeding — although not much better. The word "expert" itself can be traced back to the Latin verb experiri, meaning "to try." Trying to find the right way in a situation where much remains in doubt while getting a good outcome fairly often is what makes the expert an expert. Doing so depends on a host of cognitive factors beyond the reach of the machine.

RONALD W. DWORKIN, M.D., Ph.D., is a physician and political scientist.

