By Jean McAulay
The myth of professional infallibility and the human condition
Eugene D. Robin, M.D., was an insider. A medical professor and researcher at Stanford University, the former president of the American Thoracic Society, an acting chairman of the Stanford University School of Medicine, and chairman of the Pulmonary Advisory Committee of the National Heart and Lung Institute—Robin was at the top of the pecking order. But he wasn’t crazy about the view.
In fact, at the height of his career in 1984, he wrote a book critiquing the profession, titled “Matters of Life and Death: Risks vs. Benefits of Medical Care.” In this work, he coined the term “iatroepidemic” to describe the widespread harm he saw caused by systematic errors in medicine. He listed more than 20 examples of iatroepidemics, from using DES during pregnancy, to frontal lobotomies for treating mental illness, to collapsing the lungs to treat tuberculosis.
According to Robin, iatroepidemics follow a predictable pattern: “A practice was introduced into medicine on the basis of a fundamentally unsound idea or poorly interpreted experience. The practice took hold without adequate studies to establish its efficacy and then developed a life of its own. It was supported by a group of experts whose opinions encouraged its continued use. Their own reputations or positions partially depended on the practice and when challenged, they leaped to its defense. As a result, changes were slow to come.”
“The history of medicine is full of examples of serious, systematic errors which, for a time, became incorporated into medical practice on a very large scale, and caused harm or death to many patients before they were corrected,” says Philip Incao, a traditionally trained MD who today practices anthroposophic and homeopathic medicine outside Denver and is a frequent critic of his profession.
Medicine’s struggle with criticism is entwined with the ambitions of doctors, Incao adds. “They want to be known for discovering a new technique that’s helpful and don’t want to admit it if it turns out not to be very helpful or even harmful,” he explains. “It has to do with competitiveness and ambition in medicine and the struggle—especially in academic medicine—to advance your career by being an author of a study or taking credit for some new innovation.”
Conventional medical training, according to Incao, is also to blame for the difficulty medical doctors have in using criticism to improve practice. “Medical school nurtures the idea that if it’s not in the curricula, it’s probably not relevant,” he adds.
Rapid, over-confident decision-making by doctors can create bad results, according to “How Doctors Think” author Jerome Groopman, M.D., of Harvard Medical School. He says that, on average, a physician interrupts a patient’s description of symptoms within 18 seconds.
Groopman’s book also highlights the work of Renee Fox, a medical sociologist who observed how physicians in the hospital setting coped with uncertainty in providing medical treatment by play-acting greater confidence in their recommendations.
Clinical instructor Jay Katz of Yale Law School characterizes this behavior as a “disregard of uncertainty.” He believes physicians develop the façade of certainty to deal with the anxiety of shifting from the concrete nature of concepts introduced in medical schooling to their far less predictable application in real life.
At the risk of blaming the victim, is it possible patients actually contribute to this phenomenon by demanding physicians perform in a god-like capacity? When afraid and in pain, don’t we sometimes want the comfort of someone who seems to know precisely what’s wrong with us and exactly what to do about it?
Even those who are critical of the medical profession as a whole can become a little starry-eyed when talking about their own personal physicians. “There’s a naïve tendency in all of us,” Incao says, “to see certain professions as beyond corruption. We have a naïve faith that they’re inherently working for the good of humanity, and we forget everyone’s merely human with human ambitions and shortcomings.”
The way the concept of professionalism developed, as recounted by John Burnham in “Medical History,” also played a role in excessively elevating the physician: it assigned the most specialized body of knowledge to medical doctors and placed them at the top of the pyramid, with highly restrictive rules for entrance to their ranks.
In its early years, the medical profession in Europe and the U.S. was far more diverse than it is today. Homeopathy, naturopathy, chiropractic approaches and many other philosophies coexisted in a patchwork quilt of services. Students served an apprenticeship with someone whose work they admired, and schools typically revolved around the teachings of their founder.
At the turn of the 20th century, however, medical education began focusing more upon applying the scientific method, laboratory experimentation and hands-on experience. Within this new environment, the American Medical Association (AMA) assumed greater power and influence, seeking to eliminate schools that failed to follow the new brand of systematized education it championed.
In 1904, the AMA created the Council on Medical Education (CME) to promote the restructuring of U.S. medical education. The organization subsequently appointed educational theorist Abraham Flexner to head a major survey of existing medical schools.
Over the course of 18 months, Flexner visited all 155 U.S. medical schools. He found only a few possessed the financial, laboratory and hospital facilities to carry out the new standardized education. In what today is referred to as “The Flexner Report,” he suggested public resources be concentrated in those few institutions. Freedom of thought and diversity of practice weren’t encouraged, and homeopathic medicine, for instance, was all but eradicated in the U.S. as a result.
The medical profession’s knee-jerk reaction to different approaches didn’t end there, however. Even as late as the 1950s, when Drs. Salk and Sabin each developed different vaccines for preventing polio (Salk’s using a killed virus and Sabin’s a live virus), Sabin was openly hostile to Salk and mounted a full-scale offensive to discredit his approach. As U.S. historian William O’Neill said in his 1989 book, “American High: The Years of Confidence 1945-1960,” the AMA’s call for mass vaccination in early 1962 employing Sabin’s live vaccine rather than Salk’s, “caused an unknown number of polio cases … [but] the medical establishment seemed not to mind, having gotten its own way at last.”
No stranger to being at odds with his own profession, Incao says often the best one can hope for is to simply be ignored. “I’ve had many patients who did better than their doctor told them they would, but when they go back to their doctor, he never wants to know what caused it,” he says. “That would complicate life too much.”
Well, surely, in the era of evidence-based medicine all we have to do is look at the research and we’ll know which approaches work best, without the interference of bias or personal ambition, right?
Wrong. Consider, for example, studies conducted by John Ioannidis—a physician-researcher who has worked at the likes of Harvard, Johns Hopkins and the National Institutes of Health. He’s currently a professor and chairman of epidemiology at the University of Ioannina in Greece, where he oversees a group of medical researchers, and he’s considered one of the world’s top experts on the credibility of medical research.
Most recently, Ioannidis’ work was prominently featured in an issue of The Atlantic magazine. As the article reports, “He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated and, often, flat-out wrong.”
Fearing the medical community might ignore his findings, Ioannidis next concentrated on 49 of the most highly regarded research findings in medicine over the previous 13 years. Of the 34 claims that had been re-tested, 14 were shown to be convincingly wrong. In other words, nearly a third of the most revered research didn’t hold up.
To no small degree, medicine is far from an exact science. Even the best of the best miss the mark. Mistakes happen. Shouldn’t it be possible to learn from them?
A program started at the University of Michigan in 2001, which focuses on full disclosure of errors, suggests the answer to this question is an unequivocal “yes.” The University’s own media materials quote a study published in the Annals of Internal Medicine, which found the program’s “full disclosure and compensation for medical errors resulted in a decrease in new claims for compensation (including lawsuits), time to claim resolution and lower liability costs.”
Richard C. Boothman, chief risk officer at UM and a coauthor of the study, points out that reducing costs is not the main motivation behind the policy. Changing the culture to encourage caregivers to admit mistakes also has improved patient safety, which is much more difficult to measure, he says.
“We cannot improve if we’re not honest about mistakes. By engaging the patient early—and mostly listening more than talking at first—we get a fuller view of what happened, a better view of what it looked like to the patient, facts that may not be apparent from the chart alone,” says Boothman.
It’s fairly easy to throw stones at medicine’s foibles—after all, its practitioners are easy targets up there at the proverbial “top of the heap.” But is Chiropractic proving itself any more willing to hold its practices up to the most stringent investigation and to learn from its own mistakes?
Think back to the Lana Lewis inquest, often called the “Canadian Stroke Case.” A chiropractor was accused of causing an arterial dissection in the patient with a chiropractic adjustment. Did the chiropractic profession, on the whole, react with an open mind toward uncovering new information, or with a self-defensive posture against yet another attack by the medical establishment?
William Meeker, D.C., MPH and president of Palmer College of Chiropractic West Campus, followed the case while he was vice president for research at Palmer College of Chiropractic. “Reaction to these issues within Chiropractic depends on who you’re talking to,” he says. “Some people in political positions reacted defensively. People in practice probably denied the problem.”
According to Meeker, however, Chiropractic’s academia attempted to put the case into perspective. “We were accused of killing people by some critics, and I think there was a bit of a reaction to that to say, ‘Absolutely not. That’s not happening.’ But eventually, we saw it’s conceivable it could happen,” Meeker explains.
What concerned him most at the time, though, was that research into the issue was being conducted not by DCs, but by medical doctors. “A number of chiropractic publications looked into the issue and summarized the data and tried to compile the case theories,” Meeker says. “There has actually been a fair amount of scholarly publication on this issue, but what chiropractors didn’t tend to do and have been criticized for is not publishing actual patient cases. Those were almost always published by medical doctors.”
Ultimately, Meeker says research has shown the risk of going to a medical doctor and subsequently having a stroke, and the risk of going to a chiropractor and having a stroke, are the same. “The theory today is that people with these developing conditions tend to have neck and headache pain, seek help for it and soon after have an event that’s [attributable to] an already underlying condition.”
The real question, though, is how willing Chiropractic was to search for the answers without first assuming a defensive posture.
Meeker believes such inquiry most easily occurs within academia. “You can actually have a reasonable discussion about past mistakes and new discoveries with chiropractors and medical doctors alike in the academic arena, but not in the political arena or when fighting over money,” he adds.
Of course, one can understand why chiropractors might be a little touchy when it comes to criticism from the medical establishment.
In his soon-to-be-self-published book, “The Medical War Against Chiropractic,” JC Smith, a chiropractor in Warner Robins, Ga., details the systematic efforts of the AMA to destroy Chiropractic. In creating its Committee on Quackery in 1963, the AMA charged the group with “first, the containment of Chiropractic and, ultimately, the elimination of Chiropractic.”
Although Smith says the focus today is more on ignoring Chiropractic, the damage can also be great. “An article this April in the Los Angeles Times mentioned all the pros and cons of current medical approaches for back pain and totally excluded chiropractic care,” he says. “The Wall Street Journal did the same recently when it talked about unnecessary back surgery and never even mentioned the role of Chiropractic.”
Smith feels chiropractors today shy away from controversy, making them unwilling to submit their own techniques to rigorous evaluation. “Just like medical doctors, chiropractors like to rely on what they learned back in school,” he says. “People won’t put their methods up to their peers and have them evaluated.”
Is it possible medical doctors, chiropractors and other health care providers might one day be able to sit down with a patient and say, “Here’s what we believe is accurate about your situation today. Let’s look at the range of evidence available to us?”
To be sure, such an approach would put a greater burden on patients, with no knight in a blue or white lab coat riding in to save the day. But then again, it’s pretty clear that sense of relief and certainty always has been just a placebo effect, isn’t it?
And just as the wellness and alternative care movement has been largely driven by patient demand, perhaps the creation of a health care system that accepts criticism, learns from mistakes and incorporates new findings quickly into contemporary practice will be driven by patient demand too.
It should be possible, just so long as we’re brave enough to deal with uncertainty and look to MDs and DCs as partners, and not as gods.