Giving Doctors Grades

Jul 28, 2015

www.nytimes.com

One summer day 14 years ago, when I was a new cardiology fellow, my colleagues and I were discussing the case of an elderly man with worsening chest pains who had been transferred to our hospital to have coronary bypass surgery. We studied the information in his file: On an angiogram, his coronary arteries looked like sausage links, sectioned off by tight blockages. He had diabetes, high blood pressure and poor kidney function, and in the past he had suffered a heart attack and a stroke. Could the surgeons safely operate?

In most cases, surgeons have to actually see a patient to determine whether the benefits of surgery outweigh the risks. But in this case, a senior surgeon, on the basis of the file alone, said the patient was too “high risk.” The reason he gave was that state agencies monitoring surgical outcomes would penalize him for a bad result. He was referring to surgical “report cards,” a quality-improvement program that began in New York State in the early 1990s and has since spread to many other states.

The purpose of these report cards was to improve cardiac surgery by tracking surgical outcomes, sharing the results with hospitals and the public, and when necessary, placing surgeons or surgical programs on probation. The idea was that surgeons who did not measure up to their colleagues would be forced to improve.

But the report cards backfired. They often penalized surgeons, like the senior surgeon at my hospital, who were aggressive about treating very sick patients and thus incurred higher mortality rates. When the statistics were publicized, some talented surgeons with higher-than-expected mortality statistics lost their operating privileges, while others, whose risk aversion had earned them lower-than-predicted rates, used the report cards to promote their services in advertisements.

This was an insult that the senior surgeon at my hospital could no longer countenance. “The so-called best surgeons are only doing the most straightforward cases,” he said disdainfully.

Research since then has largely supported his claim. In 2003, a study published in the Journal of Political Economy compared coronary bypass surgeries in New York and Pennsylvania, states with mandatory surgical report cards, with the rest of the country. It found a significant amount of cherry picking in the states with mandatory report cards: Coronary bypass operations were being performed on healthier patients, and the sickest patients were often being turned away, resulting in “dramatically worsened health outcomes.”

“Mandatory reporting mechanisms,” the authors concluded, “inevitably give providers the incentive to decline to treat more difficult and complicated patients.” Surveys of cardiac surgeons in The New England Journal of Medicine and elsewhere have confirmed these findings. And studies from 2005 and 2013 have shown that report cards on interventional cardiologists who perform angioplasty procedures are having similar results.

Surgical report cards are a classic example of how a well-meaning program in medicine can have unintended consequences. Of course, formulas have been developed to try to adjust for the difficulty of surgical cases and level the playing field. For example, a patient undergoing coronary bypass surgery who has no other significant diseases has an average mortality risk of about 1 percent. If the patient also has severe kidney dysfunction and emphysema, the risk of death increases to 10 percent or more. However, many surgeons believe that such formulas still underestimate surgical risk and do not properly account for intangible factors, such as patient frailty.
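
To make the arithmetic concrete, here is a minimal sketch of how such a risk-adjustment formula might work, alongside the observed-to-expected ratio that report cards use to grade surgeons. The comorbidity names and risk increments below are purely hypothetical, chosen only to match the rough numbers in the paragraph above; real report cards rely on validated multivariable models, not a simple additive table like this.

```python
# A minimal, hypothetical sketch of risk-adjusted mortality.
# Every weight here is invented for illustration, not drawn from any
# actual report-card model.

BASELINE_RISK = 0.01  # ~1% mortality for an otherwise healthy bypass patient

RISK_INCREMENTS = {   # hypothetical additive increments per comorbidity
    "severe_kidney_dysfunction": 0.05,
    "emphysema": 0.04,
    "diabetes": 0.01,
    "prior_stroke": 0.02,
}

def expected_mortality(conditions):
    """Expected risk of death for one patient, given his or her comorbidities."""
    return BASELINE_RISK + sum(RISK_INCREMENTS.get(c, 0.0) for c in conditions)

def observed_to_expected(deaths, case_list):
    """O/E ratio: above 1.0 means more deaths than the case mix predicted."""
    expected_deaths = sum(expected_mortality(p) for p in case_list)
    return deaths / expected_deaths

# A surgeon who takes on 50 very sick patients is "expected" to lose ~5 of
# them; 6 deaths yields an O/E ratio of 1.2 even with flawless technique.
sick_patients = [["severe_kidney_dysfunction", "emphysema"]] * 50
print(observed_to_expected(6, sick_patients))  # ~1.2
```

The weakness the surgeons complain about lives in that table of increments: anything the model omits, such as frailty, silently inflates the O/E ratio of whoever operates on the frailest patients.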

The best surgeons tend to operate at teaching hospitals, where the patients are the most challenging, but you wouldn’t know it from mortality statistics. It’s like high school students’ being penalized for taking Advanced Placement courses. College admissions officers are supposed to adjust grade point averages for difficulty of coursework, but as with surgical report cards, the formulas are far from perfect.

The problem is compounded by the small number of operations — no more than 100 per year — that a typical cardiac surgeon performs. Basic statistics tell us that a mortality rate measured over so few cases is an unreliable estimate of a surgeon’s “true” rate: the smaller the sample, the greater the expected deviation from the true average.
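
A short simulation makes the point. Assume, purely for illustration, a surgeon whose true per-operation mortality is 2 percent, and watch how much the observed annual rate bounces around over 100 operations:

```python
import random

random.seed(42)

TRUE_MORTALITY = 0.02   # assumed "true" per-operation risk, for illustration
CASES_PER_YEAR = 100    # the article's ceiling for a typical cardiac surgeon
YEARS = 10_000          # simulate many hypothetical years

# For each simulated year, count deaths in 100 operations and record the rate.
rates = []
for _ in range(YEARS):
    deaths = sum(random.random() < TRUE_MORTALITY for _ in range(CASES_PER_YEAR))
    rates.append(deaths / CASES_PER_YEAR)

rates.sort()
low, high = rates[YEARS // 20], rates[-YEARS // 20]
print(f"5th-95th percentile of observed annual rates: {low:.0%} to {high:.0%}")
# Typically prints 0% to 4%: the same surgeon looks either flawless or
# twice as deadly as average, purely by chance.
```

Nothing about the surgeon changes from year to year in this sketch; only the luck of the draw does. Yet a report card built on a single year of data would grade those years very differently.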

Report cards were supposed to protect patients by forcing surgeons to improve the quality of cardiac surgery. In many ways they have failed on this count. Ironically, there is little evidence that the public — as opposed to state agencies and hospitals — pays much attention to surgical report cards anyway. A recent survey found that only 6 percent of patients used such information about hospitals or physicians in making medical decisions.

It would appear that doctors, not patients, are the ones focused on doctors’ grades — and their focus is distorted and blurry at best.