On March 5 the New York Times carried an extraordinary opinion piece by Dr. Anne Armstrong-Coben, an assistant professor of pediatrics at Columbia. She takes a stand against the computerization of health care, writing that “In short, the computer depersonalizes medicine.” The core of her argument is that computers impede a doctor’s ability to do her job — to interact with patients, figure out what the medical situation is, communicate this information to colleagues, decide on an appropriate course of action, and see that it’s carried out. She acknowledges that “The benefits [of computerization] may be real…” but immediately follows this with “…but we should not sacrifice too much for them.” She cautions that “The personal relationships we build in primary care must remain a priority, because they are integral to improved health outcomes.”
On March 10 the Times posted eight letters in response to her piece. By my count, six of them wholeheartedly endorse her views. I do not.
Dr. Armstrong-Coben mentions two specific pieces of health IT — the electronic medical record (EMR), or digital version of the classic medical chart, and the computerized physician order entry (CPOE) system used by a doctor in a hospital to order medications. And she has nothing good to say about either of them. Entering data into an EMR is much less convenient than writing on paper, and CPOE systems can generate errors. As she writes:
“A box clicked unintentionally is as detrimental as an order written illegibly — maybe worse because it looks official. It takes more effort and thought to write a prescription than to pull up a menu of medications and click a box. I have seen how choosing the wrong box can lead to the wrong drug being prescribed.”
I have to assume that as an experienced clinician she’s also seen how bad handwriting or a doctor’s ignorance about other prescriptions can lead to the wrong medicines being administered within a hospital. So which types of errors — computer-based or human-based — are more common? A rigorous and thorough study, conducted at Boston’s Brigham and Women’s Hospital and published in 1998 by David Bates, Lucian Leape, and their colleagues, compared medication errors before and after CPOE was introduced. The researchers found that preventable adverse drug events — in other words, injuries stemming from medication errors — declined by 17 percent after CPOE was implemented.
These improvements are critically important because medical errors are both severe and dismayingly common. A 1995 study, also led by Leape and Bates, found that 6.5% of all patients admitted to two Boston hospitals suffered an injury during their stay, and that 28% of these injuries resulted from errors by health care providers. A third study found that 20% of all medical errors in hospitals — the largest category — were related to medication. This research also found that 13% of hospital injuries resulted in patient death.
When these are the facts, a 17% reduction in injury-causing errors becomes a big deal. As part of the homework for a case study I wrote about CPOE introduction at a hospital, I ask students to estimate how many deaths are likely to be averted if the application is deployed as successfully as was the case at the Brigham. A straightforward and conservative calculation reveals that the answer is about four deaths every year in that hospital alone. What responsible health care provider would resist such a technology?
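The back-of-envelope arithmetic goes roughly like this (the annual admissions figure below is my own illustrative assumption for a large hospital, not a number from the case study or the Brigham itself):

```python
# Rough estimate of deaths averted per year by CPOE, chaining the study
# figures quoted above. The admissions count is an assumed, illustrative
# value -- not a figure from the case study.
admissions_per_year = 50_000   # assumption: a large teaching hospital

injury_rate = 0.065            # 6.5% of admitted patients suffer an injury
error_caused_share = 0.28      # 28% of those injuries stem from provider errors
medication_share = 0.20        # 20% of hospital errors are medication-related
fatality_rate = 0.13           # 13% of hospital injuries result in death
cpoe_reduction = 0.17          # 17% drop in preventable adverse drug events

injuries = admissions_per_year * injury_rate
preventable_med_injuries = injuries * error_caused_share * medication_share
deaths_from_med_errors = preventable_med_injuries * fatality_rate
deaths_averted = deaths_from_med_errors * cpoe_reduction

print(round(deaths_averted, 1))  # prints 4.0
```

Every multiplier comes straight from the studies cited above; only the admissions volume is assumed, and the result scales linearly with it, so even a hospital half that size would still avert roughly two deaths a year.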
It is absolutely true that current health IT is far from perfect; it can be difficult and confusing to use, hard to integrate smoothly into conversations and examinations, and programmed with bugs and/or bad medical information. But the perfect is the enemy of the good. I’ve never seen a perfect application or piece of hardware, but I’ve seen plenty that are on balance usable and beneficial. The technology so bad that it’s worse than no technology at all is an appealing bogeyman to some people, but a thankfully uncommon one in the real world. And CPOE systems and other health IT have come a long way since the landmark study was published in 1998.
Here’s a thought experiment: what if current state-of-the-art health IT, including EMRs and CPOE systems, suddenly appeared tomorrow in every health care delivery facility in the country, along with sufficient training resources to get providers up to speed quickly with the new tools? What would be the impact on Americans’ health?
My strong belief is that health outcomes would improve quickly, substantially, and almost universally, and that the improvements would stick around over time. For one thing, many fewer people would die because of the kinds of preventable medication errors uncovered by Leape, Bates, and their colleagues. For another, it would be much more likely that thanks to EMRs all involved caregivers would have access to the same information (and have access to it from wherever they are), and so make decisions and have conversations based on it.
In addition, patients themselves would have much more information about their own health. A paper chart-based world of medical care is an inconvenient one for patients. They have to ask their providers for copies, then cart them around as they move through our country’s fragmented health care system.
People’s willingness to do this, I’ve observed, is directly related to the severity of their health problems. Because I’ve been very fortunate with my health I can’t be bothered, and so don’t myself have any paper trail of my health and health care over time. The only information I do have is at patientgateway.org (a system sponsored by Massachusetts General Hospital and Partners HealthCare), which is populated by data from exactly the kinds of systems that Dr. Armstrong-Coben disparages. She might feel inconvenienced by health IT, but I feel inconvenienced by health paper. And don’t my preferences matter when it comes to my health care?
I think that Google Health is likely to be a big deal because it gives me and every other patient a central repository for all the health data we accumulate over time, regardless of where it comes from, as long as it’s in digital form. I hope this effort takes off and goes in all kinds of directions, moving us as far as possible from a world where my health information sits in an assortment of hanging folders in offices I couldn’t find any more, overseen by doctors whose names I don’t remember.
Unless I misread her badly, this is the world of health care that Dr. Armstrong-Coben is advocating. I advocate something very different: a health care system that’s a lot more wired. I don’t pretend for a minute that digitizing the American health care industry would solve all of its problems, and I certainly agree that some things, some of them important, would be lost or compromised. But other things, also important, would be improved, and we would become a significantly healthier society.
Do you agree? Leave a comment, please, and let us know what you believe about health IT and why you believe it.