Last week, The Joint Commission, the major regulator and standards-setter for hospitals and other health care providers, issued a "Sentinel Event Alert" on "Safely implementing health information and converging technologies." Although the Commission has issued technology alerts before, this one is remarkable for its breadth: it covers the entire domain of HIT and the devices attached to the HIT system, including Clinical Decision Support systems.
In it, they noted that 25% of the medication error reports submitted to the US Pharmacopeia involved some kind of computer technology. (It is not clear whether this is higher than expected given the prevalence of computerized medication systems in hospitals. Nonetheless, it is a big number.) They also provided a high-level list of 13 suggested actions for the implementation and use of HIT that amounts to a set of best practices. This list ought to be brought to the attention of everyone involved in these systems - especially the institutional leadership who will need to find the funds for these 13 safety steps.
The good news is that Clinical Decision Support was not singled out as a particularly dangerous aspect of HIT (pharmacy took the brunt of the data-driven bad news). Nonetheless, all of us developing Decision Support Systems should keep these issues top-of-mind.
But what if those IT systems weren't in place? Would errors be much higher? Are they saying that barcoding at the bedside is risky? If so, is it more or less risky than having an overburdened nurse trying to read bad handwriting and possibly giving the wrong med to the wrong patient or at the wrong dose or the wrong time?
Maybe the fact that I'm a programmer makes me skeptical and optimistic at the same time. I've worked with bad code and bad coders, and mistakes happen in software at times. However, good software has the potential to greatly reduce errors, WHEN it is properly used as part of a workflow designed to reduce errors.
I think they're saying exactly what you are: IT is risky because of the possibility of bad code, bad coders, poor training, hardware failures, design errors, etc. Computers allow us to make thousands of fatal errors in the time it used to take to kill just a few patients.
In the realm of barcoding, for example, what happens when there is a small error in the printing plant that puts the codes on the labels of the medication vials? One possibility is that an awful lot of patients get the wrong drug!
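For the curious, standard barcode symbologies such as GTIN/EAN-13 do carry a check digit, so a scanner will reject a label in which a single digit was garbled in printing; what a check digit cannot catch is a label printed with the wrong but internally valid code, which is exactly the scenario above. Here is a minimal sketch of that validation (Python, with arbitrary example digits rather than real drug codes):

```python
# Minimal sketch of EAN-13 (GTIN-13) check-digit validation.
# The 13th digit is chosen so that a weighted sum of all digits is a
# multiple of 10; any single garbled digit breaks that property.

def ean13_check_digit(first_12: str) -> int:
    """Compute the check digit from the first 12 digits of an EAN-13 code."""
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(first_12))
    return (10 - total % 10) % 10

def is_valid_ean13(code: str) -> bool:
    """True if the 13-digit code ends in the correct check digit."""
    return (len(code) == 13 and code.isdigit()
            and int(code[-1]) == ean13_check_digit(code[:12]))

# Arbitrary example codes for illustration, not drug identifiers.
print(is_valid_ean13("4006381333931"))  # True  -- label scans cleanly
print(is_valid_ean13("4006881333931"))  # False -- one misprinted digit is caught
```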
You can't just throw a lot of programming at a problem to make it go away. IT needs to be managed by thoughtful, well-trained people who are cognizant of workflow and a zillion other issues. When it is, it is liable to reduce errors overall. When it is not, it is liable to introduce all sorts of new errors.
So, just as you say, software is good, WHEN it is properly used. The Joint Commission needed to say it, too, because health care managers are often not as savvy about the risks of mismanagement as programmers!
I think that it's not just a matter of software being properly used, but even more importantly, being properly designed to support the doctor and the way the doctor works.
I cannot imagine that software, in my lifetime, if ever, will be able to replicate the volume of education, the rational thought processes, and especially the intuitive processes that doctors bring to the patient setting. Because of that, software systems should, in my opinion, be limited to what they can do well and do safely, and then allow doctors to practice their art without impediments. No matter how many "rules" we think exist in medicine, there are times when an intelligent human should not be second-guessed or usurped by machines.