Wednesday, December 25, 2013

Cross-organization Patient Identification

A colleague asked me what I thought of this:

"To improve the quality and safety of patient care, we must develop a nationwide strategy to match the right patient to the right record every time," said Lisa Gallagher, HIMSS vice president of technology solutions, in a statement.

The innovator in residence, she said, "will create a framework for innovative technology and policy solutions to help provide consistent matching of patient health records and patient identification.”

I had two reactions, one after the other:
  1. That would be awesome (a nationwide strategy to match the right patient to the right record every time).
  2. Good luck balancing privacy and accuracy.
I have been dealing with this issue for just about 29.5 years. That is a loooooong time. I have seen lots of ideas come and go. Alas, I have no great solution, but I do have a firm grasp on potential issues:
  • Very similar demographics: (two easily confused patients)
    • identical twin boys named John and James, for instance (yes, people do that)
    • father and son, same name, unlucky birth dates such as 11/1/61 and 1/16/11. It happens, and MANY clerks are so pleased to spot the "typo"
    • cousins born on the same day or with unlucky birth dates with the same name
    • mother & daughter have the same name until marriage, and updating the daughter's record obscures the mother, making the mother look like the maiden-name version of the daughter
  • Very dissimilar demographics: (one patient looks like two)
    • maiden name to married name: add a birth date correction and all bets are off
    • legal name change, sometimes to deliberately leave behind the past--prison term, bad marriage, etc
    • heavy use of a nickname "Steve Jones" finally decides to go by "Steven Jones" because his dad, "Steven Jones," just died. Yikes.
  • Privacy nut / identity theft: the patient deliberately gives false demographics or those of someone else.
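The fragility in the first bullet group is easy to demonstrate with even a toy matcher. The following is a minimal sketch, not any real master-patient-index algorithm; the fields and weights are invented for illustration:

```python
# Toy demographic matcher: score two patient records on field agreement.
# Fields and weights are hypothetical, not any real MPI algorithm.

def match_score(a, b):
    """Crude similarity score between two patient demographic records (max 8)."""
    score = 0
    if a["last"] == b["last"]:
        score += 2
    if a["first"] == b["first"]:
        score += 2
    if a["dob"] == b["dob"]:
        score += 3
    if a["address"] == b["address"]:
        score += 1
    return score

# Identical twins: same surname, same birth date, same address.
twin_a = {"first": "John",  "last": "Smith", "dob": "2001-03-14", "address": "12 Elm St"}
twin_b = {"first": "James", "last": "Smith", "dob": "2001-03-14", "address": "12 Elm St"}

# Only the first name differs: 6 of a possible 8 points.
print(match_score(twin_a, twin_b))  # 6 -- dangerously close to a "same patient" match
```

Raise the match threshold and the twins merge into one patient; lower it and the maiden-name cases split one patient into two. That is the privacy/accuracy balancing act in miniature.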
I cannot imagine, in the absence of a national identity card, how a private American effort could even span different organizations within a given state, let alone across state boundaries.

Insurance companies could help, given their efforts to get bills paid across institutions, but I cannot see why they would and I can see why they wouldn't.

Man, I hope that I am wrong about this.

Monday, December 23, 2013

A Case for IT Scaffolding (Building Tools)

This post is an unadorned rant. The topic is the lack of software scaffolding I see in my consulting practice.

By "software scaffolding" I mean tools specific to a particular task or project.

I see managers who are unable to appreciate the utility of such tools and who discourage their creation and use because building tools takes time and energy.

I see programmers reluctant to make debugging and validation easier with a tool because the tool is not on the project plan.

You can't really build a building of any size or complexity without a scaffold, and a large part of getting construction done well, done on time and done at or below budget is having the support system in the plan and on the ground. I assert that the same is true of many IT projects.

Furthermore, I assert that this is true of Lab IT projects in particular, where validation and ongoing audit are so important. I find that many of my bits of scaffolding live on to support making sure that the software keeps working, after I have confirmed that it worked in the first place.

I understand that some programmers go overboard: instead of creating good-enough tools, they create overly complicated and overly polished Swiss Army knives, which sometimes even get released because the super-tool ends up being the only way to do something.

But keeping this in check should be doable for a competent manager. Why then do I see so much resistance to building data examination tools, dump reporters and debuggers?
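For the record, the kind of scaffolding I mean can be tiny. Here is a hypothetical example of the sort of throwaway tool I build constantly: a dumper for eyeballing a pipe-delimited result file during debugging and validation. The file layout is invented; the point is that ten lines of code can save hours of squinting.

```python
# Throwaway "scaffolding" tool: dump the first few records of a
# pipe-delimited file, one field per line, for easy eyeballing.
# The pipe-delimited layout here is a hypothetical example.
import csv
import sys

def dump_lines(path, limit=10):
    """Return the first records of a pipe-delimited file as labelled lines."""
    lines = []
    with open(path, newline="") as fh:
        for i, row in enumerate(csv.reader(fh, delimiter="|")):
            if i >= limit:
                break
            lines.append(f"--- record {i} ---")
            lines.extend(f"  [{j}] {field!r}" for j, field in enumerate(row))
    return lines

if __name__ == "__main__" and len(sys.argv) > 1:
    print("\n".join(dump_lines(sys.argv[1])))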

Friday, December 20, 2013

Physical Security Dimension to Cybersecurity

A friend who is interested in cybersecurity drew my attention to the following item:

December 16, 2013

Attacking Online Poker Players

This story is about how at least two professional online poker players had their hotel rooms broken into and their computers infected with malware.
I agree with the conclusion:
So, what's the moral of the story? If you have a laptop that is used to move large amounts of money, take good care of it. Lock the keyboard when you step away. Put it in a safe when you're not around it, and encrypt the disk to prevent off-line access. Don't surf the web with it (use another laptop/device for that, they're relatively cheap). This advice is true whether you're a poker pro using a laptop for gaming or a business controller in a large company using the computer for wiring a large amount of funds.
The friend asked the following question: don't these same issues arise with medical records, eg the lab results I so often handle? Specifically, isn't physical security of personal devices a real issue?

The short answer is yes: God, yes! Yes. Yes.

The long answer is that I see two common issues:

Benign Neglect

In this scenario, doctors and other professionals forget that their spouses and kids and kid's friends may end up borrowing computers, laptops, smart phones or tablets which are also used by clinicians to review sensitive medical information.

We all know that traces left by legitimate access of this information can often be found, either by accident or by intent, if you let other people fiddle with your device. We all know that we should take basic precautions:
  • Clean your browser cache.
  • Use passcodes and automatic inactivity locking and make the timeout period short.
  • Don't lend your devices if you can avoid it, and never lose physical control of them.
  • Be aware of how your data is backed up.
But we don't all follow these guidelines and we don't follow them all the time. And we should.

Every time I create a system which offers a confidential medical report as a PDF, I cringe. I warn the users that they are responsible for the PDF once it hits their browser. I do the best I can to expire and obscure the PDFs. But I know that the average clinical user neither knows nor cares about ghost images of protected health information (PHI) floating around on his or her devices.

Targeted Attacks

Alas, the following is true: major medical centers have celebrities as patients; celebrity PHI is very valuable; human beings take bribes. But that is a problem for HR. What about users who are targeted, especially in this day and age of personal devices at work? I know of such attacks in other domains, such as finance, but I do not know of any against my clinical users. But does that mean that it hasn't happened? Or that it won't?

I do my best to make sure that my smart phone never sees PHI and that my laptop is physically secure and regularly checked for malware. But that won't help my users. So what is our professional obligation here? How do we foster a greater awareness of the risks so that appropriate action can be taken?

There is no point in trying to scare clinicians into not using their shiny, powerful, useful toys. Instead, we need to figure out how to help them use those toys more safely.

Wednesday, December 18, 2013

Why Is Merging Test Catalogs So Hard?

It is FAQ Wednesday, when I take a FAQ off the pile and address it.

Today's frequently asked question is "why is merging test catalogs so hard?"

This question fits O'Flaherty's first law of applied technology: "It is usually not the technology that takes the time."

This question arises in a number of contexts:
  • Lab Manuals across organizations (eg hospital systems)
  • Lab Manuals across labs (eg main lab and satellite lab)
  • Ref lab interfaces (order from one catalog, process from another)
  • Historical data archives (showing results from a previous catalog)
The answer is always the same: assumptions and comparisons.

When considering a lab result, there is a huge amount of assumed context:
  1. the collection method sometimes matters, sometimes does not;
  2. the methodology sometimes matters, sometimes does not;
  3. the actual value sometimes matters, sometime only the High / Normal / Low flag;
  4. having a basis of comparison to prior results sometimes matters and sometimes does not
Take Vitamin D levels for example: there are a couple of ways to measure it (methodology) which vary in exactly what they measure and how accurately they measure it. If you are a specialist, these differences may be very important. If you are a generalist, these differences may be meaningless to you. If you are trying to provide historical context by evaluating a series of these results, you almost certainly assume that the methodology was constant, whichever methodology was used.

The problem arises when catalog A only has "Vitamin D" and catalog B has "Vitamin D (the cool one)" and "Vitamin D (the usual one)". How do you work across the catalogs?

There is always renaming to enforce consistency, but that is not straightforward and it is often politically charged: someone wins and someone loses. Furthermore, changing the name of a test is problematic: should the clinician infer that the assay has changed somehow? If so, how?

There is adding new tests to enforce consistency, but that too is a qualified success: what if the new test is essentially the same as the old one? How does the clinician know that, and how does the software which displays the results know that these two "different" tests are the same lab assay?
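One way to make these decisions explicit, rather than implicit, is a mapping table that records not just the code correspondence but whether results are clinically comparable for trending. The following is a sketch; the test codes and the comparability judgments are invented for illustration:

```python
# Hypothetical cross-catalog map. Two catalog-B Vitamin D tests map to
# catalog A's single "VITD", but only one uses a comparable methodology,
# so only that one should be trended against catalog A's history.

CATALOG_MAP = {
    # catalog B code -> (catalog A code, comparable_for_trending)
    "VITD_LCMS": ("VITD", False),  # different methodology; trend only against itself
    "VITD_IA":   ("VITD", True),   # same methodology as catalog A's historic test
}

def map_to_catalog_a(b_code):
    """Translate a catalog B code and say whether trending across catalogs is safe."""
    a_code, comparable = CATALOG_MAP[b_code]
    return a_code, comparable

print(map_to_catalog_a("VITD_LCMS"))  # ('VITD', False)
```

The hard part, of course, is filling in that `comparable_for_trending` column, which is exactly where the clinical, lab and data modelling skill sets collide.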

Worse, making these decisions requires many different skill sets: clinical, lab and data modelling. So why is merging test catalogs so hard? Because there is so much to it and so much of it is not immediately apparent. Hello, Titanic, meet iceberg.

Wednesday, December 11, 2013

O'Flaherty's First Law of Applied Technology

FAQ Wednesday again. In lieu of answering a specific question, today I would like to expound upon something someone else said. Not only is this utterance interesting and apropos, it has also already been thought of, saving me some time and effort.

"It isn't the technology that takes the time." Douglas J. O'Flaherty

Doug has an annoying habit of reducing my intricate and entertaining tales of technology deployment woe to this oft-repeated and simple phrase.

(It is oft-repeated because I complain a lot about getting real-world solutions out the door, on to the lab floor and working as well as they should. But that is another story.)

What I understand him to mean is this: information technology is dynamic and evolving, but it is relatively straightforward and relatively well-understood. But in order to apply technology to the real-world business processes that would benefit from that technology, a number of generally poorly-managed, poorly-understood and complex tasks have to be accomplished first:
  • get input from stakeholders, who are often diverse and out of phase with each other
  • build a consensus from that input
  • turn that consensus into specifications which can be acted upon
  • turn that consensus into action items which really are accomplished
Only at that point does the programming start. When mired in the trenches of implementation, I tend to forget all the pain that went into clearing the path for the implementation and instead I tend to focus on the process failures which leave me with bad specs.

As a technology guy who is constantly frustrated by project scheduling, I guess I do need pretty frequent reminding: it isn't the technology that takes the time.

Thursday, December 5, 2013

Medical Data Center Reliability

Well, it isn't Wednesday, but this tweet got me thinking about a FAQ:

The frequently asked question is this: why aren't Medical Data Centers...better? Given that their mission is so important, not just to an organization but to people's lives, why aren't hospitals and labs better at keeping their systems up and running?

The short answer is that they are trying too hard with an odd set of rules and a strange budgeting process.

Trying Too Hard
In my experience, small medical organizations can't afford good technology or good people, but large ones fail as well for the opposite reason: they are trying too hard.

Redundancy is the obvious way to get good up-time (reliability, uninterrupted service, however you like to characterize "running well"). But there are two different kinds of redundancy....

  • System Level Redundancy, which means having more than one solution to a given problem: two independent, synchronized, preferably different systems doing the same job. If System A has a problem, System B takes over.
  • Component Level Redundancy, which means having more than one component in the system doing a particular job. So you only have System A, but it has lots of redundant disks and storage and whatever else you think you need.
The computers aboard the now-defunct Space Shuttle were an excellent example of System Level Redundancy: five independent computers, all doing the same tasks at the same time. For a computer failure to take down the Shuttle, all five systems would have to die at once. For good measure, they kept tabs on each other, so you had confidence that they were working properly.
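The heart of that "keeping tabs on each other" idea is majority voting, which can be sketched in a few lines. This is grossly simplified; the Shuttle's actual voting and fault handling were far more involved:

```python
# Minimal sketch of system-level redundancy via majority vote,
# loosely inspired by the Shuttle's redundant computers (much simplified).
from collections import Counter

def vote(outputs):
    """Return the majority answer from redundant systems, or fail loudly."""
    winner, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority -- redundant set is unreliable")
    return winner

# Four computers agree; the fifth has faulted. The system carries on.
print(vote([42, 42, 42, 41, 42]))  # 42
```

The expensive part is not the voting; it is building and synchronizing five independent systems so that a vote is even possible.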

Storage Area Networks are an example of Component Level Redundancy: you tell your server that it has a disk attached, but the "disk" is actually a computer which is sending your data to separate disks.

System Level redundancy is expensive (you need multiples of everything) and daunting (you have to maintain multiple systems and keep them in sync somehow.) But it covers all kinds of failures if you do it correctly: power problems, water mains breaking, etc.

Component Level Redundancy is cheaper (you only have extras of the parts you need) and more inviting (the complexity is hidden). But it makes the infrastructure more complicated and it does not help if a non-redundant component fails, eg water floods your primary data center (you have an offsite backup data center, right?).

Worse, if you choose Component Level Redundancy over and over, you end up with a mind-bogglingly complex infrastructure which punishes attempts to change it and works against attempts to debug it. Is the problem in the virtual machine, or the actual machine, or the network, or the SAN, or the app? Who knows? Let the Blame Roulette begin!

So in trying to be super-reliable, frugal and conservative, Medical data centers often end up being flaky, expensive and conservative. At least they're conservative.

But I find that it often isn't their fault: senior management has taken charge of the ballooning tech budget and is trying to make simple buying choices, which results in complicated situations. And complexity rarely leads to clarity or performance or long-term reliability. Just ask Google or Amazon.

UPDATE Dec-30-2013
This article by a DevOps guy gives some interesting perspective on the same phenomenon:

Wednesday, December 4, 2013

Why Can't Medical IT Systems Share Data Better?

It is FAQ Wednesday, when I try to get through the most plaintive of cries I encounter in the course of my workday.

Today's question is "why can't medical information systems share data better?"

This is a good question: why in this day and age of Webly interconnectivity are lab results and diagnostic images and calendar appointments and other data not easily accessible?

Specifically, let us consider why the prototypical Hospital Information System (HIS) cannot share data better (more effectively) with the prototypical Laboratory Information System (LIS).

There are two basic ways to share data between computer systems. I will call these two methods "linking" and "transferring." Let us call the system with the data the "server" and the system which wishes to display data from the server the "client."

Linking is pretty easy and pretty useful: The client opens a window, sends a query to the server and the server replies with the data.

Transferring is more involved: the client gets data from the server, parses that data and loads that data into the client's own database, where that data can be found and used by the client's own software.

Linking is easier and real-time, but does not lend itself to a consistent look-and-feel. Transferring is harder and often done in batch mode, but it does lend itself to consistency and the data is available "natively" on the client, ie in more ways.
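The two styles can be sketched side by side. This is a toy illustration only: the HIS/LIS interfaces are invented, and an in-memory SQLite table stands in for the client's database:

```python
# Toy sketch of "linking" vs "transferring" between a client (HIS)
# and a server (LIS). All interfaces here are hypothetical.
import sqlite3

def link_result(lis_query, patient_id, test_code):
    """Linking: ask the server live and display whatever comes back. No local copy."""
    return lis_query(patient_id, test_code)

def transfer_results(lis_batch, his_db):
    """Transferring: parse a batch from the server, load it into the client's own tables."""
    cur = his_db.cursor()
    for patient_id, test_code, value in lis_batch:
        cur.execute(
            "INSERT INTO results (patient_id, test_code, value) VALUES (?, ?, ?)",
            (patient_id, test_code, value),
        )
    his_db.commit()

# The client's own database: now the data is available "natively".
his_db = sqlite3.connect(":memory:")
his_db.execute("CREATE TABLE results (patient_id TEXT, test_code TEXT, value TEXT)")
transfer_results([("p1", "K", "4.1"), ("p1", "NA", "139")], his_db)
print(his_db.execute("SELECT COUNT(*) FROM results").fetchone()[0])  # 2
```

Note what the transfer version quietly assumes: that the server's notion of `patient_id`, `test_code` and `value` maps cleanly onto the client's tables. That mapping is exactly the data-model mismatch question mentioned below.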

Since this is healthcare data, we have issues of privacy, authentication, display rules and all the requirements of HIPAA. This makes using Web technology, which is inherently insecure and open, a bit tricky. It also makes the transfer option more attractive: authentication across information systems is a pain and access logging across information systems, especially ones from different vendors, downright difficult.

However, transferring is more overhead to implement, more overhead to maintain and requires actually settling questions of data model mismatch.

We try to support linking when we can, but we often end up having to support someone else's authentication model and someone else's data model and someone else's design aesthetic. That's rather a lot of flexibility, which is why most bigger vendors won't go that route.

So here we are: most of the time the only data shared between systems is data everyone agrees must be shared. Data sharing tends to lag behind other developments and other requirements. It isn't better because it isn't easy to do.

Tuesday, December 3, 2013

"Lab Man" Is Too Vague

We recently were asked to fix a specific instance of a general Lab Man issue. Again.

The specific instance was providing phlebotomists with collection information.

The general issue is trying to make a "Lab Man" that is all things to all people.

I say that "Lab Man" is too vague because according to my observation, it refers to any of these options:

  • A Test Menu, from which clinicians select clinical tests to be done. This is all about ordering, so it includes helping a clinician figure out which tests to order, in addition to simply what code to use.
  • A Collection Manual, which gives the phlebotomist or nurse or lab tech information about how to collect, store and transport the specimen.
  • A Processing Manual, which gives a bench tech guidance about how to prepare and process the specimen.
  • A Result Interpretation Reference, to which clinicians can turn for help in understanding the clinical significance of a given result.
Sadly, many of our clients feel that this model "makes things too complicated" and that a single format will somehow handle all these functions. "Why do you make things complicated by having different formats and different documents?" we often hear from the Lab Man Committee. But to the Medical Director, "Lab Man" means one thing, to the Lab Manager "Lab Man" means another thing and to the Outreach Coordinator "Lab Man" means yet another thing.

Strangely, few members of the committee dispute that clinicians don't want to wade through collection instructions when trying to figure out the significance of a result. Drawers don't want to browse a catalogue; they just want to go straight to the pre-selected assay on the order. Bench techs only rarely care about the clinical research supporting the use of a particular assay for a particular medical history; instead, they want the preparation and processing instructions as quickly as possible.

One of our clients has a fab new HIS (Epic) which they want to use as much as possible outside the lab. They did not want to complicate things, so their Lab Man is a Test Menu/Result Interpretation Reference hybrid. This hybrid format works for doctors. But it does not work for drawers (nurses and PAs and phlebotomists) who are also "outside the lab." No Collection Manual for them.

Collecting without a Collection Manual is not working out very well, so I proposed the following: let the web site choose the format based on the incoming user ID. Or based on the IP (we can tell internal from external and we can tell which external IPs are actually draw stations.)
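That proposal amounts to a few lines of dispatch logic. The following is a sketch with hypothetical role names and IP ranges; a real deployment would source both from the site's actual directory and network plan:

```python
# Sketch of context-sensitive Lab Man format selection.
# Roles, format names and IP ranges are hypothetical examples.
import ipaddress

# External networks known to be draw stations (invented range).
DRAW_STATION_NETS = [ipaddress.ip_network("203.0.113.0/24")]

def pick_format(user_role, client_ip):
    """Choose which Lab Man face to show, based on who and where the user is."""
    ip = ipaddress.ip_address(client_ip)
    if user_role == "clinician":
        return "test_menu_interpretation"   # the Epic-style hybrid
    if user_role in ("nurse", "phlebotomist") or any(ip in net for net in DRAW_STATION_NETS):
        return "collection_manual"
    if ip.is_private:                       # inside the lab
        return "processing_manual"
    return "test_menu_interpretation"       # safe default for everyone else

print(pick_format("phlebotomist", "203.0.113.7"))  # collection_manual
```

The role check and the IP check back each other up: a phlebotomist logging in from home still gets the Collection Manual, and an unrecognized user at a known draw station does too.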

Context is everything, and these days context-sensitivity is often pretty easy to provide, so why not take advantage?

Update Thursday, December 5, 2013
Client has decided to go with a hard-to-find second link on their HIS, in the hope that (a) drawers can find the Collection Manual link and (b) clinicians cannot find the Collection Manual link. Good luck to them, say I.