Monday, September 22, 2014

Advice For Pre-processing Lab Orders

Last weekend, in a supposedly social situation, I happened to run into a clinical pathologist and talk turned naturally to clinical lab informatics. (Yes, the rest of the table was pretty uninterested.)

This CP mentioned that his organization is in the early stages of pre-processing clinical lab orders. This is a great idea, and a depressingly infrequent one, because:

    There are some orders which cannot be done (incorrect collection, eg) or will not be done (policy violation, eg) and the sooner you tell the ordering clinician that, the sooner they can do whatever makes sense.

    The earlier you flag a doomed order, the cheaper and easier it is to handle it.  Why waste precious human attention in the lab if a rule set can do the job so much faster and just as effectively?

I have lots of free advice--worth every penny, as they say--to offer on the technical implementation of these kinds of schemes. However, we were not able to cover the implementation advice in a social setting, much to everyone else's undoubted relief. So instead I thought that I would pound out this blog post and send him a link.

What exactly are we talking about? Let us consider the usual life-cycle of the lab order:

(HIS=Hospital Information System; EMR=Electronic Medical Record; LIS=Laboratory Information System)

  1. HIS
    1. clinician logs into an HIS
    2. clinician identifies a patient
    3. clinician specifies one or more lab tests
    4. clinician submits the order
    5. HIS generates a lab order
  2. Order Interface
    1. accepts order from HIS
    2. sends HIS order to the LIS
  3. LIS
    1. LIS receives the order
    2. specimen is collected
    3. lab receives the specimen
    4. specimen is prepped if need be
    5. assay is run
    6. result is verified & posts to the LIS
    7. result leaves the LIS, en route to HIS
  4. Result Interface
    1. accepts result from LIS
    2. sends result to the HIS
  5. HIS (full circle):
    1. clinician sees result, is enlightened
We are talking about adding a step to item 2 (Order interface) in between 2.1 and 2.2. Let us call this step "validate order". In practical terms we are talking about a relatively simple piece of software which applies rules to the HIS order to ensure that the order can be processed.

Conceptually, this software is *VERY* similar to the typical auto-verification module which applies rules to the raw lab result to ensure that the result can be reported.

When an order fails the rules, it is cancelled and a comment is added: "Cancelled per lab policy: {explanation or link to explanation}".

Since a "hard stop" on a clinician's practice of medicine makes everybody nervous, we usually start by encoding lab policy in the rule set: if the lab is not going to do the test anyway, there can be no harm in rejecting the order.

Since implementation is never flawless, we start out in "advisory mode": the module flags orders it _would_ cancel, so a lab tech can confirm that the logic is working. It is a good idea to have an advisory period for every new rule and to audit some percentage of decisions on an ongoing basis.
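To make this concrete, here is a minimal sketch of a validate-order rule set with an advisory mode. The test codes, tube types, and order fields are invented for illustration; a real module would read its rules and policies from the lab's own catalog.

```python
# Sketch of an order-validation rule set with advisory mode.
# Test codes and tube-type requirements below are hypothetical examples.

ADVISORY_MODE = True  # flag-only mode: let a lab tech confirm the logic first

# Each rule returns a rejection reason, or None if the order passes.
def check_discontinued(order):
    discontinued = {"CKMB-OLD"}  # hypothetical retired test codes
    if order["test_code"] in discontinued:
        return "test no longer performed per lab policy"
    return None

def check_specimen_type(order):
    required = {"K-EDTA-PLASMA": "lavender"}  # hypothetical tube requirements
    want = required.get(order["test_code"])
    if want and order.get("tube_type") != want:
        return "requires %s-top tube" % want
    return None

RULES = [check_discontinued, check_specimen_type]

def validate_order(order):
    """Return (disposition, comment) for an incoming HIS order."""
    for rule in RULES:
        reason = rule(order)
        if reason:
            comment = "Cancelled per lab policy: " + reason
            if ADVISORY_MODE:
                return ("flagged", comment)   # log for human review only
            return ("cancelled", comment)     # hard stop: cancel with comment
    return ("ok", "")

print(validate_order({"test_code": "CKMB-OLD"}))
```

Flipping `ADVISORY_MODE` off is all it takes to move a proven rule from "flag for review" to "cancel with explanation."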

So the enhanced model looks like this:

  1. HIS
    1. clinician logs into an HIS
    2. clinician identifies a patient
    3. clinician specifies one or more lab tests
    4. clinician submits the order
    5. HIS generates a lab order
  2. Order Interface
    1. accepts order from HIS
    2. validate order
      1. order OK: sends HIS order to the LIS
      2. order failed: cancel with explanation
  3. LIS
    1. LIS receives the order
    2. specimen is collected
    3. lab receives the specimen
    4. specimen is prepped if need be
    5. assay is run
    6. result is verified & posts to the LIS
    7. result leaves the LIS, en route to HIS
  4. Result Interface
    1. accepts result from LIS
    2. sends result to the HIS
  5. HIS (full circle):
    1. clinician sees result, is enlightened
Done correctly, this technique saves the lab time, gives the clinician feedback that is both speedy and educational, and avoids unneeded collection events.

"Correctly" probably requires knowing the leading causes of cancellation, tracking the causes you actually see, and confirming that as you add rules, you cut down on the manual cancellations.
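The tracking side can be as simple as a running tally of cancellation causes; the cause strings here are hypothetical examples.

```python
# Sketch of cancellation-cause tracking: tally each cause so the leading
# causes surface, telling you which rule to write next.
from collections import Counter

cancellations = Counter()

def record_cancellation(cause):
    cancellations[cause] += 1

for cause in ["wrong tube type", "test discontinued", "wrong tube type"]:
    record_cancellation(cause)

print(cancellations.most_common(1))  # leading cause first
```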

There it is: how best to roll out this kind of thing, in my experience.

Friday, July 11, 2014

The Most Effective Guy In The Room

I am not a fan of the "smartest guy in the room" meme. I feel that it misses the point. In a business setting, all I care about is effectiveness. Most business problems are resolved by insight, experience, social skills, hard work and luck. Smartness is not usually much of a factor.

The point of being in such a room is to fix an issue, address a lack or form a plan of action to achieve something. Who is going to get us there, in a reasonable amount of time with a maximum of team buy-in? That is who I want to identify. That is who I want to run the meeting.

Since I am a consultant and since I have been at this for 30 years, I have run into a wide range of styles and manners and approaches in meetings. There is more than one way to be effective, but I have worked with only a handful of people I thought were terrific runners of meetings. What they have in common is the ability to get at the core issue, to elicit suggestions and solutions, to distribute the necessary work and to keep everyone on task. I don't know how smart they were; it didn't really come up. I was too busy being focused on the job at hand.

As a technology guy, I am usually in meetings about technology and usually with people who are not technologists. So what is required is the ability to lay out technology issues in an accessible way, to facilitate group decisions about technology and to follow up in a way that non-technology people can relate to.

Phooey on smart. Who is the most effective person in the room?

Monday, June 30, 2014

App Retirement Party

Today marks the beginning of retirement for one of my apps--this one, in fact. As its Viking funeral ends and it sinks beneath the surface of the inky black waters of oblivion, I take a few minutes to ponder its useful life and its lessons.

Given its original mission, it was an astounding success: it allowed Outreach to dramatically grow its volume while greatly lowering the processing burden on the hospital lab.

Its secondary goal of paving the way for software in the draw station was also achieved, since rolling out its replacement was a fraction of the pain of introducing software to them in the first place.

Its tertiary goal of providing hard numbers with which to manage the collection operation seemed to be a glorious success, but since the organization did not bother to replace this functionality when they retired this app, perhaps this was not as obviously valuable as I thought?

As an exercise in customization to support operating procedures, it also seemed to be a great success, but many of the most popular and most effective customizations are not being re-implemented, so either their value was lower than it seemed to me, or their cost in the new environment is too high. The whole "this is better, because it cost more" movement baffles me, and perhaps this is part of that.

The customizations which people most complain about losing are these:
  • the ability to automatically cancel ordered tests, with an explanation, if those tests are not going to be done anyway per lab policy
  • the ability to automatically replace an order for an obsolete test with an order for its replacement, with documentation, if an equivalent replacement has been defined
  • the ability to handle clinical trials and research draws, to ensure anonymity and proper billing
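The second of these, obsolete-test substitution, boils down to a small lookup table plus a documentation note. A hedged sketch, with invented test codes standing in for real orderables:

```python
# Hypothetical sketch of the obsolete-test substitution rule. The codes
# below are invented examples, not real orderables.
REPLACEMENTS = {
    "T4-TOTAL": "T4-FREE",   # obsolete test -> defined equivalent
}

def substitute(order):
    """Swap an obsolete test for its defined replacement, with a note."""
    new_code = REPLACEMENTS.get(order["test_code"])
    if new_code is None:
        return order, None            # no replacement defined: pass through
    updated = dict(order, test_code=new_code)
    note = ("Order changed from %s to %s per lab policy."
            % (order["test_code"], new_code))
    return updated, note

print(substitute({"test_code": "T4-TOTAL"}))
```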
Rest in peace, app: you did all that we asked and more.

Godspeed, former users: may your manual work-arounds be as painless as possible and your upgrades lead you, eventually, to the level of efficiency you once enjoyed.

Thursday, June 26, 2014

Still No Silver Bullet

I recently got re-acquainted with Ruby-on-Rails and this made me think of software building tools in general, which reminded me that I like to rant about how badly we creators of software classify, describe and choose our tools when confronted by a new task. This is that rant.

We do a horrible job of assessing our options before we start, of explaining our choices to our colleagues (and bosses and clients), and of reacting to feedback as the project progresses.

Once upon a time, shortly after I got out of college, someone wrote an excellent paper on a timely topic: software programmer productivity. The year was 1986, the author was Fred Brooks and the paper was "No Silver Bullet — Essence and Accidents of Software Engineering".

(For the CliffsNotes version, turn to Wikipedia. The paper is worth a read on its own, but who has time?)

The summary in the Wikipedia page is very good, and I will be stealing from it liberally in some future rant about how badly managers tend to manage programmers and how badly programmers support their management. But you should read it now before the rant really gets going.

This is my banal insight: so little of the fundamentals of creating software has changed since 1986 that sometimes I want to cry. So much has changed about the incidentals that sometimes I want to cry. After 30+ years in the business, I was hoping for more improvement, not more novelty.

And boy, do we have novelty. We love novelty. We worship novelty. Take web UIs as an easy example: we have so many ways to make web pages, my head spins.
  • Want HTML with embedded code? Try PHP!
  • Want code with embedded HTML? Use Perl/CGI!
  • Want an abstract environment which separates HTML and code nearly completely? Ruby-on-Rails is for you.
  • How about an end-to-end integrated code & HTML environment? Microsoft Visual Studio is just the thing.
  • Want to try to side-step HTML and have some portability? Java, in theory, can do that.
  • Want to develop HTML with code woven into it? Why, Javascript was created just for this purpose.

I do not have a problem with more than one way to do something, especially if the ways have pros and cons. I am not an operating system bigot: I used VM/CMS for large batch jobs. I used VMS for file-oriented processing. I used Unix for distributed systems. I used MS-DOS for desktop apps. I used OS/2 and then Windows for snappier desktop apps. Each had its pros and cons and its appropriate uses.

The same was true for computer languages: I have used BASIC for matrix math, I have used C for file I/O, I have used Lisp for list processing, I have used PL/1 because IBM required that I use it.

Then somehow this idea of appropriateness faded away from computing: desktop=Windows, server=Unix. Then server=Windows Server for some, Unix for others. C++ or VB as the one-and-only programming language. Then other contenders for the one-and-only programming language.

We all understand that hammers are good for driving nails and screw drivers are good for driving screws. It is clear that screws are better connectors in some situations and nails in others. Who would hire a carpenter who only used one or the other?

But we hire "Drupal Shops" and "C/Windows Shops." We hire "Unix guys" or "Windows guys." We pretend that there is a single best tool for all jobs--and then argue pointlessly about which tool that might be.

Consider this statement from the creator of the Ruby programming language:

Matsumoto has said that Ruby is designed for programmer productivity and fun, following the principles of good user interface design.[33] At a Google Tech Talk in 2008 Matsumoto further stated, "I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of Ruby language."[34] He stresses that systems design needs to emphasize human, rather than computer, needs:[35]

A single tool to help every programmer in the world to be productive and happy? To me, that seems insane and it seems to reveal a worldview which I cannot support: the idea that there is a single best tool for all people for all problems in all environments. What hogwash.

I applaud the goal of creating good tools and making programmers in general more productive, but I reject the notion that this job can ever be done with a single stroke.

Wednesday, January 22, 2014


XKCD for Jan 20, 2014
This cartoon is amusing and represents, I am told, a common automation experience.

It ruffled my feathers a bit; automation is a big part of what we do for clinical labs and I know, I can prove, that we leave processes faster, cheaper and better when we are done with them.

This is, in part, because we go easy on the "rethinking" part. We like business process engineering, the careful defining and describing of the working parts of a process as though that process were a machine. Through careful description comes understanding.

We fear business process re-engineering, the long and tedious process in which MBAs and frustrated worker bees compete to imagine a perfect world which software people are then to summon from the ether.

We understand that the pieces interact and interconnect. We also understand that predicting how it will all shake out is a very low-percentage business so instead of trying to fix the entire process, we go after low-hanging fruit and iterate.

Billing wants paper from which to work? Fine, give them cheap, simple reports and print them. Once they believe in the reports, they may be open to a web page instead of printing. Once they trust the web page, they may be willing to consider a data feed. Once they have the data feed, who needs all this damned paper?

"Please stop printing those reports, nobody looks at them any more" is how our successful automation gigs end. But we don't start with "you guys need to ditch the paper, this is the 21st century!"

The clinical lab was slow to embrace automated analyzers and autoverification, but embrace them they did. The greying of the workforce and the lack of new blood mean that it is time to take the automation behind the bench.

We know that some IT "automation" made data entry and other tasks slower and harder, but please don't tar us all with the same brush: there are plenty of tasks the computer can do faster and better, so you can concentrate on the other stuff.

Monday, January 20, 2014

Privacy Rules vs Care Delivery

In medical IT, we are often asked about HIPAA compliance, much in the way the Brothers Grimm probably asked little children about being in the woods: to scare us into doing the "right" (ie legal-liability-lowering) thing.

When people say "HIPAA," generally I assume that they mean the privacy rule of HIPAA specifically, which Wikipedia summarizes thusly:

The HIPAA Privacy Rule regulates the use and disclosure of Protected Health Information (PHI) held by "covered entities" (generally, health care clearinghouses, employer sponsored health plans, health insurers, and medical service providers that engage in certain transactions.)[17] By regulation, the Department of Health and Human Services extended the HIPAA privacy rule to independent contractors of covered entities who fit within the definition of "business associates".[18] PHI is any information held by a covered entity which concerns health status, provision of health care, or payment for health care that can be linked to an individual.[19] This is interpreted rather broadly and includes any part of an individual's medical record or payment history. Covered entities must disclose PHI to the individual within 30 days upon request.[20] They also must disclose PHI when required to do so by law such as reporting suspected child abuse to state child welfare agencies.[21]

So is the primary goal to maintain privacy or deliver effective healthcare? If you said "both, of course!" then I must respectfully say "balderdash!" I am well aware of the standard privacy advocate claim that one can easily do both, at the same time, with no loss of effectiveness. It is my experience that not only do the two goals not co-exist, they also work against each other in many instances.

In lab IT, this is most often the following tension: deliver lab results quickly to whoever might need them (nurses, PAs, MDs, NPs) versus ensure that every access is by a care giver, specifically a care giver who is part of this patient's team. Even if the IT system user is a nurse who was doing something else when a code team member asked her to look something up on behalf of a caregiver who is currently not in a position to authenticate themselves.

When I ask privacy advocates how to balance these concerns the most common response is the claim that there is no problem: if IT does its job, then all required data will always be disclosed to the correct parties, but not the incorrect parties, in a timely manner. As someone who actually deploys systems in the real world, I find this answer supremely unhelpful.

When I ask security professionals how to balance these concerns, they ask me to restate my question as a risk:benefit statement, at which point they will help me figure out how much security to combat which risk. But when I respond that the risk is that security will interfere with the delivery of healthcare, I am referred to the standard menu of risks from which I may pick:
  • leaking information to blackmailers
  • leaking information to underwriters (insurers)
  • leaking information to the public

This company has a nice way to frame a conversation with a CISO, assuming that the organization is not a health care provider.

But working in medical IT, I feel that I need a solution that takes into account some other considerations:
  • NOT disclosing information may harm someone, so I do not want to use solutions which assume that all disclosure is bad
  • disclosing information to unauthorized health care providers is often covered by other legal means, eg medical licensing, so isn't that "breach" of rather low significance?
  • the information does not belong to the parent organization in the first place so taking steps to protect it must include ways to share it on demand
If anyone knows of a privacy policy implementable for an actual lab information system, please let me know. I would love to stop trying to meet privacy rules in an environment where failure to disclose in a timely manner could kill someone.

Wednesday, January 15, 2014

Ew, Gross: Why Good Engineering is Sometimes Bad

I like to think of myself as a software architect and not a software engineer.

Engineers are often relieved by this, because they feel that I am not rigorous enough to be an engineer. Architects are often dismayed by this, because I don't wear pleated pants or style my hair.

The practical difference is that I feel comfortable in the creative, design end of things, which means that I often get called in when the engineers have failed.

Sadly, I have come to understand that much of the time, this failure is willful: while I may or may not have vision unmatched by most engineers, I certainly have a willingness to change dirty design diapers. The failure of the engineers, if you listen closely, is often really the sound of a young teenager saying "Ew, gross!"

A function for which I am paid, but which I do not particularly enjoy, is Engineer-to-Management translation. "Why do my IT people say X?" I hear. I try to be diplomatic (for me), but more and more often the honest answer is "The obvious solution strikes them as distasteful, so they don't want to do it."

My favourite example of this was a programmer who embraced object-orientation to an absurd degree: he re-implemented basic math on top of the native facilities. His code defined ten numeroid objects, '0', '1', '2', '3', '4', '5', '6', '7', '8' and '9.' Of course he then had to define "addition," "subtraction," "division" and "multiplication" for these objects--what fun! His program gave the correct answer, but took 24 hours to run. Running with native integer support, the same program took a little under 8 hours. The programmer's response? "You resorted to an orthogonal formalism." That is the most learned-sounding "Ew, gross!" I have ever heard.

Other examples of holding my nose in the service of providing value include providing users with a dumb ASCII terminal long after such things had become uncool and yucky, or providing fax support despite the outdated nature of faxing.

Engineering is rule-based and rewards rule-following and so attracts rule-loving personalities. They tend to find rules comforting in addition to finding rules useful and productive.

The flip side is that, as a group, engineers are bad at value judgements. They are not inclined to break the rules and they certainly are not comfortable with solutions based on rule-breaking.

Alas, the real world--especially the clinical lab, where we are trying to get it right every time--sometimes does not co-operate and requires us to pick the best of a bad lot. Even worse, sometimes we need to pick any of a bad lot and get on with giving the users what they need and getting the job done.

Worst of all are the cases where the beautiful engineering is invisible to the users. All they see is that version 2 sucks worse than version 1, again. And again, engineering wants a cookie because version 2 has beautiful engineering and none of the rule-bending messiness that made version 1 so useful.

Sometimes engineering needs to get over itself and do what must be done. That is why I don't mind being scoffed at by hardcore engineers: they may be better at math, but I am better at recognizing when a rule is getting in the way of a better solution. I know when to say "that is a good rule, that is a useful rule, that rule will keep you from tending to make a certain kind of mistake, but now is the time to break that precious rule."

Wednesday, January 8, 2014

Why Can You / Can't You Use The Cloud?

FAQ Wednesday is here again. Today's question: what about the Cloud and clinical labs?

This question has two variants:

  1. You can't use the Cloud for health care data, can you--HIPAA, etc?
  2. Why can't you use the Cloud for my clinical lab interface?
Can You Use the Cloud?
The first question, which I take to mean, "is it within law and regulation to use the Cloud for PHI," is actually pretty easy to answer: yes. Does HIPAA restrict the options? Yes. Does HIPAA prohibit use of the Cloud? No.

We currently use Amazon Web Services as our Cloud vendor and they claim to be certified and everything. From AWS:


HIPAA

AWS enables covered entities and their business associates subject to the U.S. Health Insurance Portability and Accountability Act (HIPAA) to leverage the secure AWS environment to process, maintain, and store protected health information and AWS will be signing business associate agreements with such customers. AWS also offers a HIPAA-focused whitepaper for customers interested in learning more about how they can leverage AWS for the processing and storage of health information. The Creating HIPAA-Compliant Medical Data Applications with AWS whitepaper outlines how companies can use AWS to process systems that facilitate HIPAA and HITECH compliance. For more information on the AWS HIPAA compliance program please contact AWS Sales and Business Development.

But I expect that Google will keep up, and this reference implies that they are.

In fact, we are counting on growing acceptance of Cloud implementations in health care, which is why we are currently developing Direct Interfaces.

Why Can't You Use the Cloud?
This is a slightly different question, which I take to mean "in practical terms, what are the obstacles to Cloud-based interfacing?" The short answer is "the conservative nature of hospital and clinical lab IT culture." This is closely linked to why lab interfacing in general is so hard: our industry punishes mistakes and does not reward innovation. So often, doing nothing is rewarded, and thus fighting innovation tooth-and-nail is the norm.

(Since this is legal and low overhead and effective, we plan to step around the hospital and clinical lab IT organizations with our new Cloud-based lab connectivity venture, but that is another story.)

Friday, January 3, 2014

Data-Driven Virtuous Cycles

To paraphrase the Six Sigma religious tenet, "you can only manage what you can measure." While I think that this is somewhat over-simplified, there is certainly truth to it.

Specifically, I run into issues which can only be resolved with a solid metered context. Often, while debugging these issues, I build "scaffolding" which then lives on as the metering to drive a continuous monitoring process.

Some issues have many factors, related in murky and maddening ways and the only way to untangle the knot is to measure, find and fix something and then return to step one until your measure tells you that you are done.

Current Example
We are in the process of restoring functionality lost when a highly tuned system was replaced by something manifestly worse--but cooler. One of the data elements lost was who collected the specimen. This turns out to be critical for many management functions.

The first reaction to our bug report was "nonsense! the new system is fabulous!".

The second reaction was "ok, looking at your data, we see that this is happening, but we have come up with a labor-intensive workaround and we will simply command people to follow the new procedure."

The third reaction was "ok, we see that some areas are not complying but we are going to scold them--and stop recording this data, because we don't need it anymore."

Needless to say, we are still collecting the data and still helping them police their spotty compliance. Someday, the meters will tell us that all is well and we can go back to relying on this data for our high value reports.

The Bad Old Days
This situation is sadly similar to our work with scanned requisition forms. When we deployed our draw station solution, we became part of the scanned req infrastructure. As the newest member of the team, we were immediately blamed for any and all missing reqs. In self-defence, I created an audit to trace scanned req activity, comparing expected with actual. We immediately made a number of interesting discoveries:
  1. I had some bugs, which I fixed and verified as fixed
  2. Some users were not really on board with scanned reqs so we started to nag them
  3. Some of the orders for which we were blamed did not come through the draw station; the Front Bench decided to use our software to ensure compliance
  4. Some of the scanners were in need of service
  5. The placement of the bar codes on the page matters more than one would hope
With feedback and monitoring, the situation has improved dramatically and our req watchdog technology is actually still in service even as the LIS and draw station solution for which it was created are out of service and about to be retired, respectively.
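The expected-versus-actual audit at the heart of the req watchdog is a simple set comparison; the accession numbers below are made up for illustration.

```python
# Sketch of the expected-vs-actual scanned-req audit: compare the accessions
# we expected a scanned req for against those actually received.

def audit_reqs(expected, received):
    """Return (missing, unexpected) accession sets."""
    expected, received = set(expected), set(received)
    return expected - received, received - expected

missing, unexpected = audit_reqs(
    expected=["A100", "A101", "A102"],   # hypothetical accession numbers
    received=["A100", "A102", "A999"],
)
print("missing:", missing, "unexpected:", unexpected)
```

The "missing" set drives the nagging; the "unexpected" set is how we discovered orders that never came through the draw station at all.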

Tube Tracking
I think that our tube tracking experience can also be seen as measurement leading to clarity and control, so I am including it.

Measure, management, repeat. Even when all is well, don't stop auditing and reviewing.

Wednesday, January 1, 2014

XML vs HL7

It is FAQ Wednesday, when I take a FAQ off the pile and address it.

Today's frequently asked question is "why do so many systems use HL7 instead of XML?"

This is a good question with many possible answers, but this is my executive summary: XML is easy for humans to read and HL7 is easy for computers to process.

Medical IT is often short on power and long on functionality, so it is natural to avoid expensive-to-process formats and to embrace easy-to-process ones, even at the cost of human legibility. In my experience the people who wonder at the lack of XML are not career medical IT professionals.

XML is a markup language, a structured, tagged text format which descends directly from SGML. It was intended as a platform-independent document storage format, but has become a kind of universal data exchange format.

HL7 is a line-oriented, record-and-field-based text format which is rather reminiscent of the serial line-oriented message formats of yore, such as ASTM, which was already familiar to clinical lab people from instrument interfaces.

XML makes more-or-less self-documenting "trees" which can be displayed natively by most browsers, or "visualized" with a little javascript magic. There are lots of tools for working with XML and storing it.

In theory, XML is fault-intolerant: XML processing is supposed to halt at the first error encountered. This is not very robust, but in theory there should be no errors, because you can write a formal document type definition (DTD) which will allow people to make sure that they are sending and receiving data in exactly the way that you expect. If the XML document is made using a DTD and parsed with the same DTD, what could go wrong? And whoever created the data used a validator on it before releasing it, right?

(In practice, I do not see very much strict adherence to document type definitions.)
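The fault-intolerance is easy to demonstrate: a conforming parser halts at the first well-formedness error rather than limping onward. A small sketch using Python's standard library parser, with a toy result document:

```python
# Demonstration of XML fault-intolerance: one mismatched close tag and a
# conforming parser refuses to process the document at all.
import xml.etree.ElementTree as ET

good = "<result><test>GLUCOSE</test><value>95</value></result>"
bad = "<result><test>GLUCOSE</value></result>"   # mismatched close tag

print(ET.fromstring(good).find("value").text)    # parses fine

try:
    ET.fromstring(bad)
except ET.ParseError as err:
    print("parse halted:", err)                  # halts at the first error
```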

HL7 makes nice, simple messages which can be easily processed by almost any programming language. I have written HL7 message processors in C, Perl, PHP, and BASIC.
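A Python equivalent of those processors shows why HL7 is so computer-friendly: segments are lines (separated by carriage returns) and fields are pipe-delimited, so parsing is little more than two splits. The message content here is a toy example.

```python
# A minimal HL7 v2 segment/field splitter, in the spirit of the C/Perl/PHP
# processors mentioned above. HL7 separates segments with carriage returns
# and fields with pipes.
def parse_hl7(message):
    """Split an HL7 message into a list of (segment_id, fields) pairs."""
    segments = []
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments.append((fields[0], fields))
    return segments

msg = ("OBX|2|CE|997231^RESULT 1^L||M415|||||N|F\r"
       "OBX|3|CE|997232^RESULT 2^L||MR105|||||N|F")
for seg_id, fields in parse_hl7(msg):
    print(seg_id, fields[3])  # segment name and the observation identifier
```

A real interface engine also handles component (`^`) and repetition (`~`) separators, but the principle is the same: simple delimiters, simple code.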

So how do these formats look side by side? Consider the following two samples:

HL7 Lab Result:
00000-0000|||||||86427531^^^03|SSN# HERE
OBR|1|8642753100012^LIS|20809880170^LCS|008342^UPPER RESPIRATORY CULTURE^L|||19980727175800||||||SS#634748641 CH14885 SRC:THROA
OBX|2|CE|997231^RESULT 1^L||M415|||||N|F|||19980729160500|BN
OBX|3|CE|997232^RESULT 2^L||MR105|||||N|F|||19980729160500|BN

XML Lab Result:
<element name="lab-test-results">
    <!-- A series of lab test results. -->
    <element name="when" type="d:approx-date-time" minOccurs="0">
        <!-- The date and time of the results. -->
    </element>
    <element name="lab-group" type="lab:lab-test-results-group-type" maxOccurs="unbounded">
        <!-- A set of lab results. -->
    </element>
    <element name="ordered-by" type="t:Organization" minOccurs="0">
        <!-- The person or organization that ordered the lab tests. -->
    </element>
</element>