Showing posts with label Ordering. Show all posts

Monday, September 22, 2014

Advice For Pre-processing Lab Orders

Last weekend, in a supposedly social situation, I happened to run into a clinical pathologist and talk turned naturally to clinical lab informatics. (Yes, the rest of the table was pretty uninterested.)

This CP mentioned that his organization is in the early stages of pre-processing clinical lab orders. This is a great idea, but a depressingly infrequent one. It is a great idea because:

    There are some orders which cannot be done (incorrect collection, eg) or will not be done (policy violation, eg) and the sooner you tell the ordering clinician that, the sooner they can do whatever makes sense.

    The earlier you flag a doomed order, the cheaper and easier it is to handle it.  Why waste precious human attention in the lab if a rule set can do the job so much faster and just as effectively?

I have lots of free advice--worth every penny, as they say--to offer on the technical implementation of these kinds of schemes. However, we were not able to cover the implementation advice in a social setting, much to everyone else's undoubted relief. So instead I thought that I would pound out this blog post and send him a link.

What exactly are we talking about? Let us consider the usual life-cycle of the lab order:

(HIS=Hospital Information System; EMR=Electronic Medical Record; LIS=Laboratory Information System)

  1. HIS
    1. clinician logs into an HIS
    2. clinician identifies a patient
    3. clinician specifies one or more lab tests
    4. clinician submits the order
    5. HIS generates a lab order
  2. Order Interface
    1. accepts order from HIS
    2. sends HIS order to the LIS
  3. LIS
    1. LIS receives the order
    2. specimen is collected
    3. lab receives the specimen
    4. specimen is prepped if need be
    5. assay is run
    6. result is verified & posts to the LIS
    7. result leaves the LIS, en route to HIS
  4. Result Interface
    1. accepts result from LIS
    2. sends result to the HIS
  5. HIS (full circle)
    1. clinician sees result, is enlightened
We are talking about adding a step to item 2 (Order Interface) in between 2.1 and 2.2. Let us call this step "validate order". In practical terms, we are talking about a relatively simple piece of software which applies rules to the HIS order to ensure that the order can be processed.

Conceptually, this software is *VERY* similar to the typical auto-verification module which applies rules to the raw lab result to ensure that the result can be reported.

When an order fails the rules, it is cancelled and a comment is added: "Cancelled per lab policy: {explanation or link to explanation}".

Since a "hard stop" on a clinician's practice of medicine makes everybody nervous, we usually start with encoding lab policy in the rule set: if the lab is not going to do the test anyway, there can be no harm in rejecting the order.

Since implementation is never flawless, we start out with "advisory mode": the module flags orders it _would_ cancel, so a lab tech can confirm that the logic is working. It is a good idea to have an advisory period for every new rule and to audit some percentage on an on-going basis.
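To make the idea concrete, here is a minimal sketch of a rule-driven validator with an advisory mode; the rule, the order fields, and the test codes are all invented for illustration, not any particular LIS's schema:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Order:
    test_code: str
    specimen_type: str

# Each rule returns None if the order passes, or an explanation if it fails.
Rule = Callable[[Order], Optional[str]]

def no_serum_for_lead(order: Order) -> Optional[str]:
    # Hypothetical policy rule: lead levels require whole blood, not serum.
    if order.test_code == "PB" and order.specimen_type == "serum":
        return "Cancelled per lab policy: lead requires whole blood."
    return None

def validate(order: Order, rules: list[Rule], advisory: bool) -> tuple[bool, list[str]]:
    """Apply every rule; in advisory mode, flag but never cancel."""
    reasons = [r for rule in rules if (r := rule(order)) is not None]
    cancelled = bool(reasons) and not advisory
    return cancelled, reasons
```

In advisory mode the order still flows through to the LIS; the flagged reasons go to a lab tech who confirms the logic before the rule is allowed to cancel for real.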

So the enhanced model looks like this:

  1. HIS
    1. clinician logs into an HIS
    2. clinician identifies a patient
    3. clinician specifies one or more lab tests
    4. clinician submits the order
    5. HIS generates a lab order
  2. Order Interface
    1. accepts order from HIS
    2. validate order
      1. order OK: sends HIS order to the LIS
      2. order failed: cancel with explanation
  3. LIS
    1. LIS receives the order
    2. specimen is collected
    3. lab receives the specimen
    4. specimen is prepped if need be
    5. assay is run
    6. result is verified & posts to the LIS
    7. result leaves the LIS, en route to HIS
  4. Result Interface
    1. accepts result from LIS
    2. sends result to the HIS
  5. HIS (full circle)
    1. clinician sees result, is enlightened
Done correctly, this technique saves the lab time, gives the clinician feedback that is both speedy and educational, and avoids unneeded collection events.

"Correctly" probably requires knowing the leading causes of cancellation, tracking those causes, and confirming that as you add rules, you cut down on the manual cancellations.

There it is: how best to roll out this kind of thing, in my experience.

Wednesday, December 18, 2013

Why Is Merging Test Catalogs So Hard?

It is FAQ Wednesday, when I take a FAQ off the pile and address it.

Today's frequently asked question is "why is merging test catalogs so hard?"

This question fits O'Flaherty's first law of applied technology: "It is usually not the technology that takes the time."

This question arises in a number of contexts:
  • Lab Manuals across organizations (eg hospital systems)
  • Lab Manuals across labs (eg main lab and satellite lab)
  • Ref lab interfaces (order from one catalog, process from another)
  • Historical data archives (showing results from a previous catalog)
The answer is always the same: assumptions and comparisons.

When considering a lab result, there is a huge amount of assumed context:
  1. the collection method sometimes matters, sometimes does not;
  2. the methodology sometimes matters, sometimes does not;
  3. the actual value sometimes matters, sometime only the High / Normal / Low flag;
  4. having a basis of comparison to prior results sometimes matters and sometimes does not
Take Vitamin D levels for example: there are a couple of ways to measure it (methodology) which vary in exactly what they measure and how accurately they measure it. If you are a specialist, these differences may be very important. If you are a generalist, these differences may be meaningless to you. If you are trying to provide historical context by evaluating a series of these results, you almost certainly assume that the methodology was constant, whichever methodology was used.

The problem arises when catalog A only has "Vitamin D" and catalog B has "Vitamin D (the cool one)" and "Vitamin D (the usual one)". How do you work across the catalogs?

There is always renaming to enforce consistency, but that is not straightforward and it is often politically charged: someone wins and someone loses. Furthermore, changing the name of a test is problematic: should the clinician infer that the assay has changed somehow? If so, how?

There is adding new tests to enforce consistency, but that too is a qualified success: what if the new test is essentially the same as the old one? How does the clinician know that, and how does the software which displays the results know that these two "different" tests are the same lab assay?
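As a sketch of why code alone cannot resolve this, consider a made-up crosswalk in which catalog A's single entry matches two entries in catalog B; everything below is invented for illustration:

```python
# Catalog B distinguishes methodology; catalog A does not, so the mapping
# from A to B is one-to-many and needs a human (clinical + lab + data
# modelling) to resolve it.
catalog_a = {"VITD": "Vitamin D"}
catalog_b = {
    "VITD-LC": "Vitamin D, LC-MS/MS",     # "the cool one"
    "VITD-IA": "Vitamin D, immunoassay",  # "the usual one"
}

def candidates(a_code: str) -> list[str]:
    """Every B code whose name starts with A's name is a candidate match."""
    name = catalog_a[a_code]
    return [code for code, b_name in catalog_b.items() if b_name.startswith(name)]
```

The best software can do is surface the ambiguity; deciding which candidate is "the same test" is exactly the politically charged part.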

Worse, making these decisions requires many different skill sets: clinical, lab and data modelling. So why is merging test catalogs so hard? Because there is so much to it and so much of it is not immediately apparent. Hello, Titanic, meet iceberg.

Wednesday, November 27, 2013

Why Are Lab Interfaces So Hard?

Taking a frequently asked question off the pile for today's post: "why are Lab Interfaces so hard?" Variants include "why do they take so long to get up and running?" and "why are they so expensive?" and "why don't they work better?"

Here is the most common form of clinical lab interface I encounter:

  • Secure network connection: TCP/IP over a Cisco VPN
  • Bi-directional: orders from client to lab, results from lab to client
    • client has a TCP/IP client for sending orders to the lab
    • lab has a TCP/IP server for receiving orders from the client
    • client has a TCP/IP server for receiving results
    • lab has a TCP/IP client for sending results
  • HL7: perfect it is not, but it is common and well-understood
  • Assay ID translation: someone has to map the client's test codes to the lab's, or vice-versa
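For the HL7-over-TCP/IP leg, these interfaces conventionally use MLLP framing (a 0x0B byte before the message, 0x1C 0x0D after it); assuming that framing, here is a minimal sketch of the sending side, with the host, port, and message purely hypothetical:

```python
import socket

START, END = b"\x0b", b"\x1c\x0d"  # MLLP start-of-block and end-of-block

def frame(message: str) -> bytes:
    """Wrap an HL7 v2 message in an MLLP envelope."""
    return START + message.encode("ascii") + END

def send_order(host: str, port: int, hl7_order: str) -> bytes:
    """Send one framed order and return the raw (still-framed) ACK."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(frame(hl7_order))
        return sock.recv(65536)
```

The receiving side is the mirror image: a listening socket that strips the envelope, parses the message, and answers with an HL7 ACK.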
None of this is difficult, technology-wise. I have written such interfaces from scratch in about 8 working hours. Provided that the specs are good, the setup and programming are more tedious than challenging.

(The specs include the mapping of test codes across menus, by the way. Why that mapping often gets left as an exercise to the programmer I will never understand.)

It is becoming increasingly common to set a target of 24 hours for bringing up such an interface: if you have a working interface framework and a reasonably seasoned team, this should be quite doable.

So-called Direct Interfaces are becoming more talked about. I have yet to see one deployed in the real world in my consulting practice, but I like the idea. In theory this would cut down the network configuration time, but I suspect that the added x509 certificate wrangling will often more than make up for it.

So why does it take days or weeks or even months to get an interface up and running? In my experience, the root causes are two: bureaucracy and misaligned agendas.

The bureaucracy means that I have spent the majority of my time on projects like this walking from cube to cube, paperwork in hand, looking for permission to connect to another organization. Or hours on the phone, chasing up-to-date, accurate specs and precise explanations of terms like "alternate patient ID".

The misaligned agendas make pulling together in the same direction difficult:

  1. The client's lab wants that labor-saving, time-saving, money-saving interface.
  2. The client IT department does not want to expose a chink in their armour, which chink will likely never get closed and which chink will be used by an interface which must run 24/7 and always be up.
So what IT hears is "buy a Cisco VPN license; pray that our Cisco VPN appliance has a spare connection; punch a hole in the firewall, but don't botch it leaving us open to hackers; assume responsibility for monitoring the health of that VPN tunnel."

To many IT ears, this sounds like a lot of work, some real money and an on-going obligation with little upside and huge potential downsides. So the network folks become what my firm calls "stoppable"--any hitch in the plan brings them to a halt. Who knows? If they delay long enough, maybe the project will go away....

So why is it so hard, take so long, often have never-resolved issues? Because navigating bureaucracy is exhausting and overcoming organizational inertia takes a long time.

Tuesday, November 26, 2013

NPI: No Office = Bad For Lab


Yesterday a sadly familiar issue landed on my desk: how to make NPIs work for lab results.

NPIs were brought to you by CMS, so they have to be good.


To quote from The National Provider Identifier (NPI): What You Need to Know

How Many NPIs Can a Sole Proprietor Have?
A sole proprietor is eligible for only one NPI, just like any other individual. For example, if a physician is a sole proprietor, the physician is eligible for only one NPI (the individual’s NPI), regardless of the number of different office locations the physician may have, whether the sole proprietorship has employees, and whether the Internal Revenue Service (IRS) has issued an EIN to the sole proprietorship so the employees’ W-2 forms can reflect the EIN instead of the sole proprietorship’s Taxpayer Identification Number (which is the sole proprietor’s SSN).
This is logical, but it is very bad for clinical laboratories. Physicians practicing medicine do not fit the model of citizens paying taxes: if I have seven offices, which I visit in rotation, I want the labs for patients who live in Shoretown to go to my office in Shoretown: sending them to any of my other offices creates work and runs the risk of me not having them where and when I need them. But sending them to all my offices creates a blizzard of faxes and a mountain of filing.

Unless, of course, my practice has invested in and correctly configured a centralized, distributed Electronic Medical Record with a real-time, bidirectional Lab interface. Which, very likely, is not the case. At least not from what I see in the field. At least not yet.

So here I am, stuck in the real world, creating yet another mapping from internal LIS ordering ID (which is office-specific) to NPI (which is not office-specific but which is required by many legal agreements).

The good news is that NPI excels at figuring out which providers merit a fruit basket for all their business: I can analyze the ordering patterns by practice pretty easily by associating NPIs together into a practice. I just can't help you with where to send that fruit basket. Or those lab results.
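The fruit-basket analysis boils down to rolling office-specific order counts up to practices via an NPI-to-practice map; a sketch, with the NPIs and practice names invented:

```python
from collections import Counter

# Hypothetical map associating individual NPIs with a practice.
npi_to_practice = {
    "1111111111": "Shoretown Family Medicine",
    "2222222222": "Shoretown Family Medicine",
    "3333333333": "Hilltop Cardiology",
}

def orders_by_practice(ordering_npis: list[str]) -> Counter:
    """Count orders per practice, given one ordering-provider NPI per order."""
    return Counter(npi_to_practice[npi] for npi in ordering_npis)
```

What this map cannot do, of course, is tell you which of a sole proprietor's seven offices placed the order; that still needs the office-specific LIS ordering ID.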

Friday, November 22, 2013

Google Maps for Data Visualization

We are looking into ways to make draw data and ordering statistics more useful to our clients. Perhaps a map? Check out this excellent tutorial on how to get started: I did and was very impressed.

http://www.w3schools.com/googleAPI/default.asp

Our proof-of-concept project has the following goals:
  1. a map of the catchment area
    1. marker for client
    2. marker for each draw station
  2. pop-up for each marker
    1. show the absolute activity for that location
    2. show the percentage activity for that location
  3. do the same thing with ordering clinicians instead of draw stations.
We are eager to see if the coolness translates into actual usefulness.

Tuesday, March 19, 2013

Ordering Off The Menu

Today we start implementing our solution for a universal lab problem: ordering off the menu. In other words, supporting orders for tests which your lab can either do, or send out, but which are not common enough to merit a test definition and/or an entry in your Lab Manual.

(In the trenches, we know that the explosion of possible assays will likely always outstrip the ability of labs to define tests and to maintain their Lab Manual, but the "not common enough" party line is so much more palatable to management that we use it as a courtesy to our clients.)

A truly useful test definition has the following aspects:
  1. an LIS definition to facilitate processing
  2. documentation, ideally in the external Lab Man
    1. what the test is called
    2. how to order it
    3. why order it
    4. when it is done
  3. documentation, hopefully in an internal Lab Man
    1. collection instructions
    2. handling & preparation instructions
    3. processing instructions 
  4. a billing definition to facilitate billing
An uncommon or new test might have to do without item 1 above for a while, because LIS test definitions are often complicated to do, difficult to validate and scary to release.

More awkwardly, it is usually difficult to collaborate on test definitions in an LIS, although the ordering information may come from a Medical Director, the collection & handling instructions from the Front Bench and the processing instructions from the Lab Supervisor. Usually the LIS only provides good tools for the processing instructions since those are near and dear to LIS developers' hearts.

But when the lab falls too far behind in documenting new or uncommon assays, they make work for themselves when orders for these tests arrive. And this work is at every stop along the way: Customer Service to answer questions about ordering, the Front Bench to deal with miscollected or mishandled specimens, the Processing Bench to try to do the assay on whatever was collected, etc.

So we are providing a technological stepping-stone: what we call a pre-LIS test definition. We created a system to do the following:
  • attach different names to the MISC code
    • these look like different tests to the users for ordering & processing
    • but these look like a MISC to the LIS for reporting
  • allow different users to enter different parts of the pre-LIS
    • include that information in the internal Lab Man
    • include that information in the external Lab Man
  • when the pre-LIS test definition is ready
    • automatically create an internal Lab task to build the test definition
    • automatically send information to Billing to request billing codes, etc
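The aliasing idea can be sketched as follows; the field names and the "MISC" code convention here are assumptions for illustration, not any particular LIS's schema:

```python
from dataclasses import dataclass

@dataclass
class PreLisTest:
    display_name: str       # what ordering & processing users see
    lis_code: str = "MISC"  # what the LIS sees for reporting
    collection: str = ""    # collection instructions, from the Front Bench
    processing: str = ""    # processing instructions, from the Lab Supervisor

registry: dict[str, PreLisTest] = {}

def define(name: str) -> PreLisTest:
    """Attach a new user-visible test name to the shared MISC code."""
    registry[name] = PreLisTest(display_name=name)
    return registry[name]

def lis_code_for(name: str) -> str:
    # For reporting, every pre-LIS test resolves to the shared MISC code.
    return registry[name].lis_code
```

Different users fill in different fields over time; once the record is complete, it becomes the work order for building the real LIS test definition.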
We feel that this is the best of both worlds: a rapid, easy-to-edit visible test definition that supports, rather than usurps, building LIS test definitions.

Monday, November 1, 2010

Laboratory Manual for SoftLab

The client has a shiny new SoftLab installation from SCC. What they do not have is an automatically generated Laboratory Manual (LM) to go with their new test definitions.

They want to use their SoftLab test definitions, so far as they go, because the LIS's test definitions are what is actually in use.

But they need more attributes to create a fully featured LM and they do not want to create this HTML document by hand or with a Content Management System, because they want the LIS test attributes refreshed automatically every night and a new LM generated early every morning.

Furthermore, they want an internal version, containing internal processing notes and procedures, and an external version, without internal material and including contact information, etc.

I defined some terms to help them understand their problem:
  • core test attributes--the SoftLab test definitions
  • extended test attributes--the additional ones needed for the manual
Our solution for them has the following pieces:
  • a database into which to put these attributes
  • a SoftLab interface to get the core attributes
  • a desktop app to provide a rich UI for maintaining extended attributes
  • a template-driven HTML document creator which runs automatically
  • an interactive search tool to provide users with a way to search
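The template-driven document creator might look something like this sketch; the attribute names are invented for illustration, not actual SoftLab attributes:

```python
from string import Template

# One page template serves both manuals; internal-only material is
# injected only when rendering the internal version.
PAGE = Template("<h2>$name</h2><p>Specimen: $specimen</p>$internal")

def render(test: dict, audience: str) -> str:
    """Render one test's manual page for 'internal' or 'external' readers."""
    internal = f"<p>Note: {test['note']}</p>" if audience == "internal" else ""
    return PAGE.substitute(name=test["name"], specimen=test["specimen"],
                           internal=internal)
```

A nightly job loops this over every test (core plus extended attributes) and writes out the two manuals.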

The system understands the differences between internal tests, orderable-only tests, resultable-only tests, orderable & resultable tests and groups. Groups are automatically expanded as appropriate. The format and contents of the internal and external manuals vary as appropriate.

As part of getting this up and running, we provided tools for debugging the SoftLab definitions and validating the groups, especially with respect to groups of groups.

Monday, February 4, 2008

Custom Requisition Support

One of my colleagues provided the client with a very nice custom requisition creator:
  • a database which holds practices and maps practices to their favourite assays;
  • a UI to maintain the database;
  • a formatter to put the custom information onto requisition forms
The customized requisitions were a huge hit with the providers, but not with the phlebotomists: the providers get an easy way to order just what they want to order but the phlebotomists get order forms which vary widely, making entering those forms into the computer harder than they would like.

In order to better support draw station operations, I upgraded our draw station software to accept the practice ID as part of the patient greeting process. This allows the draw station app to put up a web page which matches the paper requisition in the phlebotomist's hand.

So if they want to order the test which is the fourth box on the second row on the paper, they click on the fourth box on the second row of the screen.

Since the draw station procedure is to highlight the assays with a yellow marker as the assays are entered, to ensure entry accuracy, I mimic that on the screen: when a box is checked off, that assay has a yellow background.

The feedback was immediate and positive: the users who are not that comfortable with computers were instantly comforted by the close correlation of the physical and virtual.

Friday, July 20, 2007

Homegrown LIS -> Ref Lab

Client has a homegrown LIS, but wants to have a bi-directional interface with major ref labs: ARUP and Mayo to start.

The homegrown LIS cannot be extended, so I created a piece of Middleware (MW) to bridge the gap.

On the LIS side, the MW appears to be an automated analyzer, to which orders flow and from which results come.

On the Ref Lab side, the MW appears to be an industry-standard, up-to-date LIS, speaking HL7 over TCP/IP.

It all works like a charm:
  1. A user of the homegrown LIS places an order for a send out
  2. The MW detects the order as if it were an instrument
  3. The MW stores the order in its database
  4. The MW creates an HL7 order for the appropriate ref lab
  5. The MW sends the HL7 order on its way
  6. The Ref Lab interface receives the order
  7. The Ref Lab interface sends a result
  8. The MW receives the result and updates its database
  9. The MW creates a message to tell the homegrown LIS the result
  10. Any user of the homegrown LIS can see the result


The MW has three components: a database, a TCP/IP client to send orders and a TCP/IP server to receive results.

The database allows for various automatically generated management reports and tracking of activity and back up of the received results.
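The round trip above can be sketched with an in-memory "database" and stubbed interfaces; every name, code, and message fragment here is hypothetical:

```python
# Toy middleware state: order ID -> order record.
orders_db: dict[str, dict] = {}

def on_lis_order(order_id: str, test_code: str) -> str:
    """Steps 2-4: store the order and build an HL7 ORM for the ref lab."""
    orders_db[order_id] = {"test": test_code, "result": None}
    return (f"MSH|^~\\&|MW|LAB|REF|LAB\r"
            f"ORC|NW|{order_id}\r"
            f"OBR|1|{order_id}||{test_code}")

def on_ref_lab_result(order_id: str, value: str) -> None:
    """Steps 8-9: record the result so the LIS-facing side can report it."""
    orders_db[order_id]["result"] = value
```

The real middleware wraps these two handlers in the TCP/IP client and server, and the database behind them is what makes the management reports and result backups possible.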

Wednesday, October 4, 2000

Laboratory Manual for Homegrown LIS

The client is a large hospital-based clinical lab with a homegrown LIS--the only LIS they have ever known.

Since the LIS was designed in the mid-1970s, it lacks some modern amenities, such as support for exporting test definitions to an HIS, let alone the several (!) HISes in use by the parent academic medical center.

The client developed a standalone test definitions database using an MS-DOS database management system; our first job was to port that schema to MySQL under Linux.

The client developed a library of export formats, so our next job was to catalogue, document and implement exports for these formats. We did the documentation on-line, as a web site, for ease of use.

Since the process was laborious and manual and involved inventory (printed documents), the client was used to producing their Lab Manual only when pressure to do so became intolerable. We made the Lab Manual creation process push-button and greatly extended the range of supported formats:
  • Letter-sized paper for bench use
  • Pocket-sized paper for clinicians who are married to paper
  • HTML for both public and internal formats
  • Palm PDA format for that user base
  • Palm or PocketPC format using the Castle development environment
We helped them migrate off of paper, although I keep a pocket manual on my desk to remind me of why computers are a good idea for this kind of thing.