
Tuesday, July 21, 2015

System Interfaces Are Not Kitchen Renovations

Recently I had to write my part of an RFP. The topic was a system-to-system interface between two health care information systems. I went through the usual stages:
  1. Nail down my assumptions and their requirements
  2. Come up with a design to meet those requirements
  3. Translate the design into an implementation plan
  4. Put the implementation plan into a spreadsheet
  5. Make my best guess as to level of effort
  6. Have our project manager review my egotistical/optimistic assumptions
  7. Plug the estimated numbers into the spreadsheet
  8. Shrug and use the resulting dates and cost estimates
The result was all too predictable: push-back from the customer about the time and cost. In our amicable back-and-forth, which seemed to be driven on her side by a blind directive to challenge all prices of all kinds, I had an epiphany: software development in general, and interfacing in particular, is not a kitchen renovation, so why do customers act as if it were?

I have been on both sides of kitchen renovation and there are some similarities:
  • the customer is always impatient
  • the cost is hard to contain
  • accurately imagining the outcome of decisions is an uncommon skill
But there are some crucial differences:
  • the concept of kitchen is well-known and well-understood by most people
  • the elements of a kitchen are similarly familiar: sinks, cabinets, etc.
  • examples of kitchens one likes can be found
  • in general the main user is also the main contact with the contractor
Why do I get huffy when people tell me I am padding my estimates? Because writing interfaces between complex systems is like having the sand shift beneath your feet. Sure, it is just another HL7 interface between an industry-standard source system and our intermediary system, which in turn has to export its data to a completely different industry-standard target system.

Thus we are linking industry-standard system A (ISSA) to industry-standard system B (ISSB): a piece of cake! Except....
  • ISSA has over 1,500 configurable parameters (literally).
  • ISSA was deployed almost five years ago and no one from that team is around.
  • ISSA's configuration was in flux those first few years.
These factors complicate my job because the source HL7 does not exactly match the spec. Further complications arise from the fact that the users have developed some local idioms, so the data elements are not being used in exactly the standard way.

On the target side, ISSB is still being configured, so I am trying to hit a moving target. Which of the local idioms serve a higher purpose (so they will stay) and which of them were to compensate for issues with ISSA? No one knows. What new idioms will evolve to compensate for issues with ISSB? No one can even guess.

So this is like remodelling a kitchen if the counters used to be suspended from the ceiling but now might be cantilevered from the walls and the water might be replaced by steam.

How long will it take to keep rewriting the interface so that the new system's evolving configuration and the customer's evolving needs are all met? I don't know; I wish I did. In the meantime, my good faith best guess will have to do.

Tuesday, July 14, 2015

Lab Automation vs IT Centralization

Over the past decade I have witnessed two trends in clinical lab computing which I think are two sides of the same coin:
  • Lab process automation is declining
  • IT is centralizing, becoming more consolidated and less Lab-specific

By "Lab process automation" I mean identifying repetitive tasks performed by humans and transferring those tasks to a computer or computers.

By centralized, I mean that the IT which serves the lab is now generally the same IT which serves all other parts of the organization.

I can see the appeal, especially to bean counters, of centralization: more control by execs, economy of scale, etc. But most of the IT groups I encounter are really all about office automation:
  • email
  • shared files
  • shared printers
  • remote access
These are all great if you are running a typical office, which is how everything seems to look from most C-suites.

Alas, the clinical lab is far closer in basic structure to a manufacturing plant than to a law office. Typical corporate IT is not good at process automation:
  • receiving orders
  • receiving specimens (raw material)
  • matching up specimens and orders
  • doing the assays (processing raw material)
  • serving up results (delivering finished product)
At the bench, email support and file-sharing are not very helpful; powerful and speedy instrument interfaces, audit support and throughput analysis are all much more helpful.

But centralized IT is not only oriented away from this kind of business process automation, they are punished for it: why are you spending time and money on lab-specific projects? If you get email to work on your boss's boss's boss's iPhone, you are a hero. If you figure out how to alert bench techs to specimens which need a smear, you are spending too much time in the lab.

Worse, as corporate IT completes its transition into being mostly about vendor management, the idea of doing what the vendors cannot or will not do--plugging the gaps between off-the-shelf products, gaps which cause so much of the needless drudgery in lab work--becomes first unthinkable and then impossible.

Farewell, lab process automation: perhaps vendors will some day decide that interoperability is a goal and then you will live again. But I am not betting on it.

Thursday, January 8, 2015

Too Many Options = Project Problems

I am finishing up my first instrument interface in quite a while. It did not go as smoothly as it should have and the culprit was complexity, aided by the distraction of options.

Instrument Interfacing Then and Now
In days of yore, automated analyzers could be coaxed to spew ASTM messages out of a simple serial port; you could capture that ASTM, do what little processing was needed, and then send the data to the LIS, the lab's computer system.

Then came network-capable instruments and HL7, which was an improvement in many ways but much more complicated. (More complexity: sometimes there was ASTM over the network and, rarely, HL7 over serial.)

The LIMS concept took off: a kind of middleman between the instruments and the LIS. The lines became blurry: you could verify on the instrument, on the LIMS or in the LIS. Orders came from the HIS via either the LIS or the LIMS.

With each wave of change, I shrugged and did whatever was needed. Sometimes it was easier; sometimes it was harder. Often things improved for users. There was certainly more to debug when things went wrong.


New Instrument Interface
In theory, after a few years of progress, this instrument interface should have been easier than previous efforts, because automated analyzers are so much smarter than they used to be. Better yet, I controlled the LIS end in this case because I wrote it and I maintain it.

And it was easier, each of the three times I wrote it. Overall, though, the interfacing was not really any easier than in days gone by.

Why did we have to write the interface three times?
Version 1: Real-time Instrument to LIS
We went through the pain of network configuration (not least punching a hole in at least one firewall) to connect the instrument to the LIS. I wrote the simple HL7 message processor to load the data as it came off the instrument. The HL7 was a little odd, but that was not a big deal. Hurray! Mission accomplished.
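To give a flavor of what "simple HL7 message processor" means here, the sketch below splits an ORU result message into segments and fields and hands each OBX to a loader. The message text and the load_result stand-in are hypothetical illustrations, not the production code.

def parse_oru(message):
    """Split an ORU^R01 message into segments and pull out the OBX results."""
    results = []
    for segment in message.strip().split("\r"):        # HL7 segments end in a carriage return
        fields = segment.split("|")                     # default field separator
        if fields[0] == "OBX":
            results.append({
                "set_id": fields[1],
                "value_type": fields[2],
                "test": fields[3].split("^")[1],        # text portion of the coded test ID
                "value": fields[5],
            })
    return results

def load_result(result):
    """Stand-in for the real LIS insert."""
    print("loading", result["test"], "=", result["value"])

sample = ("MSH|^~\\&|INSTRUMENT|LAB|LIS|LAB|201507210800||ORU^R01|42|P|2.3\r"
          "OBX|1|NM|6690-2^WBC^LN||7.2|10*3/uL|4.0-11.0|N|||F\r")

for result in parse_oru(sample):
    load_result(result)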

Version 2: Real-time LIMS to LIS
Then the users decided that they wanted to use the LIMS that they already had, because order and data entry was better on the LIMS than on the instrument. So they connected their instrument to the LIMS and we connected the LIMS to our LIS and I rewrote the HL7 handling to accommodate the variant HL7 produced by the LIMS. Hurray! again.

(Then some data arrived in the instrument HL7 dialect. Whoops! The users were experimenting with not using the LIMS. We didn't mind, did we? Then more data arrived in the LIMS dialect: yes, they were going to use the LIMS. Almost certainly.)

Version 3: Batch LIMS to LIS
Whoops. The users decided that they wanted the data entry on the LIMS, but to do verification on our LIS, which implied to them that the results would be batched on the LIS. So I rewrote the network layer to cache the HL7 messages until the users felt that the results constituted a "batch." Hurray, except that now we are over budget and they have changed their deadline to allow the lab to change its procedures.
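The caching change was conceptually small. A rough sketch, with hypothetical names (the real version persists messages rather than holding them in memory):

class MessageCache:
    """Hold incoming HL7 results until someone declares them a batch."""
    def __init__(self):
        self.pending = []

    def receive(self, hl7_message):
        """Called by the network layer for each incoming result message."""
        self.pending.append(hl7_message)

    def release_batch(self):
        """Called when the users decide the accumulated results constitute a batch."""
        batch, self.pending = self.pending, []
        return batch

cache = MessageCache()
cache.receive("MSH|^~\\&|LIMS|LAB|LIS|LAB|201501081200||ORU^R01|1|P|2.3\r")
for message in cache.release_batch():
    print("posting to LIS:", message[:24])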




Conclusion
More options do not always make projects smoother or better--or even make consumers happy (see this Fast Company article for details).

As the Lab IT landscape becomes more cluttered, I find that projects are harder to define, longer to implement and more expensive to execute. I would like to say that the rise of the giant Commercial Off The Shelf (COTS) software is fixing this problem, but my experience is that COTS is making it worse.


So the next time your IT folks say "it isn't that simple," press them for details, but be prepared to believe them.

Monday, September 22, 2014

Advice For Pre-processing Lab Orders

Last weekend, in a supposedly social situation, I happened to run into a clinical pathologist and talk turned naturally to clinical lab informatics. (Yes, the rest of the table was pretty uninterested.)

This CP mentioned that his organization is in the early stages of pre-processing clinical lab orders. This is a great idea, but a depressingly infrequent one. It is a great idea because:

    There are some orders which cannot be done (e.g., incorrect collection) or will not be done (e.g., policy violation), and the sooner you tell the ordering clinician that, the sooner they can do whatever makes sense.

    The earlier you flag a doomed order, the cheaper and easier it is to handle it.  Why waste precious human attention in the lab if a rule set can do the job so much faster and just as effectively?

I have lots of free advice--worth every penny, as they say--to offer on the technical implementation of these kinds of schemes. However, we were not able to cover the implementation advice in a social setting, much to everyone else's undoubted relief. So instead I thought that I would pound out this blog post and send him a link.

What exactly are we talking about? Let us consider the usual life-cycle of the lab order:

(HIS=Hospital Information System; EMR=Electronic Medical Record; LIS=Laboratory Information System)

  1. HIS
    1. clinician logs into an HIS
    2. clinician identifies a patient
    3. clinician specifies one or more lab tests
    4. clinician submits the order to the LIS
    5. HIS generates a lab order
  2. Order Interface
    1. accepts order from HIS
    2. sends HIS order to the LIS
  3. LIS
    1. LIS receives the order
    2. specimen is collected
    3. lab receives the specimen
    4. specimen is prepped if need be
    5. assay is run
    6. result is verified & posts to the LIS
    7. result leaves the LIS, en route to HIS
  4. Result Interface
    1. accepts result from LIS
    2. sends result to the HIS
  5. HIS (full circle):
    1. clinician sees result, is enlightened
We are talking about adding a step to item 2 (Order interface) in between 2.1 and 2.2. Let us call this step "validate order". In practical terms we are talking about a relatively simple piece of software which applies rules to the HIS order to ensure that the order can be processed.

Conceptually, this software is *VERY* similar to the typical auto-verification module which applies rules to the raw lab result to ensure that the result can be reported.

When an order fails to pass the rules, it is cancelled and a comment is added: "Cancelled per lab policy: {explanation or link to explanation}".

Since a "hard stop" on clinician's practice of medicine makes everybody nervous, we usually start with the encoding lab policy in the rule set: if the lab is not going to the test anyway, there can be no harm in rejecting the order.

Since implementation is never flawless, we start out with "advisory mode": the module flags orders it _would_ cancel, so a lab tech can confirm that the logic is working. It is a good idea to have an advisory period for every new rule and to audit some percentage on an ongoing basis.
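To make the idea concrete, here is a minimal sketch of such a validator, including the advisory mode. The rule, the order fields, and the cancel handling are hypothetical illustrations, not any particular product's API.

def add_on_past_stability(order):
    """Example lab-policy rule: reject add-on orders on specimens past their stability window."""
    if order.get("is_add_on") and order.get("specimen_age_hours", 0) > 72:
        return "Cancelled per lab policy: add-on requested past specimen stability window"
    return None

RULES = [add_on_past_stability]

def validate_order(order, advisory=True):
    for rule in RULES:
        reason = rule(order)
        if reason:
            if advisory:
                print("WOULD CANCEL:", order["order_id"], "-", reason)   # a tech reviews this log
                return "pass"                                            # order still flows to the LIS
            return ("cancel", reason)                                    # hard stop, with explanation
    return "pass"

print(validate_order({"order_id": "A123", "is_add_on": True, "specimen_age_hours": 96}))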

So the enhanced model looks like this:

  1. HIS
    1. clinician logs into an HIS
    2. clinician identifies a patient
    3. clinician specifies one or more lab tests
    4. clinician submits the order to the LIS
    5. HIS generates a lab order
  2. Order Interface
    1. accepts order from HIS
    2. validate order
      1. order OK: sends HIS order to the LIS
      2. order failed: cancel with explanation
  3. LIS
    1. LIS receives the order
    2. specimen is collected
    3. lab receives the specimen
    4. specimen is prepped if need be
    5. assay is run
    6. result is verified & posts to the LIS
    7. result leaves the LIS, en route to HIS
  4. Result Interface
    1. accepts result from LIS
    2. sends result to the HIS
  5. HIS (full circle):
    1. clinician sees result, is enlightened
Done correctly, this technique saves the lab time, gives the clinician feedback that is both speedy and educational, and avoids unneeded collection events.

"Correctly" probably requires knowing the leading causes of cancellation and tracking causes of cancellation and confirming that as you add rules, you cut down on the manual cancellations.

There it is: how best to roll out this kind of thing, in my experience.

Wednesday, January 8, 2014

Why Can You / Can't You Use The Cloud?

FAQ Wednesday is here again. Today's question: what about the Cloud and clinical labs?

This question has two variants:

  1. You can't use the Cloud for health care data, can you--HIPAA, etc?
  2. Why can't you use the Cloud for my clinical lab interface?
Can You Use the Cloud?
The first question, which I take to mean, "is it within law and regulation to use the Cloud for PHI," is actually pretty easy to answer: yes. Does HIPAA restrict the options? Yes. Does HIPAA prohibit use of the Cloud? No.

We currently use Amazon Web Services as our Cloud vendor and they claim to be certified and everything. From http://aws.amazon.com/compliance/#case-studies:

HIPAA

AWS enables covered entities and their business associates subject to the U.S. Health Insurance Portability and Accountability Act (HIPAA) to leverage the secure AWS environment to process, maintain, and store protected health information and AWS will be signing business associate agreements with such customers. AWS also offers a HIPAA-focused whitepaper for customers interested in learning more about how they can leverage AWS for the processing and storage of health information. The Creating HIPAA-Compliant Medical Data Applications with AWS whitepaper outlines how companies can use AWS to process systems that facilitate HIPAA and HITECH compliance. For more information on the AWS HIPAA compliance program please contact AWS Sales and Business Development.

But I expect that Google will keep up and this reference implies that they are:

http://www.healthcareinfosecurity.com/google-amazon-adjust-to-hipaa-demands-a-6133

In fact, we are counting on growing acceptance of Cloud implementations in health care, which is why we are currently developing Direct Interfaces.

Why Can't You Use the Cloud?
This is a slightly different question, which I take to mean "in practical terms, what are the obstacles to Cloud-based interfacing?" The short answer is "the conservative nature of hospital and clinical lab IT culture." This is closely linked to why lab interfacing in general is so hard: our industry punishes mistakes and does not reward innovation. Too often, doing nothing is rewarded and fighting innovation tooth-and-nail is the norm.

(Since this is legal, low-overhead, and effective, we plan to step around the hospital and clinical lab IT organizations with our new Cloud-based lab connectivity venture, but that is another story.)

Wednesday, January 1, 2014

XML vs HL7

It is FAQ Wednesday, when I take a FAQ off the pile and address it.

Today's frequently asked question is "why do so many systems use HL7 instead of XML?"

This is a good question with many possible answers, but this is my executive summary: XML is easy for humans to read and HL7 is easy for computers to process.

Medical IT is often short on computing power and long on required functionality, so it is natural to avoid what is expensive to process and to embrace what is easy to process, even at the cost of human legibility. In my experience the people who wonder at the lack of XML are not career medical IT professionals.

XML is a markup language, a structured tagged text format which descends directly from SGML. It was intended as a platform-independent document storage format, but has become a kind of universal data exchange format.

HL7 is a line-oriented, record- and field-based text format which is reminiscent of the serial line-oriented message formats of yore, such as ASTM, which was already familiar to clinical lab people from instrument interfaces.

XML makes more-or-less self-documenting "trees" which can be displayed natively by most browsers, or "visualized" with a little JavaScript magic (http://www.w3schools.com/xml/xml_to_html.asp). There are lots of tools for working with XML and for storing it.

In theory, XML is fault-intolerant: XML processing is supposed to halt at the first error encountered. This is not very robust, but in theory there should be no errors, because you can write a formal document type definition (DTD) which allows people to make sure that they are sending and receiving data in exactly the way that you expect. If the XML document is made using a DTD and parsed with the same DTD, what could go wrong? And whoever created the data ran a validator, such as http://www.w3schools.com/xml/xml_validator.asp, on it before releasing it, right?

(In practice, I do not see very much strict adherence to document type definitions.)
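For those who do want strict checking, validating against a DTD is only a few lines. A minimal sketch using lxml, with hypothetical file names:

from lxml import etree

dtd = etree.DTD(open("lab-results.dtd", "rb"))      # hypothetical DTD file
doc = etree.parse("lab-results.xml")                # hypothetical document

if dtd.validate(doc):
    print("document conforms to the DTD")
else:
    for error in dtd.error_log:                     # every violation, not just the first
        print(error)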


HL7 makes nice, simple messages which can be easily processed by almost any programming language. I have written HL7 message processors in C, Perl, PHP, and BASIC.

So what do these formats look like side by side? Consider the following two samples:

HL7 Lab Result:
MSH|^~\&|LCS|LCA|LIS|TEST9999|199807311532||ORU^R01|3629|P|2.2
PID|2|2161348462|20809880170|1614614|20809880170^TESTPAT||19760924|M|||^^^^00000-0000|||||||86427531^^^03|SSN# HERE
ORC|NW|8642753100012^LIS|20809880170^LCS||||||19980727000000|||HAVILAND
OBR|1|8642753100012^LIS|20809880170^LCS|008342^UPPER RESPIRATORY CULTURE^L|||19980727175800||||||SS#634748641 CH14885 SRC:THROA SRC:PENI|19980727000000||||||20809880170||19980730041800||BN|F

OBX|1|ST|008342^UPPER RESPIRATORY CULTURE^L||FINALREPORT|||||N|F|||19980729160500|BN
ORC|NW|8642753100012^LIS|20809880170^LCS||||||19980727000000|||HAVILAND
OBR|2|8642753100012^LIS|20809880170^LCS|997602^.^L|||19980727175800||||G|||19980727000000||||||20809880170||19980730041800|||F|997602|||008342

OBX|2|CE|997231^RESULT 1^L||M415|||||N|F|||19980729160500|BN
NTE|1|L|MORAXELLA (BRANHAMELLA) CATARRHALIS
NTE|2|L| HEAVY GROWTH
NTE|3|L| BETA LACTAMASE POSITIVE
OBX|3|CE|997232^RESULT 2^L||MR105|||||N|F|||19980729160500|BN
NTE|1|L|ROUTINE RESPIRATORY FLORA


(from http://www.corepointhealth.com/resource-center/hl7-resources/hl7-oru-message)

XML Lab Result:
<element name="lab-test-results">
        <complexType>
            <annotation>
                <documentation>
                    <summary>
                        A series of lab test results.
                    </summary>
                </documentation>
            </annotation>
            <sequence>
                <element name="when" type="d:approx-date-time" minOccurs="0">
                    <annotation>
                        <documentation>
                            <summary>
                                The date and time of the results.
                            </summary>
                        </documentation>
                    </annotation>
                </element>
                <element name="lab-group" type="lab:lab-test-results-group-type" maxOccurs="unbounded">
                    <annotation>
                        <documentation>
                            <summary>
                                    A set of lab results.
                            </summary>
                        </documentation>
                    </annotation>
                </element>
                <element name="ordered-by" type="t:Organization" minOccurs="0">
                    <annotation>
                        <documentation>
                            <summary>
                                    The person or organization that ordered the lab tests.
                            </summary>
                        </documentation>
                    </annotation>
                </element>
            </sequence>
        </complexType>
    </element>

 (from http://social.msdn.microsoft.com/Forums/en-US/5003cf00-de7f-41ec-93a9-c04b14e41837/xml-schema-of-lab-test-results)

Wednesday, December 4, 2013

Why Can't Medical IT Systems Share Data Better?

It is FAQ Wednesday, when I try to get through the most plaintive of cries I encounter in the course of my workday.

Today's question is "why can't medical information systems share data better?"

This is a good question: why in this day and age of Webly interconnectivity are lab results and diagnostic images and calendar appointments and other data not easily accessible?

Specifically, let us consider why the prototypical Hospital Information System (HIS) cannot share data better (more effectively) with the prototypical Laboratory Information System (LIS).

There are two basic ways to share data between computer systems. I will call these two methods "linking" and "transferring." Let us call the system with the data the "server" and the system which wishes to display data from the server the "client."

Linking is pretty easy and pretty useful: The client opens a window, sends a query to the server and the server replies with the data.

Transferring is more involved: the client gets data from the server, parses that data, and loads it into the client's own database, where it can be found and used by the client's own software.

Linking is easier and real-time, but does not lend itself to a consistent look-and-feel. Transferring is harder and often done in batch mode, but it does lend itself to consistency, and the data is available "natively" on the client, i.e., in more ways.
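A toy sketch of the two approaches, with hypothetical endpoints and field names, just to make the distinction concrete:

import json
import urllib.request

SERVER = "https://lis.example.org"   # the system that has the data (hypothetical)

def link_display(patient_id):
    """Linking: query the server on demand and display whatever comes back."""
    with urllib.request.urlopen(f"{SERVER}/results?patient={patient_id}") as reply:
        for result in json.load(reply):
            print(result["test"], result["value"])   # shown in the server's terms

LOCAL_DB = {}                                        # stand-in for the client's own database

def transfer_batch():
    """Transferring: pull a batch, parse it, and load it into the client's own store."""
    with urllib.request.urlopen(f"{SERVER}/results/batch") as reply:
        for result in json.load(reply):
            LOCAL_DB[(result["patient"], result["test"])] = result["value"]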

Since this is healthcare data, we have issues of privacy, authentication, display rules and all the requirements of HIPAA. This makes using Web technology, which is inherently insecure and open, a bit tricky. It also makes the transfer option more attractive: authentication across information systems is a pain and access logging across information systems, especially ones from different vendors, downright difficult.

However, transferring is more overhead to implement, more overhead to maintain and requires actually settling questions of data model mismatch.

We try to support linking when we can, but we often end up having to support someone else's authentication model and someone else's data model and someone else's design aesthetic. That's rather a lot of flexibility, which is why most bigger vendors won't go that route.

So here we are: most of the time the only data shared between systems is data everyone agrees must be shared. Data sharing tends to lag behind other developments and other requirements. It isn't better because it isn't easy to do.

Wednesday, November 27, 2013

Why Are Lab Interfaces So Hard?

Taking a frequently asked question off the pile for today's post: "why are Lab Interfaces so hard?" Variants include "why do they take so long to get up and running?" and "why are they so expensive?" and "why don't they work better?"

Here is the most common form of clinical lab interface I encounter:

  • Secure network connection: TCP/IP over a Cisco VPN
  • Bi-directional: orders from client to lab, results from lab to client
    • client has a TCP/IP client for sending orders to the lab
    • lab has a TCP/IP server for receiving orders from the client
    • client has a TCP/IP server for receiving results
    • lab has a TCP/IP client for sending results
  • HL7: perfect it is not, but it is common and well-understood
  • Assay ID translation: someone has to map the client's test codes to the lab's, or vice-versa
None of this is difficult, technology-wise. I have written such interfaces from scratch in about 8 working hours. Provided that the specs are good, the setup and programming are more tedious than challenging.
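For a sense of how small the technology piece is, here is a minimal sketch of the lab-side TCP/IP server that receives orders, assuming the common MLLP framing for HL7 over TCP (0x0B to open a message, 0x1C 0x0D to close it); the port and the handler are hypothetical, and error handling is omitted.

import socket

START, END = b"\x0b", b"\x1c\x0d"     # MLLP start/end framing bytes

def handle_order(hl7_message):
    """Stand-in for the real order processing."""
    print("received order:", hl7_message[:40], "...")

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 6661))        # hypothetical port
server.listen(1)

conn, _ = server.accept()
buffer = b""
while True:
    chunk = conn.recv(4096)
    if not chunk:
        break
    buffer += chunk
    while START in buffer and END in buffer:
        frame, buffer = buffer.split(END, 1)
        handle_order(frame.split(START, 1)[1].decode("ascii", "replace"))
        # a real implementation replies here with an HL7 ACK message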

(The specs include the mapping of test codes across menus, by the way. Why that mapping often gets left as an exercise to the programmer I will never understand.)

It is becoming increasingly common to set a target of 24 hours for bringing up such an interface: if you have a working interface framework and a reasonably seasoned team, this should be quite doable.

So-called Direct Interfaces are being talked about more and more. I have yet to see one deployed in the real world in my consulting practice, but I like the idea. In theory this would cut down the network configuration time, but I suspect that the added x509 certificate wrangling will often more than make up for it.

So why does it take days or weeks or even months to get an interface up and running? In my experience, the root causes are two: bureaucracy and misaligned agendas.

The bureaucracy means that I have spent the majority of my time on projects like this walking from cube to cube, paperwork in hand, looking for permission to connect to another organization. Or hours on the phone, chasing up-to-date, accurate specs and precise explanations of terms like "alternate patient ID".

The misaligned agendas make pulling together in the same direction difficult:

  1. The client's lab wants that labor-saving, time-saving, money-saving interface.
  2. The client's IT department does not want to expose a chink in their armour, a chink which will likely never get closed and which will be used by an interface that must run 24/7 and always be up.
So what IT hears is "buy a Cisco VPN license; pray that our Cisco VPN appliance has a spare connection; punch a hole in the firewall, but don't botch it leaving us open to hackers; assume responsibility for monitoring the health of that VPN tunnel."

To many IT ears, this sounds like a lot of work, some real money and an on-going obligation with little upside and huge potential downsides. So the network folks become what my firm calls "stoppable"--any hitch in the plan brings them to a halt. Who knows? If they delay long enough, maybe the project will go away....

So why is it so hard, take so long, often have never-resolved issues? Because navigating bureaucracy is exhausting and overcoming organizational inertia takes a long time.

Thursday, November 21, 2013

Direct Interfacing?

This looks like an interesting idea for our lab connectivity start up:

http://directproject.org/faq.php?key=faq

Random connections over the Internet, secured by x509 certificates, with the payload format unspecified. Assuming we control the certificates, we can be confident that our connections are secure from others and are coming from the right computers.

We would use HL7 to encode the payload, of course. But this gives us a validated and accepted model for transactions in a cloud-based environment. Interesting. I feel a proof-of-concept project coming, probably built on Amazon Web Services.
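For the proof of concept, the trust model boils down to mutual authentication with certificates we control. The sketch below illustrates that idea with plain TLS sockets; it is not the Direct specification itself, and the host, port, and file names are hypothetical.

import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("our-trust-anchor.pem")                 # we choose which CAs to trust
context.load_cert_chain("our-client-cert.pem", "our-client-key.pem")  # and we present our own identity

with socket.create_connection(("partner.example.org", 8443)) as sock:
    with context.wrap_socket(sock, server_hostname="partner.example.org") as tls:
        tls.sendall(b"MSH|^~\\&|OURAPP|OURLAB|THEIRAPP|THEIRLAB|20140101||ORU^R01|1|P|2.3\r")
        print(tls.recv(4096))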

This architecture also lets us circle back to this at some point down the road:
http://www.mehi.masstech.org/health-it/health-it-learning-center

Wednesday, July 16, 2008

Phlebotomy Support

The client is a large hospital lab, recently given responsibility for Phlebotomy. Their problem is that not only are the draw stations not on the LIS, they are not computerized at all.

Their problem is compounded by the fact that their LIS is not off-the-shelf but is rather long in the tooth.

Worse, the lab is entirely focused on inpatient specimens: their processes and procedures all assume an electronic order in the LIS followed by a tube labelled with an LIS collection label.

All we can count on at the draw stations is a secure network connection.

We bridge that gap in under two months:
  • custom Linux-based thin clients to provide
    • dependable known web browser
    • barcode label printing and report printing
  • a web app to
    • greet the patient, establishing a start for wait time
    • identify the patient using an up-to-date patient index
    • support finding and using existing electronic orders (clinics)
    • support turning a paper order into an electronic order
      • UI of assays
      • interface to the LIS to place the order as if it came from the HIS (see the sketch after this list)
    • print a collection label compatible with the legacy LIS
    • provide history of activity by draw station or across draw stations
    • support drop-offs
    • support clinical trials
    • support ordering synonyms to match community ordering habits
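Roughly how the "as if from the HIS" trick works, assuming the HIS feed is HL7 ORM messages (the common case); every value below is a hypothetical placeholder:

from datetime import datetime

def build_orm(patient_id, patient_name, order_id, test_code, test_name):
    """Build an order message shaped like the ones the LIS already accepts from the HIS."""
    now = datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        f"MSH|^~\\&|DRAWSTATION|PHLEB|LIS|LAB|{now}||ORM^O01|{order_id}|P|2.3",
        f"PID|1||{patient_id}||{patient_name}",
        f"ORC|NW|{order_id}",
        f"OBR|1|{order_id}||{test_code}^{test_name}",
    ]
    return "\r".join(segments) + "\r"

print(build_orm("123456", "DOE^JANE", "A0001", "CBC", "COMPLETE BLOOD COUNT"))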
We roll out our app to their completely computer-illiterate user base, many of whom have never used a mouse.

We provide management metrics for wait times and patient visits and a link to the Lab Man database so that the phlebotomists have up-to-date collection instructions. (More on that original Lab Man here.)

Mon Dec 1 13:35:10 EST 2008
Major release:
  • support EKG scheduling
  • support Accession Issues to link front bench and draw stations
  • support for Cc physicians
  • support ordering web page matching custom requisitions
Sat Jun 27 06:26:44 EDT 2009
Major release:
  • scanned req support: barcode and tracking and viewing
  • tube tracking
  • support cancelled code substitution
  • support lab protocol order changing automatically
Sat Dec 12 06:33:47 EST 2009
Major release:
  • support the new incoming LIS, SoftLab


Wed Feb 23 10:11:14 EST 2011
Major release:
  • support ABNs: UI to accept the data, process to generate the forms

Friday, July 20, 2007

Homegrown LIS -> Ref Lab

Client has a homegrown LIS, but wants to have a bi-directional interface with major ref labs: ARUP and Mayo to start.

The homegrown LIS cannot be extended, so I created a piece of Middleware (MW) to bridge the gap.

On the LIS side, the MW appears to be an automated analyzer, to which orders flow and from which results come.

On the Ref Lab side, the MW appears to be an industry-standard, up-to-date LIS, speaking HL7 over TCP/IP.

It all works like a charm:
  1. A user of the homegrown LIS places an order for a send-out
  2. The MW detects the order as if it were an instrument
  3. The MW stores the order in its database
  4. The MW creates an HL7 order for the appropriate ref lab
  5. The MW sends the HL7 order on its way
  6. The Ref Lab interface receives the order
  7. The Ref Lab interface sends a result
  8. The MW receives the result and updates its database
  9. The MW creates a message to tell the homegrown LIS the result
  10. Any user of the homegrown LIS can see the result


The MW has three components: a database, a TCP/IP client to send orders, and a TCP/IP server to receive results.

The database allows for various automatically generated management reports and tracking of activity and back up of the received results.
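A thumbnail sketch of that shape: a small database flanked by an order sender and a result receiver. The table layout, codes, and messages are hypothetical stand-ins, not the actual implementation.

import sqlite3

db = sqlite3.connect(":memory:")    # the real middleware uses a persistent database
db.execute("CREATE TABLE sendouts (order_id TEXT PRIMARY KEY, test TEXT, status TEXT, result TEXT)")

def on_lis_order(order_id, test_code):
    """The LIS thinks it is talking to an instrument; record the order and forward it as HL7."""
    db.execute("INSERT INTO sendouts VALUES (?, ?, 'SENT', NULL)", (order_id, test_code))
    db.commit()
    return (f"MSH|^~\\&|MW|LAB|REFLAB|REF|20070720||ORM^O01|{order_id}|P|2.3\r"
            f"OBR|1|{order_id}||{test_code}\r")          # goes out over the TCP/IP client

def on_ref_lab_result(order_id, value):
    """A ref lab result arrives; store it and report it to the LIS, instrument-style."""
    db.execute("UPDATE sendouts SET status = 'RESULTED', result = ? WHERE order_id = ?", (value, order_id))
    db.commit()
    return f"RESULT|{order_id}|{value}"                  # stand-in for the instrument-dialect message

print(on_lis_order("SO123", "FERRITIN"))
print(on_ref_lab_result("SO123", "85 ng/mL"))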

Friday, August 22, 2003

Cell-Dyn 3200 Automated Analyzer Interface

Today I started work on an instrument interface for Abbott's Cell-Dyn 3200 automated analyzer. Most of the work was done for me by the client, whose staff have developed a very solid framework in which to build instrument interfaces.

Automated analyzers are weird: they are sophisticated in every way but the way in which they handle I/O. This has been an introduction to some concepts for me:

  • ASTM E1381/95, a rather outdated and clearly serial protocol, reminiscent of MS-DOS block-based communication protocols of yesteryear (see the checksum sketch after this list)
  • Worklists for analyzers which are derived from LIS orders
  • Analyzer flags and how to present them
  • Verification UIs
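For flavor, here is a minimal sketch of the low-level framing such links use, assuming the usual layout of <STX> frame-number text <CR> <ETX> checksum <CR><LF> with a modulo-256 checksum written as two hex characters; it is an illustration of the concept, not the client's framework code.

STX, ETX, CR, LF = "\x02", "\x03", "\x0d", "\x0a"

def astm_frame(frame_number, text):
    """Wrap one record in an ASTM-style frame with its modulo-256 checksum."""
    body = f"{frame_number}{text}{CR}{ETX}"
    checksum = sum(ord(c) for c in body) % 256          # sum of everything between STX and the checksum
    return f"{STX}{body}{checksum:02X}{CR}{LF}"

# e.g. a header record might be framed like this:
print(repr(astm_frame(1, "H|\\^&|||CELL-DYN|||||||P|1")))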
It has also been a pleasure to learn about automated analyzer interfaces from a master,  Steve Wardlaw, MD.