Big data is a hot topic right now--insert rant about new names for old ideas here--and that wave is finally breaking on the clinical lab shores. So the time is right for my "Big Data and Lab" manifesto. (Or at least an observation born of decades of experience.)
Big data has two jobs: archiving and analyzing. Both involve data on computers. There, I claim, the similarity ends. While it is tempting to try to kill both of these birds with a single stone, I assert that this is a terrible idea.
Specifically, I find that in order to archive data effectively, I need a free-form, attribute-rich environment which accommodates evolving lab practice without losing the old or failing to capture the new. But in order to analyze effectively, I need a rigid, highly optimized and targeted environment, preferably with only the attributes I am analyzing and ideally with those attributes expressed in an easily selected and compared way.
In other words, I find that any environment rich enough to be a long-term archive is unwieldy for analysis and any environment optimized for analysis is very bad at holding all possible attributes.
Specifically, I have seen what happens to inflexible environments when a new LIS comes in, or a second new LIS, and the programmers are struggling to fit new data into an old data model which was itself a reworking of an older data model. It ain't pretty--or easy or fast to process. I have also seen what happens when people, especially people without vast institutional knowledge, try to report on a flexible format with three different kinds of data in it. They get answers, but those answers often omit entire classes of data. "They used to code that HOW?" is a conversation I have had far too many times.
Yes, I am aware of Mongo & co, of the rise (and stalling) of XML databases and of the many environments which claim to be able to do both. I have seen them all, I have tried them all and I have not changed my views.
So I use a two-pronged approach: acquire and store the data in as free-form a manner as possible--structured text such as raw HL7 or XML is great at this--and then extract what I need for any given analysis into a traditional (usually relational) database on which standard reporting tools work well and work quickly.
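The two-pronged pipeline fits in a few lines: archive the raw message untouched, extract only the fields a given report needs into a rigid table. A toy sketch, not a real HL7 parser--the message and field positions here are made up for illustration:

```python
import sqlite3

# Archive side: keep the raw message as-is (free-form, nothing lost).
# Analysis side: pull just the fields this report needs into a rigid table.

def extract_result(raw_hl7):
    """Pull patient ID, test code and value out of a raw HL7 v2 message.
    Field positions are illustrative only."""
    fields = {}
    for segment in raw_hl7.strip().split("\r"):
        parts = segment.split("|")
        if parts[0] == "PID":
            fields["patient_id"] = parts[3]
        elif parts[0] == "OBX":
            fields["test_code"] = parts[3]
            fields["value"] = parts[5]
    return fields

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE archive (raw TEXT)")  # free-form store
conn.execute("CREATE TABLE results (patient_id TEXT, test_code TEXT, value TEXT)")

msg = "MSH|^~\\&|LIS|LAB\rPID|1||12345\rOBX|1|NM|GLU^Glucose||105"
conn.execute("INSERT INTO archive VALUES (?)", (msg,))  # archive untouched
r = extract_result(msg)
conn.execute("INSERT INTO results VALUES (?,?,?)",
             (r["patient_id"], r["test_code"], r["value"]))
```

When the lab's practice evolves, the archive keeps everything; only the extraction step needs rework for the next analysis.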
The biggest clinical lab-specific issue I find is the rise of complex results, results which are actually chunks of text with a clinical impression in them. For such results, we ended up cheating: tagging the results with keywords for later reporting and asking the clinicians to create simple codes to summarize the text.
I am eager to be proved wrong, because the two-pronged approach is kind of a pain in the neck. But so far, I have seen no magic bullet that stands the test of time: either the reportable format is too rigid to hold evolving lab data or the flexible format is too slow to actually handle years and years of data.
If you feel that you have a better idea, I would love to hear it, so leave a comment.
A blog about real-world solutions to common clinical lab IT issues by Brendan Hemingway.
Saturday, June 6, 2015
Wednesday, November 27, 2013
Why Are Lab Interfaces So Hard?
Taking a frequently asked question off the pile for today's post: "why are Lab Interfaces so hard?" Variants include "why do they take so long to get up and running?" and "why are they so expensive?" and "why don't they work better?"
Here is the most common form of clinical lab interface I encounter:
- Secure network connection: TCP/IP over a Cisco VPN
- Bi-directional: orders from client to lab, results from lab to client
  - client has a TCP/IP client for sending orders to the lab
  - lab has a TCP/IP server for receiving orders from the client
  - client has a TCP/IP server for receiving results
  - lab has a TCP/IP client for sending results
- HL7: perfect it is not, but it is common and well-understood
- Assay ID translation: someone has to map the client's test codes to the lab's, or vice-versa
(The specs include the mapping of test codes across menus, by the way. Why that mapping often gets left as an exercise to the programmer I will never understand.)
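For the curious, the wire level of that shape is small: HL7 v2 over TCP/IP is almost always framed with MLLP (Minimal Lower Layer Protocol). A minimal sketch--host, port and message contents are placeholders, not any particular lab's spec:

```python
import socket

# MLLP framing for HL7 v2 over TCP/IP: <VT> + message + <FS> + <CR>
VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"

def mllp_frame(message: bytes) -> bytes:
    """Wrap one HL7 message in MLLP framing."""
    return VT + message + FS + CR

def send_hl7(host, port, message: bytes) -> bytes:
    """Send one framed message and return the raw ACK bytes.
    Host and port are whatever the VPN and the specs say they are."""
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(mllp_frame(message))
        return s.recv(65536)  # the ACK comes back in the same framing
```

The framing is the easy part; as the rest of this post argues, the hard part is getting permission to open the socket at all.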
It is becoming increasingly common to set a target of 24 hours for bringing up such an interface: if you have a working interface framework and a reasonably seasoned team, this should be quite doable.
So-called Direct Interfaces are becoming more talked about. I have yet to see one deployed in the real world in my consulting practice, but I like the idea. In theory this would cut down the network configuration time, but I suspect that the added x509 certificate wrangling will often more than make up for it.
So why does it take days or weeks or even months to get an interface up and running? In my experience, the root causes are two: bureaucracy and misaligned agendas.
The bureaucracy means that I have spent the majority of my time on projects like this walking from cube to cube, paperwork in hand, looking for permission to connect to another organization. Or hours on the phone, chasing up-to-date, accurate specs and precise explanations of terms like "alternate patient ID".
The misaligned agendas make pulling together in the same direction difficult:
- The client's lab wants that labor-saving, time-saving, money-saving interface.
- The client IT department does not want to expose a chink in their armour: a chink which will likely never get closed and which will be used by an interface that must run 24/7 and always be up.
To many IT ears, this sounds like a lot of work, some real money and an on-going obligation with little upside and huge potential downsides. So the network folks become what my firm calls "stoppable"--any hitch in the plan brings them to a halt. Who knows? If they delay long enough, maybe the project will go away....
So why is it so hard, take so long, often have never-resolved issues? Because navigating bureaucracy is exhausting and overcoming organizational inertia takes a long time.
Tuesday, November 26, 2013
NPI: No Office = Bad For Lab
Yesterday a sadly familiar issue landed on my desk: how to make NPIs work for lab results.
NPIs were brought to you by CMS, so they have to be good.
To quote from The National Provider Identifier (NPI): What You Need to Know:

How Many NPIs Can a Sole Proprietor Have?

A sole proprietor is eligible for only one NPI, just like any other individual. For example, if a physician is a sole proprietor, the physician is eligible for only one NPI (the individual’s NPI), regardless of the number of different office locations the physician may have, whether the sole proprietorship has employees, and whether the Internal Revenue Service (IRS) has issued an EIN to the sole proprietorship so the employees’ W-2 forms can reflect the EIN instead of the sole proprietorship’s Taxpayer Identification Number (which is the sole proprietor’s SSN).

This is logical, but it is very bad for clinical laboratories. Physicians practicing medicine do not fit the model of citizens paying taxes: if I have seven offices, which I visit in rotation, I want the labs for patients who live in Shoretown to go to my office in Shoretown: sending them to any of my other offices creates work and runs the risk of me not having them where and when I need them. But sending them to all my offices creates a blizzard of faxes and a mountain of filing.
Unless, of course, my practice has invested in and correctly configured a centralized, distributed Electronic Medical Record with a real-time, bidirectional Lab interface. Which, very likely, is not the case. At least not from what I see in the field. At least not yet.
So here I am, stuck in the real world, creating yet another mapping from internal LIS ordering ID (which is office-specific) to NPI (which is not office-specific but which is required by many legal agreements).
The good news is that NPI excels at figuring out which providers merit a fruit basket for all their business: I can analyze the ordering patterns by practice pretty easily by associating NPIs together into a practice. I just can't help you with where to send that fruit basket. Or those lab results.
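That fruit-basket analysis is just two mappings chained together; a minimal sketch, with every ID and name invented for illustration:

```python
from collections import Counter

# Two mappings: office-specific LIS ordering ID -> provider NPI,
# then NPI -> practice. All identifiers below are made up.
LIS_ID_TO_NPI = {
    "SMITH-SHORETOWN": "1234567890",
    "SMITH-HILLVIEW":  "1234567890",  # same doctor, different office
}
NPI_TO_PRACTICE = {"1234567890": "Smith Family Medicine"}

def orders_by_practice(order_log):
    """Roll office-level orders up to the practice, which NPI makes easy."""
    counts = Counter()
    for lis_id in order_log:
        counts[NPI_TO_PRACTICE[LIS_ID_TO_NPI[lis_id]]] += 1
    return counts
```

Note what is missing: nothing in either table says which office the results should go to. That is the gap the post is about.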
Sunday, November 23, 2008
Cytochemistry Clinapp
The bone marrow clinapp went so well that I have whipped off another one, this one for cytochemistry interpretation. We are almost out of opportunities for this kind of automation: soon the only interpretive reports without an app will be the very low volume reports.
Wednesday, November 19, 2008
Bone Marrow Clinapp
Just completed another clinapp, this one for Bone Marrow interpretation.
It is gratifying how the basic framework continues to provide such good service, seven years after the first app was done.
Like its siblings, this app provides:
- a work list of tech to-do (orders awaiting results)
- a work list of resident to-do (results awaiting preliminary interpretation)
- a work list of attending to-do (prelim interps awaiting review)
- same-screen access to relevant current lab results
- links to related clinapps reports for this patient
- context-sensitive access to the historical repository of lab results
- printable report PDF creation
- faxing support
- LIS interface
Friday, July 20, 2007
Homegrown LIS -> Ref Lab
Client has a homegrown LIS, but wants to have a bi-directional interface with major ref labs: ARUP and Mayo to start.
The homegrown LIS cannot be extended, so I created a piece of Middleware (MW) to bridge the gap.
On the LIS side, the MW appears to be an automated analyzer, to which orders flow and from which results come.
On the Ref Lab side, the MW appears to be an industry-standard, up-to-date LIS, speaking HL7 over TCP/IP.
It all works like a charm:
- A user of the homegrown LIS places an order for a send out
- The MW detects the order as if it were an instrument
- The MW stores the order in its database
- The MW creates an HL7 order for the appropriate ref lab
- The MW sends the HL7 order on its way
- The Ref Lab interface receives the order
- The Ref Lab interface sends a result
- The MW receives the result and updates its database
- The MW creates a message to tell the homegrown LIS the result
- Any user of the homegrown LIS can see the result
The MW has three components: a database, a TCP/IP client to send orders and a TCP/IP server to receive results.
The database supports various automatically generated management reports, activity tracking and backup of the received results.
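The order and result paths in the steps above can be sketched roughly as follows. Everything specific here--the code map, the ref-lab code, the message formats--is invented for illustration, not ARUP's or Mayo's actual specs:

```python
import sqlite3

# The MW's database plus its two message paths (order out, result back).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (specimen_id TEXT, lis_code TEXT,"
           " ref_lab TEXT, status TEXT)")

# LIS test code -> (ref lab, their code); entirely made-up values
CODE_MAP = {"HBA1C": ("ARUP", "REF-001")}

def handle_order(specimen_id, lis_code):
    """Steps 2-5: store the detected order and build the outbound HL7 order."""
    ref_lab, ref_code = CODE_MAP[lis_code]
    db.execute("INSERT INTO orders VALUES (?,?,?,?)",
               (specimen_id, lis_code, ref_lab, "sent"))
    return (f"MSH|^~\\&|MW|LAB|{ref_lab}\r"
            f"ORC|NW|{specimen_id}\r"
            f"OBR|1|{specimen_id}||{ref_code}")

def handle_result(specimen_id, value):
    """Steps 9-10: record the received result and craft the LIS-bound message."""
    db.execute("UPDATE orders SET status = 'resulted' WHERE specimen_id = ?",
               (specimen_id,))
    return f"RESULT|{specimen_id}|{value}"  # stand-in for the homegrown format
```

The same table that routes the traffic is what makes the management reports essentially free.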
Saturday, December 16, 2006
Hemoglobin Analysis Report (HGB Clinapp)
Banged out another clinapp today, this one for Hemoglobin analysis. The framework is about 5 years old, but still performing like a champ.
The users also love the very high degree of consistency between the apps, which makes moving between interpretive reporting duties much easier for them.
Tuesday, April 4, 2006
SPEP & UPEP Clinapp
Sigh. Due to popular demand, another clinapp, hurray. This one's title is the longest yet: "immunofixation electrophoresis report".
The big wrinkle here is including the image of the gel produced by the automated analyzer: exciting for the users, but not that exciting for me since this is not very challenging in the groff context:
- convert the instrument output to encapsulated PostScript
- include the EPS in the report
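For anyone repeating this trick, the groff side is small enough to generate from a script. A sketch, with the file name, title and width invented; it assumes the instrument output has already been converted to EPS by an external tool, and uses `.PSPIC`, groff's standard macro for including encapsulated PostScript:

```python
# Emit groff source for a report page that embeds the converted gel image.
# eps_path, title and width are placeholders for this sketch.
def gel_report(title: str, eps_path: str, width: str = "4i") -> str:
    lines = [
        ".ce 1",                       # center the title line
        title,
        ".sp",                         # a little vertical space
        f".PSPIC {eps_path} {width}",  # include the encapsulated PostScript
    ]
    return "\n".join(lines) + "\n"

src = gel_report("Immunofixation Electrophoresis", "gel.eps")
```

Pipe the output through groff with PostScript output and the image lands in the report, which is why this wrinkle was not very challenging.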
Wednesday, November 16, 2005
Smear Clinapp
Another day, another clinapp, or so it seems some days.
Today's interpretive report to be supported with an app is the Smear section, which produces a large number of reports all of which start with a blood slide and end with a clinical comment.
Man, I had no idea when I started this code base that it would turn into a mini-career.
Tuesday, March 29, 2005
Science Project To Clinical Resource (ORU repository)
Today I turned an interesting science project into a treasured clinical resource, and got to take lots of credit in the process. Not a bad day!
A member of the client's computer staff had put together a fascinating science project: a collection of ORUs (HL7 result messages) in a Reiser file system on a standalone Linux box, accessible through a simple TCP/IP protocol.
The science project was fascinating:
- collecting the ORUs as an archival format, because HL7 is flexible and keeps evolving
- storing the ORUs in a small text file named with the specimen ID
- adding a simple per-patient text file index to support reporting by patient
- using the Reiser file system to hold many (20 million+) small files efficiently
- using the TCP/IP protocol to make access from a web page easy
- using the interface, all historical results were output from the LIS and loaded into this repository
- tapping into the HIS interface, this repository is kept as current as the HIS
My goal was to create a helper app for my clinapp environment, providing supporting lab results far beyond the current set. Since my environment was created to support clinical impressions of conditions which are often observed over years, the users were thrilled with the ability to scroll back in time from current to mid-1974.
Even in its first day, the helper app has become wildly popular and has sparked interest in adding links to the repository from other web applications. Hurray for the original author, on whose shoulders I am happy to stand.
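The storage scheme in that list is simple enough to sketch. The directory layout and IDs below are my own invention for illustration, not the original author's exact code:

```python
import pathlib
import tempfile

# One small text file per specimen, plus a per-patient index file that
# lists that patient's specimen IDs -- the layout described above.
root = pathlib.Path(tempfile.mkdtemp())  # stand-in for the Reiser volume
(root / "specimens").mkdir()
(root / "patients").mkdir()

def store_oru(raw_oru: str, specimen_id: str, patient_id: str):
    """File the raw ORU by specimen ID and note it in the patient's index."""
    (root / "specimens" / specimen_id).write_text(raw_oru)
    with open(root / "patients" / patient_id, "a") as idx:
        idx.write(specimen_id + "\n")  # append to the per-patient index

def results_for_patient(patient_id: str):
    """Walk the patient's index and return every stored raw ORU."""
    idx = root / "patients" / patient_id
    if not idx.exists():
        return []
    return [(root / "specimens" / s).read_text()
            for s in idx.read_text().split()]
```

No database engine at all: the file system is the index, which is exactly why a file system tuned for many small files was the right choice.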
Thursday, July 29, 2004
Coag Clinapp
Added another clinapp today, this one for Special Coagulation reports.
It has been a few years since the first clinapp, the one for flow cytometry, and I really have this down to a science: it took about 4 hours of work to set up and launch this one.
I wonder if and when rich data environments will become standard in the basic LIS?
Friday, July 16, 2004
Molecular DX Clinapp
Oh, joy, another clinapp, this one for Molecular Diagnostics.
The underlying topic is interesting. The duplicate of previous work is not.
At least the users (techs, residents and attendings) are all quite enthusiastic. Cutting down turn-around-time by 50% and all that.
Friday, March 19, 2004
IFE Clinapp
Banged out another clinapp today, this one for IFE's. Victim of my own success: the client keeps finding new tasks well-suited to this code base. Satisfying to be providing value, but not the most exciting way to make a living: endlessly repeating oneself.
Monday, November 26, 2001
Flow Cytometry Reporting
Starting a new and interesting project: provide a web-based rich data environment to support clinicians making clinical impressions of flow cytometry results.
Flow cytometry is a new and exciting technology for analyzing cells along a number of dimensions, but that is not my area. My area is providing an environment in which the raw flow cytometry results, which are precise but not clear, can be interpreted.
In practice, this means providing the following functionality:
- a worklist for the section:
  - orders as they arrive (tech to-do)
  - cases to be interpreted as they are run (clinician to-do)
- up-to-date patient demographics
- access to any previous reports on this patient
- access to related and supporting current lab results
- macros for impressions
- push-button creation of a printable clinical report as a PDF
- publishing verified impressions:
  - automatic interfacing of impressions to the LIS
  - automatic faxing of reports to the ordering clinician
- role-based permissions:
  - tech--can enter and edit results
  - resident--can enter impressions, not results, and cannot sign a report
  - attending--can enter impressions, not results, and can sign a report