Wednesday, January 22, 2014

Automation/Domination

XKCD for Jan 20, 2014
This cartoon is amusing and represents, I am told, a common automation experience.

It ruffled my feathers a bit; automation is a big part of what we do for clinical labs and I know, and can prove, that we leave processes faster, cheaper and better when we are done with them.

This is, in part, because we go easy on the "rethinking" part. We like business process engineering: the careful defining and describing of the working parts of a process as though that process were a machine. Through careful description comes understanding.

We fear business process re-engineering, the long and tedious process in which MBAs and frustrated worker bees compete to imagine a perfect world which software people are then to summon from the ether.

We understand that the pieces interact and interconnect. We also understand that predicting how it will all shake out is a very low-percentage business so instead of trying to fix the entire process, we go after low-hanging fruit and iterate.

Billing wants paper from which to work? Fine, give them cheap, simple reports and print them. Once they believe in the reports, they may be open to a web page instead of printing. Once they trust the web page, they may be willing to consider a data feed. Once they have the data feed, who needs all this damned paper?

"Please stop printing those reports, nobody looks at them any more" is how our successful automation gigs end. But we don't start with "you guys need to ditch the paper, this is the 21st century!"

The clinical lab was slow to embrace automated analyzers and autoverification, but embrace them it did. The greying of the workforce and the lack of new blood mean that it is time to take the automation behind the bench.

We know that some IT "automation" made data entry and other tasks slower and harder, but please don't tar us all with the same brush: there are plenty of tasks the computer can do faster and better, so you can concentrate on the other stuff.

Monday, January 20, 2014

Privacy Rules vs Care Delivery

In medical IT, we are often asked about HIPAA compliance, much in the way the Brothers Grimm probably asked little children about being alone in the woods: the point is to scare us into doing the "right" (i.e. legal-liability-lowering) thing.

When people say "HIPAA," I generally assume that they mean the Privacy Rule of HIPAA specifically, which Wikipedia summarizes thusly (go here for the full text):


The HIPAA Privacy Rule regulates the use and disclosure of Protected Health Information (PHI) held by "covered entities" (generally, health care clearinghouses, employer sponsored health plans, health insurers, and medical service providers that engage in certain transactions.)[17] By regulation, the Department of Health and Human Services extended the HIPAA privacy rule to independent contractors of covered entities who fit within the definition of "business associates".[18] PHI is any information held by a covered entity which concerns health status, provision of health care, or payment for health care that can be linked to an individual.[19] This is interpreted rather broadly and includes any part of an individual's medical record or payment history. Covered entities must disclose PHI to the individual within 30 days upon request.[20] They also must disclose PHI when required to do so by law such as reporting suspected child abuse to state child welfare agencies.[21]

So is the primary goal to maintain privacy or to deliver effective healthcare? If you said "both, of course!" then I must respectfully say "balderdash!" I am well aware of the standard privacy-advocate claim that one can easily do both, at the same time, with no loss of effectiveness. It is my experience that not only do the two goals not co-exist, they actively work against each other in many instances.

In lab IT, this most often shows up as the following tension: deliver lab results quickly to whoever might need them--nurses, PAs, MDs, NPs--versus ensure that every access is by a caregiver, specifically a caregiver who is part of this patient's team. That tension holds even when the IT system user is a nurse who was doing something else and has been asked by a code team member to look something up on behalf of a caregiver who is not currently in a position to authenticate.
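One compromise I can actually deploy is a "break-glass" rule: an authenticated user who is not on the patient's care team can still see results, but only after stating a reason, and every such access is logged for after-the-fact review. A minimal sketch in Python, with hypothetical names (fetch_results, AUDIT_LOG, lookup_results) rather than any particular LIS:

from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in; a real system wants durable, append-only storage

def lookup_results(patient_id):
    # stand-in for the actual LIS query
    return [{"patient": patient_id, "test": "K", "value": 4.1, "units": "mmol/L"}]

def fetch_results(user, patient_id, care_team, reason=None):
    """Prefer speed of care over hard denial.

    Users on the patient's care team get results directly; any other
    authenticated caregiver can "break the glass" by stating a reason,
    which is logged for after-the-fact review instead of blocking the lookup.
    """
    on_team = user in care_team
    if not on_team and not reason:
        raise PermissionError("state a reason to view results for this patient")
    AUDIT_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "patient": patient_id,
        "break_glass": not on_team,
        "reason": reason,
    })
    return lookup_results(patient_id)

# The nurse helping the code team is not on this patient's team, but is not blocked:
print(fetch_results("rn.jones", "MRN12345", care_team={"dr.smith"},
                    reason="code blue; lookup on behalf of Dr. Smith"))

This trades a hard technical control for an administrative one, which is precisely the kind of judgment call I would like the privacy rules to acknowledge.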

When I ask privacy advocates how to balance these concerns the most common response is the claim that there is no problem: if IT does its job, then all required data will always be disclosed to the correct parties, but not the incorrect parties, in a timely manner. As someone who actually deploys systems in the real world, I find this answer supremely unhelpful.

When I ask security professionals how to balance these concerns, they ask me to restate my question as a risk:benefit statement, at which point they will help me figure out how much security to combat which risk. But when I respond that the risk is that security will interfere with the delivery of healthcare, I am referred to the standard menu of risks from which I may pick:
  • leaking information to blackmailers
  • leaking information to underwriters (insurers)
  • leaking information to the public

This company has a nice way to frame a conversation with a CISO, assuming that the organization is not a health care provider. You can find that conversation starter here: http://www.ey.com/GL/en/Services/Advisory/Cyber-security---Steps-you-should-take-now?utm_source=Outbrain&utm_medium=TextLink&utm_content=steps_ceo_ciso_Outbrain&utm_campaign=GISS_OLA_nov_dec_2013

But working in medical IT, I feel that I need a solution that takes into account some other considerations:
  • NOT disclosing information may harm someone, so I do not want to use solutions which assume that all disclosure is bad
  • disclosing information to unauthorized health care providers is often covered by other legal means, e.g. medical licensing, so isn't that "breach" of rather low significance?
  • the information does not belong to the parent organization in the first place, so taking steps to protect it must include ways to share it on demand
If anyone knows of a privacy policy that is implementable for an actual lab information system, please let me know. I would love to stop trying to meet privacy rules in an environment where failure to disclose in a timely manner could kill someone.

Wednesday, January 15, 2014

Ew, Gross: Why Good Engineering is Sometimes Bad

I like to think of myself as a software architect and not a software engineer.

Engineers are often relieved by this, because they feel that I am not rigorous enough to be an engineer. Architects are often dismayed by this, because I don't wear pleated pants or style my hair.

The practical difference is that I feel comfortable in the creative, design end of things, which means that I often get called in when the engineers have failed.

Sadly, I have come to understand that much of the time, this failure is willful: while I may or may not have vision unmatched by most engineers, I certainly have a willingness to change dirty design diapers. The failure of the engineers, if you listen closely, is often really the sound of a young teenager saying "Ew, gross!"

A function for which I am paid, but which I do not particularly enjoy, is Engineer-to-Management translation. "Why do my IT people say X?" I hear. I try to be diplomatic (for me), but more and more often the honest answer is "The obvious solution strikes them as distasteful, so they don't want to do it."

My favourite example of this was a programmer who embraced object-orientation to an absurd degree: he re-implemented basic math on top of the native facilities. His code defined ten numeroid objects, '0', '1', '2', '3', '4', '5', '6', '7', '8' and '9.' Of course he then had to define "addition," "subtraction," "division" and "multiplication" for these objects--what fun! His program gave the correct answer, but took 24 hours to run. Running with native integer support, the same program took a little under 8 hours. The programmer's response? "You resorted to an orthogonal formalism." That is the most learned-sounding "Ew, gross!" I have ever heard.
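I never saw his source, so what follows is only a cartoon of the idea--the class and method names are mine--but the flavour was roughly this:

class Digit:
    """A needlessly object-oriented wrapper around one decimal digit."""
    GLYPHS = "0123456789"

    def __init__(self, glyph):
        self.glyph = glyph                      # '0' through '9'

    def value(self):
        return self.GLYPHS.index(self.glyph)    # "counting" instead of int(glyph)

def add(a, b):
    # addition as repeated successor-taking, because native '+' is apparently gauche
    total = a.value()
    for _ in range(b.value()):
        total = total + 1
    return total

print(add(Digit('3'), Digit('4')))              # 7, eventually
print(3 + 4)                                    # the "orthogonal formalism"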

Other examples of holding my nose in the service of providing value include giving users a dumb ASCII terminal long after such things had become uncool and yucky, and providing fax support despite the outdated nature of faxing.

Engineering is rule-based and rewards rule-following and so attracts rule-loving personalities. They tend to find rules comforting in addition to finding rules useful and productive.

The flip side is that, as a group, engineers are bad at value judgements. They are not inclined to break the rules and they certainly are not comfortable with solutions based on rule-breaking.

Alas, the real world--especially the clinical lab, where we are trying to get it right every time--sometimes does not co-operate and requires us to pick the best of a bad lot. Even worse, sometimes we need to pick any of a bad lot and get on with giving the users what they need and getting the job done.

Worst of all are the cases where the beautiful engineering is invisible to the users. All they see is that version 2 sucks worse than version 1, again. And again, engineering wants a cookie because version 2 has beautiful engineering and none of the rule-bending messiness that made version 1 so useful.


Sometimes engineering needs to get over itself and do what must be done. That is why I don't mind being scoffed at by hardcore engineers: they may be better at math, but I am better at recognizing when a rule is getting in the way of a better solution. I know when to say "that is a good rule, that is a useful rule, that rule will keep you from making a certain kind of mistake, but now is the time to break that precious rule."

Wednesday, January 8, 2014

Why Can You / Can't You Use The Cloud?

FAQ Wednesday is here again. Today's question: what about the Cloud and clinical labs?

This question has two variants:

  1. You can't use the Cloud for health care data, can you--HIPAA, etc?
  2. Why can't you use the Cloud for my clinical lab interface?
Can You Use the Cloud?
The first question, which I take to mean, "is it within law and regulation to use the Cloud for PHI," is actually pretty easy to answer: yes. Does HIPAA restrict the options? Yes. Does HIPAA prohibit use of the Cloud? No.

We currently use Amazon Web Services as our Cloud vendor and they claim to be certified and everything. From http://aws.amazon.com/compliance/#case-studies:

HIPAA

AWS enables covered entities and their business associates subject to the U.S. Health Insurance Portability and Accountability Act (HIPAA) to leverage the secure AWS environment to process, maintain, and store protected health information and AWS will be signing business associate agreements with such customers. AWS also offers a HIPAA-focused whitepaper for customers interested in learning more about how they can leverage AWS for the processing and storage of health information. The Creating HIPAA-Compliant Medical Data Applications with AWS whitepaper outlines how companies can use AWS to process systems that facilitate HIPAA and HITECH compliance. For more information on the AWS HIPAA compliance program please contact AWS Sales and Business Development.

But I expect that Google will keep up, and this reference suggests that they are:

http://www.healthcareinfosecurity.com/google-amazon-adjust-to-hipaa-demands-a-6133

In fact, we are counting on growing acceptance of Cloud implementations in health care, which is why we are currently developing Direct Interfaces.

Why Can't You Use the Cloud?
This is a slightly different question, which I take to mean "in practical terms, what are the obstacles to Cloud-based interfacing?" The short answer is "the conservative nature of hospital and clinical lab IT culture." This is closely tied to why lab interfacing in general is so hard: our industry punishes mistakes and does not reward innovation. So doing nothing is often rewarded, and fighting innovation tooth-and-nail is the norm.

(Since this is legal, low-overhead and effective, we plan to step around the hospital and clinical lab IT organizations with our new Cloud-based lab connectivity venture, but that is another story.)

Friday, January 3, 2014

Data-Driven Virtuous Cycles

To paraphrase the Six Sigma religious tenet, "you can only manage what you can measure." While I think that this is somewhat over-simplified, there is certainly truth to it.

Specifically, I run into issues which can only be resolved with solid metering in place. Often, while debugging these issues, I build "scaffolding" which then lives on as the metering that drives a continuous monitoring process.

Some issues have many factors, related in murky and maddening ways, and the only way to untangle the knot is to measure, find and fix something, and then return to step one until your measurements tell you that you are done.

Current Example
We are in the process of restoring functionality that was lost when a highly tuned system was replaced by something manifestly worse--but cooler. One of the data elements lost was who collected the specimen. This turns out to be critical for many management functions.

The first reaction to our bug report was "nonsense! the new system is fabulous!".

The second reaction was "ok, looking at your data, we see that this is happening, but we have come up with a labor-intensive workaround and we will simply command people to follow the new procedure."

The third reaction was "ok, we see that some areas are not complying but we are going to scold them--and stop recording this data, because we don't need it anymore."

Needless to say, we are still collecting the data and still helping them police their spotty compliance. Someday, the meters will tell us that all is well and we can go back to relying on this data for our high value reports.

The Bad Old Days
This situation is sadly similar to our work with scanned requisition forms. When we deployed our draw station solution, we became part of the scanned req infrastructure. As the newest member of the team, we were immediately blamed for any and all missing reqs. In self-defence, I created an audit to trace scanned req activity, comparing expected with actual. We immediately made a number of interesting discoveries:
  1. I had some bugs, which I fixed and verified as fixed
  2. Some users were not really on board with scanned reqs so we started to nag them
  3. Some of the orders for which we were blamed did not come through the draw station; the Front Bench decided to use our software to ensure compliance
  4. Some of the scanners were in need of service
  5. The placement of the bar codes on the page matters more than one would hope
With feedback and monitoring, the situation has improved dramatically, and our req watchdog technology is actually still in service even though the LIS and the draw station solution for which it was created are, respectively, out of service and about to be retired.
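For anyone tempted to build their own watchdog, the core of ours is nothing fancier than an expected-versus-actual comparison. A minimal sketch, with made-up accession numbers standing in for the real order and scan feeds:

def audit_reqs(expected_accessions, scanned_accessions):
    # compare reqs we expected to see against reqs actually scanned
    expected = set(expected_accessions)     # e.g. from the draw station order log
    scanned = set(scanned_accessions)       # e.g. from the scanned-req repository
    return {
        "missing": sorted(expected - scanned),      # ordered but never scanned
        "unexpected": sorted(scanned - expected),   # scanned but not ordered through us
    }

# Hypothetical day: two reqs never reached a scanner, one arrived by another route.
print(audit_reqs(
    expected_accessions=["A1001", "A1002", "A1003", "A1004"],
    scanned_accessions=["A1001", "A1004", "B2001"],
))

Run something like that on a schedule, send the "missing" list to the right people, and the feedback loop more or less builds itself.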

Tube Tracking
I think that our tube tracking experience can also be seen as measurement leading to clarity and control, so I am including it.

Conclusion
Measure, manage, repeat. Even when all is well, don't stop auditing and reviewing.

Wednesday, January 1, 2014

XML vs HL7

It is FAQ Wednesday, when I take a FAQ off the pile and address it.

Today's frequently asked question is "why do so many systems use HL7 instead of XML?"

This is a good question with many possible answers, but this is my executive summary: XML is easy for humans to read and HL7 is easy for computers to process.

Medical IT is often short on computing power and long on required functionality, so it is natural to avoid formats that are expensive to process and embrace formats that are easy to process, even at the cost of human legibility. In my experience, the people who wonder at the lack of XML are not career medical IT professionals.

XML is a markup language, a structured, tagged text format which descends directly from SGML. It was intended as a platform-independent document storage format, but has become a kind of universal data exchange format.

HL7 is a line-oriented, record-and-field-based text format which is rather reminiscent of the serial, line-oriented message formats of yore, such as ASTM, which was already familiar to clinical lab people from instrument interfaces.

XML makes more-or-less self-documenting "trees" which can be displayed natively by most browsers, or "visualized" with a little JavaScript magic: http://www.w3schools.com/xml/xml_to_html.asp There are lots of tools for working with XML and for storing it.

In theory, XML is fault-intolerant: XML processing is supposed to halt at the first error encountered. This is not very robust, but in theory there should be no errors, because you can write a formal document type definition (DTD) which lets people make sure that they are sending and receiving data in exactly the format you expect. If the XML document is made using a DTD and parsed with the same DTD, what could go wrong? And whoever created the data ran a validator, such as http://www.w3schools.com/xml/xml_validator.asp, on it before releasing it, right?
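For what it is worth, checking a document against its DTD takes only a few lines with common tooling. A minimal sketch using lxml in Python (the toy DTD and document are my own invention, not any real lab format):

from io import StringIO
from lxml import etree   # assumes the lxml package is installed

# A toy DTD and a document that should conform to it.
dtd = etree.DTD(StringIO("""
<!ELEMENT result (test, value)>
<!ELEMENT test (#PCDATA)>
<!ELEMENT value (#PCDATA)>
"""))
doc = etree.fromstring("<result><test>K</test><value>4.1</value></result>")

if dtd.validate(doc):
    print("document matches the DTD")
else:
    print(dtd.error_log.filter_from_errors())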

(In practice, I do not see very much strict adherence to document type definitions.)


HL7 makes nice, simple messages which can be easily processed by almost any programming language. I have written HL7 message processors in C, Perl, PHP, and BASIC.
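Here is why: splitting an HL7 v2 message into segments and fields is a one-liner in any of those languages. A quick Python sketch (a real processor reads the delimiters out of the MSH segment and handles repeats and escape sequences, but the shape is the same):

def parse_hl7(message):
    # segments are lines, fields are pipe-delimited
    return [line.split("|") for line in message.strip().splitlines() if line.strip()]

# A two-segment fragment in the spirit of the ORU sample quoted below:
fragment = (
    "MSH|^~\\&|LCS|LCA|LIS|TEST9999|199807311532||ORU^R01|3629|P|2.2\n"
    "OBX|1|ST|008342^UPPER RESPIRATORY CULTURE^L||FINALREPORT|||||N|F\n"
)

for fields in parse_hl7(fragment):
    if fields[0] == "OBX":
        print("observation value:", fields[5])   # OBX-5 is the observation value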

So what do these formats look like side by side? Consider the following two samples:

HL7 Lab Result:
MSH|^~\&|LCS|LCA|LIS|TEST9999|199807311532||ORU^R01|3629|P|2.2
PID|2|2161348462|20809880170|1614614|20809880170^TESTPAT||19760924|M|||^^^^00000-0000|||||||86427531^^^03|SSN# HERE
ORC|NW|8642753100012^LIS|20809880170^LCS||||||19980727000000|||HAVILAND
OBR|1|8642753100012^LIS|20809880170^LCS|008342^UPPER RESPIRATORY CULTURE^L|||19980727175800||||||SS#634748641 CH14885 SRC:THROA SRC:PENI|19980727000000||||||20809880170||19980730041800||BN|F

OBX|1|ST|008342^UPPER RESPIRATORY CULTURE^L||FINALREPORT|||||N|F|||19980729160500|BN
ORC|NW|8642753100012^LIS|20809880170^LCS||||||19980727000000|||HAVILAND
OBR|2|8642753100012^LIS|20809880170^LCS|997602^.^L|||19980727175800||||G|||19980727000000||||||20809880170||19980730041800|||F|997602|||008342

OBX|2|CE|997231^RESULT 1^L||M415|||||N|F|||19980729160500|BN
NTE|1|L|MORAXELLA (BRANHAMELLA) CATARRHALIS
NTE|2|L| HEAVY GROWTH
NTE|3|L| BETA LACTAMASE POSITIVE
OBX|3|CE|997232^RESULT 2^L||MR105|||||N|F|||19980729160500|BN
NTE|1|L|ROUTINE RESPIRATORY FLORA


(from http://www.corepointhealth.com/resource-center/hl7-resources/hl7-oru-message)

XML Lab Result (schema excerpt):
<element name="lab-test-results">
        <complexType>
            <annotation>
                <documentation>
                    <summary>
                        A series of lab test results.
                    </summary>
                </documentation>
            </annotation>
            <sequence>
                <element name="when" type="d:approx-date-time" minOccurs="0">
                    <annotation>
                        <documentation>
                            <summary>
                                The date and time of the results.
                            </summary>
                        </documentation>
                    </annotation>
                </element>
                <element name="lab-group" type="lab:lab-test-results-group-type" maxOccurs="unbounded">
                    <annotation>
                        <documentation>
                            <summary>
                                    A set of lab results.
                            </summary>
                        </documentation>
                    </annotation>
                </element>
                <element name="ordered-by" type="t:Organization" minOccurs="0">
                    <annotation>
                        <documentation>
                            <summary>
                                    The person or organization that ordered the lab tests.
                            </summary>
                        </documentation>
                    </annotation>
                </element>
            </sequence>
        </complexType>
    </element>

 (from http://social.msdn.microsoft.com/Forums/en-US/5003cf00-de7f-41ec-93a9-c04b14e41837/xml-schema-of-lab-test-results)