Saturday, November 21, 2015

One Size Fits Few

Bless me, Father, for I have sinned. It has been about three weeks since my last Lab IT confession. I have harboured sympathy in my heart for my competitors.

How can this be? Allow me to complain. I mean, explain.

I am often asked why a given LIS is so [insert bad quality here]. Much as I often agree with specific complaints about specific systems, I usually feel a hidden stab of pity for the team which produced the system, because that team was asked to provide a single solution (the software) for a large problem area ("the lab").

The fact of the matter is that the typical "Lab" is not a monolith. It is not even a collection of more-or-less similar parts; rather, it is a patchwork of very different subgroups who happen to share a need for a particular kind of work area.

Specifically, the requirements of the different workflows vary over such a large range that I am not surprised that a "one size fits all" approach usually fails: either the one size is the wrong size, or the one size is really a collection of loosely-affiliated software modules which interoperate rather poorly.

Consider those requirements: on one end is clinical Chemistry:
  • high volume, low relative cost (although vital to more valuable services)
  • highly automated (very reliable auto-verification and analyzers)
  • significant time pressure
  • well-suited to an industrial approach such as Six Sigma
  • high throughput required, turn-around-time is very important
At the other end is Microbiology or much Genetic and Molecular testing:
  • lower volume, higher relative value
  • difficult to automate
  • poorly suited to industrial process control
  • takes a long time--no way around it
  • yields an often complex result requiring expertise to characterize
Throw Haematology into the middle somewhere. Immunology is somewhere closer to Microbiology. Slot your favourite lab section into the appropriate place on the fast/simple vs slow/complex continuum.

So how does one provide a solution which is optimized for both of these extremes? Very carefully.

All too often, vendors pick a section, optimize for that section, and then claim that all other sections are "easier" in some way, so most systems are not optimized for most of their users. Why is there so much complaining about lab systems? Because the situation makes it inevitable. Perhaps we should be surprised that there is not even more complaining about lab systems.

Monday, November 2, 2015

EMR Disappointment And Acceptance

This past summer I was on a train in the Paris area and happened to share a train car with someone who was also from the East Coast of the US. We chatted and it turned out that he was a doctor, which always makes me slightly tense. Sure enough, once I mentioned that I am a medical information systems specialist, we somehow ended up on the topic of how bad so many medical information systems are.

Why is that? I assume so many health care professionals have so little regard for their Electronic Medical Records system and other e-tools of the trade because these tools are not very good.

At least, these medical information systems are not very good at supporting many of their users. They probably excel at supporting corporate goals or IT departmental goals.

The specific complaints by my companion on the train were painfully typical:
  • the screen layout is cramped and crowded;
  • the history available on-line is too short;
  • the specialized information (labs, x-rays, etc.) is not well presented;
  • the data is biased toward general medicine and internal medicine.
But what struck me about our conversation was his resignation. While we rehashed the usual complaints and frustrations with large Commercial Off-the-Shelf (COTS) systems, he was more resigned than anything else. He just doesn't expect anything more from the software supporting his efforts to deliver clinical care.

We expect great things from our smart phones. We have high standards for our desktops and laptops and tablets. But somehow we have come to accept mediocrity in our systems supporting clinical care. And since we accept it, there is little chance of correcting it.

At least I will have lots to talk about with random clinicians I run into on trains.

Tuesday, August 4, 2015

Too Much Tech, Not Enough Ology

This post is a rant about how the following common ideas combine to make bad IT, especially clinical lab IT:
  • Everyone should express their special uniqueness at all times
  • Tech is hard, ology (knowledge) is easy, so go with the Tech
  • There is always a better way to do it
Alas! I believe that in many clinical lab IT environments, none of these ideas holds true and that the combination of them is toxic.

We Are All Individuals
You may be a special snowflake, all unique and brimming with insight, but engineering is often about doing the standard thing in the standard way, so those who come after you can figure out what you did. Yes, if you produce something truly revolutionary, it MIGHT be worth the overhead of figuring it out, but it might not.

Consider the mighty Knuth-Morris-Pratt algorithm, which was a real step forward in text string searching. However, sometimes we don't need a step forward: we need something we can understand and support. The mighty Knuth himself told a story of putting this algorithm into an operating system at Stanford and being proud, as a primarily theoretical guy, to have some working code in production. Except that when he went to see his code in action, someone had removed it and replaced it with a simple search. Because although the KMP search is demonstrably better in many situations, the strings being searched in most human/computer interactions are short, so who cares? Having to maintain code you don't understand, however, is also demonstrably bad. So Knuth had some real sympathy for the sys admin who did this. Engineering is about accuracy and dependability; learn the standard approach and only get creative when creativity is called for (and worth the overhead).
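To make the point concrete, here is a minimal sketch (in Python, and certainly not Knuth's actual code): a hand-rolled naive search next to the built-in one. For the short strings typical of interactive use, the built-in call is plenty fast and far easier for the next person to maintain.

    # Minimal sketch: hand-rolled naive search vs the built-in str.find().
    # For short strings, the simple, standard approach wins on maintainability.

    def naive_find(haystack: str, needle: str) -> int:
        """Return the index of the first occurrence of needle, or -1."""
        for i in range(len(haystack) - len(needle) + 1):
            if haystack[i:i + len(needle)] == needle:
                return i
        return -1

    if __name__ == "__main__":
        text = "WBC 4.5 RBC 5.1 HGB 14.2"   # a short, typical search target
        assert naive_find(text, "HGB") == text.find("HGB")
        print(text.find("HGB"))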

Tech is Hard, Expertise is Easy
Too often I see clients going with a young programmer because (a) they cost less by the hour and (b) the assumption is that newer programmers are "up-to-date" and doing things "the modern way." I suppose this makes sense in contexts where the domain expertise is somehow supplied by someone else, but all too often I see bright-but-ignorant (BBI) programmers implementing in shiny new development environments with their shiny new skills, yet producing a dreadful user experience. The only goal in clinical lab IT is to support the lab: to reduce mistakes, to raise throughput, to shift drudgery away from humans and onto computers. If you produce a cool new app which does not do that, then you have failed, no matter how cheaply or quickly or slickly you did the work.

I Did It Myself!
Clinical lab IT deals with confidential information. We are supposed to take all reasonable and customary measures to guard the information in our care. Yet we still end up with security problems caused by programmers either "taking a shot" at security, as though it were an interesting hobby, or doing their own version of some standard tool which already exists, has already been validated and which is already being updated and tested by other people. Don't roll your own if you don't have to--security especially, but other important functions as well. You might enjoy the challenge, but this is a job, not a college course.

Conclusion
Pay for experience. Demand that all novelties be justified. Use what is freely and securely available before rolling your own. Stop the current trend toward "upgrades" which make things worse and endless new systems which do not advance the state of the art. Have the tech and the ology.

Tuesday, July 21, 2015

System Interfaces Are Not Kitchen Renovations

Recently I had to write my part of an RFP. The topic was a system-to-system interface between two health care information systems. I went through the usual stages:
  1. Nail down my assumptions and their requirements
  2. Come up with a design to meet those requirements
  3. Translate the design into an implementation plan
  4. Put the implementation plan into a spreadsheet
  5. Make my best guess as to level of effort
  6. Have our project manager review my egotistical/optimistic assumptions
  7. Plug the estimated numbers into the spreadsheet
  8. Shrug and use the resulting dates and cost estimates
The result was all too predictable: push-back from the customer about the time and cost. In our amicable back-and-forth, which seemed to be driven on her side by a blind directive to challenge all prices of all kinds, I had an epiphany: software development in general, and interfacing in particular, is not a kitchen renovation, so why do customers act as if it were?

I have been on both sides of kitchen renovation and there are some similarities:
  • the customer is always impatient
  • the cost is hard to contain
  • accurately imagining the outcome of decisions is an uncommon skill
But there are some crucial differences:
  • the concept of kitchen is well-known and well-understood by most people
  • the elements of a kitchen are similarly familiar: sinks, cabinets, etc
  • examples of kitchens one likes can be found
  • in general the main user is also the main contact with the contractor
Why do I get huffy when people tell me I am padding my estimates? Because when you write interfaces between complex systems, the sand is always shifting beneath your feet. Sure, it is just another HL7 interface between an industry-standard source system and a system of ours acting as the intermediary, which in turn has to export its data to a completely different industry-standard target system.

Thus we are linking industry standard system A (ISSA) to industry standard system B (ISSB): a piece of cake! Except....
  • ISSA has over 1,500 configurable parameters (literally).
  • ISSA was deployed almost five years ago and no one from that team is around.
  • ISSA's configuration was in flux those first few years.
These factors complicate my job because the source HL7 does not exactly match the spec. Further complications arise from the fact that the users have developed some local idioms, so the data elements are not being used in exactly the standard way.
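To give a feel for what "not exactly the standard way" means in practice, here is a hypothetical sketch--the field usage and code values are invented, not taken from ISSA or ISSB: the source system's users have co-opted a local code where the target expects a standard one, so the interface has to translate on the fly.

    # Hypothetical sketch of a per-site "idiom" translation. The local codes
    # and their standard replacements below are invented for illustration.

    LOCAL_TO_STANDARD = {
        "GLU-POC": "2345-7",   # invented local glucose code -> standard code
        "K-ISTAT": "2823-3",   # invented local potassium code -> standard code
    }

    def translate_obx(obx_segment: str) -> str:
        """Rewrite the observation identifier (OBX-3) if it uses a local idiom."""
        fields = obx_segment.split("|")
        components = fields[3].split("^")          # OBX-3: code^text^coding system
        if components[0] in LOCAL_TO_STANDARD:
            components[0] = LOCAL_TO_STANDARD[components[0]]
            fields[3] = "^".join(components)
        return "|".join(fields)

    print(translate_obx("OBX|1|NM|GLU-POC^Glucose POC|1|102|mg/dL"))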

On the target side, ISSB is still being configured, so I am trying to hit a moving target. Which of the local idioms serve a higher purpose (so they will stay) and which of them were to compensate for issues with ISSA? No one knows. What new idioms will evolve to compensate for issues with ISSB? No one can even guess.

So this is like remodelling a kitchen if the counters used to be suspended from the ceiling but now might be cantilevered from the walls and the water might be replaced by steam.

How long will it take to keep rewriting the interface so that the new system's evolving configuration and the customer's evolving needs are all met? I don't know; I wish I did. In the meantime, my good faith best guess will have to do.

Tuesday, July 14, 2015

Lab Automation vs IT Centralization

Over the past decade I have witnessed two trends in clinical lab computing which I think are two sides of the same coin:
  • Lab process automation is going down
  • IT is centralized, becoming more consolidated and less Lab-specific
 By "Lab process automation" I mean identifying repetitive tasks performed by humans and transferring those tasks to a computer or computers.

By centralized, I mean that the IT which serves the lab is now generally the same IT which serves all other parts of the organization.

I can see the appeal, especially to bean counters, of centralization: more control by execs, economy of scale, etc. But most of the IT groups I encounter are really all about office automation:
  • email
  • shared files
  • shared printers
  • remote access
These are all great if you are running a typical office, which is how everything seems to look from most C Suites.

Alas, the clinical lab is far closer in basic structure to a manufacturing plant than to a law office. Typical corporate IT is not good at process automation:
  • receiving orders
  • receiving specimens (raw material)
  • matching up specimens and orders
  • doing the assays (processing raw material)
  • serving up results (delivering finished product)
At the bench, email support and file-sharing are not very helpful; powerful and speedy instrument interfaces, audit support and throughput analysis are all much more helpful.
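As one small example of what "more helpful" looks like, here is a toy turnaround-time calculation of the sort that lab-focused IT would provide and office-focused IT rarely does (the timestamps below are invented):

    # Toy example of lab-specific tooling: turnaround time (TAT) from
    # received/resulted timestamps. The data below is invented.

    from datetime import datetime
    from statistics import median

    received_and_resulted = [
        ("2015-07-14 08:02", "2015-07-14 08:41"),
        ("2015-07-14 08:10", "2015-07-14 09:05"),
        ("2015-07-14 08:15", "2015-07-14 08:58"),
    ]

    FMT = "%Y-%m-%d %H:%M"
    tat_minutes = [
        (datetime.strptime(done, FMT) - datetime.strptime(received, FMT)).total_seconds() / 60
        for received, done in received_and_resulted
    ]
    print("median TAT (minutes):", median(tat_minutes))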

But centralized IT is not only oriented away from this kind of business process automation; it is punished for it: why are you spending time and money on lab-specific projects? If you get email to work on your boss's boss's boss's iPhone, you are a hero. If you figure out how to alert bench techs to specimens which need a smear, you are spending too much time in the lab.

Worse, as corporate IT completes its transition into being mostly about vendor management, the idea of doing what the vendors cannot or will not do--plugging the gaps between off-the-shelf products, gaps which cause so much of the needless drudgery in lab work--becomes first unthinkable and then impossible.

Farewell, lab process automation: perhaps vendors will some day decide that interoperability is a goal, and then you will live again. But I am not betting on it.

Tuesday, June 23, 2015

Better Never Than Late

I often complain that the typical organizational IT infrastructure is too complicated and often not well-suited for the clinical lab. I am often asked to give an example, but so far my examples of complexity have been, themselves, overly complicated and apparently quite boring.

Well, all that changed recently: now I have a nifty example of what I mean.

One of our customers has a fabulous print spooling system: many pieces of hardware, many lines of code, all intended to ensure that your precious print job eventually emerges from a printer, no matter what issues may arise with your network. Best of all, you route all printing through it and it works automagically.

The fancy print job spooler is so smart, it can reroute print jobs from off-line printers to equivalent on-line printers. It is so smart it can hold onto a print job for you, "buffer" it, until an appropriate printer comes on-line.

Alas, neither of these features is a good fit for the clinical lab, at least for specimen labels. The ideal specimen label prints promptly and right beside the person who needs it. If the label cannot be printed, then the print job should disappear: the user will have had to use a hand-written label or some other downtime alternative. Printing the label later is, at best, annoying. At worst, printing the label later is confusing and leads to mis-identified specimens. For this job, better never than late.
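Put as code, the policy I want is trivially simple, which is exactly why the clever spooler gets in the way. The printer interface below is a hypothetical stand-in, not any particular vendor's API; the point is the policy: try once, fail fast, never queue.

    # "Better never than late" as a policy sketch. printer.send() is a
    # hypothetical stand-in for whatever label-printer interface is in use.

    def print_label_or_give_up(label_data: bytes, printer, timeout_seconds: float = 2.0) -> bool:
        """Try to print right now; on any failure, discard the job and report it."""
        try:
            printer.send(label_data, timeout=timeout_seconds)   # hypothetical call
            return True
        except Exception:
            # Do NOT buffer the job. The phlebotomist falls back to a
            # hand-written label; a label that appears an hour later only
            # creates confusion and mis-identified specimens.
            return False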

With effort, our client disabled the roaming print job feature, so the labels (almost) always print where they are needed. But the buffer cannot be turned off--it is the whole point of the spooler, after all--and so after downtime, the now-unwanted labels suddenly come pumping out of the printers. If the draw station happens to be busy, the old labels mingle with the current labels and opportunities for serious errors abound.

Print spoolers are nifty. They serve office workers well. They are a standard part of today's smart IT infrastructure. But they don't serve the clinical lab in any real sense. The clinical lab is not a typical office environment: don't treat it like one.

Thursday, June 18, 2015

Rapid Failure Is Not Success

What makes software a success? Here is my list:
  1. It works: it does what at least one audience really wants done
  2. It exists: it was completed and deployed
  3. It lives: it can be updated or ported
  4. It pays: if it is for-profit, it does not lose money
I would claim that lots of software, by my definition, is failing. But failing on such a rapid timescale that people either do not notice or do not care.

In the dim and distant past, when I was in college, studying CS meant actually studying computer science. We used various computer programming languages, but we were expected to master the concepts and techniques, not merely the semantics of a given language or development environment. We were supposed to be flexible about "implementation details." We even called them "details" just to emphasize their relative importance.

As someone who has tested the programmer job waters regularly for over three decades now, I can tell you that being a software expert who is not inflexibly married to a particular method or environment is an out-of-date notion: it has been years since anyone wanted to know what I can do or what I have done. Now it is all about the "how". Now everyone wants to know if I am "a great fit," which seems to mean "exactly what we have already": a Java-head, a Rails guy, a Javascript geek, a C# maven, etc.

If you have issues that you have been unable to resolve, you don't need "a great fit" and you don't need more of the same: you need something different. You need to consider something new: new talents, new tools, new tenets. But even when a group has hit the wall and is stuck, I see fear of the new and desperate clinging to the old: our installed base! Our existing code! Think of the code!

When did this "there is only one way to do it!" philosophy become not just acceptable, but the norm? I paddle my canoe on the water and I drive my car on the highway and I do not view that as unnecessary overhead. Yes, I have to be familiar with two different kinds of vehicle, ostensibly for the same purpose (moving me around). But that is not a problem which needs fixing: I won't be putting wheels on my canoe any time soon.

I shake my head in wonderment at this backsliding, this devolving professionalism, this grunt-ification of our industry. Why are we headed toward being MacDonald's when we started out as fine dining? Can you imagine the technical debt that this mindlessness is piling up all around the world?

Technical debt is my current favourite buzzword. I like the Wikipedia definition:

Technical debt (also known as design debt or code debt) is a recent metaphor referring to the eventual consequences of any system design, software architecture or software development within a codebase.

I used to wonder how the Johnny One Note model worked: don't you eventually have to pay the Piper? Don't you hit a hard limit on whatever single tool you have blessed? Don't you fail to meet your goals so obviously that your failure cannot be explained away with the torrent of jargon-filled gibberish that has become the hallmark of programmers' communications?

But now I have seen examples of how this obvious failure is avoided: business and development cycles have become so rapid that we can use every failure as a reason to move to the next development model or technology: we can leave the technical debt behind by walking out of our old house, defaulting on our mortgage, and buying a new house. Better yet, we can hope that we are at another company entirely when the technical debt comes due.

Welcome to MacProgrammer's! May I take your order? Just as long as whatever you order is on our very limited menu.

Saturday, June 6, 2015

Big Data and Lab

Big data is a hot topic right now--insert rant about new names for old ideas here--and that wave is finally breaking on the clinical lab shores. So the time is right for my "Big Data and Lab" manifesto. (Or at least an observation born of decades of experience.)

Big data has two jobs: archiving and analyzing. Both involve data on computers. There, I claim, the similarity ends. While it is tempting to try to kill both of these birds with a single stone, I assert that this is a terrible idea.

Specifically, I find that in order to archive data effectively, I need a free-form, attribute-rich environment which accommodates evolving lab practice without losing the old or failing to capture the new. But in order to analyze effectively, I need a rigid, highly optimized and targeted environment, preferably with only the attributes I am analyzing and ideally with those attributes expressed in an easily selected and compared way.

In other words, I find that any environment rich enough to be a long-term archive is unwieldy for analysis and any environment optimized for analysis is very bad at holding all possible attributes.

Specifically, I have seen what happens to inflexible environments when a new LIS comes in, or a second new LIS, and the programmers are struggling to fit new data into an old data model which was itself a reworking of an older data model. It ain't pretty--or easy to process, or fast to process. I have also seen what happens when people, especially people without vast institutional knowledge, try to report on a flexible format with three different kinds of data in it. They get answers, but those answers are often missing entire classes of results. "They used to code that HOW?" is a conversation I have had far too many times.

Yes, I am aware of Mongo & co, of the rise (and stalling) of XML databases, and of the many environments which claim to be able to do both. I have seen them all, I have tried them all, and I have not changed my views.

So I use a two-pronged approach: acquire and store the data in as free-form a manner as possible--structured text, such as raw HL7 or XML, is great for this--and then extract what I need for any given analysis into a traditional (usually relational) database, on which standard reporting tools work well and work quickly.
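Here is a bare-bones sketch of that two-pronged approach (the field positions follow the usual ORU conventions, but everything else--file names, table layout--is just for illustration): keep the raw HL7 untouched in the archive, and pull only the attributes a given analysis needs into a small relational table.

    # Sketch of the two-pronged approach: the raw HL7 stays in the archive
    # untouched; only the attributes needed for this analysis go into SQL.

    import sqlite3

    def extract_results(raw_messages):
        """Yield (patient_id, test_code, value) tuples from raw ORU messages."""
        for message in raw_messages:
            patient_id = ""
            for segment in message.replace("\n", "\r").split("\r"):
                fields = segment.split("|")
                if fields[0] == "PID":
                    patient_id = fields[3].split("^")[0]            # PID-3
                elif fields[0] == "OBX":
                    test_code = fields[3].split("^")[0]             # OBX-3
                    value = fields[5] if len(fields) > 5 else ""    # OBX-5
                    yield (patient_id, test_code, value)

    conn = sqlite3.connect("analysis.db")
    conn.execute("CREATE TABLE IF NOT EXISTS results (patient_id TEXT, test_code TEXT, value TEXT)")
    # raw_archive below would be whatever holds the untouched messages,
    # e.g. a generator reading one file per message:
    # conn.executemany("INSERT INTO results VALUES (?, ?, ?)", extract_results(raw_archive))
    conn.commit()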

The biggest clinical lab-specific issue I find is the rise of complex results, results which are actually chunks of text with a clinical impression in them. For such results, we ended up cheating: tagging the results with keywords for later reporting and asking the clinicians to create simple codes to summarize the text.
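The "cheat" itself is nothing fancy; a sketch of the keyword-tagging half might look like this (the keyword list is invented here, and in real life it comes from the clinicians, not from IT):

    # Sketch of keyword tagging for narrative results. The keyword list is
    # invented here; in practice the clinicians own it.

    KEYWORDS = ("no growth", "mixed flora", "susceptible", "resistant")

    def tag_result(result_text: str):
        """Return the agreed-upon keywords found in a free-text result."""
        lowered = result_text.lower()
        return [keyword for keyword in KEYWORDS if keyword in lowered]

    print(tag_result("Moderate growth of E. coli, resistant to ampicillin."))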

I am eager to be proved wrong, because the two-pronged approach is kind of a pain in the neck. But so far, I have seen no magic bullet that stands the test of time: either the reportable format is too rigid to hold evolving lab data or the flexible format is too slow to actually handle years and years of data.

If you feel that you have a better idea, I would love to hear it, so leave a comment.

Saturday, February 28, 2015

Too Many Hats

The clinical lab has not been immune to the budget pressure and management thrashing around that has been endemic in American businesses for the past decade--rapid reorganization, loss of headcount, etc.

Working in the clinical lab space, we have seen jobs eliminated even as millions of dollars are reallocated to information technology: a new LIS or a new LIMS or even a new HIS with an LIS add-on.


For the most part, this new tech has not delivered on its ambitious promises of greater productivity, so we have seen the Lab forced to handle more, and more complex, orders with fewer people. Not surprising, then, that we also see an apparent decline in professionalism and procedure.

This puzzled me for a long time: why do the Labs with whom we interact seem to be getting worse and worse at their jobs? If you, as Lab management, are going to have too few people, at least keep the good ones!

Things have declined so much that recently we had to revise our project management planning to include doing lots of what we consider to be the client's work for them: basic validation, vendor relations, systems integration, etc.

Why are we being forced to do so much of this? (If we don't do it, it goes undone, sometimes with serious consequences which are supposedly our fault.) Why are clients who are trying to save money happy to pay us to do this work that once they happily did for themselves?


I do not think that that answer is that Lab people are losing capability. Instead, I think that the answer is this: too few people leads to wearing too many hats. Wearing too many hats leads to doing jobs with too little attention and for which one has too little aptitude or training. 

So I think that the terrible Sys Admin I recently had to suffer with is actually an overburdened but excellent bench tech.

Similarly, that incompetent IT manager from earlier this year is a solid tech supervisor trying to do IT management as an afterthought to supervising a lab.

I bet that the hopeless systems integrator whose incompetence forced me to rewrite some code three times is more correctly termed a lab director whose PhD in genetics did not cover systems integration or software specification or data flow analysis.

Lab is more than down-sized: it has been wrong-sized and it is overly invested in IT. At least that is how it looks from my IT keyboard. So I will try not to shoot the messenger--or the over-taxed Lab person doing someone else's job. After all, I would make a pretty bad bench tech or supervisor or director: God willing, no one will ever ask me to try.

Thursday, January 8, 2015

Too Many Options = Project Problems

I am finishing up my first instrument interface in quite a while. It did not go as smoothly as it should have and the culprit was complexity, aided by the distraction of options.

Instrument Interfacing Then and Now
In days of yore, automated analyzers could be coaxed to spew ASTM messages out of a simple serial port; you could capture that ASTM, do what little processing was needed, and then send the data to the LIS, the lab's computer system.
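For the curious, a rough sketch of that era (using the pyserial package) is below. Real ASTM (E1381/E1394) adds ENQ/ACK handshaking and frame checksums, all of which are deliberately skipped here; this just captures whatever the analyzer sends so something downstream can process it.

    # Rough sketch only: passively capture what an analyzer sends over a
    # serial port and log it. ASTM handshaking and checksums are omitted.

    import serial  # the pyserial package

    def capture_astm(port="/dev/ttyS0", baud=9600, logfile="astm_capture.txt"):
        with serial.Serial(port, baud, timeout=5) as analyzer, open(logfile, "a") as log:
            while True:
                raw = analyzer.read(1024)    # up to 1 KB, or a 5-second timeout
                if not raw:
                    break                    # the line has gone quiet: stop
                log.write(raw.decode("ascii", errors="replace"))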

Then came network-capable instruments and HL7, which was an improvement in many ways but much more complicated. (More complexity: sometimes there was ASTM over network and rarely HL7 over serial.)

The LIMS concept took off: a kind of middleman between the instruments and the LIS. The lines became blurry: you could verify on the instrument, on the LIMS, or in the LIS. Orders came from the HIS via either the LIS or the LIMS.

With each wave of change, I shrugged and did whatever was needed. Sometimes it was easier; sometimes it was harder. Often things improved for users. There was certainly more to debug when things went wrong.


New Instrument Interface
In theory, after a few years of progress, this instrument interface should have been easier than previous efforts, because automated analyzers are so much smarter than they used to be. Better yet, I controlled the LIS end in this case because I wrote it and I maintain it.

And it was easier, each of the three times I wrote it. Overall, the interfacing was not really any easier than in days gone by.

Why did we have to write the interface three times?
Version 1: Real-time Instrument to LIS
We went through the pain of network configuration (not least punching a hole in at least one firewall) to connect the instrument to the LIS. I wrote the simple HL7 message processor to load the data as it came off the instrument. The HL7 was a little odd, but that was not a big deal. Hurray! Mission accomplished.

Version 2: Real-time LIMS to LIS
Then the users decided that they wanted to use the LIMS that they already had, because order and data entry was better on the LIMS than on the instrument. So they connected their instrument to the LIMS and we connected the LIMS to our LIS and I rewrote the HL7 handling to accommodate the variant HL7 produced by the LIMS. Hurray! again.

(Then some data arrived in the instrument HL7 dialect. Whoops! The users were experimenting with not using the LIMS. We didn't mind, did we? Then more data arrived in the LIMS dialect: yes, they were going to use the LIMS. Almost certainly.)

Version 3: Batch LIMS to LIS
Whoops. The users decided that they wanted the data entry on the LIMS but the verification on our LIS, which to them implied that the results would be batched on the LIS. So I rewrote the network layer to cache the HL7 messages until the users felt that the results constituted a "batch". Hurray, except that now we are over budget and they have changed their deadline to allow the lab to change its procedures.
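The shape of that third version is simple enough to sketch (deliver_to_lis below is a stand-in for the real posting step, not our actual code): hold the incoming messages and release them only when the users declare the batch complete.

    # Sketch of version 3's shape: cache incoming HL7 and flush on demand.
    # deliver_to_lis is a stand-in for the real LIS posting step.

    class BatchCache:
        def __init__(self, deliver_to_lis):
            self._pending = []
            self._deliver = deliver_to_lis

        def receive(self, hl7_message: str) -> None:
            """Called as each message arrives from the LIMS."""
            self._pending.append(hl7_message)

        def flush(self) -> int:
            """Called when the users decide the results constitute a batch."""
            delivered = 0
            for message in self._pending:
                self._deliver(message)
                delivered += 1
            self._pending.clear()
            return delivered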




Conclusion
More options do not always make projects smoother or better--or even make consumers happy (see this Fast Company article for details).

As the Lab IT landscape becomes more cluttered, I find that projects are harder to define, longer to implement and more expensive to execute. I would like to say that the rise of the giant Commercial Off The Shelf (COTS) software is fixing this problem, but my experience is that COTS is making it worse.


So the next time your IT folks say "it isn't that simple," press them for details, but be prepared to believe them.