Monday, January 4, 2016

Achievement vs Activity

Ah, a new year, a fresh start, a clean slate! Welcome, 2016.

Now let the ranting begin, because I have observed something I don't like and reticence is so last year.

New Year's day I attended a party. At that party I met a number of fellow human beings and I struck up a conversation with one of them in the line for chili. As is its wont, the conversation turned to our professions: he a clinician and I a medical information systems specialist. After the initial explanation of what I _actually_ do (all of it--design, project management, coding, deployment, documentation, whatever is required), we ended up at the usual place: what sucks about Medical IT.

(Does this happen to smart phone people? I bet lots of people tell them how awesome smart phones are. No one ever tells me how awesome clinical information systems are.)

For a refreshing change, this clinician does not really care about HIPAA (as an ophthalmologist, he is not under the kind of time pressure that, say, ER folks are, so the additional security is not that big a deal), he finds the user interfaces adequate (although he could do without the mind-numbing repetition of drug warnings whenever he prescribes drugs) and he feels that the systems he uses are adequately powerful (however, he did note that he sees fewer patients than he did 10 years ago and that he is almost certain that he is not delivering substantially better care to compensate for his lower productivity).

What did bother him was that his practice has two full-time IT people for about 15 providers, and that these two full-time IT people were overwhelmed, so really the practice might need to hire a third at some point.

I have no idea how good these two IT people are, or how efficient, or how productive. They are probably perfectly lovely people. This rant is not about them. This rant is about the understandable but regrettable use of activity as the measure of an IT group, instead of achievement.

I can see why people who are not IT experts use activity as a measure: activity is something one can often see, even if one does not understand what one is seeing. If you do not understand what an IT group does, then watching how they spend their time is probably a reasonable short cut.

But managers of IT groups should not be doing this: managers of IT groups should be tracking what gets done (achievement) and how many person hours it takes (productivity). This is why we have the concept of milestones, and why we have project plans.

I suppose that a technical support group can be managed merely by monitoring its activity, but without a concept of outcomes, you end up here:

http://dilbert.com/strip/1999-08-04

Worse, if you judge your IT folks by their activity alone, then staff who take a long time will seem better than staff who take a short time. In other words, you will select for being slow without any particular reason for being good. You will drive away talent and attract dead wood. Then you will find that you need ever more IT people but that your IT is ever less effective.


A concrete example may help:

We saw an organization invest in a super-duper printing system, because their execs work in offices, and offices like to print, and offices hate it when their print jobs fail. Furious activity was needed to get this system rolled out, with no discernible benefit, since print jobs generally print even without massive and expensive printing systems.

Then the massive and expensive printing system's back end (printing) went down for six hours (which was theoretically impossible, but there you are) while the front end (accepting print jobs) stayed up. Clinical work had to go on, so paperwork and labels were done by hand, as people had been trained to do long ago. Apps kept automatically queuing print jobs--admission reports, lab reports, patient ID labels, specimen container labels, etc--and people kept writing things down by hand.

Then the massive and expensive printing system came back up and did what it was born to do: printed those out-of-date print jobs. This amounted to a denial of service attack, as labels and paperwork which did not match the patients in front of them spewed out in front of clerks and nurses and PAs and doctors and lab techs. As they often say in clinical work: better never than late.

So IT was scolded, and in response a fail-safe was developed and deployed: essentially, a way to shut off the massive and expensive printing system. This was another burst of furious activity, the net effect of which was to make it possible to emulate never having installed the printing system at all.

This was a huge amount of activity, for which the participants were lauded and rewarded. I question, however, how much of an achievement it was.

Keep your eyes on the prize and constantly renew your commitment to your goals or you will end up with too much going on and too little to show for it.

Saturday, November 21, 2015

One Size Fits Few

Bless me, Father, for I have sinned. It has been about three weeks since my last Lab IT confession. I have harboured sympathy in my heart for my competitors.

How can this be? Allow me to complain. I mean, explain.

I am often asked why a given LIS is so [insert bad quality here]. Much as I often agree with specific complaints about specific systems, I usually feel a hidden stab of pity for the team which produced the system because that team was asked to provide a single solution (the software) for a large problem area ("the lab.")

The fact of the matter is that the typical case of "the Lab" is not a monolith. It is not even a collection of more-or-less similar parts; rather, it is a patchwork of very different subgroups who happen to share a need for a particular kind of work area.

Specifically, the requirements of the different workflows vary over such a large area that I am not surprised that a "one size fits all" approach usually fails: either the one size is the wrong size, or the one size is actually a collection of loosely-affiliated software modules which interoperate rather poorly.

Consider those requirements: on one end is clinical Chemistry:
  • high volume, low relative cost (although vital to more valuable services)
  • highly automated (very reliable auto-verification and analyzers)
  • significant time pressure
  • well-suited to an industrial approach such as Six Sigma
  • high throughput required, turn-around-time is very important
At the other end is Microbiology or much Genetic and Molecular testing:
  • lower volume, higher relative value
  • difficult to automate
  • poorly suited to industrial process control
  • takes a long time--no way around it
  • yields an often complex result requiring expertise to characterize
Throw Haematology into the middle somewhere. Immunology is somewhere closer to Microbiology. Slot your favourite lab section into the appropriate place on the fast/simple vs slow/complex continuum.

So how does one provide a solution which is optimized for both of these extremes? Very carefully.

All too often vendors pick a section, optimize for that section, and then claim that all other sections are "easier" in some way, so most systems are not optimized for most of their users. Why is there so much complaining about lab systems? Because the situation makes it inevitable. Perhaps we should be surprised that there is not even more complaining about lab systems.

Monday, November 2, 2015

EMR Disappointment And Acceptance

This past summer I was on a train in the Paris area and happened to share a train car with someone who was also from the East Coast of the US. We chatted, and it turned out that he was a doctor, which always makes me slightly tense. Sure enough, once I mentioned that I am a medical information systems specialist, we somehow ended up on the topic of how bad so many medical information systems are.

Why is that? I assume so many health care professionals have so little regard for their Electronic Medical Records system and other e-tools of the trade because these tools are not very good.

At least, these medical information systems are not very good at supporting many of their users. These medical information systems probably excel at supporting corporate goals or IT departmental goals.

The specific complaints by my companion on the train were painfully typical:
  • the screen layout is cramped and crowded;
  • the history available on-line is too short;
  • the specialized information (labs, x-rays, etc) is not well presented;
  • the data is biased toward general medicine and internal medicine.
But what struck me about our conversation was his resignation. While we rehashed the usual complaints and frustrations with large Commercial Off-the-Shelf (COTS) systems, he was more resigned than anything else. He just doesn't expect anything more from the software supporting his efforts to deliver clinical care.

We expect great things from our smart phones. We have high standards for our desktops and laptops and tablets. But somehow we have come to accept mediocrity in our systems supporting clinical care. And since we accept it, there is little chance of correcting it.

At least I will have lots to talk about with random clinicians I run into on trains.

Tuesday, August 4, 2015

Too Much Tech, Not Enough Ology

This post is a rant about how the following common ideas combine to make bad IT, especially clinical lab IT:
  • Everyone should express their special uniqueness at all times
  • Tech is hard, ology (knowledge) is easy, so go with the Tech
  • There is always a better way to do it
Alas! I believe that in many clinical lab IT environments, none of these ideas holds true and that the combination of them is toxic.

We Are All Individuals
You may be a special snowflake, all unique and brimming with insight, but engineering is often about doing the standard thing in the standard way, so those who come after you can figure out what you did. Yes, if you produce something truly revolutionary, it MIGHT be worth the overhead of figuring it out, but it might not.

Consider the mighty Knuth-Morris-Pratt algorithm, which was a real step forward in text string searching. However, sometimes we don't need a step forward: we need something we can understand and support. The mighty Knuth himself told a story of putting this algorithm into an operating system at Stanford and being proud, as a primarily theoretical guy, to have some working code in production. Except that when he went to see his code in action, someone had removed it and replaced it with a simple search. Because although the KMP search is demonstrably better in many situations, the strings being searched in most human/computer interactions are short, so who cares? Having to maintain code you don't understand, however, is also demonstrably bad. So Knuth had some real sympathy for the sys admin who did this. Engineering is about accuracy and dependability; learn the standard approach and only get creative when creativity is called for (and worth the overhead).
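To make the trade-off concrete, here is the "simple search" side of the story, as a minimal Python sketch (mine, not Knuth's or the sysadmin's actual code):

def naive_find(haystack, needle):
    """Return the first index of needle in haystack, or -1 (O(n*m) worst case)."""
    n, m = len(haystack), len(needle)
    for i in range(n - m + 1):
        if haystack[i:i + m] == needle:
            return i
    return -1

# For the short strings of typical human/computer interaction, this is
# plenty fast, and the next maintainer can read it in ten seconds. KMP
# wins by precomputing a failure table so it never re-examines a character,
# but that table is exactly the part the next maintainer has to puzzle out.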

Tech is Hard, Expertise is Easy
Too often I see clients going with a young programmer because (a) they cost less by the hour and (b) the assumption is that newer programmers are "up-to-date" and doing things "the modern way." I suppose this makes sense in contexts where the domain expertise is somehow supplied by someone else, but all too often I see bright-but-ignorant (BBI) programmers implementing in shiny new development environments with their shiny new skills, yet producing a dreadful user experience. The only goal in clinical lab IT is to support the lab: to reduce mistakes, to raise throughput, to shift drudgery away from humans and onto computers. If you produce a cool new app which does not do that, then you have failed, no matter how cheaply or quickly or slickly you did the work.

I Did It Myself!
Clinical lab IT deals with confidential information. We are supposed to take all reasonable and customary measures to guard the information in our care. Yet we still end up with security problems caused by programmers either "taking a shot" at security, as though it were an interesting hobby, or doing their own version of some standard tool which already exists, has already been validated and which is already being updated and tested by other people. Don't roll your own if you don't have to--security code especially, but other important infrastructure as well. You might enjoy the challenge, but this is a job, not a college course.
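To put it concretely: in Python, for instance, not rolling your own is usually a one-import affair. A sketch, with illustrative parameters only--this is not a security recommendation:

import hashlib
import secrets

# A session token: use the vetted standard library, not a homemade
# random-string routine seeded from the clock.
token = secrets.token_urlsafe(32)

# Password storage: a standard key-derivation function with a random salt,
# not your own clever hash. The iteration count here is an assumption.
salt = secrets.token_bytes(16)
digest = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 600_000)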

Conclusion
Pay for experience. Demand that all novelties be justified. Use what is freely and securely available before rolling your own. Stop the current trend toward "upgrades" which make things worse and endless new systems which do not advance the state of the art. Have the tech and the ology.

Tuesday, July 21, 2015

System Interfaces Are Not Kitchen Renovations

Recently I had to write my part of an RFP. The topic was a system-to-system interface between two health care information systems. I went through the usual stages:
  1. Nail down my assumptions and their requirements
  2. Come up with a design to meet those requirements
  3. Translate the design into an implementation plan
  4. Put the implementation plan into a spreadsheet
  5. Make my best guess as to level of effort
  6. Have our project manager review my egotistical/optimistic assumptions
  7. Plug the estimated numbers into the spreadsheet
  8. Shrug and use the resulting dates and cost estimates
The result was all too predictable: push-back from the customer about the time and cost. In our amicable back-and-forth, which seemed to be driven on her side by a blind directive to challenge all prices of all kinds, I had an epiphany: software development in general, and interfacing in particular, is not a kitchen renovation, so why do customers act as if it were?

I have been on both sides of kitchen renovation and there are some similarities:
  • the customer is always impatient
  • the cost is hard to contain
  • accurately imagining the outcome of decisions is an uncommon skill
But there are some crucial differences:
  • the concept of kitchen is well-known and well-understood by most people
  • the elements of a kitchen are similarly familiar: sinks, cabinets, etc
  • examples of kitchens one likes can be found
  • in general the main user is also the main contact with the contractor
Why do I get huffy when people tell me I am padding my estimates? Because writing interfaces between complex systems is like working on sand which shifts beneath your feet. Sure, it is just another HL7 interface between an industry-standard source system and a system of ours which is the intermediary system, but which has to export its data to a completely different industry-standard target system.

Thus we are linking industry standard system A (ISSA) to industry standard system B (ISSB): a piece of cake! Except....
  • ISSA has over 1,500 configurable parameters (literally).
  • ISSA was deployed almost five years ago and no one from that team is around.
  • ISSA's configuration was in flux those first few years.
These factors complicate my job because the source HL7 does not exactly match the spec. Further complications arise from the fact that the users have developed some local idioms, so the data elements are not being used in exactly the standard way.
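In practice, this means the interface accumulates a translation table of local idioms. A sketch of the idea, with entirely invented codes (no real site's data):

# Map site-specific values to their standard meanings. Every entry here
# is invented for illustration; real tables grow to hundreds of lines.
LOCAL_IDIOMS = {
    "THROA": "THROAT",             # truncated by an old form field
    "RESP-L": "LOWER RESPIRATORY", # a local coinage that stuck
}

def normalize(value):
    """Translate a local idiom to its standard form; pass the rest through."""
    return LOCAL_IDIOMS.get(value.strip().upper(), value)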

On the target side, ISSB is still being configured, so I am trying to hit a moving target. Which of the local idioms serve a higher purpose (so they will stay) and which of them were to compensate for issues with ISSA? No one knows. What new idioms will evolve to compensate for issues with ISSB? No one can even guess.

So this is like remodelling a kitchen if the counters used to be suspended from the ceiling but now might be cantilevered from the walls and the water might be replaced by steam.

How long will it take to keep rewriting the interface so that the new system's evolving configuration and the customer's evolving needs are all met? I don't know; I wish I did. In the meantime, my good faith best guess will have to do.

Tuesday, July 14, 2015

Lab Automation vs IT Centralization

Over the past decade I have witnessed two trends in clinical lab computing which I think are two sides of the same coin:
  • Lab process automation is in decline
  • IT is being centralized, becoming more consolidated and less Lab-specific
 By "Lab process automation" I mean identifying repetitive tasks performed by humans and transferring those tasks to a computer or computers.

By centralized, I mean that the IT which serves the lab is now generally the same IT which serves all other parts of the organization.

I can see the appeal, especially to bean counters, of centralization: more control by execs, economy of scale, etc. But most of the IT groups I encounter are really all about office automation:
  • email
  • shared files
  • shared printers
  • remote access
These are all great if you are running a typical office, which is how everything seems to look from most C Suites.

Alas, the clinical lab is far closer in basic structure to a manufacturing plant than to a law office. Typical corporate IT is not good at process automation:
  • receiving orders
  • receiving specimens (raw material)
  • matching up specimens and orders
  • doing the assays (processing raw material)
  • serving up results (delivering finished product)
At the bench, email support and file-sharing are not very helpful; powerful and speedy instrument interfaces, audit support and throughput analysis are all much more helpful.
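By "throughput analysis" I mean something as plain as the following sketch: a turn-around-time report over a hypothetical CSV extract from the LIS (the file name, column names and timestamp format are all assumptions):

import csv
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"  # the timestamp format is an assumption

def turnaround_minutes(row):
    """Minutes from specimen receipt to verified result for one accession."""
    received = datetime.strptime(row["received"], FMT)
    verified = datetime.strptime(row["verified"], FMT)
    return (verified - received).total_seconds() / 60.0

with open("results_extract.csv") as f:  # hypothetical LIS extract
    times = sorted(turnaround_minutes(row) for row in csv.DictReader(f))

print("median TAT:  %.1f minutes" % times[len(times) // 2])
print("95th pctile: %.1f minutes" % times[int(len(times) * 0.95)])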

But centralized IT is not only oriented away from this kind of business process automation, it is actively punished for it: why are you spending time and money on lab-specific projects? If you get email to work on your boss's boss's boss's iPhone, you are a hero. If you figure out how to alert bench techs to specimens which need a smear, you are spending too much time in the lab.

Worse, as corporate IT completes its transition into being mostly about vendor management, the idea of doing what the vendors cannot or will not do--plugging the gaps between off-the-shelf products, gaps which cause so much of the needless drudgery in lab work--becomes first unthinkable and then impossible.

Farewell, lab process automation: perhaps vendors will some day decide that interoperability is a goal, and then you will live again. But I am not betting on it.

Tuesday, June 23, 2015

Better Never Than Late

I often complain that the typical organizational IT infrastructure is too complicated and often not well-suited for the clinical lab. I am often asked to give an example, but so far my examples of complexity have been, themselves, overly complicated and apparently quite boring.

Well, all that changed recently: now I have a nifty example of what I mean.

One of our customers has a fabulous print spooling system: many pieces of hardware, many lines of code, all intended to ensure that your precious print job eventually emerges from a printer, no matter what issues may arise with your network. Best of all, you route all printing through it and it works automagically.

The fancy print job spooler is so smart, it can reroute print jobs from off-line printers to equivalent on-line printers. It is so smart it can hold onto a print job for you, "buffer" it, until an appropriate printer comes on-line.

Alas, neither of these features is a good fit for the clinical labs, at least for specimen labels. The ideal specimen label prints promptly and right beside the person who needs it. If the label cannot be printed, then the print job should disappear: the user will have had to use a hand-written label or some other downtime alternative. Printing the label later is, at best, annoying. At worst, printing the label later is confusing and leads to mis-identified specimens. For this job, better never than late.
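The policy the lab actually wants is easy to state in code. A sketch of "better never than late" (the two-minute tolerance is an assumption; a real one would be tuned per draw station):

import time

class LabelQueue:
    """A queue that silently drops label jobs which have outlived their usefulness."""

    def __init__(self, max_age_seconds=120):
        self.max_age = max_age_seconds
        self._jobs = []  # list of (enqueue_time, label_text)

    def submit(self, label_text):
        self._jobs.append((time.monotonic(), label_text))

    def drain(self):
        """Return only the fresh jobs; stale ones vanish, as they should."""
        now = time.monotonic()
        fresh = [label for t, label in self._jobs if now - t <= self.max_age]
        self._jobs = []
        return fresh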

With effort, our client disabled the roaming print job feature, so the labels (almost) always print where they are needed. But the buffer cannot be turned off--it is the whole point of the spooler, after all--and so after downtime, the now-unwanted labels suddenly come pumping out of the printers, and if the draw station happens to be busy, the old labels mingle with the current labels and opportunities for serious errors abound.

Print spoolers are nifty. They serve office workers well. They are a standard part of today's smart IT infrastructure. But they don't serve the clinical lab in any real sense. The clinical lab is not a typical office environment: don't treat it like one.

Thursday, June 18, 2015

Rapid Failure Is Not Success

What makes software a success? Here is my list:
  1. It works: it does what at least one audience really wants done
  2. It exists: it was completed and deployed
  3. It lives: it can be updated or ported
  4. It pays: if it is for-profit, it does not lose money
I would claim that lots of software, by my definition, is failing. But failing on such a rapid timescale that people either do not notice or do not care.

In the dim and distant past, when I was in college, studying CS meant actually studying computer science. We used various computer programming languages, but we were expected to master the concepts and techniques, not merely the semantics of a given language or development environment. We were supposed to be flexible about "implementation details." We even called them "details" just to emphasize their relative importance.

As someone who has tested the programmer job waters regularly for over three decades now, I can tell you that being a software expert who is not inflexibly married to a particular method or environment is an out-of-date notion: it has been years since anyone wanted to know what I can do or what I have done. Now it is all about the "how". Now everyone wants to know if I am "a great fit" which seems to mean "exactly what we have already": a Java-head, a Rails guy, a Javascript geek, a C# maven, etc.

If you have issues that you have been unable to resolve, you don't need "a great fit" and you don't need more of the same: you need something different. You need to consider something new: new talents, new tools, new tenets. But even when a group has hit the wall and is stuck, I see fear of the new and desperate clinging to the old: our installed base! Our existing code! Think of the code!

When did this "there is only one way to do it!" philosophy become not just acceptable, but the norm? I paddle my canoe on the water and I drive my car on the highway and I do not view that as unnecessary overhead. Yes, I have to be familiar with two different kinds of vehicle, ostensibly for the same purpose (moving me around). But that is not a problem which needs fixing: I won't be putting wheels on my canoe any time soon.

I shake my head in wonderment at this backsliding, this devolving professionalism, this grunt-ification of our industry. Why are we headed toward being McDonald's when we started out as fine dining? Can you imagine the technical debt that this mindlessness is piling up all around the world?

(Technical debt is my current favourite buzzword. I like the Wikipedia definition:

Technical debt (also known as design debt or code debt) is a recent metaphor referring to the eventual consequences of any system design, software architecture or software development within a codebase.)

I used to wonder how the Johnny One Note model worked: don't you eventually have to pay the Piper? Don't you hit a hard limit on whatever single tool you have blessed? Don't you fail to meet your goals so obviously that your failure cannot be explained away with the torrent of jargon-filled gibberish that has become the hallmark of programmers' communications?

But now I have seen examples of how this obvious failure is avoided: business and development cycles have become so rapid that we can use every failure as a reason to move to the next development model or technology: we can leave the technical debt behind by walking out of our old house, defaulting on our mortgage, and buying a new house. Better yet, we can hope that we are at another company entirely when the technical debt comes due.

Welcome to McProgrammer's! May I take your order? Just as long as whatever you order is on our very limited menu.

Saturday, June 6, 2015

Big Data and Lab

Big data is a hot topic right now--insert rant about new names for old ideas here--and that wave is finally breaking on the clinical lab shores. So the time is right for my "Big Data and Lab" manifesto. (Or at least an observation born of decades of experience.)

Big data has two jobs: archiving and analyzing. Both involve data on computers. There, I claim, the similarity ends. While it is tempting to try to kill both of these birds with a single stone, I assert that this is a terrible idea.

Specifically, I find that in order to archive data effectively, I need a free-form, attribute rich environment which accommodates evolving lab practice without losing the old or failing to capture the new. But in order to analyze effectively, I need a rigid, highly optimized and targeted environment, preferably with only the attributes I am analyzing and ideally with those attributes expressed in an easily selected and compared way.

In other words, I find that any environment rich enough to be a long-term archive is unwieldy for analysis and any environment optimized for analysis is very bad at holding all possible attributes.

Specifically, I have seen what happens to inflexible environments when a new LIS comes in, or a second new LIS, and the programmers struggle to fit new data into an old data model which was itself a reworking of an older data model. It ain't pretty--or easy to process or fast to process. I have also seen what happens when people, especially people without vast institutional knowledge, try to report on a flexible format with three different kinds of data in it. They get answers, but those answers are often missing entire classes of results. "They used to code that HOW?" is a conversation I have had far too many times.

Yes, I am aware of Mongo & co, and of the rise (and stalling) of XML databases, and of the many environments which claim to be able to do both. I have seen them all, I have tried them all and I have not changed my views.

So I use a two-pronged approach: acquire and store the data in as free-form a manner as possible--structured text, such as raw HL7 or XML, is great at this--and then extract what I need for any given analysis into a traditional (usually relational) database, on which standard reporting tools work well and work quickly.
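Here is a minimal sketch of that two-pronged pipeline, assuming an archive of raw HL7 ORU messages separated by blank lines (the file layout and field positions are assumptions; real OBX handling is hairier):

import sqlite3

def extract_obx(message):
    """Yield (test_code, value) pairs from the OBX segments of one raw message."""
    for segment in message.replace("\n", "\r").split("\r"):
        fields = segment.split("|")
        if fields[0] == "OBX" and len(fields) > 5:
            yield fields[3].split("^")[0], fields[5]

db = sqlite3.connect("analysis.db")
db.execute("CREATE TABLE IF NOT EXISTS result (test_code TEXT, value TEXT)")

with open("archive/raw_oru.hl7") as f:       # the free-form archive side
    for message in f.read().split("\n\n"):   # blank-line-separated, by assumption
        db.executemany("INSERT INTO result VALUES (?, ?)", extract_obx(message))
db.commit()  # now the rigid, fast side: plain SQL over plain columns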

The biggest clinical lab-specific issue I find is the rise of complex results, results which are actually chunks of text with a clinical impression in them. For such results, we ended up cheating: tagging the results with keywords for later reporting and asking the clinicians to create simple codes to summarize the text.

I am eager to be proved wrong, because the two-pronged approach is kind of a pain in the neck. But so far, I have seen no magic bullet that stands the test of time: either the reportable format is too rigid to hold evolving lab data or the flexible format is too slow to actually handle years and years of data.

If you feel that you have a better idea, I would love to hear it so leave a comment.

Saturday, February 28, 2015

Too Many Hats

The clinical lab has not been immune to the budget pressure and management thrashing around that has been endemic in American businesses for the past decade--rapid reorganization, loss of headcount, etc.

Working in the clinical lab space, we have seen jobs eliminated even as millions of dollars are reallocated to information technology: a new LIS or a new LIMS or even a new HIS with an LIS add-on.


For the most part, this new tech has not delivered on its ambitious promises of greater productivity so we have seen Lab forced to handle more and more complex orders with fewer people. Not surprising, then, that we also see an apparent decline in professionalism and procedure.

This puzzled me for a long time: why do the Labs with whom we interact seem to be getting worse and worse at their jobs? If you, as Lab management, are going to have too few people, at least keep the good ones!

Things have declined so much that recently we had to revise our project management planning to include doing lots of what we consider to be the client's work for them: basic validation, vendor relations, systems integration, etc.

Why are we being forced to do so much of this? (If we don't do it, it goes undone, sometimes with serious consequences which are supposedly our fault.) Why are clients who are trying to save money happy to pay us to do this work that once they happily did for themselves?


I do not think that the answer is that Lab people are losing capability. Instead, I think that the answer is this: too few people leads to wearing too many hats. Wearing too many hats leads to doing jobs with too little attention, and for which one has too little aptitude or training.

So I think that the terrible Sys Admin I recently had to suffer with is actually an overburdened but excellent bench tech.

Similarly, that incompetent IT manager from earlier this year is a solid tech supervisor trying to do IT management as an afterthought to supervising a lab.

I bet that the hopeless systems integrator whose incompetence forced me to rewrite some code three times is more correctly termed a lab director whose PhD in genetics did not cover systems integration or software specification or data flow analysis.

Lab is more than down-sized: it has been wrong-sized and it is overly invested in IT. At least that is how it looks from my IT keyboard. So I will try not to shoot the messenger--or the over-taxed Lab person doing someone else's job. After all, I would make a pretty bad bench tech or supervisor or director: God willing, no one will ever ask me to try.

Thursday, January 8, 2015

Too Many Options = Project Problems

I am finishing up my first instrument interface in quite a while. It did not go as smoothly as it should have and the culprit was complexity, aided by the distraction of options.

Instrument Interfacing Then and Now
In days of yore, automated analyzers could be coaxed to spew ASTM messages out of a simple serial port; you could capture that ASTM, do what little processing was needed, and then send the data to the LIS, the lab's computer system.

Then came network-capable instruments and HL7, which was an improvement in many ways but much more complicated. (More complexity: sometimes there was ASTM over network and rarely HL7 over serial.)

The LIMS concept took off: a kind of middleman between the instruments and the LIS. The lines became blurry: you could verify on the instrument, on the LIMS or in the LIS. Orders came from the HIS via either the LIS or the LIMS.

With each wave of change, I shrugged and did whatever was needed. Sometimes it was easier; sometimes it was harder. Often things improved for users. There was certainly more to debug when things went wrong.


New Instrument Interface
In theory, after a few years of progress, this instrument interface should have been easier than previous efforts, because automated analyzers are so much smarter than they used to be. Better yet, I controlled the LIS end in this case because I wrote it and I maintain it.

And it was easier, each of the three times I wrote it. Overall, the interfacing was not really any easier than in days gone by.

Why did we have to write the interface three times?
Version 1: Real-time Instrument to LIS
We went through the pain of network configuration (not least punching a hole in at least one firewall) to connect the instrument to the LIS. I wrote the simple HL7 message processor to load the data as it came off the instrument. The HL7 was a little odd, but that was not a big deal. Hurray! Mission accomplished.

Version 2: Real-time LIMS to LIS
Then the users decided that they wanted to use the LIMS that they already had, because order and data entry was better on the LIMS than on the instrument. So they connected their instrument to the LIMS and we connected the LIMS to our LIS and I rewrote the HL7 handling to accommodate the variant HL7 produced by the LIMS. Hurray! again.

(Then some data arrived in the instrument HL7 dialect. Whoops! The users were experimenting with not using the LIMS. We didn't mind, did we? Then more data arrived in the LIMS dialect: yes, they were going to use the LIMS. Almost certainly.)

Version 3: Batch LIMS to LIS
Whoops. The users decided that they wanted the data entry on the LIMS, but to do verification on our LIS, which implied to them that the results would be batched on the LIS. So I rewrote the network layer to cache the HL7 messages until the users felt that the results constituted a "batch". Hurray, except that now we are over budget and they have changed their deadline to allow the lab to change their procedures.
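The version 3 network-layer rewrite boiled down to something like this sketch (the shape of the thing, not the production code):

class BatchCache:
    """Hold incoming HL7 result messages until the users declare a batch."""

    def __init__(self):
        self._pending = []

    def receive(self, hl7_message):
        # Called by the network layer as each LIMS message arrives.
        self._pending.append(hl7_message)

    def release_batch(self):
        # Called when the users decide the pending results constitute "a batch".
        batch, self._pending = self._pending, []
        return batch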




Conclusion
More options do not always make projects smoother or better--or even make consumers happy (see this Fast Company article for details).

As the Lab IT landscape becomes more cluttered, I find that projects are harder to define, longer to implement and more expensive to execute. I would like to say that the rise of the giant Commercial Off The Shelf (COTS) software is fixing this problem, but my experience is that COTS is making it worse.


So the next time your IT folks say "it isn't that simple," press them for details, but be prepared to believe them.

Friday, July 11, 2014

The Most Effective Guy In The Room

I am not a fan of the "smartest guy in the room" meme. I feel that it misses the point. In a business setting, all I care about is effectiveness. Most business problems are resolved by insight, experience, social skills, hard work and luck. Smartness is not usually much of a factor.

The point of being in such a room is to fix an issue, address a lack or form a plan of action to achieve something. Who is going to get us there, in a reasonable amount of time with a maximum of team buy-in? That is who I want to identify. That is who I want to run the meeting.

Since I am a consultant and since I have been at this for 30 years, I have run into a wide range of styles and manners and approaches in meetings. There is more than one way to be effective, but I have worked with only a handful of people I thought were terrific runners of meetings. What they have in common is the ability to get at the core issue, to elicit suggestions and solutions, to distribute the necessary work and to keep everyone on task. I don't know how smart they were; it didn't really come up. I was too busy being focused on the job at hand.

As a technology guy, I am usually in meetings about technology, and usually with people who are not technologists. So what is required is the ability to lay out technology issues in an accessible way, to facilitate group decisions about technology and to follow up in a way that non-technology people can relate to.

Phooey on smart. Who is the most effective person in the room?

Monday, June 30, 2014

App Retirement Party

Today marks the beginning of retirement for one of my apps--this one, in fact. As its Viking funeral ends and it sinks beneath the surface of the inky black waters of oblivion, I take a few minutes to ponder its useful life and its lessons.

Given its original mission, it was an astounding success: it allowed Outreach to dramatically grow its volume while greatly lowering the processing burden on the hospital lab.

Its secondary goal of paving the way for software in the draw station was also achieved, since rolling out its replacement was a fraction of the pain of introducing software to them in the first place.

Its tertiary goal of providing hard numbers with which to manage the collection operation seemed to be a glorious success, but since the organization did not bother to replace this functionality when they retired this app, perhaps it was not as obviously valuable as I thought?

As an exercise in customization to support operating procedures, it also seemed to be a great success, but many of the most popular and most effective customizations are not being re-implemented, so either their value was lower than it seemed to me or their cost in the new environment is too high. The whole "this is better, because it cost more" movement baffles me, and perhaps this is part of that.

The customizations which people most complain about losing are these:
  • the ability to automatically cancel ordered tests, with an explanation, if those tests are not going to be done anyway per Lab policy
  • the ability to automatically replace an order for an obsolete test with an order for its replacement, with documentation, if an equivalent replacement has been defined
  • the ability to handle clinical trials and research draws, to ensure anonymity and proper billing
Rest in peace, app: you did all that we asked and more.

Godspeed, former users: may your manual work-arounds be as painless as possible and your upgrades lead you, eventually, to the level of efficiency you once enjoyed.

Thursday, June 26, 2014

Still No Silver Bullet

I recently got re-acquainted with Ruby-on-Rails and this made me think of software building tools in general, which reminded me that I like to rant about how badly we creators of software classify, describe and choose our tools when confronted by a new task. This is that rant.

We do a horrible job of assessing our options before we start, explaining our choices to our colleagues (and bosses and clients), and reacting to feedback as the project progresses.

Once upon a time, shortly after I got out of college, someone wrote an excellent paper on a timely topic: software programmer productivity. The year was 1986, the author was Fred Brooks and the paper was "No Silver Bullet — Essence and Accidents of Software Engineering".

(For the CliffsNotes version, turn to Wikipedia: http://en.wikipedia.org/wiki/No_Silver_Bullet. The paper is worth a read on its own, but who has time?)

The summary in the Wikipedia page is very good, and I will be stealing from it liberally in some future rant about how badly managers tend to manage programmers and how badly programmers support their management. But you should read it now, before the rant really gets going.

This is my banal insight: so little of the fundamentals of creating software has changed in the decades since 1986 that sometimes I want to cry. So much has changed about the incidentals that sometimes I want to cry. After 30+ years in the business, I was hoping for more improvement, not more novelty.

And boy, do we have novelty. We love novelty. We worship novelty. Take web UIs as an easy example: we have so many ways to make web pages, my head spins.
  • Want HTML with embedded code? Try PHP!
  • Want code with embedded HTML? Use Perl/CGI!
  • Want an abstract environment which separates HTML and code nearly completely? Ruby-on-Rails is for you.
  • How about an end-to-end integrated code & HTML environment? Microsoft Visual Studio is just the thing.
  • Want to try to side-step HTML and have some portability? Java, in theory, can do that.
  • Want to develop HTML with code woven into it? Why, Javascript was created just for this purpose.

I do not have a problem with more than one way to do something, especially if the ways have pros and cons. I am not an operating system bigot: I used VM/CMS for large batch jobs. I used VMS for file-oriented processing. I used Unix for distributed systems. I used MS-DOS for desktop apps. I used OS/2 and then Windows for snappier desktop apps. Each had its pros and cons and its appropriate uses.

The same was true for computer languages: I have used BASIC for matrix math, I have used C for file I/O, I have used Lisp for list processing, I have used PL/1 because IBM required that I use it.

Then somehow this idea of appropriateness faded away from computing: desktop=Windows, server=Unix. Then server=Windows Server for some, Unix for others. C++ or VB as the one-and-only programming language. Then other contenders for the one-and-only programming language.

We all understand that hammers are good for driving nails and screwdrivers are good for driving screws. It is clear that screws are better connectors in some situations and nails in others. Who would hire a carpenter who only used one or the other?

But we hire "Drupal Shops" and "C/Windows Shops." We hire "Unix guys" or "Windows guys." We pretend that there is a single best tool for all jobs--and then argue pointlessly about which tool that might be.

Consider this statement from the creator of the Ruby programming language: (full text here: http://en.wikipedia.org/wiki/Ruby_%28programming_language%29)

Matsumoto has said that Ruby is designed for programmer productivity and fun, following the principles of good user interface design.[33] At a Google Tech Talk in 2008 Matsumoto further stated, "I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of Ruby language."[34] He stresses that systems design needs to emphasize human, rather than computer, needs.[35]

A single tool to help every programmer in the world to be productive and happy? To me, that seems insane and it seems to reveal a worldview which I cannot support: the idea that there is a single best tool for all people for all problems in all environments. What hogwash.

I applaud the goal of creating good tools and making programmers in general more productive, but I reject the notion that this job can ever be done with a single stroke.

Wednesday, January 22, 2014

Automation/Domination

XKCD for Jan 20, 2014
This cartoon is amusing and represents, I am told, a common automation experience.

It ruffled my feathers a bit; automation is a big part of what we do for clinical labs and I know, I can prove, that we leave processes faster, cheaper and better when we are done with them.

This is, in part, because we go easy on the "rethinking" part. We like business process engineering: carefully defining and describing the working parts of a process as though that process were a machine. Through careful description comes understanding.

We fear business process re-engineering, the long and tedious process in which MBAs and frustrated worker bees compete to imagine a perfect world which software people are then to summon from the ether.

We understand that the pieces interact and interconnect. We also understand that predicting how it will all shake out is a very low-percentage business so instead of trying to fix the entire process, we go after low-hanging fruit and iterate.

Billing wants paper from which to work? Fine, give them cheap, simple reports and print them. Once they believe in the reports, they may be open to a web page instead of printing. Once they trust the web page, they may be willing to consider a data feed. Once they have the data feed, who needs all this damned paper?

"Please stop printing those reports, nobody looks at them any more" is how our successful automation gigs end. But we don't start with "you guys need to ditch the paper, this is the 21st century!"

The clinical lab was slow to embrace automated analyzers and autoverification, but embrace them they did. The greying of the workforce and the lack of new blood mean that it is time to take the automation behind the bench.

We know that some IT "automation" made data entry and other tasks slower and harder, but please don't tar us all with the same brush: there are plenty of tasks the computer can do faster and better, so you can concentrate on the other stuff.

Monday, January 20, 2014

Privacy Rules vs Care Delivery

In medical IT, we are often asked about HIPAA compliance, much in the way the Brothers Grimm probably asked little children about being in the woods: scare us into doing the "right" (ie legal liability lowering) thing.

When people say "HIPAA," generally I assume that they mean the HIPAA privacy rule specifically, which Wikipedia summarizes thusly (go here for the full text):


The HIPAA Privacy Rule regulates the use and disclosure of Protected Health Information (PHI) held by "covered entities" (generally, health care clearinghouses, employer sponsored health plans, health insurers, and medical service providers that engage in certain transactions.)[17] By regulation, the Department of Health and Human Services extended the HIPAA privacy rule to independent contractors of covered entities who fit within the definition of "business associates".[18] PHI is any information held by a covered entity which concerns health status, provision of health care, or payment for health care that can be linked to an individual.[19] This is interpreted rather broadly and includes any part of an individual's medical record or payment history. Covered entities must disclose PHI to the individual within 30 days upon request.[20] They also must disclose PHI when required to do so by law such as reporting suspected child abuse to state child welfare agencies.[21]

So is the primary goal to maintain privacy or to deliver effective healthcare? If you said "both, of course!" then I must respectfully say "balderdash!" I am well aware of the standard privacy-advocate claim that one can easily do both, at the same time, with no loss of effectiveness. It is my experience that not only do the two goals not co-exist easily, they actively work against each other in many instances.

In lab IT, this is most often the tension: delivering lab results quickly to whoever might need them--nurses, PAs, MDs, NPs--versus ensuring that every access is by a care giver, specifically a care giver who is part of this patient's team. Even if the IT system user is a nurse who was doing something else when she was asked by a code team member to look up something on behalf of a care giver who is currently not in a position to authenticate themselves.

When I ask privacy advocates how to balance these concerns the most common response is the claim that there is no problem: if IT does its job, then all required data will always be disclosed to the correct parties, but not the incorrect parties, in a timely manner. As someone who actually deploys systems in the real world, I find this answer supremely unhelpful.

When I ask security professionals how to balance these concerns, they ask me to restate my question as a risk:benefit statement, at which point they will help me figure out how much security to combat which risk. But when I respond that the risk is that security will interfere with the delivery of healthcare, I am referred to the standard menu of risks from which I may pick:
  • leaking information to blackmailers
  • leaking information to underwriters (insurers)
  • leaking information to the public

This company has a nice way to frame a conversation with a CISO, assuming that the organization is not a health care provider. You can find that conversation starter here: http://www.ey.com/GL/en/Services/Advisory/Cyber-security---Steps-you-should-take-now?utm_source=Outbrain&utm_medium=TextLink&utm_content=steps_ceo_ciso_Outbrain&utm_campaign=GISS_OLA_nov_dec_2013

But working in medical IT, I feel that I need a solution that takes into account some other considerations:
  • NOT disclosing information may harm someone, so I do not want to use solutions which assume that all disclosure is bad
  • disclosing information to unauthorized health care providers is often covered by other legal means, eg medical licensing, so isn't that "breach" of rather low significance?
  • the information does not belong to the parent organization in the first place, so taking steps to protect it must include ways to share it on demand
If anyone knows of a privacy policy implementable for an actual lab information system, please let me know. I would love to stop trying to meet privacy rules in an environment where failure to disclose in a timely manner could kill someone.

Wednesday, January 15, 2014

Ew, Gross: Why Good Engineering is Sometimes Bad

I like to think of myself as a software architect and not a software engineer.

Engineers are often relieved by this, because they feel that I am not rigorous enough to be an engineer. Architects are often dismayed by this, because I don't wear pleated pants or style my hair.

The practical difference is that I feel comfortable in the creative, design end of things, which means that I often get called in when the engineers have failed.

Sadly, I have come to understand that much of the time, this failure is willful: while I may or may not have vision unmatched by most engineers, I certainly have a willingness to change dirty design diapers. The failure of the engineers, if you listen closely, is often really the sound of a young teenager saying "Ew, gross!"

A function for which I am paid, but which I do not particularly enjoy, is Engineer-to-Management translation. "Why do my IT people say X?" I hear. I try to be diplomatic (for me), but more and more often the honest answer is "The obvious solution strikes them as distasteful, so they don't want to do it."

My favourite example of this was a programmer who embraced object-orientation to an absurd degree: he re-implemented basic math on top of the native facilities. His code defined ten numeroid objects, '0', '1', '2', '3', '4', '5', '6', '7', '8' and '9.' Of course he then had to define "addition," "subtraction," "division" and "multiplication" for these objects--what fun! His program gave the correct answer, but took 24 hours to run. Running with native integer support, the same program took a little under 8 hours. The programmer's response? "You resorted to an orthogonal formalism." That is the most learned-sounding "Ew, gross!" I have ever heard.
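A toy reconstruction of the idea, in Python rather than his language, just to show the shape of the sin:

class Numeroid:
    """A digit as an object, with arithmetic rebuilt on top of the hardware's."""

    def __init__(self, glyph):
        self.glyph = glyph  # one of "0" through "9"

    def __add__(self, other):
        # Laboriously re-deriving what the CPU already does in one instruction.
        return Numeroid(str(int(self.glyph) + int(other.glyph)))

print((Numeroid("3") + Numeroid("4")).glyph)  # "7", eventually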

Other examples of holding my nose in the service of providing value include providing users with a dumb ASCII terminal long after such things had become uncool and yucky, or providing fax support despite the outdated nature of faxing.

Engineering is rule-based and rewards rule-following and so attracts rule-loving personalities. They tend to find rules comforting in addition to finding rules useful and productive.

The flip side is that, as a group, engineers are bad at value judgements. They are not inclined to break the rules and they certainly are not comfortable with solutions based on rule-breaking.

Alas, the real world--especially the clinical lab, where we are trying to get it right every time--sometimes does not co-operate and requires us to pick the best of a bad lot. Even worse, sometimes we need to pick any of a bad lot and get on with giving the users what they need and getting the job done.

Worst of all are the cases where the beautiful engineering is invisible to the users. All they see is that version 2 sucks worse than version 1, again. And again, engineering wants a cookie because version 2 has beautiful engineering and none of the rule-bending messiness that made version 1 so useful.


Sometimes engineering needs to get over itself and do what must be done. That is why I don't mind being scoffed at by hardcore engineers: they may be better at math, but I am better at recognizing when a rule is getting in the way of a better solution. I know when to say "that is a good rule, that is a useful rule, that rule will keep you from making a certain kind of mistake, but now is the time to break that precious rule."

Friday, January 3, 2014

Data-Driven Virtuous Cycles

To paraphrase the Six Sigma religious tenet, "you can only manage what you can measure." While I think that this is somewhat over-simplified, there is certainly truth to it.

Specifically, I run into issues which can only be resolved with a solid metered context. Often, while debugging these issues, I build "scaffolding" which then lives on as the metering to drive a continuous monitoring process.

Some issues have many factors, related in murky and maddening ways, and the only way to untangle the knot is to measure, find and fix something, and then return to step one until your measurements tell you that you are done.

Current Example
We are in the process of restoring functionality lost when a highly tuned system was replaced by something manifestly worse--but cooler. One of the data elements lost was who collected the specimen. This turns out to be critical for many management functions.

The first reaction to our bug report was "nonsense! the new system is fabulous!".

The second reaction was "ok, looking at your data, we see that this is happening, but we have come up with a labor-intensive workaround and we will simply command people to follow the new procedure."

The third reaction was "ok, we see that some areas are not complying but we are going to scold them--and stop recording this data, because we don't need it anymore."

Needless to say, we are still collecting the data and still helping them police their spotty compliance. Someday, the meters will tell us that all is well and we can go back to relying on this data for our high value reports.

The Bad Old Days
This situation is sadly similar to our work with scanned requisition forms. When we deployed our draw station solution, we became part of the scanned req infrastructure. As the newest member of the team, we were immediately blamed for any and all missing reqs. In self-defence, I created an audit to trace scanned req activity, comparing expected with actual. We immediately made a number of interesting discoveries:
  1. I had some bugs, which I fixed and verified as fixed
  2. Some users were not really on board with scanned reqs so we started to nag them
  3. Some of the orders for which we were blamed did not come through the draw station; the Front Bench decided to use our software to ensure compliance
  4. Some of the scanners were in need of service
  5. The placement of the bar codes on the page matters more than one would hope
With feedback and monitoring, the situation has improved dramatically and our req watchdog technology is actually still in service even as the LIS and draw station solution for which it was created are out of service and about to be retired, respectively.
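The heart of that req watchdog is nothing fancy: an expected-versus-actual comparison, sketched here with invented accession numbers:

def audit_reqs(expected_ids, scanned_ids):
    """Compare the reqs we expected to see against what actually got scanned."""
    expected, scanned = set(expected_ids), set(scanned_ids)
    return {
        "missing":    sorted(expected - scanned),  # ordered, never scanned
        "unexpected": sorted(scanned - expected),  # scanned, never ordered
    }

print(audit_reqs(["A001", "A002", "A003"], ["A001", "A003", "B999"]))
# {'missing': ['A002'], 'unexpected': ['B999']}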

Tube Tracking
I think that our tube tracking experience can also be seen as measurement leading to clarity and control, so I am including it.

Conclusion
Measure, manage, repeat. Even when all is well, don't stop auditing and reviewing.

Wednesday, January 1, 2014

XML vs HL7

It is FAQ Wednesday, when I take a FAQ off the pile and address it.

Today's frequently asked question is "why do so many systems use HL7 instead of XML?"

This is a good question with many possible answers, but this is my executive summary: XML is easy for humans to read and HL7 is easy for computers to process.

Medical IT is often short on power and long on functionality, so it is natural to avoid expensive-to-process formats and to embrace easy-to-process ones, even at the cost of human legibility. In my experience, the people who wonder at the lack of XML are not career medical IT professionals.

XML is a markup language, a structured tagged-text format which descends directly from SGML. It was intended as a platform-independent document storage format, but has become a kind of universal data exchange format.

HL7 is a line-oriented, record and field-based text format which is rather reminiscent of serial line-oriented message formats of yore, such as ASTM which was already familiar to clinical lab people from instrument interfaces.

XML makes more-or-less self-documenting "trees" which can be displayed natively by most browsers, or "visualized" with a little javascript magic: http://www.w3schools.com/xml/xml_to_html.asp There are lots of tools for working with XML and storing it.

In theory, XML is fault-intolerant: XML processing is supposed to halt at the first error encountered. This is not very robust, but in theory there should be no errors, because you can write a formal document type definition (DTD) which will allow people to make sure that they are sending and receiving data in exactly the way that you expect. If the XML document is made using a DTD and parsed with the same DTD, what could go wrong? And whoever created the data used a validator, such as http://www.w3schools.com/xml/xml_validator.asp, on it before releasing it, right?

(In practice, I do not see very much strict adherence to document type definitions.)


HL7 makes nice, simple messages which can be easily processed by almost any programming language. I have written HL7 message processors in C, Perl, PHP, and BASIC.
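As a sketch of just how little machinery is needed (escape sequences and error handling omitted):

def parse_hl7(message):
    """Split an HL7 v2 message into segments, fields, and components."""
    segments = []
    for line in message.replace("\n", "\r").split("\r"):
        if line.strip():
            segments.append([field.split("^") for field in line.split("|")])
    return segments

# Each segment is a list of fields; each field a list of components.
# For the sample message below, parse_hl7(sample)[0][0][0] == "MSH".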

So how do these formats look side by side? Consider the following two samples:

HL7 Lab Result:
MSH|^~\&|LCS|LCA|LIS|TEST9999|199807311532||ORU^R01|3629|P|2.2
PID|2|2161348462|20809880170|1614614|20809880170^TESTPAT||19760924|M|||^^^^00000-0000|||||||86427531^^^03|SSN# HERE
ORC|NW|8642753100012^LIS|20809880170^LCS||||||19980727000000|||HAVILAND
OBR|1|8642753100012^LIS|20809880170^LCS|008342^UPPER RESPIRATORY CULTURE^L|||19980727175800||||||SS#634748641 CH14885 SRC:THROA SRC:PENI|19980727000000||||||20809880170||19980730041800||BN|F

OBX|1|ST|008342^UPPER RESPIRATORY CULTURE^L||FINALREPORT|||||N|F|||19980729160500|BN
ORC|NW|8642753100012^LIS|20809880170^LCS||||||19980727000000|||HAVILAND
OBR|2|8642753100012^LIS|20809880170^LCS|997602^.^L|||19980727175800||||G|||19980727000000||||||20809880170||19980730041800|||F|997602|||008342

OBX|2|CE|997231^RESULT 1^L||M415|||||N|F|||19980729160500|BN
NTE|1|L|MORAXELLA (BRANHAMELLA) CATARRHALIS
NTE|2|L| HEAVY GROWTH
NTE|3|L| BETA LACTAMASE POSITIVE
OBX|3|CE|997232^RESULT 2^L||MR105|||||N|F|||19980729160500|BN
NTE|1|L|ROUTINE RESPIRATORY FLORA


(from http://www.corepointhealth.com/resource-center/hl7-resources/hl7-oru-message)

XML Lab Result:
<element name="lab-test-results">
        <complexType>
            <annotation>
                <documentation>
                    <summary>
                        A series of lab test results.
                    </summary>
                </documentation>
            </annotation>
            <sequence>
                <element name="when" type="d:approx-date-time" minOccurs="0">
                    <annotation>
                        <documentation>
                            <summary>
                                The date and time of the results.
                            </summary>
                        </documentation>
                    </annotation>
                </element>
                <element name="lab-group" type="lab:lab-test-results-group-type" maxOccurs="unbounded">
                    <annotation>
                        <documentation>
                            <summary>
                                    A set of lab results.
                            </summary>
                        </documentation>
                    </annotation>
                </element>
                <element name="ordered-by" type="t:Organization" minOccurs="0">
                    <annotation>
                        <documentation>
                            <summary>
                                    The person or organization that ordered the lab tests.
                            </summary>
                        </documentation>
                    </annotation>
                </element>
            </sequence>
        </complexType>
    </element>

(from http://social.msdn.microsoft.com/Forums/en-US/5003cf00-de7f-41ec-93a9-c04b14e41837/xml-schema-of-lab-test-results)

Wednesday, December 25, 2013

Cross-organization Patient Identification

A colleague asked me what I thought of this:

http://www.healthcareitnews.com/news/himss-hhs-join-forces-patient-id

"To improve the quality and safety of patient care, we must develop a nationwide strategy to match the right patient to the right record every time," said Lisa Gallagher, HIMSS vice president of technology solutions, in a statement.

The innovator in residence, she said, "will create a framework for innovative technology and policy solutions to help provide consistent matching of patient health records and patient identification."

I had two reactions, one after the other:
  1. That would be awesome (a nationwide strategy to match the right patient to the right record every time).
  2. Good luck balancing privacy and accuracy.
I have been dealing with this issue for just about 29.5 years. That is a loooooong time. I have seen lots of ideas come and go. Alas, I have no great solution, but I do have a firm grasp on the potential issues (a small illustration follows the list):
  • Very similar demographics: (two easily confused patients)
    • identical twin boys named John and James, for instance (yes, people do that)
    • father and son, same name, unlucky birth dates such as 11/1/61 and 1/16/11. It happens, and MANY clerks are so pleased to spot the "typo"
    • cousins born on the same day or with unlucky birth dates with the same name
    • mother & daughter have same name until marriage and updating the daughter obscures the mother, making the mother look like the maiden name version of the daughter
  • Very dissimilar demographics: (one patient looks like two)
    • maiden name to married name: add a birth date correction and all bets are off
    • legal name change, sometimes to deliberately leave behind the past--prison term, bad marriage, etc
    • heavy use of a nickname: "Steve Jones" finally decides to go by "Steven Jones" because his dad, "Steven Jones," just died. Yikes.
  •  Privacy nut / identity theft: the patient deliberately gives false demographics or those of someone else.
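As a small illustration of why naive matching fails, consider keying on last name plus date of birth (all names, record numbers and dates below are invented):

patients = [
    {"mrn": "111", "name": "JOHN SMITH",   "dob": "1976-09-24"},  # twin one
    {"mrn": "222", "name": "JAMES SMITH",  "dob": "1976-09-24"},  # twin two
    {"mrn": "333", "name": "STEVEN JONES", "dob": "1961-01-11"},  # father
    {"mrn": "444", "name": "STEVEN JONES", "dob": "2011-01-16"},  # son
]

def match_key(p):
    # Last name + date of birth: a common, and commonly wrong, heuristic.
    return (p["name"].split()[-1], p["dob"])

seen = {}
for p in patients:
    seen.setdefault(match_key(p), []).append(p["mrn"])

print({k: v for k, v in seen.items() if len(v) > 1})
# {('SMITH', '1976-09-24'): ['111', '222']} -- the twins collide;
# key on full name instead and the father/son pair collides.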
I cannot imagine, in the absence of a national identity card, how a private American effort could even span different organizations within a given state, let alone across state boundaries.

Insurance companies could help, given their efforts to get bills paid across institutions, but I cannot see why they would and I can see why they wouldn't.

Man, I hope that I am wrong about this.