A blog about real-world solutions to common clinical lab IT issues by Brendan Hemingway.

Tuesday, June 23, 2015

I often complain that the typical organizational IT infrastructure is too complicated and often not well-suited for the clinical lab. I am often asked to give an example, but so far my examples of complexity were, themselves, overly complicated and apparently quite boring.
Well, all that changed recently: now I have a nifty example of what I mean.
One of our customers has a fabulous print spooling system: many pieces of hardware, many lines of code, all intended to ensure that your precious print job eventually emerges from a printer, no matter what issues may arise with your network. Best of all, you route all printing through it and it works automagically.
The fancy print job spooler is so smart, it can reroute print jobs from off-line printers to equivalent on-line printers. It is so smart it can hold onto a print job for you, "buffer" it, until an appropriate printer comes on-line.
Alas, neither of these features is a good fit for the clinical lab, at least for specimen labels. The ideal specimen label prints promptly and right beside the person who needs it. If the label cannot be printed, then the print job should disappear: the user will have had to use a hand-written label or some other downtime alternative. Printing the label later is, at best, annoying. At worst, printing the label later is confusing and leads to mis-identified specimens. For this job, better never than late.
With effort, our client disabled the roaming print job feature, so the labels (almost) always print where they are needed. But the buffer cannot be turned off--it is the whole point of the spooler, after all--and so after downtime, the now-unwanted labels suddenly come pumping out of the printers. If the draw station happens to be busy, the old labels mingle with the current labels and opportunities for serious errors abound.
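To make this concrete, here is a minimal sketch of what "better never than late" label printing could look like. It assumes a label printer that accepts raw jobs over TCP port 9100 (the common convention for Zebra-style label printers); the host name and the ZPL payload are invented for illustration, not taken from any real deployment.

    import socket

    def print_label_or_drop(host: str, zpl: str, timeout: float = 2.0) -> bool:
        """Send a raw label job straight to the printer, with no spooling.

        If the printer is unreachable or slow, the job is simply discarded;
        the caller falls back to a handwritten downtime label. Better never
        than late.
        """
        try:
            with socket.create_connection((host, 9100), timeout=timeout) as conn:
                conn.sendall(zpl.encode("ascii"))
            return True
        except OSError:
            return False  # no retry, no buffer: the stale label never appears

    # Hypothetical draw-station usage:
    label = "^XA^FO50,50^ADN,36,20^FDDOE, JANE  #12345^FS^XZ"  # illustrative ZPL
    if not print_label_or_drop("drawstation-printer.example.org", label):
        print("Printer unavailable: use a handwritten downtime label.")

The design choice is the whole point: there is deliberately no queue anywhere, so a label either prints now, where it is needed, or never.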
Print spoolers are nifty. They serve office workers well. They are a standard part of today's smart IT infrastructure. But they don't serve the clinical lab in any real sense. The clinical lab is not a typical office environment: don't treat it like one.
Thursday, June 18, 2015
Rapid Failure Is Not Success
What makes software a success? Here is my list:
- It works: it does what at least one audience really wants done
- It exists: it was completed and deployed
- It lives: it can be updated or ported
- It pays: if it is for-profit, it does not lose money
In the dim and distant past, when I was in college, studying CS meant actually studying computer science. We used various computer programming languages, but we were expected to master the concepts and techniques, not merely the semantics of a given language or development environment. We were supposed to be flexible about "implementation details." We even called them "details" just to emphasize their relative unimportance.
As someone who has tested the programmer job waters regularly for over three decades now, I can tell you that being a software expert who is not inflexibly married to a particular method or environment is an out-of-date notion: it has been years since anyone wanted to know what I can do or what I have done. Now it is all about the "how". Now everyone wants to know if I am "a great fit", which seems to mean "exactly what we have already": a Java-head, a Rails guy, a JavaScript geek, a C# maven, etc.
If you have issues that you have been unable to resolve, you don't need "a great fit" and you don't need more of the same: you need something different. You need to consider something new: new talents, new tools, new tenets. But even when a group has hit the wall and is stuck, I see fear of the new and a desperate clinging to the old: our installed base! Our existing code! Think of the code!
When did this "there is only one way to do it!" philosophy become not just acceptable, but the norm? I paddle my canoe on the water and I drive my car on the highway and I do not view that as unnecessary overhead. Yes, I have to be familiar with two different kinds of vehicle, ostensibly for the same purpose (moving me around). But that is not a problem which needs fixing: I won't be putting wheels on my canoe any time soon.
I shake my head in wonderment at this backsliding, this devolving professionalism, this grunt-ification of our industry. Why are we headed toward being McDonald's when we started out as fine dining? Can you imagine the technical debt that this mindlessness is piling up all around the world?
(Technical debt is my current favourite buzzword. I like the Wikipedia definition: "Technical debt (also known as design debt or code debt) is a recent metaphor referring to the eventual consequences of any system design, software architecture or software development within a codebase.")
I used to wonder how the Johnny One Note model worked: don't you eventually have to pay the Piper? Don't you hit a hard limit on whatever single tool you have blessed? Don't you fail to meet your goals so obviously that your failure cannot be explained away with the torrent of jargon-filled gibberish that has become the hallmark of programmers' communications?
But now I have seen examples of how this obvious failure is avoided: business and development cycles have become so rapid that we can use every failure as a reason to move to the next development model or technology: we can leave the technical debt behind by walking out of our old house, defaulting on our mortgage, and buying a new house. Better yet, we can hope that we are at another company entirely when the technical debt comes due.
Welcome to McProgrammer's! May I take your order? Just as long as whatever you order is on our very limited menu.
Saturday, June 6, 2015
Big Data and Lab
Big data is a hot topic right now--insert rant about new names for old ideas here--and that wave is finally breaking on the clinical lab shores. So the time is right for my "Big Data and Lab" manifesto. (Or at least an observation born of decades of experience.)
Big data has two jobs: archiving and analyzing. Both involve data on computers. There, I claim, the similarity ends. While it is tempting to try to kill both of these birds with a single stone, I assert that this is a terrible idea.
Specifically, I find that in order to archive data effectively, I need a free-form, attribute-rich environment which accommodates evolving lab practice without losing the old or failing to capture the new. But in order to analyze effectively, I need a rigid, highly optimized and targeted environment, preferably with only the attributes I am analyzing and ideally with those attributes expressed in an easily selected and compared way.
In other words, I find that any environment rich enough to be a long-term archive is unwieldy for analysis and any environment optimized for analysis is very bad at holding all possible attributes.
Specifically, I have seen what happens to inflexible environments when a new LIS comes in, or a second new LIS, and the programmers are struggling to fit new data into an old data model which was a reworking of an older data model. It ain't pretty--or easy to process, or fast to process. I have also seen what happens when people, especially people without vast institutional knowledge, try to report on a flexible format with three different kinds of data in it. They get answers, but those answers are often missing entire classes of results. "They used to code that HOW?" is a conversation I have had far too many times.
Yes, I am aware of Mongo & co., and of the rise (and stalling) of XML databases, and of the many environments which claim to be able to do both. I have seen them all, I have tried them all and I have not changed my views.
So I use a two-pronged approach: acquire and store the data in as free-form a manner as possible--structured text such as raw HL7 or XML is great for this--and then extract what I need for any given analysis into a traditional (usually relational) database on which standard reporting tools work well and work quickly.
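To make that concrete, here is a minimal sketch of the two-pronged approach: raw HL7 messages are archived verbatim as text, and one analyte is extracted into a narrow relational table for reporting. The field positions follow the standard HL7 v2 OBX layout, but the tables, the analyte code and the sample segment are invented for illustration; real interfaces vary, so treat this as a sketch, not production parsing.

    import sqlite3

    db = sqlite3.connect("lab.db")
    db.executescript("""
        CREATE TABLE IF NOT EXISTS raw_hl7 (id INTEGER PRIMARY KEY, message TEXT);
        CREATE TABLE IF NOT EXISTS potassium_results (
            raw_id INTEGER, observed TEXT, value REAL, units TEXT
        );
    """)

    def archive_and_extract(message: str) -> None:
        """Archive the raw message verbatim, then extract one analyte
        into a narrow relational table for fast, standard reporting."""
        raw_id = db.execute(
            "INSERT INTO raw_hl7 (message) VALUES (?)", (message,)
        ).lastrowid
        for segment in message.split("\r"):
            fields = segment.split("|")
            # In an OBX segment, fields[3] is the observation identifier,
            # fields[5] the value, fields[6] the units, fields[14] the
            # observation time. Simplified: assumes numeric values.
            if fields[0] == "OBX" and len(fields) > 14 and fields[3].startswith("K^"):
                db.execute(
                    "INSERT INTO potassium_results VALUES (?, ?, ?, ?)",
                    (raw_id, fields[14], float(fields[5]), fields[6]),
                )
        db.commit()

    # Illustrative fragment (a real message has MSH, PID, OBR segments too):
    archive_and_extract("OBX|1|NM|K^Potassium||4.1|mmol/L|3.5-5.1|N|||F|||20150606101500")

The point is the separation: the archive table never loses anything, while the reporting table holds only what this particular analysis needs.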
The biggest clinical lab-specific issue I find is the rise of complex results, results which are actually chunks of text with a clinical impression in them. For such results, we ended up cheating: tagging the results with keywords for later reporting and asking the clinicians to create simple codes to summarize the text.
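The tagging half of that cheat can be as simple as a keyword scan over the narrative. The phrases and codes below are invented for illustration; in practice, the clinicians supplied them.

    # Hypothetical keyword-to-code map; the real codes came from the clinicians.
    KEYWORDS = {
        "no evidence of malignancy": "NEG",
        "atypical": "ATYP",
        "carcinoma": "MAL",
    }

    def tag_impression(text: str) -> list[str]:
        """Return summary codes for agreed keywords found in the narrative."""
        lowered = text.lower()
        return sorted({code for phrase, code in KEYWORDS.items() if phrase in lowered})

    print(tag_impression("Sections show atypical cells; no evidence of malignancy."))
    # -> ['ATYP', 'NEG']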
I am eager to be proved wrong, because the two-pronged approach is kind of a pain in the neck. But so far, I have seen no magic bullet that stands the test of time: either the reportable format is too rigid to hold evolving lab data or the flexible format is too slow to actually handle years and years of data.
If you feel that you have a better idea, I would love to hear it, so leave a comment.