Wednesday, December 25, 2013

Cross-organization Patient Identification

A colleague asked me what I thought of this:

http://www.healthcareitnews.com/news/himss-hhs-join-forces-patient-id

"To improve the quality and safety of patient care, we must develop a nationwide strategy to match the right patient to the right record every time," said Lisa Gallagher, HIMSS vice president of technology solutions, in a statement.

The innovator in residence, she said, "will create a framework for innovative technology and policy solutions to help provide consistent matching of patient health records and patient identification.”

I had two reactions, one after the other:
  1. That would be awesome (a nationwide strategy to match the right patient to the right record every time).
  2. Good luck balancing privacy and accuracy.
I have been dealing with this issue for just about 29.5 years. That is a loooooong time. I have seen lots of ideas come and go. Alas, I have no great solution, but I do have a firm grasp on potential issues:
  • Very similar demographics (two easily confused patients):
    • identical twin boys named John and James, for instance (yes, people do that)
    • father and son, same name, unlucky birth dates such as 11/1/61 and 1/16/11. It happens, and MANY clerks are so pleased to spot the "typo"
    • cousins with the same name, born on the same day or with unlucky birth dates
    • mother & daughter have the same name until marriage; updating the daughter's record obscures the mother, making the mother look like the maiden-name version of the daughter
  • Very dissimilar demographics (one patient looks like two):
    • maiden name to married name: add a birth date correction and all bets are off
    • legal name change, sometimes to deliberately leave behind the past--prison term, bad marriage, etc
    • heavy use of a nickname: "Steve Jones" finally decides to go by "Steven Jones" because his dad, "Steven Jones," just died. Yikes.
  • Privacy nut / identity theft: the patient deliberately gives false demographics or those of someone else.
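To make the confusion concrete, here is a minimal sketch of how a naive demographic comparison happily merges the twins from the list above. The fields, weights and threshold are all invented; no real master patient index is this simple.

    # Hypothetical demographic matcher -- fields, weights and threshold invented.
    def naive_match(a, b):
        """Score two patient records; a high score means "same patient"."""
        score = 0
        score += 2 if a["last"] == b["last"] else 0
        score += 1 if a["first"][0] == b["first"][0] else 0  # first initial only
        score += 2 if a["dob"] == b["dob"] else 0
        score += 1 if a["zip"] == b["zip"] else 0
        return score  # say, 5 or more means "same patient"

    twin_a = {"last": "Smith", "first": "John",  "dob": "2001-03-14", "zip": "02134"}
    twin_b = {"last": "Smith", "first": "James", "dob": "2001-03-14", "zip": "02134"}
    print(naive_match(twin_a, twin_b))  # 6 -- the twins merge into one patient

Tighten the weights and the failure simply moves to the "one patient looks like two" column, which is the whole dilemma.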
I cannot imagine, in the absence of a national identity card, how a private American effort could even span different organizations within a given state, let alone across state boundaries.

Insurance companies could help, given their efforts to get bills paid across institutions, but I cannot see why they would and I can see why they wouldn't.

Man, I hope that I am wrong about this.

Monday, December 23, 2013

A Case for IT Scaffolding (Building Tools)

This post is an unadorned rant. The topic is the lack of software scaffolding I see in my consulting practice.

By "software scaffolding" I mean tools specific to a particular task or project.

I see managers who are unable to appreciate the utility of such tools and who discourage their creation and use because building tools takes time and energy.

I see programmers reluctant to make debugging and validation easier with a tool because the tool is not on the project plan.

You can't really build a building of any size or complexity without a scaffold, and a large part of getting construction done well, on time and at or below budget is having that support system in the plan and on the ground. I assert that the same is true of many IT projects.

Furthermore, I assert that this is true of Lab IT projects in particular, where validation and ongoing audit are so important. I find that many of my bits of scaffolding live on to support making sure that the software keeps working, after I have confirmed that it worked in the first place.
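For the record, most of my scaffolding looks like this hypothetical before-and-after checker (the file names and key column are invented): minutes to write, and it keeps right on validating after go-live.

    # A throwaway comparison tool -- the kind of scaffolding I mean.
    # File names and the key column are hypothetical.
    import csv

    def load(path, key="accession"):
        """Index a result extract by accession number."""
        with open(path, newline="") as f:
            return {row[key]: row for row in csv.DictReader(f)}

    before = load("results_before.csv")
    after = load("results_after.csv")
    for acc in sorted(set(before) | set(after)):
        if acc not in after:
            print(f"{acc}: missing after migration")
        elif acc not in before:
            print(f"{acc}: appeared from nowhere")
        elif before[acc] != after[acc]:
            print(f"{acc}: values differ")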

I understand that some programmers go overboard: instead of creating good-enough tools, they create overly complicated and overly polished Swiss Army knives, which sometimes even get released because the super-tool ends up being the only way to do something.

But keeping this in check should be doable for a competent manager. Why then do I see so much resistance to building data-examination tools, dump reports and debuggers?

Friday, December 20, 2013

Physical Security Dimension to Cybersecurity

A friend who is interested in cybersecurity drew my attention to the following item:

https://www.schneier.com/blog/archives/2013/12/attacking_onlin.html

December 16, 2013

Attacking Online Poker Players

This story is about how at least two professional online poker players had their hotel rooms broken into and their computers infected with malware.
I agree with the conclusion:
So, what's the moral of the story? If you have a laptop that is used to move large amounts of money, take good care of it. Lock the keyboard when you step away. Put it in a safe when you're not around it, and encrypt the disk to prevent off-line access. Don't surf the web with it (use another laptop/device for that, they're relatively cheap). This advice is true whether you're a poker pro using a laptop for gaming or a business controller in a large company using the computer for wiring a large amount of funds.
The friend asked the following question: don't these same issues arise with medical records, eg the lab results I so often handle? Specifically, isn't physical security of personal devices a real issue?

The short answer is yes: God, yes! Yes. Yes.

The long answer is that I see two common issues:

Benign Neglect

In this scenario, doctors and other professionals forget that their spouses and kids and kids' friends may end up borrowing computers, laptops, smart phones or tablets which are also used by clinicians to review sensitive medical information.

We all know that traces left by legitimate access of this information can often be found, either by accident or by intent, if you let other people fiddle with your device. We all know that we should take basic precautions:
  • Clean your browser cache.
  • Use passcodes and automatic inactivity locking and make the timeout period short.
  • Don't lend your devices if you can avoid it, and never lose physical control of them.
  • Be aware of how your data is backed up.
But we don't all follow these guidelines and we don't follow them all the time. And we should.

Every time I create a system which offers a confidential medical report as a PDF, I cringe. I warn the users that they are responsible for the PDF once it hits their browser. I do the best I can to expire and obscure the PDFs. But I know that the average clinical user neither knows nor cares about ghost images of protected health information (PHI) floating around on his or her devices.
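For what it is worth, one hedge is to make the links expire. A minimal sketch of the idea, with an invented secret, lifetime and report ID scheme -- not any client's actual implementation:

    # Signed, time-limited report links -- all names and values hypothetical.
    import hashlib, hmac, time

    SECRET = b"rotate-me-regularly"
    LIFETIME = 600  # seconds the link stays valid

    def make_token(report_id):
        expires = str(int(time.time()) + LIFETIME)
        sig = hmac.new(SECRET, f"{report_id}:{expires}".encode(), hashlib.sha256).hexdigest()
        return f"{expires}:{sig}"

    def token_ok(report_id, token):
        expires, sig = token.split(":")
        if int(expires) < time.time():
            return False  # link has expired
        want = hmac.new(SECRET, f"{report_id}:{expires}".encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, want)

Of course, this protects the link, not the PDF that has already landed in a browser cache.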

Targeted Attacks

Alas, the following is true: major medical centers have celebrities as patients; celebrity PHI is very valuable; human beings take bribes. But that is a problem for HR. What about users who are targeted, especially in this day and age of personal devices at work? I know of such attacks in other domains, such as finance, but I do not know of any against my clinical users. But does that mean that it hasn't happened? Or that it won't?

I do my best to make sure that my smart phone never sees PHI and that my laptop is physically secure and regularly checked for malware. But that won't help my users. So what is our professional obligation here? How do we foster a greater awareness of the risks so that appropriate action can be taken?

There is no point in trying to scare clinicians into not using their shiny, powerful, useful toys. Instead, we need to figure out how to help them use those toys more safely.

Wednesday, December 18, 2013

Why Is Merging Test Catalogs So Hard?

It is FAQ Wednesday, when I take a FAQ off the pile and address it.

Today's frequently asked question is "why is merging test catalogs so hard?"

This question fits O'Flaherty's first law of applied technology: "It is usually not the technology that takes the time."

This question arises in a number of contexts:
  • Lab Manuals across organizations (eg hospital systems)
  • Lab Manuals across labs (eg main lab and satellite lab)
  • Ref lab interfaces (order from one catalog, process from another)
  • Historical data archives (showing results from a previous catalog)
The answer is always the same: assumptions and comparisons.

When considering a lab result, there is a huge amount of assumed context:
  1. the collection method sometimes matters, sometimes does not;
  2. the methodology sometimes matters, sometimes does not;
  3. the actual value sometimes matters, sometimes only the High / Normal / Low flag;
  4. having a basis of comparison to prior results sometimes matters and sometimes does not
Take Vitamin D levels for example: there are a couple of ways to measure it (methodology) which vary in exactly what they measure and how accurately they measure it. If you are a specialist, these differences may be very important. If you are a generalist, these differences may be meaningless to you. If you are trying to provide historical context by evaluating a series of these results, you almost certainly assume that the methodology was constant, whichever methodology was used.

The problem arises when catalog A only has "Vitamin D" and catalog B has "Vitamin D (the cool one)" and "Vitamin D (the usual one)". How do you work across the catalogs?

There is always renaming to enforce consistency, but that is not straightforward and it is often politically charged: someone wins and someone loses. Furthermore, changing the name of a test is problematic: should the clinician infer that the assay has changed somehow? If so, how?

There is adding new tests to enforce consistency, but that is also a qualified success: what if the new test is essentially the same as the old one? How does the clinician know that, and how does the software which displays the results know that these two "different" tests are the same lab assay?
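Either way, you end up maintaining a crosswalk that carries judgments neither catalog records on its own. A hypothetical sketch, with invented codes:

    # Invented catalog codes -- the point is the metadata the mapping must carry.
    CROSSWALK = {
        ("A", "VITD"): {
            "maps_to": [("B", "VITD-IA"), ("B", "VITD-LCMS")],
            "same_assay": False,      # different methodologies
            "trend_together": False,  # do not graph them as one series
        },
    }

    def candidates(catalog, code):
        """Which tests in the other catalog might correspond to this one?"""
        return CROSSWALK.get((catalog, code), {}).get("maps_to", [])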

Worse, making these decisions requires many different skill sets: clinical, lab and data modelling. So why is merging test catalogs so hard? Because there is so much to it and so much of it is not immediately apparent. Hello, Titanic, meet Iceberg.

Wednesday, December 11, 2013

O'Flaherty's First Law of Applied Technology

FAQ Wednesday again. In lieu of answering a specific question, today I would like to expound upon something someone else said. Not only is this utterance interesting and apropos, it has also already been thought of, saving me some time and effort.

"It isn't the technology that takes the time." Douglas J. O'Flaherty (https://twitter.com/DouglasOF)

Doug has an annoying habit of reducing my intricate and entertaining tales of technology deployment woe to this oft-repeated and simple phrase.

(It is oft-repeated because I complain a lot about getting real-world solutions out the door, on to the lab floor and working as well as they should. But that is another story.)

What I understand him to mean is this: information technology is dynamic and evolving, but it is relatively straightforward and relatively well-understood. But in order to apply technology to the real-world business processes that would benefit from that technology, a number of generally poorly-managed, poorly-understood and complex tasks have to be accomplished first:
  • get input from stakeholders, who are often diverse and out of phase with each other
  • build a consensus from that input
  • turn that consensus into specifications which can be acted upon
  • turn that consensus into action items which really are accomplished
Only at that point does the programming start. When mired in the trenches of implementation, I tend to forget all the pain that went into clearing the path for the implementation and instead I tend to focus on the process failures which leave me with bad specs.

As a technology guy who is constantly frustrated by project scheduling, I guess I do need pretty frequent reminding: it isn't the technology that takes the time.

Thursday, December 5, 2013

Medical Data Center Reliability

Well, it isn't Wednesday, but this tweet got me thinking about a FAQ:


The frequently asked question is this: why aren't Medical Data Centers...better? Given that their mission is so important, not just to an organization but to people's lives, why aren't hospitals and labs better at keeping their systems up and running?

The short answer is that they are trying too hard with an odd set of rules and a strange budgeting process.

Trying Too Hard
In my experience, small medical organizations can't afford good technology or good people, but large ones fail as well for the opposite reason: they are trying too hard.

Redundancy is the obvious way to get good up-time (reliability, uninterrupted service, however you like to characterize "running well"). But there are two different kinds of redundancy....

System Level Redundancy which means having more than one solution to a given problem: two independent, synchronized, preferably different systems doing the same job. If System A has a problem, System B takes over.
Component Level Redundancy which means having more than one component in the system doing a particular job. So you only have System A, but it has lots of redundant disks, and storage and whatever else you think you need.
The computers aboard the now-defunct Space Shuttle were an excellent example of System Level Redundancy: five independent computers, all doing the same tasks at the same time. In order for computing to fail on the Shuttle, all five systems would have to die at once. For good measure, they kept tabs on each other, so you had confidence that they were working properly.

Storage Area Networks are an example of Component Level Redundancy: you tell your server that it has a disk attached, but the "disk" is actually a computer which is sending your data to separate disks.

System Level Redundancy is expensive (you need multiples of everything) and daunting (you have to maintain multiple systems and keep them in sync somehow). But it covers all kinds of failures if you do it correctly: power problems, water mains breaking, etc.

Component Level Redundancy is cheaper (you only have extras of the parts you need) and more inviting (the complexity is hidden). But it makes the infrastructure more complicated and it does not help if a non-redundant component fails, eg water floods your primary data center (you have an offsite backup data center, right?).
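The arithmetic behind the trade-off is worth a moment. Illustrative numbers only:

    # Back-of-envelope availability arithmetic -- numbers are illustrative.
    def redundant(p, n):  # any one of n independent systems suffices
        return 1 - (1 - p) ** n

    def chained(p, n):    # all n components must work at once
        return p ** n

    print(redundant(0.99, 2))  # 0.9999 -- two so-so systems beat one good one
    print(chained(0.999, 10))  # ~0.990 -- ten 99.9% layers quietly cost you a nine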

Worse, if you choose Component Level Redundancy over and over, you end up with a mind-bogglingly complex infrastructure which punishes attempts to change it and works against attempts to debug it. Is the problem in the virtual machine, or the actual machine, or the network, or the SAN, or the app? Who knows? Let the Blame Roulette begin!

So in trying to be super-reliable, frugal and conservative, medical data centers often end up being flaky, expensive and conservative. At least they're conservative.

But I find that it often isn't their fault: senior management has taken charge of the ballooning tech budget and is trying to make simple buying choices, which result in complicated situations. And complexity rarely leads to clarity or performance or long-term reliability. Just ask Google or Amazon.

UPDATE Dec-30-2013
This article by a DevOps guy gives some interesting perspective on the same phenomenon:

http://redmonk.com/dberkholz/2013/05/03/devops-and-cloud-a-view-from-outside-the-bay-area-bubble/

Wednesday, December 4, 2013

Why Can't Medical IT Systems Share Data Better?

It is FAQ Wednesday, when I try to get through the most plaintive of cries I encounter in the course of my workday.

Today's question is "why can't medical information systems share data better?"

This is a good question: why in this day and age of Webly interconnectivity are lab results and diagnostic images and calendar appointments and other data not easily accessible?

Specifically, let us consider why the prototypical Hospital Information System (HIS) cannot share data better (more effectively) with the prototypical Laboratory Information System (LIS).

There are two basic ways to share data between computer systems. I will call these two methods "linking" and "transferring." Let us call the system with the data the "server" and the system which wishes to display data from the server "the client."

Linking is pretty easy and pretty useful: The client opens a window, sends a query to the server and the server replies with the data.

Transferring is more involved: the client gets data from the server, parses that data and loads that data into the client's own database, where that data can be found and used by the client's own software.

Linking is easier and real-time, but does not lend itself to a consistent look-and-feel. Transferring is harder and often done in batch mode, but it does lend itself to consistency and the data is available "natively" on the client, ie in more ways.
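In code, the difference looks roughly like this. The URL, file format and schema are invented for illustration:

    # Linking vs. transferring -- endpoints and schema are hypothetical.
    import json, sqlite3, urllib.request

    def link_display(patient_id):
        """Linking: query the server live, show whatever comes back."""
        url = f"https://lis.example.org/results?pt={patient_id}"
        with urllib.request.urlopen(url) as r:
            return json.load(r)  # rendered as-is, in the server's own terms

    def transfer_load(batch_path, db):
        """Transferring: parse a batch file, load it into our own tables."""
        with open(batch_path) as f:
            rows = [json.loads(line) for line in f]
        db.executemany(
            "INSERT INTO results (pt, test, value) VALUES (:pt, :test, :value)",
            rows)
        db.commit()  # now the client's own software can find and reuse the data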

Since this is healthcare data, we have issues of privacy, authentication, display rules and all the requirements of HIPAA. This makes using Web technology, which is inherently insecure and open, a bit tricky. It also makes the transfer option more attractive: authentication across information systems is a pain and access logging across information systems, especially ones from different vendors, downright difficult.

However, transferring is more overhead to implement, more overhead to maintain and requires actually settling questions of data model mismatch.

We try to support linking when we can, but we often end up having to support someone else's authentication model and someone else's data model and someone else's design aesthetic. That's rather a lot of flexibility, which is why most bigger vendors won't go that route.

So here we are: most of the time the only data shared between systems is data everyone agrees must be shared. Data sharing tends to lag behind other developments and other requirements. It isn't better because it isn't easy to do.

Tuesday, December 3, 2013

"Lab Man" Is Too Vague

We recently were asked to fix a specific instance of a general Lab Man issue. Again.

The specific instance was providing phlebotomists with collection information.

The general issue is trying to make a "Lab Man" that is all things to all people.

I say that "Lab Man" is too vague because according to my observation, it refers to any of these options:

  • A Test Menu, from which clinicians select clinical tests to be done. This is all about ordering, so it includes helping a clinician figure out which tests to order, in addition to simply what code to use.
  • A Collection Manual, which gives the phlebotomist or nurse or lab tech information about how to collect, store and transport the specimen.
  • A Processing Manual, which gives a bench tech guidance about how to prepare and process the specimen.
  • A Result Interpretation Reference, to which clinicians can turn for help in understanding the clinical significance of a given result.
Sadly, many of our clients feel that this model "makes things too complicated" and that a single format will somehow handle all these functions. "Why do you make things complicated by having different formats and different documents?" we often hear from the Lab Man Committee. But to the Medical Director, "Lab Man" means one thing, to the Lab Manager "Lab Man" means another thing and to the Outreach Coordinator "Lab Man" means yet another thing.

Strangely, few members of the committee dispute the underlying reality: clinicians don't want to wade through collection instructions when trying to figure out the significance of a result. Drawers don't want to browse a catalogue; they just want to go straight to the pre-selected assay on the order. Bench techs only rarely care about the clinical research supporting the use of a particular assay for a particular medical history; instead, they want the preparation and processing instructions as quickly as possible.

One of our clients has a fab new HIS (Epic) which they want to use as much as possible outside the lab. They did not want to complicate things, so their Lab Man is a Test Menu/Result Interpretation Reference hybrid. This hybrid format works for doctors. But it does not work for drawers (nurses and PAs and phlebotomists) who are also "outside the lab." No Collection Manual for them.

Collecting without a Collection Manual is not working out very well, so I proposed the following: let the web site choose the format based on the incoming user ID. Or based on the IP (we can tell internal from external and we can tell which external IPs are actually draw stations.)

Context is everything, and these days context-sensitivity is often pretty easy to provide, so why not take advantage?
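A minimal sketch of the proposal -- the roles, IP range and format names are placeholders, not the client's actual configuration:

    # Choose the Lab Man format from context -- all values hypothetical.
    import ipaddress

    DRAW_STATION_NETS = [ipaddress.ip_network("203.0.113.0/28")]

    def lab_man_format(user_role, client_ip):
        if user_role == "clinician":
            return "test_menu_plus_interpretation"
        if user_role in ("nurse", "pa", "phlebotomist"):
            return "collection_manual"
        if any(ipaddress.ip_address(client_ip) in net for net in DRAW_STATION_NETS):
            return "collection_manual"  # unknown user at a known draw station
        return "test_menu"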

Update Thursday, December 5, 2013
Client has decided to go with a hard-to-find second link on their HIS, in the hope that (a) drawers can find the Collection Manual link and (b) clinicians cannot find the Collection Manual link. Good luck to them, say I.

Wednesday, November 27, 2013

Why Are Lab Interfaces So Hard?

Taking a frequently asked question off the pile for today's post: "why are Lab Interfaces so hard?" Variants include "why do they take so long to get up and running?" and "why are they so expensive?" and "why don't they work better?"

Here is the most common form of clinical lab interface I encounter:

  • Secure network connection: TCP/IP over a Cisco VPN
  • Bi-directional: orders from client to lab, results from lab to client
    • client has a TCP/IP client for sending orders to the lab
    • lab has a TCP/IP server for receiving orders from the client
    • client has a TCP/IP server for receiving results
    • lab has a TCP/IP client for sending results
  • HL7: perfect it is not, but it is common and well-understood
  • Assay ID translation: someone has to map the client's test codes to the lab's, or vice-versa
None of this is difficult, technology-wise. I have written such interfaces from scratch in about 8 working hours. Provided that the specs are good, the setup and programming is more tedious than challenging.
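For the curious, the receiving half is little more than the sketch below: an MLLP listener framing HL7 messages off a TCP socket. The port is arbitrary and the ACK is a toy, not well-formed HL7.

    # Minimal MLLP listener -- port arbitrary, ACK deliberately a toy.
    import socket

    VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"  # MLLP framing bytes

    srv = socket.create_server(("0.0.0.0", 6661))
    conn, _ = srv.accept()
    buf = b""
    while True:
        data = conn.recv(4096)
        if not data:
            break
        buf += data
        while VT in buf and FS + CR in buf:  # one complete framed message
            start = buf.index(VT) + 1
            end = buf.index(FS + CR)
            message = buf[start:end].decode("ascii", "replace")
            buf = buf[end + 2:]
            print(message.split("\r")[0])  # the MSH segment
            conn.sendall(VT + b"toy ACK goes here" + FS + CR)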

(The specs include the mapping of test codes across menus, by the way. Why that mapping often gets left as an exercise to the programmer I will never understand.)

It is becoming increasingly common to set a target of 24 hours for bringing up such an interface: if you have a working interface framework and a reasonably seasoned team, this should be quite doable.

So-called Direct Interfaces are getting talked about more and more. I have yet to see one deployed in the real world in my consulting practice, but I like the idea. In theory this would cut down the network configuration time, but I suspect that the added x509 certificate wrangling will often more than make up for it.

So why does it take days or weeks or even months to get an interface up and running? In my experience, the root causes are two: bureaucracy and misaligned agendas.

The bureaucracy means that I have spent the majority of my time on projects like this walking from cube to cube, paperwork in hand, looking for permission to connect to another organization. Or hours on the phone, chasing up-to-date, accurate specs and precise explanations of terms like "alternate patient ID".

The misaligned agendas make pulling together in the same direction difficult:

  1. The client's lab wants that labor-saving, time-saving, money-saving interface.
  2. The client IT department does not want to expose a chink in their armour, which chink will likely never get closed and which chink will be used by an interface which must run 24/7 and always be up.
So what IT hears is "buy a Cisco VPN license; pray that our Cisco VPN appliance has a spare connection; punch a hole in the firewall, but don't botch it leaving us open to hackers; assume responsibility for monitoring the health of that VPN tunnel."

To many IT ears, this sounds like a lot of work, some real money and an on-going obligation with little upside and huge potential downsides. So the network folks become what my firm calls "stoppable"--any hitch in the plan brings them to a halt. Who knows? If they delay long enough, maybe the project will go away....

So why is it so hard, take so long, often have never-resolved issues? Because navigating bureaucracy is exhausting and overcoming organizational inertia takes a long time.

Tuesday, November 26, 2013

NPI: No Office = Bad For Lab


Yesterday a sadly familiar issue landed on my desk: how to make NPIs work for lab results.

NPIs were brought to you by CMS, so they have to be good.


To quote from The National Provider Identifier (NPI): What You Need to Know

How Many NPIs Can a Sole Proprietor Have?
A sole proprietor is eligible for only one NPI, just like any other individual. For example, if a physician is a sole proprietor, the physician is eligible for only one NPI (the individual’s NPI), regardless of the number of different office locations the physician may have, whether the sole proprietorship has employees, and whether the Internal Revenue Service (IRS) has issued an EIN to the sole proprietorship so the employees’ W-2 forms can reflect the EIN instead of the sole proprietorship’s Taxpayer Identification Number (which is the sole proprietor’s SSN).
This is logical, but it is very bad for clinical laboratories. Physicians practicing medicine do not fit the model of citizens paying taxes: if I have seven offices, which I visit in rotation, I want the labs for patients who live in Shoretown to go to my office in Shoretown: sending them to any of my other offices creates work and runs the risk of me not having them where and when I need them. But sending them to all my offices creates a blizzard of faxes and a mountain of filing.

Unless, of course, my practice has invested in and correctly configured a centralized, distributed Electronic Medical Record with a real-time, bidirectional Lab interface. Which, very likely, is not the case. At least not from what I see in the field. At least not yet.

So here I am, stuck in the real world, creating yet another mapping from internal LIS ordering ID (which is office-specific) to NPI (which is not office-specific but which is required by many legal agreements).
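The mapping itself is trivial; the annoyance is that it has to exist at all. A sketch with invented IDs (1234567893 is a made-up NPI):

    # Internal ordering IDs are office-specific; the NPI is not.
    ORDERING_ID_TO_NPI = {
        "DRJONES-SHORETOWN": "1234567893",
        "DRJONES-HILLSIDE":  "1234567893",  # same physician, same NPI
    }
    ORDERING_ID_TO_OFFICE = {
        "DRJONES-SHORETOWN": "12 Harbor Rd, Shoretown",
        "DRJONES-HILLSIDE":  "9 Summit Ave, Hillside",
    }

    def route_result(ordering_id):
        """The NPI satisfies the paperwork; the office gets the results."""
        return ORDERING_ID_TO_NPI[ordering_id], ORDERING_ID_TO_OFFICE[ordering_id]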

The good news is that NPI excels at figuring out which providers merit a fruit basket for all their business: I can analyze the ordering patterns by practice pretty easily by associating NPIs together into a practice. I just can't help you with where to send that fruit basket. Or those lab results.

Friday, November 22, 2013

Google Maps for Data Visualization

We are looking into ways to make draw data and ordering statistics more useful to our clients. Perhaps a map? Check out this excellent tutorial on how to get started: I did and was very impressed.

http://www.w3schools.com/googleAPI/default.asp

Our proof-of-concept project has the following goals:
  1. a map of the catchment area
    1. marker for client
    2. marker for each draw station
  2. pop-up for each marker
    1. show the absolute activity for that location
    2. show the percentage activity for that location
  3. do the same thing with ordering clinicians instead of draw stations.
We are eager to see if the coolness translates into actual usefulness.
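On the data side, the plan is to feed the page marker data along these lines -- the coordinates and counts below are invented:

    # Generate marker data for the map page to consume -- values invented.
    import json

    markers = [
        {"label": "Client HQ",      "lat": 42.3601, "lng": -71.0589, "draws": 0,    "pct": 0.0},
        {"label": "Draw Station 1", "lat": 42.3736, "lng": -71.1097, "draws": 1250, "pct": 62.5},
        {"label": "Draw Station 2", "lat": 42.3876, "lng": -71.0995, "draws": 750,  "pct": 37.5},
    ]
    with open("markers.json", "w") as f:
        json.dump(markers, f)  # the map page loads this and drops the pins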

Thursday, November 21, 2013

Direct Interfacing?

This looks like an interesting idea for our lab connectivity start up:

http://directproject.org/faq.php?key=faq

Random connections over the Internet, secured by x509 certificates, with the payload format unspecified. Assuming we control the certificates, we can be confident that our connections are secure from others and that they come from the right computers.

We would use HL7 to encode the payload, of course. But this gives us a validated and accepted model for transactions in a cloud-based environment. Interesting. I feel a proof-of-concept project coming, probably built on Amazon Web Services.
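The x509 side is pleasantly small in most modern stacks. A minimal server-side sketch, with hypothetical certificate paths:

    # Mutual TLS: both ends present certificates we control. Paths invented.
    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("ours.crt", "ours.key")        # prove who we are
    ctx.load_verify_locations("trusted_partners.pem")  # whom we will talk to
    ctx.verify_mode = ssl.CERT_REQUIRED                # no certificate, no connection

    with socket.create_server(("0.0.0.0", 10443)) as srv:
        with ctx.wrap_socket(srv.accept()[0], server_side=True) as conn:
            payload = conn.recv(65536)  # the HL7 payload, per the above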

This architecture also lets us circle back to this at some point down the road:
http://www.mehi.masstech.org/health-it/health-it-learning-center

Provider EHRs

When I consider this whitepaper from Athena Health, I am struck by the fact that Electronic Health Record (EHR) remains such a vague term with regard to the audience. I can't imagine talking about actual instances of EHRs, as opposed to the generic concept of EHR, without regard to the EHR user. I would say that lumping in solo practitioners, small practices and large practices is a bad idea, an example of the fallacy that "if we solve the most complex case, all other cases will work."

This is especially true for clinical lab orders: every EHR is happy to provide menus for the 20% of the catalogue which makes up 80% of the orders. But there seem to be no EHR vendors who want to support the most esoteric, harder-to-order labs, let alone properly store and display those complicated results.

Worse, the notion of "lifetime results" is also poorly supported; yes, it is true that some small number of years is as long as most lab results are relevant, but what about those tests whose value never goes away? Those genetic tests which we see repeated over and over again?

Someday I would like to see the following:
  • Acknowledgement that one size does not fit all
  • Full support for lab ordering and resulting
  • Full support for lab consults:
    • if I have a problem with my Amazon order, I can ask about it
    • if I don't understand a lab result, I have no easy way to ask

Tuesday, November 19, 2013

Epic Draws

One of our clients retired our custom draw station solution in favor of Epic, but quickly found that Epic could not provide the management reports we had provided, so we were asked to somehow recreate our report package in the new environment.

After some consideration, the solution we choose was this: an extractor of data from the LIS (SoftLab) to populate the database on which the reporting package was based. So the orders come through Epic, into SoftLab and then into our database.

Once we defined and refined the business rules which let us determine which orders were collected in draw stations and which were not, the rest of the puzzle fell into place.
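The rules boil down to something like this sketch -- the field names and location codes are invented stand-ins for what actually comes out of SoftLab:

    # Was this order collected at a draw station? Codes are invented stand-ins.
    DRAW_STATION_LOCATIONS = {"DS-MAIN", "DS-NORTH", "DS-SOUTH"}

    def collected_at_draw_station(order):
        # Trust the collecting location code first; fall back on the
        # collector's department when the location was not recorded.
        if order["collect_location"] in DRAW_STATION_LOCATIONS:
            return True
        return order.get("collector_dept") == "PHLEBOTOMY"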


Along the way, we uncovered some ordering issues--misconfigured menus in Epic, out of date test definitions in SoftLab, receiving procedures not properly adjusted for the new environment--which is the first step to fixing them.

We also learned more than we wanted to about how orders are stored in the LIS, but that will be useful down the road I'm sure.

Tuesday, March 19, 2013

Ordering Off The Menu

Today we start implementing our solution for a universal lab problem: ordering off the menu. In other words, supporting orders for tests which your lab can either do, or send out, but which are not common enough to merit a test definition and/or an entry in your Lab Manual.

(In the trenches, we know that the explosion of possible assays will likely always outstrip the ability of labs to define tests and to maintain their Lab Manual, but the "not common enough" party line is so much more palatable to management that we use it as a courtesy to our clients.)

A truly useful test definition has the following aspects:
  1. an LIS definition to facilitate processing
  2. documentation, ideally in the external Lab Man
    1. what the test is called
    2. how to order it
    3. why order it
    4. when it is done
  3. documentation, hopefully in an internal Lab Man
    1. collection instructions
    2. handling & preparation instructions
    3. processing instructions 
  4. a billing definition to facilitate billing
An uncommon or new test might have to do without item 1 above for a while, because LIS test definitions are often complicated to do, difficult to validate and scary to release.

More awkwardly, it is usually difficult to collaborate on test definitions in an LIS, although the ordering information may come from a Medical Director, the collection & handling instructions from the Front Bench and the processing instructions from the Lab Supervisor. Usually the LIS only provides good tools for the processing instructions, since those are near and dear to LIS developers' hearts.

But when the lab falls too far behind in documenting new or uncommon assays, they make work for themselves when orders for these tests arrive. And this work is at every stop along the way: Customer Service to answer questions about ordering, the Front Bench to deal with miscollected or mishandled specimens, the Processing Bench to try to do the assay on whatever was collected, etc.

So we are providing a technological stepping-stone: what we call a pre-LIS test definition. We created a system to do the following:
  • attach different names to the MISC code
    • these look like different tests to the users for ordering & processing
    • but these look like a MISC to the LIS for reporting
  • allow different users to enter different parts of the pre-LIS
    • include that information in the internal Lab Man
    • include that information in the external Lab Man
  • when the pre-LIS test definition is ready
    • automatically create an internal Lab task to build the test definition
    • automatically send information to Billing to request billing codes, etc
We feel that this is the best of both worlds: a rapid, easy-to-edit visible test definition that supports, rather than usurps, building LIS test definitions.
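For the curious, the heart of it is aliases layered over the MISC code. A sketch with invented names and fields:

    # Pre-LIS test definitions: aliases over MISC -- names and fields invented.
    PRE_LIS = {
        "MISC": [
            {"alias": "Serum Unobtainium, LC-MS/MS",
             "ordering":   {"owner": "Medical Director", "how": "order MISC, name the assay"},
             "collection": {"owner": "Front Bench", "tube": "red top"},
             "processing": {"owner": "Lab Supervisor", "notes": "spin, freeze, send out"},
             "ready_for_lis": False},  # True triggers the Lab task and the Billing request
        ],
    }

    def orderable_names():
        """What the users see: distinct tests. What the LIS sees: MISC."""
        return [d["alias"] for defs in PRE_LIS.values() for d in defs]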