HCLS/OntologyTaskForce/BIONTDSEDCM/UI

From W3C Wiki

COI Client Usage Model

Viewed from 10,000 feet, each clinical trial protocol describes a "target" patient type, instances of which caregivers are supposed to locate in their EMR system. Software for semantic search can often aid such match-making.

Assuming deployment requirements exist for a browser-based distributed web service, a good next design step in creating such software is to model the operator interactions used at client nodes to drive a working matching process.

Below is one such model. The toughest design goal was keeping it general with respect to details of the client's EMR system. This lets one web-based search engine successfully interoperate with many remote client sites.

While working through these design struggles, I noticed helpful definitions for two types of text fragments in a protocol (whose disparate styles of semi-structured text I assume will persist for some time):

  • (+) factors involve standard medical terms and codes, common in inclusion specs. They offer a relatively easy way to gather many tentative matches.
  • (-) factors involve quantitative constraints, drawn mostly from eligibility & exclusion specs. They are harder to manage across multiple EMR types.
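To make the distinction concrete, here is a minimal sketch of how the two factor types might be modeled as data. The class and field names (PlusFactor, MinusFactor, Target) are my own illustrations, not part of any proposed spec:

```python
from dataclasses import dataclass, field

@dataclass
class PlusFactor:
    """A (+) factor: a coded medical term used to gather tentative matches."""
    term: str            # e.g. "type 2 diabetes mellitus"
    code: str = ""       # e.g. a SNOMED CT or ICD code, if known

@dataclass
class MinusFactor:
    """A (-) factor: a quantitative constraint that can disqualify a match."""
    field_name: str      # e.g. "age"
    operator: str        # one of "<", "<=", ">", ">=", "=="
    value: float

@dataclass
class Target:
    """A 'target' patient model built from one protocol's factors."""
    protocol_id: str
    plus_factors: list = field(default_factory=list)
    minus_factors: list = field(default_factory=list)
```

A "target" then starts life holding only (+) factors (step 2 below) and accumulates (-) factors as the operator refines it.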

In the following system usage model, a Clinical Trials Coordinator (CTC) will seek local matches to the "target" described by one protocol. In practice, most of the search itself seems easy to automate once this "target" exists as a complex local EMR filter. Building it today takes a human operator, aided by system-guided dialogs that assist in resolving noticed ambiguities:

  1. the CTC will first select a new protocol of interest at (e.g.) http://clinicaltrials.gov
  2. enter or copy (+) factors into a textbox, submitted as a "target" patient model
    • answer questions on system-built web forms to refine "target" models
    • most try to clarify the terms to use in a semantic search of EMRs
    • once terms are defined, a high-recall search is made of selected corpora
  3. review the ranked results-list. Select good candidates for follow-up & drop others.
  4. optionally quit work here, at least temporarily, leaving the rest of the list unresolved.
  5. to filter other list elements, select or post a (-) factor as a new "target" constraint
    • answer questions on system-built web forms to refine "target" models
    • most try to clarify predicates that can disqualify an EMR match
    • each new constraint culls non-compliant EMRs from the results; the user loops back to step 3
  6. overnight, new EMRs may get added to pending results-list (they'll show highlighted when next displayed in the UI)
  7. an email digest or similar mechanism reports new matches for previously defined CTC targets. (This could work like Google Alerts, run on a daily schedule, perhaps.)
  8. as convenient, CTC selects any target for review & resumes work above at step 3
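The culling behavior in step 5 can be sketched in a few lines. Purely for illustration, this assumes that EMR search results arrive as simple dicts and that a (-) factor reduces to a field/operator/value triple:

```python
import operator as op

OPS = {"<": op.lt, "<=": op.le, ">": op.gt, ">=": op.ge, "==": op.eq}

def satisfies(record, constraint):
    """True unless the record's value violates the (-) constraint."""
    value = record.get(constraint["field"])
    if value is None:
        return True  # a design choice: missing data does not disqualify
    return OPS[constraint["op"]](value, constraint["value"])

def cull(results, constraint):
    """Step 5: drop non-compliant records from the pending results-list."""
    return [r for r in results if satisfies(r, constraint)]
```

For example, `cull(results, {"field": "age", "op": "<", "value": 65})` would keep only records whose recorded age is under 65, after which the user reviews the shortened list again (step 3). Whether missing data should disqualify a record is itself the kind of ambiguity the system-built dialogs would surface.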

The main open questions I still have concern step 5: how might it work in detail? An open library of predicate templates, which each CTC could select and tune to local EMRs, would help. Can they be invented as needed, or shared with users of other EMR systems? Can they be reused in other interoperability use cases involving EMRs? Where do they execute? In what language(s)?
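One possible shape for such a library, sketched under the assumption that templates are EMR-neutral pattern strings with named slots that each site binds to its local field names and thresholds (the template ids and slot names here are invented for illustration):

```python
from string import Template

# A shared, EMR-neutral library of constraint patterns.
SHARED_TEMPLATES = {
    "max_age":       Template("$age_field <= $limit"),
    "min_lab_value": Template("$lab_field >= $limit"),
}

def bind(template_id, **slots):
    """Tune a shared template to a local EMR, e.g. its column names."""
    return SHARED_TEMPLATES[template_id].substitute(slots)

# Example: a site whose EMR stores patient age in a column called "pt_age"
expr = bind("max_age", age_field="pt_age", limit=65)
```

A shared template library like this would let the same protocol constraint travel across sites while leaving the binding to local schemas (and the choice of execution language) in each site's hands.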

My simplest image is an EMR Wrapper, but other possibilities exist. Would EMR system vendors, for example, simply build such an API directly into their software?
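As a rough illustration of what an EMR Wrapper's surface might look like, here is one assumed interface; the method names and the toy in-memory backend are my own sketch, not a proposal:

```python
from abc import ABC, abstractmethod

class EMRWrapper(ABC):
    """A thin, uniform interface each site (or vendor) implements over its EMR."""

    @abstractmethod
    def search(self, terms):
        """High-recall search for records matching (+) factor terms."""

    @abstractmethod
    def evaluate(self, record_id, predicate):
        """Apply a (-) factor predicate to one record; True means it passes."""

class InMemoryEMR(EMRWrapper):
    """Toy backend over a list of dicts, for illustration only."""
    def __init__(self, records):
        self.records = records

    def search(self, terms):
        return [i for i, r in enumerate(self.records)
                if any(t in r.get("conditions", []) for t in terms)]

    def evaluate(self, record_id, predicate):
        return predicate(self.records[record_id])
```

Whether the wrapper is bolted on by each site or shipped by the vendor, the search engine would only ever see this narrow interface, which is what keeps the design general across EMR systems.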

Responses to these open design questions, and comments on other aspects of the above scenario, are requested on the list.