A Bit of Context

I am somewhat fascinated by the interface between Recognition and Reasoning, which we label Context. What does a “Context” look like? What information does it carry? We’re not sure of a general answer yet (or whether one exists), but the examples we’ve seen point to some common principles.

First, our previous article introduced a framework for AI systems that we’ve improved a little.

Regarding Context

Our ICD-10 medical document coding engine works with “human in the loop” Reasoning; that is, the Context is delivered to a human who makes the choices about the final output. We have a number of efforts under way in adjacent areas too. E/M coding and Clinical Documentation Improvement (CDI) applications, though, require text synthesis, not just text search, to create the output.[1] Synthesis is better suited to Reasoning than to Recognition, so these prototype applications all involve an interplay between a NoNLP™ Recognition model operating on the input text and a rules-based Reasoning model that creates the final output. We conclude that triggering Reasoning is one common characteristic of Context.
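To make that interplay concrete, here is a minimal sketch, with invented names and toy logic rather than our actual engine, of Recognition producing a Context whose arrival triggers a rules-based Reasoning step:

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    label: str     # e.g., a candidate ICD-10 code
    evidence: str  # the input-text snippet that supports the label

def recognize(document: str) -> list[ContextItem]:
    """Stand-in for a Recognition model scanning the input text."""
    items = []
    if "hypertension" in document.lower():
        items.append(ContextItem("I10", "essential (primary) hypertension"))
    return items

def reason(context: list[ContextItem]) -> str:
    """Stand-in for a rules-based Reasoning step that synthesizes output."""
    return "; ".join(f"{c.label} (evidence: {c.evidence})" for c in context)

def pipeline(document: str) -> str | None:
    context = recognize(document)
    # A non-empty Context is itself the trigger for Reasoning.
    return reason(context) if context else None
```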

Triggering Reasoned action is, in fact, a key element of a number of systems. We see it in some proposed applications where the output of Reasoning is a set of alerts rather than synthesized text. Dennis Mortensen of x.ai has indicated that the interface between the engine that parses an incoming email (Recognition) and the intelligence that schedules meetings (Reasoning) is “a list of intents.” That points to another important aspect of Context: it’s not the intent; it’s a list of intents. Context is multifaceted.
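As a hedged illustration (the intent names and scores below are ours, not x.ai’s actual schema), a multifaceted Context might be as simple as a list:

```python
# An invented "list of intents" Context; Reasoning sees every facet,
# not just the single highest-scoring one.
intents = [
    {"intent": "schedule_meeting", "confidence": 0.92},
    {"intent": "propose_time", "confidence": 0.71},
    {"intent": "add_participant", "confidence": 0.18},
]
```

Because Reasoning receives the whole list rather than a single winner, the weaker facets can still shape the outcome.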

In other realms, though, Context is closer to “situational awareness,” with Recognition doing the critical work of sensor fusion in addition to object recognition. Self-driving vehicles have an extensive Recognition function that maps the current physical environment and a separate Reasoning function that decides what to do.

We hypothesize, from analysis of the Uber fatality,[2] that the level of detail in Context may (or should) vary with what Reasoning must do. Reasoning for collision avoidance clearly needs less detail about what an obstruction is than about its relative position and motion; more detail on the former simply adds to the processing Reasoning must do. Biology has certainly evolved toward this optimization: you’ll freeze when you Recognize something out of the corner of your eye well before you realize it’s a tiger! We surmise that Context may be best conveyed at varying levels of detail, perhaps hierarchically.
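A sketch of what hierarchical Context might look like, with entirely hypothetical structure and numbers; the time-critical decision reads only the coarse level:

```python
# Hypothetical hierarchical Context for a driving scenario.
context = {
    "obstruction": {
        # Coarse level: all that collision avoidance needs.
        "present": True,
        "relative_position_m": (2.1, -0.4),
        "relative_velocity_mps": (-6.3, 0.0),
        # Finer level: consulted only when Reasoning has time for it.
        "detail": {"class": "pedestrian_with_bicycle", "confidence": 0.54},
    },
}

# The braking decision never touches "detail": freeze first,
# identify the tiger later.
brake = context["obstruction"]["present"]
```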

Further, we observe that Context need not be entirely certain: our NoNLP™ models give confidence levels for each label they identify rather than absolute predictions. Imprecision can actually prove beneficial in some cases. For example, in medical coding we might predict a rare ICD-10 code, albeit with low confidence, that cues Reasoning to consider something with a higher reimbursement value that otherwise would never have come up. Context may be imprecise, but we’re pretty sure it’s best when confidence or probability information is attached.
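A hedged illustration, with made-up codes and probabilities, of why attaching confidence beats discarding low-probability labels:

```python
# Invented candidate codes with attached confidences.
candidates = [
    ("I10", 0.91),    # common code, high confidence
    ("E11.9", 0.85),
    ("A39.0", 0.07),  # rare code, low confidence, but potentially valuable
]

# Reasoning (or the human in the loop) keeps everything above a low floor,
# so a rare-but-relevant code can still prompt consideration.
surfaced = [(code, p) for code, p in candidates if p >= 0.05]
```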


Conceptually, then, Context is the multi-parameter decision surface for Reasoning.[3] Context constrains Reasoning to reality, triggers Reasoning when the surface changes, and so on. In some cases the surface can be quite fuzzy, which can be compensated for by more general Reasoning that executes quickly.
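One way to read “triggers Reasoning when the surface changes,” sketched with invented names and thresholds:

```python
def surface_changed(prev: dict[str, float], new: dict[str, float],
                    eps: float = 0.1) -> bool:
    """True when any parameter of the decision surface moves more than eps."""
    keys = prev.keys() | new.keys()
    return any(abs(prev.get(k, 0.0) - new.get(k, 0.0)) > eps for k in keys)

prev_context = {"obstruction": 0.02, "lane_drift": 0.10}
new_context = {"obstruction": 0.75, "lane_drift": 0.12}

if surface_changed(prev_context, new_context):
    print("invoke Reasoning")  # otherwise keep acting on the last decision
```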

One could argue that the best Reasoning is able to take the broadest Context, even highly uncertain or improbable elements, and come out with a better solution. I leave it to far better mathematicians than I to prove interesting results on this idea.

[1] In ICD-10 coding applications, for each proposed code, we provide the snippets of the input text that most point to that code. It’s a kind of ex post text search, essentially Recognizing the right text. E/M coding and CDI applications require the creation of new text, a process better suited to Reasoning than Recognition.

[2] It seems that the unfortunate case of the Uber fatality was a failure in Reasoning: https://www.theinformation.com/articles/uber-finds-deadly-accident-likely-caused-by-software-set-to-ignore-objects-on-road?shared=56c9f0114b0bb781

[3] Often, Context is the result of a multi-label classification of the inputs (we know something about that here at Textician), but the problem type will dictate its optimal specifics.
