Retrieval, Refinement, Recognition, Reasoning
A new framework in AI architecture, with some practical conclusions
Analyzing Artificial Intelligence (AI) is difficult: many techniques and technologies are being applied to a broad variety of challenging problems. Starting with an architectural framework makes things much easier.
I was fortunate to go to the IDC Directions conference this year. (I go every year, to get back to my Big Iron roots.) One of the better talks I attended was AI Is Hard: What Does It Take to Make AI Work for You? from Dave Schubmehl. He presented this framework:
© 2018 IDC Inc. Used with permission.
This is the jumping-off point for my thinking, especially the separation of Learning and Reasoning.
I’ll note that biological systems nicely follow this paradigm. Sensing is followed by filtering, leveling, feature extraction, and other enhancements – efforts to improve the signal-to-noise ratio (S/N) – for the complex and noisy signals acquired by your eyes and ears. For instance, basic edge detection occurs right on your retina. The resulting representation is what is passed along. (Contrary to Dave, I would not call this knowledge representation – just Representation.)
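To make the retinal edge-detection idea concrete, here is a minimal sketch. It is only an illustration, not a model of actual retinal circuitry: a small Laplacian-style kernel (often used as a crude stand-in for center-surround receptive fields) is convolved over a toy image, and responses appear only where intensity changes – that is, at edges.

```python
import numpy as np

def detect_edges(image):
    """Crude edge detection: convolve a 3x3 Laplacian-style kernel
    over the image. Flat regions produce zero response; intensity
    boundaries produce nonzero response."""
    kernel = np.array([[ 0, -1,  0],
                       [-1,  4, -1],
                       [ 0, -1,  0]], dtype=float)
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
    return out

# Toy "image": flat dark left half, flat bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 1.0

edges = detect_edges(img)
# Responses are nonzero only near the boundary between the halves;
# the flat interiors map to zero.
```

The point of the sketch is the one made above: what gets passed along is not the raw signal but a representation in which the uninformative (flat) parts have been suppressed.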
The Representation is passed along to Learning. Often, Learning is viewed as “figuring out that a group of edges is a chair… or a car… or your hand” – that is, as recognition and naming. However, I don’t believe Biology cares about that. Rather, Biology cares about survival… about whether a group of edges is something that can be eaten or that might eat you. Thus, Learning is about tuning the system to make predictions of what’s there from novel data. That’s Recognition, but in a broader sense than object-naming.
We then move on to Reasoning. Lots of systems conflate the “predicting what’s there” stage and the “deciding what to do” (i.e., Reasoning) stage. We’ll come back to that later in this series, but first we need to define the difference. I found a very nice explanation of the two:
First, you must spend years crafting a [language interpretation] engine which is capable of understanding (in full) what is being said within your universe (not to mention all the myriad permutations of phrases to express similar ideas). Then comes an equally difficult step; you need to inject that “understanding” into some Reasoning engine, which can work with a set of goals and ultimately drive the conversation toward some definition of success. – Dennis Mortensen from x.ai
That really sums it up: there is a Recognition stage… which learns (i.e., is tuned) to make ever-more-accurate predictions of the current context over ever-more-novel inputs … and a goal-driven Reasoning stage that seeks to determine the actions that optimize outcomes in a given context. And note that while the predictions of context are based on the data at hand (endogenously), the goals of Reasoning are generally defined externally.
Example: In healthcare, Recognition answers the question, “Will this patient be diagnosed with severe sepsis tomorrow?” Reasoning answers the question, “What should we do for a patient given a likelihood of developing severe sepsis tomorrow?”
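The healthcare example can be sketched in code. Everything here is hypothetical – the features, thresholds, and actions are invented for illustration – but it shows the architectural point: Recognition is a tuned predictor of context, while Reasoning applies externally defined goals (a policy) to that prediction.

```python
def recognize(vitals):
    """Recognition: predict the likelihood of severe sepsis from the
    data at hand. A stand-in for a trained model; here, a toy score
    from two hypothetical features."""
    score = 0.0
    if vitals["heart_rate"] > 100:   # hypothetical risk feature
        score += 0.4
    if vitals["lactate"] > 2.0:      # hypothetical risk feature
        score += 0.4
    return min(score, 1.0)

def reason(sepsis_probability, policy):
    """Reasoning: decide what to do given the predicted context.
    The goals (risk thresholds and actions) live in `policy`,
    defined externally -- they are not learned from the data."""
    for threshold, action in sorted(policy, reverse=True):
        if sepsis_probability >= threshold:
            return action
    return "routine monitoring"

# Externally defined goals: (risk threshold, action) pairs.
policy = [(0.7, "begin sepsis protocol"),
          (0.4, "order blood cultures")]

patient = {"heart_rate": 112, "lactate": 2.4}
risk = recognize(patient)
action = reason(risk, policy)
```

Note that swapping the policy changes the action without retraining the predictor – the separation the framework argues for.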
Evolution apparently agrees with this separation, as biological systems have a similar dichotomy. See this chart from Peter Carruthers's 2008 book:
© 2018 Peter Carruthers. Used with permission.
Textician’s 4 R’s
That all leads us to a new architectural framework for AI. Textician’s framework (a work in progress) looks like this:
In future posts in this series, we’ll step through the framework in more detail, uncovering (hopefully) insight into optimizing AI system design and performance.
 IDC, AI Is Hard: What Does It Take to Make AI Work for You?, Doc #DR2018_T3_DS, Feb 2018.
 When I was at Caltech studying CNS (Computation and Neural Systems), one professor (I can’t remember whom) remarked that biological systems are not built for high performance. Rather, they are tuned to high performance.
 Proof of this assertion can be found in the question of whether the depth perception of stereoscopic vision is resolved before or after the object is recognized. As demonstrated emphatically by Random Dot Stereograms, it’s the former. Biology cares less about what an object is than whether it’s going to hit you in the head! (I used to sit next to Béla Julesz in a seminar at Caltech.)