AI Knowledge Map: how to classify AI technologies

By Francesco Corea

II. Problem domain + Approach = Technology Solution

So here we go, that’s it. Given the introduction, you were probably expecting a weird VR engine projecting multiple light bundles for each technology, but it is instead an old-fashioned two-dimensional graph. As simple as that.

Let’s look at it a bit closer though.

On the axes, you will find two macro-groups, i.e., the AI Paradigms and the AI Problem Domains. The AI Paradigms (X-axis) are really the approaches AI researchers use to solve specific AI-related problems (the list only includes the approaches we are aware of to date). On the other side, the AI Problem Domains (Y-axis) are historically the types of problems AI can solve. In some sense, they also indicate the potential capabilities of an AI technology.

Hence, I have identified the following AI paradigms:

  • Logic-based tools: tools that are used for knowledge representation and problem-solving;
  • Knowledge-based tools: tools based on ontologies and huge databases of notions, information, and rules;
  • Probabilistic methods: tools that allow agents to act in incomplete information scenarios;
  • Machine learning: tools that allow computers to learn from data;
  • Embodied intelligence: engineering toolbox, which assumes that a body (or at least a partial set of functions such as movement, perception, interaction, and visualization) is required for higher intelligence;
  • Search and optimization: tools that allow intelligently searching through many possible solutions.

Those six paradigms also fall into three different macro-approaches, namely Symbolic, Sub-symbolic, and Statistical (represented by different colors). Briefly, the Symbolic approach states that human intelligence can be reduced to symbol manipulation, the Sub-symbolic one holds that no specific representation of knowledge should be provided ex-ante, while the Statistical approach is based on mathematical tools to solve specific sub-problems.

A quick additional note: you might hear people talking about “AI tribes”, a concept proposed by Pedro Domingos (2015) that clusters researchers into groups based on the approaches they use to solve problems. You can easily map those five tribes onto our paradigm classification (leaving out the embodied intelligence group), i.e., Symbolists with the Logic-based approach (they use logical reasoning based on abstract symbols); Connectionists with Machine learning (they are inspired by the mammalian brain); Evolutionaries with Search and optimization (they are inspired by Darwinian evolution); Bayesians with Probabilistic methods (they use probabilistic modeling); and finally Analogizers with Knowledge-based methods, since they try to extrapolate from existing knowledge and previous similar cases.

The vertical axis instead lays out the problems AI has been used for, and the classification here is quite standard:

  • Reasoning: the capability to solve problems;
  • Knowledge: the ability to represent and understand the world;
  • Planning: the capability of setting and achieving goals;
  • Communication: the ability to understand language and communicate;
  • Perception: the ability to transform raw sensory inputs (e.g., images, sounds, etc.) into usable information.

I am still asking myself whether this classification is broad enough to capture the full spectrum of problems we are currently facing, or whether more instances should be added (e.g., Creativity or Motion). For the time being, though, I will stick with the five-cluster one.

The patterns of the boxes instead divide the technologies into two groups, i.e., narrow applications and general applications. The wording is deliberately slightly misleading, but bear with me for a second and I will explain what I mean. For anyone getting started in AI, knowing the difference between Weak/Narrow AI (ANI), Strong/General AI (AGI), and Artificial Super Intelligence (ASI) is paramount. For the sake of clarity, ASI is pure speculation to date, General AI is the final goal and holy grail of researchers, while Narrow AI is what we really have today, i.e., a set of technologies that are unable to cope with anything outside their scope (which is the main difference from AGI).

The two types of lines used in the graph (continuous and dotted) are meant to point explicitly to that distinction, so that when you read other introductory AI material you won’t be completely lost. At the same time, though, the difference here marks out technologies that can only solve a specific task (usually better than humans, i.e., narrow applications) from others that today or in the future may solve multiple tasks and interact with the world (better than many humans, i.e., general applications).

Finally, let’s see what is within the graph itself. The map represents the different classes of AI technologies. Note that I am intentionally not naming specific algorithms but rather clustering them into macro-groups. Nor am I providing a value judgment of what works and what does not, but simply listing what researchers and data scientists can tap into.

So how do you read and interpret the map? Well, let me give you two examples to help you do that. If you look at Natural Language Processing, it embeds a class of algorithms that use a combination of a knowledge-based approach, machine learning, and probabilistic methods to solve problems in the domain of perception. At the same time, though, if you look at the blank space at the intersection between the Logic-based paradigm and Reasoning problems, you might wonder why there are no technologies there. What the map conveys is not that no method could ever fill that space, but rather that when people approach a reasoning problem they tend to prefer, for instance, a machine learning approach.

To conclude this explanation, here is the full list of technologies included, with their definitions:

  • Robotic Process Automation (RPA): technology that extracts the list of rules and actions to perform by watching the user doing a certain task;
  • Expert Systems: a computer program that has hard-coded rules to emulate the human decision-making process. Fuzzy systems are a specific example of rule-based systems that map variables into a continuum of values between 0 and 1, contrary to traditional digital logic, which results in a 0/1 outcome (see the first sketch after this list);
  • Computer Vision (CV): methods to acquire and make sense of digital images (usually divided into activity recognition, image recognition, and machine vision);
  • Natural Language Processing (NLP): sub-field that handles natural language data (three main blocks belong to this field, i.e., language understanding, language generation, and machine translation);
  • Neural Networks (NNs or ANNs): a class of algorithms loosely modeled after the neuronal structure of the human/animal brain that improve their performance without being explicitly instructed on how to do so (see the second sketch after this list). The two major and best-known sub-classes of NNs are Deep Learning (a neural net with multiple layers) and Generative Adversarial Networks (GANs, two networks that train each other);
  • Autonomous Systems: sub-field that lies at the intersection between robotics and intelligent systems (e.g., intelligent perception, dexterous object manipulation, plan-based robot control, etc.);
  • Distributed Artificial Intelligence (DAI): a class of technologies that solve problems by distributing them to autonomous “agents” that interact with each other. Multi-agent systems (MAS), Agent-based modeling (ABM), and Swarm Intelligence are three useful specifications of this subset, where collective behaviors emerge from the interaction of decentralized self-organized agents;
  • Affective Computing: a sub-field that deals with emotion recognition, interpretation, and simulation;
  • Evolutionary Algorithms (EA): a subset of a broader computer science domain called evolutionary computation that uses mechanisms inspired by biology (e.g., mutation, reproduction, etc.) to look for optimal solutions. Genetic algorithms are the most used sub-group of EAs: search heuristics that follow the natural selection process to choose the “fittest” candidate solution (see the third sketch after this list);
  • Inductive Logic Programming (ILP): a sub-field that uses formal logic to represent a database of facts and formulate hypotheses derived from those data;
  • Decision Networks: a generalization of the better-known Bayesian networks/inference, which represent a set of variables and their probabilistic relationships through a map (also called a directed acyclic graph; see the fourth sketch after this list);
  • Probabilistic Programming: a framework that does not force you to hard-code specific variables but rather works with probabilistic models. Bayesian Program Synthesis (BPS) is somehow a form of probabilistic programming, where Bayesian programs write new Bayesian programs (instead of humans doing it, as in the broader probabilistic programming approach);
  • Ambient Intelligence (AmI): a framework that requires physical devices embedded in digital environments to sense, perceive, and respond with context awareness to an external stimulus (usually triggered by a human action).
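
To make the fuzzy-systems remark under Expert Systems more concrete, here is a minimal Python sketch of a membership function; the "hotness" variable and its thresholds are invented purely for illustration and are not part of the original map:

```python
# Fuzzy logic vs. traditional digital logic: a membership function maps a
# variable into a continuum of values between 0 and 1 instead of a 0/1 outcome.
# All thresholds below are arbitrary illustrative values.

def crisp_hot(temperature_c: float) -> int:
    """Traditional digital logic: either hot (1) or not hot (0)."""
    return 1 if temperature_c >= 25 else 0

def fuzzy_hot(temperature_c: float) -> float:
    """Fuzzy membership: degree of 'hotness' rising linearly from 20C to 30C."""
    if temperature_c <= 20:
        return 0.0
    if temperature_c >= 30:
        return 1.0
    return (temperature_c - 20) / 10  # a value strictly between 0 and 1

for t in (18, 24, 27, 32):
    print(f"{t}C -> crisp: {crisp_hot(t)}, fuzzy: {fuzzy_hot(t):.2f}")
```

A rule-based expert system would then combine such membership degrees with hand-coded rules (e.g., “if hot and dry then irrigate”) instead of learning them from data.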
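
For the Neural Networks entry, the following toy example (a tiny two-layer network learning XOR with plain NumPy; the architecture and hyper-parameters are arbitrary choices for the sketch) illustrates how performance improves from examples rather than from explicit instructions:

```python
import numpy as np

# Toy neural network: learn XOR from four examples via gradient descent.
# Architecture (2-8-1), learning rate, and number of steps are arbitrary.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

lr = 1.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)               # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)              # forward pass, output layer
    d_out = (out - y) * out * (1 - out)     # gradient of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out                  # gradient-descent updates
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```

A deep learning model is conceptually the same network with many more layers and parameters, while a GAN would pit two such networks against each other.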
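
For Evolutionary Algorithms, here is a minimal genetic-algorithm sketch; the “OneMax” task (evolving a bit-string towards all ones), the population size, and the mutation rate are all arbitrary choices made for illustration:

```python
import random

# Minimal genetic algorithm: evolve bit-strings towards the all-ones string.
random.seed(42)
GENES, POP, GENERATIONS, MUTATION = 20, 30, 40, 0.05

def fitness(individual):
    return sum(individual)  # the "fittest" candidate has the most ones

def mutate(individual):
    return [1 - g if random.random() < MUTATION else g for g in individual]

def crossover(a, b):
    point = random.randint(1, GENES - 1)  # single-point crossover
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    # Reproduction: recombine and mutate parents to refill the population
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", fitness(best), "out of", GENES)
```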
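
Finally, for Decision Networks, here is a minimal sketch of the underlying Bayesian-network idea: three variables connected in a directed acyclic graph (Rain influences Sprinkler, and both influence WetGrass), with inference done by brute-force enumeration. All probabilities are invented for illustration:

```python
# Classic Rain / Sprinkler / WetGrass Bayesian network with made-up numbers.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},    # P(Sprinkler | Rain=True)
               False: {True: 0.40, False: 0.60}}   # P(Sprinkler | Rain=False)
P_wet = {(True, True): 0.99, (True, False): 0.90,  # P(Wet=True | Sprinkler, Rain)
         (False, True): 0.80, (False, False): 0.00}

def joint(rain, sprinkler, wet):
    """Joint probability factorised along the directed acyclic graph."""
    p_w = P_wet[(sprinkler, rain)]
    return P_rain[rain] * P_sprinkler[rain][sprinkler] * (p_w if wet else 1 - p_w)

# P(Rain=True | WetGrass=True), enumerating the hidden Sprinkler variable
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(f"P(Rain | WetGrass) = {num / den:.3f}")
```

A decision network extends this structure with decision and utility nodes, so that the same probabilistic machinery can be used to choose actions rather than only to infer probabilities.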

In order to solve a specific problem, you might follow one or more approaches, which in turn means using one or more technologies, given that many of them are not at all mutually exclusive but rather complementary.

Finally, there is another relevant classification that I have not embedded into the graph above (i.e., the different types of analytics) but that is worth mentioning for the sake of completeness. You may actually encounter five distinct types of analytics: descriptive analytics (what happened); diagnostic analytics (why something happened); predictive analytics (what is going to happen); prescriptive analytics (recommending actions); and automated analytics (taking actions automatically). You might be tempted to use this to classify the technologies above, but the reality is that this is a functional and process classification rather than a product one; in other words, every technology in the spectrum can fulfill those five analytics functions.