Neuro-symbolic AI: The key to truly intelligent systems


Larry Swanson, Peter Haase



A glowing, blue, digital brain with a circuit board pattern on its surface. The image represents the concept of neuro-symbolic AI, which combines neural networks (like LLMs) and symbolic technology (knowledge graphs) to create a more intelligent system.


 

Discover how neural-network technologies like large language models and generative AI work with symbolic technologies like knowledge graphs to give enterprises the trustworthy AI they need.

 

This article was previously published on the Big Data Value Association (BDVA) blog.

 


Nearly three years after ChatGPT’s launch in November of 2022, enterprises still struggle to get accurate, reliable and trustworthy outputs from generative AI. 

 

Enterprises that have tried using LLMs to optimize important business processes or support critical business decision-making keep running into now-familiar roadblocks: 

  • search tools that deliver factually incorrect information

  • analytics platforms that give unreliable and inaccurate reports

  • conversational AI agents that dismay customers with their sycophancy and untrustworthy answers 

 

These are among the reasons that many enterprises are still struggling to unlock the value in their AI investments and why many are revisiting their AI architecture strategies. 

 

To be fair, many of these shortcomings aren’t the fault of LLMs. Generative AI is designed to fabricate plausible answers, and the hallucinations for which LLMs are so well known are a side effect of that very feature. Anyone who has worked with these systems knows that you can’t prompt, fine-tune or guardrail your way around these problems. They are inherent in the technology.

 

So, how can enterprises finally realize the promise of AI and get the factually accurate, trustworthy answers that they and their customers need? 

 

The answer is “neuro-symbolic AI.” That is, systems that combine neural-network technology like LLMs with symbolic technology like knowledge graphs. 

 

LLMs offer human-friendly natural-language and powerful predictive and generative abilities. Knowledge graphs add conceptual understanding of an enterprise’s unique capabilities and knowledge. 

 

Much like the two systems in Daniel Kahneman's "Thinking, Fast and Slow," these two technical paradigms work together to create a fully functioning, more intelligent "brain" for your organization.

 


 

Neuro-symbolic AI is not new

 

The launch of ChatGPT brought generative AI into the spotlight, but AI as a field has been around for decades, long before Deep Blue beat chess grandmaster Garry Kasparov or IBM’s Watson appeared on Jeopardy!. The machine learning and other neural-network methods that power generative AI and LLMs date back to the 1940s, symbolic AI arose in the 1950s, and many AI practitioners have been integrating these two technology paradigms since the 1990s. So it’s not that people have been overlooking these technologies; it’s that recent advances in computing power and AI methods have given them new importance.

 

LLMs are great at what they do because they are trained on vast amounts of data gleaned from a variety of sources. When you need to make a decision for your company or provide an answer about one of your enterprise’s products or services, though, general knowledge of the world, while helpful, is not sufficient. And, while it is possible to train and fine-tune LLMs with your enterprise data, only symbolic AI can contextualize and deliver your enterprise’s precise, unique knowledge.

 

Symbolic AI is the explicit representation of knowledge. It captures the facts about an enterprise and the knowledge it has accumulated, and stores them in a way that both computers and humans can understand and use. It’s the use of universally understood symbols – numbers, words, logical expressions, etc. – that gives this type of AI its unique power. LLMs, on the other hand, can offer remarkably plausible predictions based on their statistical analysis of what has been shared on the web and elsewhere, but only symbolic AI, like a semantic knowledge graph, can deliver facts and knowledge grounded in an enterprise’s actual activities and the wisdom its employees have accumulated.

 

As distinct as these two components of neuro-symbolic AI are, each can exhibit elements of the other. For example, LLMs can be trained on enterprise data to better inform their outputs. And implementations like the Wikidata project show how symbolic AI can be used to represent general facts about the world. This explains why some have used the yin-yang image to illustrate the concept of neuro-symbolic AI.

 

The power of semantic meaning

 

One of the most impactful aspects of symbolic AI is its ability to give semantic meaning to the information it works with. The symbols that are used to represent real-world entities can be defined, connected and otherwise contextualized so that both humans and computers can truly understand the concepts represented by the symbols. So when an analyst asks a knowledge graph about “new customer orders in Q3,” the system has the information it needs to understand how the enterprise defines a “new” customer, what constitutes an “order,” and the time span for the third quarter in their fiscal calendar.

 

LLMs can approximate semantic understanding by looking at which words and tokens appear close to one another in vector space, but only human-vetted knowledge ensconced in a symbolic AI system can deliver consistent answers that align with an enterprise’s actual practices and precise language. In the example above, if your enterprise has adopted an unconventional fiscal year, an LLM might assume a conventional January-through-December calendar, completely missing the meaning of “Q3” in your organization.
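To make this concrete, here is a minimal, plain-Python sketch of what an explicit semantic definition looks like in practice. The fiscal-year start month, the “new customer = first-ever order” rule, and the company names are all illustrative assumptions, not real enterprise definitions; in a real system these definitions would live in a knowledge graph or semantic layer rather than in application code.

```python
from datetime import date

# Illustrative enterprise definition: the fiscal year starts in February,
# so fiscal Q3 covers August-October rather than July-September.
FISCAL_YEAR_START_MONTH = 2

def fiscal_quarter(d: date) -> int:
    """Map a calendar date to this enterprise's fiscal quarter (1-4)."""
    months_into_fy = (d.month - FISCAL_YEAR_START_MONTH) % 12
    return months_into_fy // 3 + 1

# Illustrative order data: (customer, order date, is this their first order?)
orders = [
    ("acme",    date(2024, 8, 15), True),   # first-ever order -> "new" customer
    ("globex",  date(2024, 9, 3),  False),  # repeat customer, same quarter
    ("initech", date(2024, 5, 1),  True),   # new customer, but fiscal Q2
]

def new_customer_orders_in_q3(orders):
    """Apply the explicit definitions: 'new' = first order, Q3 = fiscal Q3."""
    return [c for c, d, first in orders if first and fiscal_quarter(d) == 3]

print(new_customer_orders_in_q3(orders))  # -> ['acme']
```

Because both definitions are stated explicitly, every query over “new customer orders in Q3” yields the same answer, whereas a model guessing at a January-start calendar would classify these dates differently.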

 

One powerful implication of this clear articulation of business information in a knowledge system is the ability to create a “semantic layer,” an enterprise information architecture concept that lets an organization understand and connect its business processes, technical objects, and data and information. This ability to access semantic understanding lets enterprises streamline and accelerate their business processes and make crucial business decisions more quickly, giving them an almost unfair competitive advantage. As one industry expert recently observed about the emerging ubiquity of AI, “If everything becomes intelligent, then meaning becomes the differentiator.”

 

Better together: Knowledge Graphs and LLMs

As you evaluate the strengths and weaknesses of each of these technologies, their complementary benefits become clearer. At the highest level, LLMs and other neural-network technologies are really good at learning, while knowledge graphs and other symbolic AI technologies are good at knowing.

 

LLMs are very good at learning to identify patterns, make predictions and generate outputs. They do this by looking at huge repositories of information, breaking them down into mathematically computable elements, and reassembling those elements into statistically plausible words, images and recordings. This learning doesn’t produce durable knowledge that the LLM can reliably reuse later: ask an LLM the same question twice and you may get a slightly (sometimes significantly) different answer each time. The opacity of this statistical computation is why these systems are called “black boxes” – not even the engineers who designed them can get them to explain their actions. This lack of transparency and explainability has been an obstacle for enterprises trying to build trustworthy AI systems.
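The learning-versus-knowing contrast can be sketched in a few lines. This is a deliberately tiny caricature, not how either technology is implemented: the token probabilities, fact, and entity names are invented for illustration. It shows why sampling from a distribution can yield different answers on different runs, while a symbolic lookup of a vetted fact is deterministic.

```python
import random

# Toy stand-in for a generative model: a probability distribution over
# possible next answers, sampled on every query.
next_token_probs = {"blue": 0.6, "green": 0.3, "red": 0.1}

def generative_answer(rng: random.Random) -> str:
    """Sample a statistically plausible answer (may vary run to run)."""
    tokens, weights = zip(*next_token_probs.items())
    return rng.choices(tokens, weights=weights)[0]

# Toy stand-in for a knowledge graph: an explicit, human-vetted fact.
knowledge_graph = {("acme", "brand_color"): "blue"}

def symbolic_answer(subject: str, predicate: str) -> str:
    """Deterministic lookup of a stored fact."""
    return knowledge_graph[(subject, predicate)]

rng = random.Random()
samples = {generative_answer(rng) for _ in range(50)}
print(samples)                                  # often several distinct answers
print(symbolic_answer("acme", "brand_color"))   # always the same vetted fact
```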

 

Knowledge graphs capture and expose the things that an enterprise actually knows—its business practices, its policies and procedures, its customers and suppliers, its products and services, its data and other digital assets, and even the tacit knowledge in its employees’ heads. This knowledge is unique to any one enterprise. In fact, it’s arguably any enterprise’s most important asset. Unlike the general, point-in-time snapshot of information an LLM works with, the enterprise-specific information in a knowledge graph can be kept up to date and accurate.

 

Until recently, building knowledge graphs strained the capabilities of most enterprises, but in a serendipitous turn of events, it turns out that LLMs can help build these sophisticated knowledge systems. LLMs’ natural-language interfaces, predictive power, and generative abilities help knowledge engineers by automating labor-intensive tasks like entity extraction, accelerating ontology building, and improving the quality of the data in the graph. With the help of generative AI, knowledge graphs are now accessible to many more enterprises.
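The entity-extraction step mentioned above can be sketched as a pipeline that turns sentences into graph triples. In this toy version a trivial pattern match stands in for the LLM call, and the sample sentences and relation name are invented for illustration; the point is the shape of the pipeline, with extracted triples accumulating into a graph for a knowledge engineer to review.

```python
import re

def extract_triples(sentence: str):
    """Stand-in for LLM-based entity/relation extraction.

    A real pipeline would prompt an LLM to identify entities and relations;
    here a simple 'X acquired Y' pattern plays that role.
    """
    m = re.match(r"(\w+) acquired (\w+)", sentence)
    return [(m.group(1), "acquired", m.group(2))] if m else []

# Candidate triples accumulate into the graph; sentences with no
# recognizable entities or relations contribute nothing.
graph = set()
for sentence in ["Acme acquired Globex", "The weather was fine"]:
    graph.update(extract_triples(sentence))

print(graph)  # -> {('Acme', 'acquired', 'Globex')}
```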

 

It also turns out that knowledge graphs can help LLMs. Their authoritative, human-vetted knowledge of an enterprise’s unique capabilities and knowledge can address the transparency, accuracy and other trust issues that arise with unreliable LLM outputs.
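One common way knowledge graphs help LLMs is a retrieval-style pattern: look up vetted facts about the entities in a question and place them in the prompt so the model answers from them rather than from statistical recall. The sketch below shows only the prompt-assembly step; the LLM call itself is omitted, the function and dictionary names are invented, and the stored facts are drawn from this article’s own text.

```python
# A tiny vetted fact store keyed by entity; a real system would query
# a knowledge graph instead of a dictionary.
facts = {
    "metaphacts": [("founded", "2014"),
                   ("focus", "enterprise knowledge graphs")],
}

def grounded_prompt(question: str, entity: str) -> str:
    """Prepend vetted facts so the model can cite them instead of guessing."""
    context = "\n".join(f"- {entity} {p}: {o}" for p, o in facts.get(entity, []))
    return (f"Answer using only these vetted facts:\n{context}\n\n"
            f"Q: {question}")

print(grounded_prompt("When was metaphacts founded?", "metaphacts"))
```

Because the facts in the prompt come from a human-vetted source, the answer can be traced back to them, which directly addresses the transparency and accuracy concerns described above.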

Putting it all together

Neuro-symbolic AI is where these complementary capabilities come together. Just as you wouldn’t bring only half of your brain to work, enterprises shouldn’t bring just one half of artificial intelligence’s capabilities to their enterprise architectures. So, if your enterprise is not already exploring how knowledge graphs and symbolic AI can augment your organization’s intelligence—both artificial and actual—now is a good time to start.

 

Your knowledge-driven AI platform

Ready to leverage AI for your enterprise? metis is a knowledge-driven AI platform that combines large language models and knowledge graphs to deliver AI agents offering generative power, semantic precision, and contextual, explainable insights for your business.

 

At its core, metis is grounded in a sophisticated semantic model that captures essential context and expert knowledge from domain specialists and business users. This foundation enables powerful human–AI collaboration through augmented intelligence and a human-in-the-loop approach that keeps human expertise and experience at the heart of all AI interactions.

 

See metis in action and request a demo today!

Dr. Peter Haase

Peter has a long history in the Semantic Web and Knowledge Graph community, his first contacts with semantic technologies dating back 25 years. In 2006, Peter obtained his PhD from the University of Karlsruhe (now KIT). In 2014, Peter founded metaphacts, focusing on enterprise knowledge graphs. In his current role, he is steering the R&D efforts at metaphacts, now part of the Digital Science group, with an increased interest in scientific knowledge graphs.

Larry Swanson

Larry Swanson is Community Growth Manager at metaphacts, a leading knowledge-democratization and AI platform. He hosts the Knowledge Graph Insights podcast and co-organizes the Dataworthy Collective, a weekly gathering of semantics, ontology, and data professionals. He has organized a number of professional communities and events: the Knowledge Graph Conference, Connected Data London, Decoupled Days, the Future of Content meetup in Amsterdam, the Seattle Content Strategy meetup, and World Information Architecture Day. He is also a founding member of the Kinetic Council, an association-formation committee that aims to connect professionals across the data, knowledge, semantics, and content industries.