Neurosymbolic AI: A Promising Solution to LLM Hallucinations

Misinformation in, misinformation out.

The rapid development of artificial intelligence (AI) has brought significant advances, but also persistent challenges, particularly around the reliability of large language models (LLMs) such as OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama. A core issue is their tendency to produce inaccurate information, known as ‘hallucinations.’

The Hallucination Dilemma

A striking example of AI hallucination involved US law professor Jonathan Turley, whom ChatGPT falsely accused of sexual harassment in 2023. OpenAI’s response was simply to prevent ChatGPT from discussing Turley, which highlights the inadequacy of fixing such errors case by case. The problem also runs deeper than individual incidents: LLMs often amplify stereotypes and give biased, Western-centric answers.

The Accountability Challenge

A significant concern is the lack of accountability when AI spreads misinformation, since tracing how an LLM arrived at a conclusion remains difficult. These issues sparked extensive debate after the release of GPT-4 in 2023, but interest has since waned without any resolution. The European Union’s 2024 AI Act aimed to regulate the field, yet it relies largely on self-regulation by AI companies and does not address the core problems.

Inherent Limitations of LLMs

Despite recent advances, testing shows that even the most sophisticated LLMs remain unreliable. Leading AI companies resist taking responsibility for errors, while misinformation and bias persist. With the rise of ‘agentic AI’, in which users delegate tasks such as booking travel or managing bills, the potential for costly mistakes multiplies.

Introducing Neurosymbolic AI

Neurosymbolic AI could offer a way out, while also reducing the enormous amounts of data needed to train LLMs. Whereas traditional LLMs use deep learning to infer statistical patterns from vast bodies of text, neurosymbolic AI combines this predictive learning with formal rules: logical steps, mathematical relationships, and the agreed meanings of words and symbols. This lets the system organize knowledge into clean, reusable parts and make fewer errors.
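To make that division of labour concrete, here is a minimal sketch of the idea: a neural component proposes candidate answers, and a symbolic layer of explicit rules filters out candidates that contradict known facts. The function names and the tiny rule base are illustrative assumptions, not part of any particular framework.

```python
# Minimal illustration (assumed names, not a real framework): a neural
# component proposes answers; a symbolic rule base vets them.

def neural_propose(question: str) -> list[str]:
    # Stand-in for an LLM: ranked candidates, possibly hallucinated.
    return ["Paris", "Lyon", "Narnia"]

# Symbolic knowledge: agreed-upon facts encoded as explicit, inspectable rules.
KNOWN_CAPITALS = {"France": "Paris", "Japan": "Tokyo"}

def symbolic_check(question: str, candidate: str) -> bool:
    # Reject any candidate that contradicts the rule base.
    if "capital of France" in question:
        return candidate == KNOWN_CAPITALS["France"]
    return True  # no applicable rule, so the candidate passes unchecked

def answer(question: str) -> str | None:
    for candidate in neural_propose(question):
        if symbolic_check(question, candidate):
            return candidate
    return None  # abstain rather than hallucinate

print(answer("What is the capital of France?"))  # -> Paris
```

Because the rules are explicit, a human can read, audit, and correct them, which is the accountability advantage the approach promises.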

How Neurosymbolic AI Works

Neurosymbolic AI works through a process sometimes called the ‘neurosymbolic cycle’, which integrates learning with formal reasoning. The system extracts rules from its training data and feeds that consolidated knowledge back into the model. This makes training more energy-efficient, reduces how much data must be stored, and gives users more control over how the AI reaches its conclusions.
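As a rough intuition for that cycle, the toy sketch below ‘learns’ from a handful of examples, promotes a regularity seen in every example to an explicit rule, and then reuses the rule directly. It is a deliberately simplified stand-in: real systems extract rules from trained neural networks, not word counts.

```python
# Toy illustration of the 'neurosymbolic cycle': learn from data, extract
# an explicit rule, then reuse the rule instead of relearning the pattern.
from collections import Counter

data = [("robin", "can_fly"), ("sparrow", "can_fly"), ("eagle", "can_fly")]

# Learning step: tally observed patterns (stand-in for neural training).
counts = Counter(label for _, label in data)

# Rule-extraction step: promote a regularity seen in every example
# to an explicit, reusable symbolic rule.
rules = {label for label, n in counts.items() if n == len(data)}
print(rules)  # {'can_fly'} -- "every bird seen so far can fly"

# Reintegration step: the rule now answers queries directly and cheaply,
# in a form a human can inspect and amend (e.g. add "penguin" as an
# exception) -- the efficiency and accountability benefits described above.
def predict(bird: str) -> str:
    return "can_fly" if "can_fly" in rules else "unknown"

print(predict("wren"))  # -> can_fly
```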

The Evolution of AI

AI has developed in three broad waves: symbolic AI in the 1980s, deep learning in the 2010s, and now neurosymbolic AI. So far it has been applied mainly in niche areas, such as Google’s AlphaFold for predicting protein structures and AlphaGeometry for solving geometry problems. Broader applications are emerging: China’s DeepSeek, for instance, uses a learning technique called ‘distillation’, a step in this direction.
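For readers unfamiliar with distillation: the general idea is that a smaller ‘student’ model learns to mimic the softened output distribution of a larger ‘teacher’ model rather than only hard labels. The sketch below is a bare-bones, plain-Python version of that loss, shown only for illustration; production systems use frameworks such as PyTorch, and the numbers here are made up.

```python
# A minimal sketch of the knowledge-distillation loss: cross-entropy
# between the teacher's softened output distribution and the student's.
import math

def softmax(logits: list[float], temperature: float = 1.0) -> list[float]:
    # Higher temperature flattens the distribution, exposing the
    # teacher's relative preferences among wrong answers.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits: list[float],
                      teacher_logits: list[float],
                      T: float = 2.0) -> float:
    # The student is trained to match the teacher's soft targets.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))

# Made-up logits for a three-class example: a close student scores low loss.
print(distillation_loss([2.0, 0.5, 0.1], [2.2, 0.4, 0.0]))
```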

Future Prospects of Neurosymbolic AI

For neurosymbolic AI to scale to general-purpose models, research still needs to refine how systems discern general rules and extract knowledge from data. LLM developers are edging towards ‘reasoning’ models, but they remain focused above all on scaling up data. For AI to genuinely advance, systems will need to adapt to new situations from only a few examples, check their own understanding, multitask, and reuse knowledge so they can reason efficiently.

Built-in checks and balances of this kind could even stand in for some external regulation, embedding accountability directly within the AI’s architecture and standardizing practice across the industry. Challenges remain, but neurosymbolic AI offers a promising path forward.

Note: This article is inspired by content from https://theconversation.com/neurosymbolic-ai-is-the-answer-to-large-language-models-inability-to-stop-hallucinating-257752. It has been rephrased for originality. Images are credited to the original source.
