Track: Artificial Intelligence
Abstract
This paper proposes an approach that goes beyond the current Large Language Model (LLM) milestones. To achieve optimal results, LLMs require careful fine-tuning. Using techniques such as prompt engineering and In-Context Learning (ICL), it is possible to provide LLMs with the guidance necessary to perform specific tasks with high accuracy and relevance. Deep Neural Networks have limited capabilities in intelligent behavior (i.e., in understanding the meaning of their input). Hence, in recent years the scientific community has increasingly questioned whether Deep Learning alone can bring us closer to Artificial General Intelligence (AGI). The authors of this paper believe that the challenges of training and adaptive behavior account for a large part of the problem, and that symbolic techniques did not advance at the same pace as Deep Neural Networks. Revisiting and renewing symbolic techniques and combining them with Deep Learning models such as LLMs can therefore bridge the gap between neural techniques and symbolic knowledge representation. Our proposed system has been shown to significantly reduce the hallucination rate of LLMs while providing symbolic systems with the flexibility of semantic input processing.