Mitigate Large Language Model Hallucinations with Probabilistic Inference in Graph Neural Networks
  • Sarah Fairburn (Corresponding Author: [email protected])
  • James Ainsworth

Abstract

Large Language Models (LLMs) have achieved impressive proficiency in generating human-like text, yet they often produce factually incorrect or nonsensical content, posing challenges in applications that require high accuracy and factual consistency. Integrating Graph Neural Networks (GNNs) with LLMs enhances the model's ability to process relational data alongside textual information, improving the accuracy and robustness of generated outputs. Through comprehensive dataset preparation, a hybrid model architecture, and multi-task learning during training, the GNN-LLM model demonstrated substantial improvements in reducing hallucinations, enhancing contextual understanding, and maintaining robustness under adversarial conditions. Quantitative results showed notable gains in precision, recall, and F1-scores, while qualitative analysis highlighted the model's improved ability to generate contextually relevant and factually accurate outputs. Error analysis identified common failure modes and showed how the model mitigates them, reflecting better handling of sophisticated contexts. Comparative performance analysis indicated superior scalability and efficiency, with the model able to process larger datasets effectively. Robustness and interpretability tests affirmed the model's resilience and transparency, providing clearer rationales for its decisions. These findings collectively illustrate the potential of the hybrid GNN-LLM model to push the boundaries of current NLP capabilities, offering a more accurate, reliable, and comprehensible solution for complex language processing tasks.
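
The abstract does not specify how graph-derived representations are combined with the language model's text representations. As a minimal, illustrative sketch only (not the paper's implementation), one common way to realize such a hybrid is to run a graph encoder over knowledge-graph nodes and let the LLM's token states attend to those node embeddings via cross-attention. All module names, dimensions, and the choice of cross-attention fusion below are assumptions introduced for illustration.

```python
# Illustrative sketch only: fusing GNN node embeddings with LLM hidden states
# via cross-attention. Names (SimpleGCNLayer, GraphTextFusion) and all
# dimensions are hypothetical, not taken from the paper.
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: aggregate neighbor features, then project."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (num_nodes, num_nodes) normalized adjacency
        # node_feats: (num_nodes, in_dim)
        return torch.relu(self.proj(adj @ node_feats))


class GraphTextFusion(nn.Module):
    """LLM token states attend over GNN node embeddings (cross-attention fusion)."""

    def __init__(self, text_dim: int, node_dim: int, hidden_dim: int, num_heads: int = 4):
        super().__init__()
        self.gcn = SimpleGCNLayer(node_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)

    def forward(self, token_states, node_feats, adj):
        # token_states: (batch, seq_len, text_dim) hidden states from an LLM
        node_emb = self.gcn(node_feats, adj).unsqueeze(0)          # (1, nodes, hidden)
        node_emb = node_emb.expand(token_states.size(0), -1, -1)   # broadcast over batch
        queries = self.text_proj(token_states)                     # (batch, seq, hidden)
        fused, _ = self.cross_attn(queries, node_emb, node_emb)    # tokens attend to graph
        return fused + queries                                     # residual connection


# Toy usage with random tensors standing in for real LLM states and a knowledge graph.
fusion = GraphTextFusion(text_dim=768, node_dim=128, hidden_dim=256)
tokens = torch.randn(2, 16, 768)   # batch of 2 sequences, 16 tokens each
nodes = torch.randn(10, 128)       # 10 knowledge-graph nodes
adj = torch.eye(10)                # placeholder normalized adjacency
out = fusion(tokens, nodes, adj)   # (2, 16, 256) graph-conditioned token states
```

In a sketch like this, the graph-conditioned token states would feed subsequent decoding or task heads, which is one plausible way relational evidence could constrain generation; the paper's actual fusion mechanism, training objectives, and probabilistic inference procedure are not detailed in this section.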