
Bane and boon of hallucinations in context of generative AI
  • Hrishitva Patel
Information Technology, University of Texas at San Antonio, Carlos Alvarez College of Business

Corresponding Author: [email protected]


Abstract

Hallucinations occur when generative artificial intelligence systems, such as large language models (LLMs) like ChatGPT, produce outputs that are illogical, factually incorrect, or otherwise unreal. In generative AI, hallucinations can unlock creative potential, but they also pose challenges for producing accurate and trustworthy outputs; this paper addresses both concerns. AI hallucinations can arise from a variety of factors: if the training data is insufficient, incomplete, or biased, the model may respond inaccurately to novel situations or edge cases, and generative AI commonly produces content in response to prompts regardless of the model's "understanding" or the quality of its output. This paper synthesises the growing body of literature to provide a comprehensive view of AI hallucinations. The purpose of this study is to gain a deeper understanding of AI hallucinations and to develop strategies that reduce their negative impacts while maximising their creative potential. To contextualise hallucinations in generative AI and inform the construction of more reliable AI systems, the methodology examines models and mind maps drawn from the journals reviewed. Hallucinations in AI models can be mitigated through critical analysis of AI outputs and diversification of data sources. Through a synthesis of the relevant literature and an analysis of models and mind maps, the paper stresses the importance of high-quality training data, human feedback, transparency, and ongoing quality control in the development of artificial intelligence.
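
The "critical analysis of AI outputs" mentioned above can be illustrated with a minimal self-consistency check: sample several answers to the same prompt and flag low agreement as a possible hallucination. The sketch below is illustrative only and is not the paper's method; the flag_possible_hallucination function, the agreement threshold, and the sample answers are all assumptions.

    from collections import Counter

    def flag_possible_hallucination(answers, agreement_threshold=0.6):
        # Count how often each distinct answer appears across the samples
        most_common_answer, count = Counter(answers).most_common(1)[0]
        agreement = count / len(answers)
        # Low agreement across independent samples suggests a possible hallucination
        return most_common_answer, agreement, agreement < agreement_threshold

    # Hypothetical answers sampled from a generative model for the same prompt
    samples = ["Paris", "Paris", "Lyon", "Paris", "Paris"]
    answer, agreement, flagged = flag_possible_hallucination(samples)
    print(f"Consensus: {answer} | agreement: {agreement:.0%} | flag for human review: {flagged}")

The same idea underlies retrieval-based fact checks and human-in-the-loop review, which the paper lists among its mitigation strategies.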
Submitted to TechRxiv: 29 Mar 2024
Published in TechRxiv: 01 Apr 2024