
Bane and Boon of Hallucinations in the Context of Generative AI

Author(s): S. M. Nazmuz Sakib (School of Business and Trade, International MBA Institute, Dhaka International University, Bangladesh)
Copyright: 2024
Pages: 24
EISBN13: 9798369373088

Abstract

Hallucinations occur when generative artificial intelligence systems, such as large language models (LLMs) like ChatGPT, produce outputs that are illogical, factually incorrect, or otherwise unreal. In generative AI, hallucinations can unlock creative potential, but they also pose challenges for producing accurate and trustworthy outputs; this chapter examines both sides. AI hallucinations can arise from a variety of factors: if the training data is insufficient, incomplete, or biased, the model may respond inaccurately to novel situations or edge cases. Moreover, generative AI commonly produces content in response to prompts regardless of the model's “understanding” or the quality of its output.
