
Enhancing LLM Reliability: Detecting Confabulations with Semantic Entropy

Marktechpost

LLMs like ChatGPT and Gemini demonstrate impressive reasoning and question-answering capabilities but often produce “hallucinations”: false or unsupported statements. Semantic entropy is a method for detecting confabulations, a subset of hallucinations, by measuring a model’s uncertainty over the meaning of its generated outputs rather than over their exact wording.
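The idea can be sketched as: sample several answers to the same question, cluster answers that share a meaning, and compute entropy over those meaning clusters rather than over surface strings. A minimal sketch, where the pluggable `equivalent` callable and the toy case-insensitive matcher are stand-ins for the bidirectional-entailment check used in practice:

```python
import math

def semantic_entropy(samples, equivalent):
    """Estimate semantic entropy from sampled answers.

    samples: list of answer strings sampled at temperature > 0.
    equivalent: callable (a, b) -> bool judging whether two answers
    mean the same thing (an NLI-based entailment check in practice;
    any callable works for this sketch).
    """
    clusters = []  # each cluster holds semantically equivalent answers
    for s in samples:
        for c in clusters:
            if equivalent(c[0], s):
                c.append(s)
                break
        else:
            clusters.append([s])
    n = len(samples)
    # entropy over the distribution of meanings: high entropy means the
    # model's sampled answers disagree in meaning, flagging confabulation
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# toy equivalence: case/punctuation-insensitive exact match
answers = ["Paris", "paris", "Paris.", "Lyon"]
eq = lambda a, b: a.strip(".").lower() == b.strip(".").lower()
print(semantic_entropy(answers, eq))  # low value: answers mostly agree
```

Paraphrases like “Paris” and “paris.” fall into one cluster, so they do not inflate the uncertainty estimate the way naive token-level entropy would.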


Small but Mighty: The Enduring Relevance of Small Language Models in the Age of LLMs

Marktechpost

LLMs have gained widespread popularity, with ChatGPT reaching approximately 180 million users by March 2024. But despite their advances toward artificial general intelligence, their sheer size drives steep increases in computational cost and energy consumption.
