Galileo Releases New Hallucination Index
July 29, 2024

Galileo announced the launch of its latest Hallucination Index, a Retrieval Augmented Generation (RAG)-focused evaluation framework, which ranks the performance of 22 leading Generative AI (Gen AI) large language models (LLMs) from brands like OpenAI, Anthropic, Google, and Meta.

This year's Index added 11 models to the framework, reflecting the rapid growth of both open- and closed-source LLMs in just the past eight months. As brands race to create bigger, faster, and more accurate models, hallucinations remain the main hurdle to deploying production-ready Gen AI products.

The Index tests open- and closed-source models using Galileo's proprietary evaluation metric, context adherence, which checks for output inaccuracies and helps enterprises make informed decisions about balancing price and performance. Models were tested with inputs ranging from 1,000 to 100,000 tokens to gauge performance across short (less than 5k tokens), medium (5k to 25k tokens), and long (40k to 100k tokens) context lengths.
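
To illustrate how a benchmark of this shape might be organized, the sketch below shows one way to bucket RAG test cases by context length and average an adherence-style score per bucket. It is a minimal illustration, not Galileo's actual code: the `model` client, the `score` function, and the bucket boundaries are placeholders that only mirror the ranges published in the Index.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable

# Hypothetical token-length buckets mirroring the Index's published categories.
BUCKETS = {
    "short": (0, 5_000),        # < 5k tokens
    "medium": (5_000, 25_000),  # 5k-25k tokens
    "long": (40_000, 100_000),  # 40k-100k tokens
}

@dataclass
class RAGCase:
    question: str
    context: str          # retrieved documents, concatenated
    context_tokens: int   # pre-computed token count of the context

def bucket_for(case: RAGCase) -> str | None:
    """Assign a test case to a context-length bucket, if any."""
    for name, (lo, hi) in BUCKETS.items():
        if lo <= case.context_tokens < hi:
            return name
    return None  # e.g. 25k-40k falls outside the published ranges

def evaluate(model: Callable[[str, str], str],
             score: Callable[[str, str], float],
             cases: list[RAGCase]) -> dict[str, float]:
    """Average an adherence-style score (0-1) per context-length bucket.

    `model(question, context)` returns an answer string and
    `score(answer, context)` rates how well the answer sticks to the
    retrieved context. Both are placeholders, not Galileo's APIs.
    """
    per_bucket: dict[str, list[float]] = {name: [] for name in BUCKETS}
    for case in cases:
        name = bucket_for(case)
        if name is None:
            continue
        answer = model(case.question, case.context)
        per_bucket[name].append(score(answer, case.context))
    return {name: mean(vals) for name, vals in per_bucket.items() if vals}
```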

- Best Overall Performing Model: Anthropic's Claude 3.5 Sonnet. The closed-source model outpaced competing models across short, medium, and long context scenarios. Anthropic's Claude 3.5 Sonnet and Claude 3 Opus consistently posted near-perfect scores across categories, beating out last year's winners, GPT-4o and GPT-3.5, especially in shorter context scenarios.

- Best Performing Model on Cost: Google's Gemini 1.5 Flash. The Google model delivered the best performance for its cost, scoring strongly across all tasks.

- Best Open Source Model: Alibaba's Qwen2-72B-Instruct. The open-source model led its category, posting top scores in the short and medium context lengths.

"In today's rapidly evolving AI landscape, developers and enterprises face a critical challenge: how to harness the power of generative AI while balancing cost, accuracy, and reliability. Current benchmarks are often based on academic use-cases, rather than real-world applications. Our new Index seeks to address this by testing models in real-world use cases that require the LLMs to retrieve data, a common practice in enterprise AI implementations," says Vikram Chatterji, CEO and Co-founder of Galileo. "As hallucinations continue to be a major hurdle, our goal wasn't to just rank models, but rather give AI teams and leaders the real-world data they need to adopt the right model, for the right task, at the right price."

Key Findings and Trends:

- Open-Source Closing the Gap: Closed-source models like Claude 3.5 Sonnet and Gemini 1.5 Flash remain the top performers thanks to proprietary training data, but open-source models such as Qwen1.5-32B-Chat and Llama-3-70b-chat are rapidly closing the gap, hallucinating less while carrying lower cost barriers than their closed-source counterparts.

- Overall Improvements with Long Context Lengths: Current RAG LLMs, such as Claude 3.5 Sonnet, Claude 3 Opus, and Gemini 1.5 Pro 001, perform particularly well with extended context lengths without losing quality or accuracy, reflecting progress in both model training and architecture.

- Large Models Are Not Always Better: In certain cases, smaller models outperform larger ones. For example, Gemini-1.5-flash-001 outperformed larger models, suggesting that efficient model design can sometimes outweigh scale.

- From National to Global Focus: LLMs from outside the U.S., such as Mistral's Mistral-Large and Alibaba's Qwen2-72B-Instruct, are emerging players in the space and continue to grow in popularity, representing the global push to create effective language models.

- Room for Improvement: While Google's open-source Gemma-7b performed the worst, its closed-source Gemini 1.5 Flash model consistently landed near the top.
