Galileo Releases New Hallucination Index
July 29, 2024

Galileo announced the launch of its latest Hallucination Index, a Retrieval Augmented Generation (RAG)-focused evaluation framework, which ranks the performance of 22 leading Generative AI (Gen AI) large language models (LLMs) from brands like OpenAI, Anthropic, Google, and Meta.

This year's Index added 11 models to the framework, representing the rapid growth in both open- and closed-source LLMs in just the past 8 months. As brands race to create bigger, faster, and more accurate models, hallucinations remain the main hurdle to deploying production-ready Gen AI products.

The Index tests open- and closed-source models using Galileo's proprietary evaluation metric, context adherence, designed to check for output inaccuracies and help enterprises make informed decisions about balancing price and performance. Models were tested with inputs ranging from 1,000 to 100,000 tokens to gauge performance across short (less than 5k tokens), medium (5k to 25k tokens), and long (40k to 100k tokens) context lengths.
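The bucketing scheme above can be expressed as a small function. This is a minimal sketch, not Galileo's implementation: the function name `context_bucket` and the decision to label the 25k–40k gap "unbucketed" are assumptions, since the Index only defines the three ranges.

```python
def context_bucket(token_count: int) -> str:
    """Classify an input's token count into the Index's context buckets.

    Per the Index: short (<5k), medium (5k-25k), long (40k-100k).
    Counts between 25k and 40k fall outside the defined ranges,
    so they are labeled "unbucketed" here (an assumption).
    """
    if token_count < 5_000:
        return "short"
    if token_count <= 25_000:
        return "medium"
    if 40_000 <= token_count <= 100_000:
        return "long"
    return "unbucketed"
```

For example, a 1,000-token prompt lands in "short" and a 50,000-token prompt in "long", matching the report's category boundaries.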

- Best Overall Performing Model: Anthropic's Claude 3.5 Sonnet. The closed-source model outpaced competitors across short, medium, and long context scenarios. Claude 3.5 Sonnet and Claude 3 Opus consistently posted near-perfect scores across categories, beating out last year's winners, GPT-4o and GPT-3.5, especially in shorter context scenarios.

- Best Performing Model on Cost: Google's Gemini 1.5 Flash. The Google model delivered the best performance per dollar, with strong results across all tasks.

- Best Open Source Model: Alibaba's Qwen2-72B-Instruct. The open-source model led its category, posting top scores in the short and medium context scenarios.

"In today's rapidly evolving AI landscape, developers and enterprises face a critical challenge: how to harness the power of generative AI while balancing cost, accuracy, and reliability. Current benchmarks are often based on academic use-cases, rather than real-world applications. Our new Index seeks to address this by testing models in real-world use cases that require the LLMs to retrieve data, a common practice in enterprise AI implementations," says Vikram Chatterji, CEO and Co-founder of Galileo. "As hallucinations continue to be a major hurdle, our goal wasn't to just rank models, but rather give AI teams and leaders the real-world data they need to adopt the right model, for the right task, at the right price."

Key Findings and Trends:

- Open-Source Closing the Gap: Closed-source models like Claude 3.5 Sonnet and Gemini 1.5 Flash remain the top performers thanks to proprietary training data, but open-source models such as Qwen1.5-32B-Chat and Llama-3-70b-chat are rapidly closing the gap, with improved hallucination performance and lower cost barriers than their closed-source counterparts.

- Overall Improvements with Long Context Lengths: Current RAG LLMs, like Claude 3.5 Sonnet, Claude 3 Opus, and Gemini 1.5 Pro 001, perform particularly well with extended context lengths — without losing quality or accuracy — reflecting the progress being made in both model training and architecture.

- Large Models Are Not Always Better: In certain cases, smaller models outperform larger models. For example, Gemini-1.5-flash-001 outperformed larger models, which suggests that efficiency in model design can sometimes outweigh scale.

- From National to Global Focus: LLMs from outside the U.S., such as Mistral's Mistral Large and Alibaba's Qwen2-72B-Instruct, are emerging players in the space and continue to grow in popularity, representing the global push to create effective language models.

- Room for Improvement: While Google's open-source Gemma-7b performed the worst, its closed-source Gemini 1.5 Flash model consistently landed near the top.
