Vectara Unveils Open-Source Hallucination Evaluation Model To Detect and Quantify Hallucinations in Top Large Language Models
Groundbreaking Model and Leaderboard Provide New Transparency into Risks Associated with GenAI Chatbots from OpenAI, Anthropic, and Others, Enabling Safer Enterprise Adoption and Objective Government Oversight
SANTA CLARA, Calif., Nov. 06, 2023 (GLOBE NEWSWIRE) — Large Language Model (LLM) builder Vectara, the trusted Generative AI (GenAI) platform, today released its open-source Hallucination Evaluation Model. The first-of-its-kind initiative offers a commercially usable, open-source model that detects and quantifies hallucination in LLMs, paired with a publicly available and regularly updated leaderboard. Vectara is also inviting other model builders such as OpenAI, Cohere, Google, and Anthropic to participate in defining an open and free industry standard in support of self-governance and responsible AI.
By launching its Hallucination Evaluation Model, Vectara is increasing transparency and objectively quantifying hallucination risks in leading GenAI tools, a critical step toward removing barriers to enterprise adoption, stemming dangers like misinformation, and enacting effective regulation. The model is designed to quantify how far an LLM strays from the facts when synthesizing a summary of previously provided reference material.
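To make the idea concrete, here is a minimal sketch of how such an evaluation model might be queried once published. The Hugging Face identifier "vectara/hallucination_evaluation_model", the cross-encoder interface, and the 0.5 cutoff are assumptions based on how models of this kind are typically distributed, not details stated in this announcement.

```python
# Hedged sketch: the model name, interface, and threshold below are
# illustrative assumptions, not confirmed details from the release.

def is_consistent(score: float, threshold: float = 0.5) -> bool:
    """Interpret a factual-consistency score in [0, 1]: values near 1
    mean the summary is supported by the source, values near 0 suggest
    hallucination. The 0.5 cutoff here is purely illustrative."""
    return score >= threshold

def score_summary(source: str, summary: str) -> float:
    """Score a (source, summary) pair with the open-source model."""
    # Heavy import kept local so the helper above stays dependency-free.
    from sentence_transformers import CrossEncoder

    model = CrossEncoder("vectara/hallucination_evaluation_model")
    return float(model.predict([[source, summary]])[0])
```

Under this sketch, a summary that merely restates the source would score near 1, while one that introduces an unsupported fact would score near 0.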
“In order to realize the true promise of Generative AI, we first have to tackle the challenge of hallucinations,” said Matei Zaharia, CTO and Co-Founder of Databricks. “The launch of the Hallucination Evaluation Model to the Hugging Face community encourages industry co-innovation and accountability through a powerful measurement tool accessible for all LLM builders.”
The Hallucination Evaluation Model launch includes releasing Vectara’s measurement code base as an open-source model on Hugging Face as well as a publicly accessible Leaderboard available from Vectara. The Leaderboard serves as a quality metric for LLM factual accuracy, similar to how credit ratings or FICO scores function for financial risk, giving businesses and developers insight into the real-world reliability of different GenAI tools before implementing them.
“For organizations to effectively implement Generative AI solutions including chatbots, they need a clear view of the risks and potential downsides,” said Simon Hughes, AI researcher and ML engineer at Vectara. “For the first time, Vectara’s Hallucination Evaluation Model allows anyone to measure hallucinations produced by different LLMs. As a part of Vectara’s commitment to industry transparency, we’re releasing this model as open source, with a publicly accessible Leaderboard, so that anyone can contribute to this important conversation.”
Key Features of Vectara’s Hallucination Evaluation Model:
Objective Measurement: This model provides much-needed visibility into an LLM’s ability to synthesize data without introducing hallucinations. Many LLM vendors claim to mitigate the impact of hallucinations, but until now there have been no objectively verifiable methods for detecting and quantifying instances of irrelevant or incorrect data in model outputs. To power the evaluation, Vectara built a machine-learning model, tuned for real-world performance using the latest advancements in hallucination research, that evaluates LLM summarizations without requiring subjective scoring or human influence.
Transparency Through Open Source: The Hallucination Evaluation Model is available for developers and industry stakeholders to integrate into their own pipelines through an Apache 2.0 License on Hugging Face. Developers can also use the open-source evaluation model to verify the accuracy of Vectara’s platform.
Dynamic Leaderboard: Vectara’s AI researchers and ML engineers (in collaboration with the open source community) will maintain and continually update the Leaderboard, showcasing the hallucination impact of different LLMs and offering a clear comparative perspective as new models emerge. The Leaderboard lists the accuracy and hallucination rates for each model tested in response to the same set of prompts.
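As an illustration of the leaderboard metrics described above, the following sketch computes an accuracy and hallucination rate from per-summary consistency scores. The 0.5 threshold and the sample scores are hypothetical choices for demonstration, not Vectara's published methodology.

```python
# Illustrative leaderboard-style metrics from per-summary consistency
# scores. The threshold and sample data are hypothetical, not taken
# from Vectara's actual evaluation pipeline.

def leaderboard_metrics(scores, threshold=0.5):
    """Return (accuracy, hallucination_rate) as percentages, counting a
    summary as factually consistent if its score meets the threshold."""
    consistent = sum(1 for s in scores if s >= threshold)
    accuracy = 100.0 * consistent / len(scores)
    return accuracy, 100.0 - accuracy

# Example: 8 of 10 summaries judged consistent ->
# 80% accuracy, 20% hallucination rate.
sample = [0.9, 0.8, 0.95, 0.7, 0.6, 0.85, 0.4, 0.3, 0.9, 0.75]
acc, hall = leaderboard_metrics(sample)
print(f"accuracy={acc:.1f}%  hallucination rate={hall:.1f}%")
```

Because every model is scored against the same prompt set, these two numbers allow a direct, apples-to-apples comparison across LLMs.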
The Leaderboard shows that OpenAI’s models have the strongest performance, followed by the Llama 2 models, Cohere and Anthropic. Google’s Palm models scored lower on the Leaderboard.
“Hallucination is one of the most serious issues to consider when deploying production LLMs. Having an open source benchmark model that can evaluate factual accuracy in a quantifiable way will allow developers to directly address the problems,” said Waleed Kadous, Chief Scientist at Anyscale. “Vectara’s new model sets the industry standard for measuring the extent to which LLMs hallucinate, and we’re excited to work with them as a launch partner.”
Vectara has led industry efforts to address hallucinations as a critical barrier to the safe, effective, and accurate use of GenAI. The model doesn’t solve hallucinations directly but rather enables more informed adoption and better decision-making by measuring the frequency and severity of this phenomenon. Greater transparency into the quality of LLM-produced summarizations allows LLM users to evaluate GenAI solutions according to the risk profile of the intended use case.
GenAI adoption in highly regulated industries like legal, healthcare, finance, energy, and government will hinge upon vendors’ ability to provide solutions with low to nearly zero risk of factual inaccuracies. Hallucinations have already been raised by stakeholders in these sectors as a serious issue. Until now, however, there has been no way to objectively compare the performance of available models outside of academic benchmarks, which don’t always translate to real-world settings.
Hallucinations also factor heavily in ongoing dialogue about GenAI regulation. Effective government oversight requires measurement tools universally recognized as transparent and objective. Vectara’s open-source model serves as an industry standard, providing the missing link to legislation that virtually all industry leaders agree is needed. With concerns around misinformation and other AI risks rising ahead of the U.S. presidential election and other geopolitical events, the Hallucination Evaluation Model and Leaderboard provide a tangible step toward data-driven and accessible oversight mechanisms.
Vectara is an end-to-end platform that empowers product builders to embed powerful Generative AI features into their applications with extraordinary results. Built on a solid hybrid-search core, Vectara delivers the shortest path to an answer or action through a safe, secure, and trusted entry point. Vectara is built for product managers and developers with an easily leveraged API that gives full access to the platform’s powerful features. Vectara’s Retrieval Augmented (Grounded) Generation allows businesses to quickly, safely, and affordably integrate best-in-class conversational AI and question-answering into their applications with zero-shot precision. Vectara never trains its models on customer data, allowing businesses to embed generative AI capabilities without the risk of data or privacy violations. To learn more about Vectara, visit www.vectara.com.