
When AI sounds right but is wrong: The hidden crisis of context
Navigating the AI Minefield: Why Context is King in Preventing Hallucinations for Enterprises
(This article was generated with AI and is based on an AI-generated transcription of a real talk given on stage. While we strive for accuracy, we encourage readers to verify important information.)
Tallat Shafaat, CEO and Founder of Vectara, addressed critical AI failures at Web Summit Vancouver 2026, focusing on hallucinations and poor grounding. Hallucinations occur when an AI confidently gives a wrong answer; poor grounding means outputs are not backed by verifiable data. These issues are particularly problematic for enterprises, which need agents to work from specific internal datasets, not just general web-trained knowledge.
Mr. Shafaat clarified that AI failures are often misattributed to the models themselves. The core issue, he stressed, frequently lies in inadequate or incorrect context provided to Large Language Models (LLMs). What he called the “context problem” concerns the quality and relevance of the information fed to the LLM, not the LLM’s inherent capabilities.
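To make the idea concrete, here is a minimal retrieval-augmented sketch in Python of what grounding an LLM call in enterprise context can look like. The toy keyword retriever, function names, and prompt wording are illustrative assumptions, not Vectara’s implementation.

```python
# Minimal sketch of grounding an LLM call in enterprise context.
# The retriever and prompt format are illustrative assumptions,
# not Vectara's actual implementation.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


def retrieve(query: str, corpus: list[Document], k: int = 3) -> list[Document]:
    """Toy keyword-overlap retriever standing in for a real semantic index."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_grounded_prompt(query: str, corpus: list[Document]) -> str:
    """Inline the retrieved passages so the LLM answers from them,
    and instruct it to refuse rather than guess."""
    passages = "\n".join(f"[{d.doc_id}] {d.text}" for d in retrieve(query, corpus))
    return (
        "Answer using ONLY the passages below. "
        "If they do not contain the answer, say so.\n\n"
        f"Passages:\n{passages}\n\nQuestion: {query}"
    )


if __name__ == "__main__":
    corpus = [
        Document("policy-7", "Refunds are issued within 14 days of purchase."),
        Document("policy-9", "Enterprise plans include on-premise deployment."),
    ]
    print(build_grounded_prompt("What is the refund window?", corpus))
```

The point of the sketch is the instruction to the model: the answer must come from the supplied passages or be refused, which is what separates a grounded response from a confident guess.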
Context is multi-layered. It has a data layer (enterprise documents), a semantic layer (relationships between entities), an operational layer (what an agent is permitted to do given the user’s role), and a temporal layer (time-sensitive information). Building comprehensive, correct context for AI agents requires a sophisticated engineering framework that integrates memory, permission, and retrieval layers to produce accurate outcomes.
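One way to picture those layers is as a single structured object assembled before every agent call. The sketch below is a plausible shape for such an object; the field names are assumptions made for illustration, not Vectara’s schema.

```python
# Illustrative shape for multi-layered agent context; field names are
# assumptions for this sketch, not a real Vectara schema.

from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class AgentContext:
    # Data layer: enterprise documents retrieved for this request.
    documents: list[str] = field(default_factory=list)
    # Semantic layer: known relationships between entities.
    entity_relations: dict[str, list[str]] = field(default_factory=dict)
    # Operational layer: tools this agent may call, given the user's role.
    allowed_tools: set[str] = field(default_factory=set)
    # Temporal layer: when the request is made and how fresh data must be.
    as_of: datetime = field(default_factory=datetime.now)
    max_staleness_days: int = 30
```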
A significant challenge is context’s dynamic nature. Mr. Shafaat noted that agent quality can “drift” and degrade over time, even when the instructions and the underlying LLM stay constant, because the context continuously evolves. AI agents must adapt alongside changing information to maintain performance and prevent costly errors. The consequences of those errors are far more severe for enterprises than for consumer chatbots, potentially causing reputational damage or monetary losses.
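One reasonable way to catch this kind of drift, though not one the talk prescribed, is to replay a fixed evaluation set on a schedule and alert when grounded accuracy falls below a recorded baseline:

```python
# Sketch of drift monitoring: replay a fixed evaluation set periodically
# and flag degradation. The agent and scoring function are placeholders.

from typing import Callable

EvalCase = tuple[str, str]  # (question, expected_answer)


def grounded_accuracy(
    agent: Callable[[str], str],
    eval_set: list[EvalCase],
) -> float:
    """Fraction of eval questions the agent currently answers correctly."""
    correct = sum(
        1
        for question, expected in eval_set
        if expected.lower() in agent(question).lower()
    )
    return correct / len(eval_set)


def check_for_drift(
    agent: Callable[[str], str],
    eval_set: list[EvalCase],
    baseline: float,
    tolerance: float = 0.05,
) -> bool:
    """Return True if quality has drifted below the recorded baseline."""
    return grounded_accuracy(agent, eval_set) < baseline - tolerance
```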
Agent hallucinations manifest as confidently incorrect answers, wrong tool calls, and a lack of determinism. A single erroneous answer can trigger a brand crisis, and a misdirected tool call could escalate into a security incident, which enterprises cannot tolerate. To mitigate these risks, Vectara employs “guardian agents” with hooks, rules (probabilistic or deterministic), and actions (allow, block, escalate to human). This system significantly reduces hallucinations.
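The talk described guardian agents only at the level of hooks, rules, and actions. The skeleton below is one plausible way to wire those pieces together in Python; every name in it is an assumption made for illustration, not Vectara’s API.

```python
# Plausible skeleton of a guardian-agent pipeline: a hook intercepts an
# agent output, rules score it, and an action is taken. All names here
# are illustrative assumptions, not Vectara's API.

from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class Action(Enum):
    ALLOW = auto()
    BLOCK = auto()
    ESCALATE = auto()  # hand off to a human reviewer


@dataclass
class Rule:
    name: str
    # Returns a risk score in [0, 1]; deterministic rules return 0.0 or 1.0,
    # probabilistic ones (e.g. a hallucination classifier) anything between.
    score: Callable[[str], float]


def guard(
    output: str,
    rules: list[Rule],
    block_at: float = 0.9,
    escalate_at: float = 0.5,
) -> Action:
    """Hook called on every agent output before it reaches the user."""
    risk = max(rule.score(output) for rule in rules)
    if risk >= block_at:
        return Action.BLOCK
    if risk >= escalate_at:
        return Action.ESCALATE
    return Action.ALLOW


# Example deterministic rule: flag answers that cite no source document.
no_citation = Rule("no-citation", lambda out: 0.0 if "[" in out else 1.0)

print(guard("Refunds take 14 days [policy-7].", [no_citation]))  # Action.ALLOW
```

The key design choice is that the hook runs on every output, so a single confidently wrong answer or misdirected tool call can be blocked or escalated before it ever reaches a user.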
Vectara’s platform provides precise context to AI agents for diverse enterprise applications, extending beyond basic chatbots. It addresses context and hallucination issues across various horizontal use cases, enabling advanced chatbots, engineering knowledge bases, and failure analysis systems. Flexible deployment options include on-premise, Virtual Private Cloud (VPC), or SaaS, accommodating enterprises with strict data sovereignty requirements.
Vectara is also model-agnostic, supporting integration with various cloud providers, proprietary models, or open-source alternatives, and it provides sophisticated context engineering techniques out of the box. The platform handles large, complex datasets, such as semiconductor IP data spanning text, images, and tables, so agents can process diverse information accurately. Mr. Shafaat concluded by highlighting the range of enterprises running Vectara in production, underscoring the platform’s versatility.
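In practice, model-agnosticism usually comes down to programming against a minimal interface rather than a specific vendor SDK. A hedged sketch of that pattern, with all class names assumed for illustration:

```python
# Sketch of a model-agnostic LLM interface: the platform codes against a
# minimal protocol, and each provider plugs in behind it. Names are
# illustrative assumptions, not Vectara's integration layer.

from typing import Protocol


class LLM(Protocol):
    def complete(self, prompt: str) -> str: ...


class OpenSourceLLM:
    """Stand-in for a locally hosted open-source model."""

    def complete(self, prompt: str) -> str:
        return f"(local model answering: {prompt[:40]}...)"


class CloudLLM:
    """Stand-in for a cloud provider's hosted model."""

    def complete(self, prompt: str) -> str:
        return f"(cloud model answering: {prompt[:40]}...)"


def answer(llm: LLM, grounded_prompt: str) -> str:
    """The rest of the pipeline never needs to know which model runs."""
    return llm.complete(grounded_prompt)
```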

