Why open source will win the agent era

Unlocking the Agent Era: Why Open Source AI is Poised for Dominance at Web Summit Vancouver 2026

(This article was generated with AI and is based on an AI-generated transcription of a real talk on stage. While we strive for accuracy, we encourage readers to verify important information.)

Mike Conover

Mr. Mike Conover, CEO of Brightwave, highlighted open-source AI’s pivotal role in the agent era. With 15 years in AI, he created Dolly, Databricks’ first open-source LLM, which demonstrated that instruction-tuned conversational ability was achievable in an open model and signaled open source’s potential to shape the evolving agent market.

He defined an agent as a language model that reasons in a continuous loop, invoking tools such as code execution and external APIs. Modern agents combine advanced reasoning with reinforcement learning to make complex decisions and pursue open-ended goals. Brightwave’s system, for instance, autonomously decomposes complex financial-analysis questions across thousands of sources.
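The loop described above can be sketched in a few lines. This is a minimal illustration, not Brightwave’s actual system: the model is a hard-coded stub, and the names (`call_model`, `TOOLS`, `run_agent`) are assumptions introduced for this example.

```python
# Minimal agent loop: a model "reasons" each turn, either requesting a
# tool call or returning a final answer. Here the model is a stub; in a
# real agent it would be an LLM call.

TOOLS = {
    "run_code": lambda expr: str(eval(expr)),    # toy code-execution tool
    "search": lambda q: f"results for {q!r}",    # toy API/search tool
}

def call_model(history):
    """Stub LLM: picks the next action based on the conversation so far."""
    if not any(m["role"] == "tool" for m in history):
        return {"action": "tool", "name": "run_code", "args": "2 + 2"}
    return {"action": "final",
            "answer": "The computation returned " + history[-1]["content"]}

def run_agent(goal, max_steps=5):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):                   # continuous loop, bounded for safety
        step = call_model(history)
        if step["action"] == "final":
            return step["answer"]
        result = TOOLS[step["name"]](step["args"])   # execute the requested tool
        history.append({"role": "tool", "content": result})
    return "step budget exhausted"

print(run_agent("What is 2 + 2?"))   # → The computation returned 4
```

Production agents differ mainly in scale: the stub becomes a reasoning model, the tool set grows, and the loop runs for many more steps.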

Another innovative application involves agents identifying recently sold homes, adding virtual pools to satellite photos, and sending personalized outreach, drastically cutting marketing costs. This highlights how open-source models offer unparalleled flexibility in tuning intelligence, speed, and cost, making powerful AI capabilities widely accessible.

While proprietary frontier models once dominated, open-source LLMs are now highly effective; models like Google’s Gemma can run locally on a MacBook at roughly 50 tokens per second. Mr. Conover anticipates hyperscale agents with greater parallelism and longer task horizons, driving a massive surge in compute demand in which cost and speed are paramount.

Open-source models, such as DeepSeek, are up to 35 times cheaper than proprietary alternatives like Opus 4.7, establishing a powerful economic force. Smaller open-source models also gain speed from reduced overhead, since operations can stay in-memory on a single GPU. This efficiency accelerates task completion and gives users more “reps” to learn from.
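To make the economics concrete, here is a back-of-envelope calculation. Only the “up to 35×” ratio comes from the talk; the per-token price and daily token volume below are hypothetical placeholders, not published rates.

```python
# Back-of-envelope cost comparison using the 35x ratio cited in the talk.
# The proprietary price and token volume are illustrative assumptions.

proprietary_cost_per_mtok = 15.00                 # hypothetical $ per 1M tokens
open_source_cost_per_mtok = proprietary_cost_per_mtok / 35

tokens_per_day = 2_000_000_000                    # a hypothetical agent fleet

daily_proprietary = tokens_per_day / 1e6 * proprietary_cost_per_mtok
daily_open = tokens_per_day / 1e6 * open_source_cost_per_mtok

print(f"proprietary: ${daily_proprietary:,.0f}/day")
print(f"open source: ${daily_open:,.0f}/day")
```

At hyperscale agent volumes, a constant-factor price gap like this compounds into a decisive line item, which is the economic force the talk describes.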

The intelligence gap between open-source and closed-source frontier models is closing rapidly, now estimated at 6 to 12 months. Mr. Conover predicts open-source models will reach Opus 4.6-level capability within a year. This convergence, combined with low switching costs from API compatibility, will fundamentally reshape language-model economics.
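The low-switching-cost point rests on the fact that most open-source serving stacks expose an OpenAI-compatible chat-completions schema, so moving providers is typically a base-URL and model-name change. A small sketch under that assumption (endpoints and model names below are illustrative, not real deployments):

```python
# Sketch of API compatibility: the same request body works against a
# hosted proprietary endpoint or a local open-source server, because both
# speak the OpenAI-style chat-completions schema. URLs and model names
# here are illustrative assumptions.

def chat_request(base_url, model, messages):
    """Build an OpenAI-style request, identical in shape for any provider."""
    return {
        "url": f"{base_url}/v1/chat/completions",
        "body": {"model": model, "messages": messages},
    }

msgs = [{"role": "user", "content": "Summarize this 10-K filing."}]

hosted = chat_request("https://api.example-proprietary.com", "frontier-model", msgs)
local = chat_request("http://localhost:8000", "deepseek-r1", msgs)

# Identical schema: only the endpoint and model identifier differ.
assert hosted["body"].keys() == local["body"].keys()
```

When swapping a model is a one-line configuration change rather than a rewrite, price and speed differences translate directly into provider churn.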

Beyond commercial advantages, geopolitical and national-security concerns drive open-source adoption. Nations prefer transparent, controllable models over reliance on foreign-developed AI that could exhibit biased or propagandizing behavior, a risk surfaced by “distillation attacks.” This matters most for sensitive applications such as drafting legislation or securing banking infrastructure.

For practical engagement, Mr. Conover recommends `llama.cpp` for local serving and `vLLM` for production-grade parallelism on NVIDIA GPUs. He also highlighted `Unsloth` for efficient 4-bit quantizations that reduce memory usage, and cautioned against unbenchmarked “AI slop” on Hugging Face. Tools for experimentation include `Claude Code`, `OpenClaw`, `Pi Mono`, `i.dev`, and the `Hermes agent`, which learns from user interactions. Optimizing the agent harness is key to performance.
