
The trust dividend: Why ethics is a competitive advantage
(This article was generated with AI and is based on an AI-generated transcription of a real talk on stage. While we strive for accuracy, we encourage readers to verify important information.)
At Web Summit Vancouver 2026, Ms. Paula Goldman, Salesforce’s Chief Ethical & Humane Use Officer, discussed her pioneering role. Salesforce established this office seven years ago to anticipate technology’s implications and ensure trust. Her team integrates ethical principles into product design, co-creates safeguards, and manages product accessibility.
The AI ethics conversation has shifted from academia to everyday discourse, driven by modern AI’s general-purpose nature. Ms. Goldman advocates a “human at the helm” approach, emphasizing active human control and accountability: understanding what is delegated to AI systems and fine-tuning them for responsible deployment.
Trust, she asserted, is a competitive advantage. Unethical AI poses significant business risks: reputational damage, regulatory fines, and lost customer opportunities. Examples such as chatbots giving illegal advice underscore the need to design technology that “goes right,” delivering reliable, positive outcomes for users and, in turn, business success.
Salesforce balances rapid innovation with ethical deployment through an iterative process. This includes pre-market reviews, adversarial testing, and continuous feedback loops. Risk functions also leverage AI, like an accessibility agent that automatically fixes many bugs, enhancing both speed and ethical compliance.
Salesforce designs its AI to facilitate ethical choices and deter misuse, incorporating built-in safeguards. An acceptable use policy, developed with an ethical use advisory council, sets a baseline for responsible product usage. This approach balances innovation with a strong ethical foundation, preventing unintended consequences.
Ms. Goldman’s book, “Manage the Machine,” explores human management of AI. She cited 1-800 Accountant, a Salesforce client, where agentic AI handles routine tasks, but complex financial advice is escalated to human experts. This illustrates designing intelligent hand-offs based on regulatory needs and customer value.
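The hand-off pattern described above can be sketched as a simple routing rule: the AI agent handles routine work, while regulated or high-complexity requests escalate to a human expert. This is a minimal illustrative sketch, not Salesforce’s or 1-800 Accountant’s actual implementation; all names, topics, and thresholds are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Request:
    topic: str
    complexity: int   # 1 (routine) .. 5 (complex); illustrative scale
    regulated: bool   # e.g., financial or legal advice

# Hypothetical set of tasks safe for full automation
ROUTINE_TOPICS = {"appointment_scheduling", "document_upload", "status_check"}

def route(request: Request) -> str:
    """Return 'agent' for routine work, 'human' when expertise is required."""
    if request.regulated:        # regulatory need: always escalate
        return "human"
    if request.complexity >= 4:  # high-stakes judgment call
        return "human"
    if request.topic in ROUTINE_TOPICS:
        return "agent"
    return "human"               # default to the safe path

print(route(Request("status_check", 1, False)))  # → agent
print(route(Request("tax_strategy", 5, True)))   # → human
```

The key design choice, echoing the talk, is that escalation criteria are explicit and default to human review, rather than letting the agent decide its own boundaries.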
Effective AI management requires understanding limitations, designing delegation, setting parameters, and verifying outputs. User involvement is critical: a hackathon with neurodivergent employees led to valuable Slack features, highlighting how diverse perspectives help create AI tools that meet real human needs.
A cautionary tale involved social workers rejecting AI for note-taking because it disrupted their debriefing rituals. This emphasizes that successful AI integration must account for existing human behaviors and social dynamics. The core message is to act as directors of technology: guiding AI, setting parameters, and fostering continuous feedback to build truly helpful technology.

