Should Salesforce launch its own foundation model?
Direct Answer
No. Salesforce should not build a proprietary foundation model. Four specific reasons:
- API leverage asymmetry: Salesforce already extracts disproportionate value from its Anthropic partnership (struck Q1 2025) at negotiated rates; building a model forfeits that leverage and ties $1B+ in capex to unproven ROI while Claude and Gemini improve faster than internal R&D cycles can.
- Talent war loss: Elite ML teams cluster around OpenAI, Anthropic, Google; poaching them requires 2-3x equity+cash, and Salesforce's brand as "enterprise plumbing" loses to frontier labs. You lose before you build.
- Domain-specific moat is fictional: A "CRM-optimized" model still needs general reasoning, coding, and long context, and frontier improvements on those fronts benefit everyone. Domain training data gives Salesforce no lasting edge; competitors get the same inference quality from frontier APIs six months later.
- Distraction tax: Engineering is already stretched maintaining Agentforce, Einstein, and the Atlas Reasoning Engine across a multi-vendor stack. An internal model team becomes a black hole: three years, $1B+, and zero incremental customer value versus simply choosing the best third-party inference at each moment.
Contingency: *If* API costs exceed 8% of gross profit by 2028, Salesforce should license model-infrastructure tooling (MosaicML/Databricks) to fine-tune and serve open-weight models in its own environment rather than build from scratch (Anthropic does not release weights for on-prem serving). A rough trigger check is sketched below.
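To make the contingency concrete, here is a minimal sketch of the trigger check. The $1.0B spend and $32B gross-profit figures are illustrative assumptions for the example, not Salesforce disclosures.

```python
# Illustrative check for the 8%-of-gross-profit contingency trigger.
# All dollar figures are assumptions for the sketch, not disclosures.

def api_cost_trigger(annual_api_spend: float,
                     annual_gross_profit: float,
                     threshold: float = 0.08) -> bool:
    """True once API spend crosses the contingency threshold."""
    return annual_api_spend / annual_gross_profit > threshold

# Hypothetical 2028 scenario: $1.0B API spend against $32B gross profit.
spend, gross_profit = 1.0e9, 32e9
print(f"API spend = {spend / gross_profit:.1%} of gross profit; "
      f"trigger fired: {api_cost_trigger(spend, gross_profit)}")
# -> API spend = 3.1% of gross profit; trigger fired: False
```

Even the memo's high-end $1B spend projection sits well under the 8% line at plausible gross-profit levels, which is why the contingency is an off-ramp, not the base case.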
The Case For Building
- Cost floor: Microsoft (Phi), Meta (Llama), and Amazon (Nova) cut inference costs 50–70% via proprietary training; Salesforce's projected $400M–$1B+ API spend by 2027 could drop to $100–200M with a captive model.
- Latency guarantee: Sub-100ms inference on CRM operations (lead scoring, forecasting, rep guidance) beats third-party API round-trips; Agentforce becomes genuinely real-time.
- Data moat narrative: Train on CRM-domain data only Salesforce owns (anonymized pipelines, deal playbooks, and forecast variance from 150,000+ customer orgs); frontier labs can't replicate it without customer consent.
- Strategic independence: The Anthropic partnership works until it doesn't; owning the model stack insulates Salesforce from API price shocks, breaking Claude version changes, or Anthropic pivoting upmarket.
- M&A option: A 40B-param CRM model becomes an acquisition target for private equity or a hyperscaler like Microsoft; a tangible asset on the balance sheet.
Why It Won't Happen
- CFO math fails: $1.2B in training cost plus $300M+ annual infra and $200M+ annual headcount is a $1.7B+ first-year commitment. A four-year payoff requires 6–8% margin improvement; the spreadsheet can't justify it to Marc Benioff (a rough break-even sketch follows this list).
- Talent cliff: Salesforce has no frontier ML recruiting brand. Ex-Meta, ex-OpenAI engineers don't move to CRM. You hire 2nd/3rd-tier talent, extend timeline to 4+ years, and lose the race.
- Partner lock-in works both ways: Anthropic's Q1 2025 deal likely includes volume commitments; Salesforce can't exit without penalty. Building in parallel risks contract breach and a legal quagmire.
- Frontier model cadence: New Claude, GPT, and Gemini generations ship every 12–18 months with 2–3x capability jumps. Salesforce's internal cadence is 24–36 months minimum. You'll launch a model that's already two generations stale.
- Board optics: Marc Benioff and the board watch $1B+ in R&D go to model training while core CRM UX stalls, customer acquisition costs rise, and the stock multiple compresses. An activist investor (Starboard, Elliott) launches a campaign in Year 2.
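To show why the spreadsheet fails, here is a rough break-even sketch using only figures already quoted in this memo; every number is an illustrative assumption, not a forecast.

```python
# Rough four-year break-even on the build path, using the memo's figures.
# All inputs are illustrative assumptions, not forecasts.

training_capex = 1.2e9   # one-time training cost
annual_infra   = 0.3e9   # serving and cluster opex per year
annual_people  = 0.2e9   # headcount for a frontier-adjacent team per year
years = 4

total_cost = training_capex + years * (annual_infra + annual_people)

# Best case from the cost-floor argument: API spend falls $1B -> $200M/yr.
best_case_savings  = years * (1.0e9 - 0.2e9)
# Worst case: $400M/yr falls to $100M/yr.
worst_case_savings = years * (0.4e9 - 0.1e9)

print(f"4-year cost:       ${total_cost / 1e9:.1f}B")         # $3.2B
print(f"Best-case savings: ${best_case_savings / 1e9:.1f}B")  # $3.2B
print(f"Worst-case:        ${worst_case_savings / 1e9:.1f}B") # $1.2B
```

Even the best case merely breaks even over four years, before pricing in the talent cliff or the risk of launching two generations stale.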
What Salesforce Should Actually Do
- Deepen Anthropic partnership: Lock in a 3-year volume discount (target 30% off published API rates), secure priority access to Claude 4.5 and successor releases, and embed Anthropic engineers in the Agentforce roadmap.
- License MosaicML/Databricks model infra: Use Databricks' Mosaic AI tooling to fine-tune open-weight models (Llama 3.1, Nemotron) for Salesforce-specific tasks (rep guidance, pipeline hygiene, forecast anomalies) without training from scratch. Cost: $50–100M, 12-month timeline.
- Build a CRM-inference optimization layer: Instead of building the model itself, invest $200M in optimized inference routing, prompt tuning, and retrieval-augmented generation (RAG) against Salesforce Data Cloud. Make the *application* layer the moat, not the weights.
- Play the long game: Sponsor open-source CRM fine-tuning benchmarks (an MMLU equivalent for enterprise). Attract academic partnerships and become the standard for CRM model eval. When you need proprietary inference later, you've already mapped the terrain.
- Hedge with multi-vendor: Don't bet Agentforce on Claude alone. Ship inference experiments with Gemini 2.0, use smaller fast models (Mistral, Phi-4) for on-prem/edge. Rotate which model is "primary" quarterly based on cost + capability.
- Acquire domain talent, not labs: Hire 15–20 ex-Anthropic, ex-OpenAI researchers as an "Agentforce Science" advisory council. $300M in equity and salary over 3 years, with none of the overhead of maintaining a parallel lab. They iterate on your prompts, evals, and fine-tuning, and keep you roughly a quarter ahead of the industry.
- Plan the 2028 off-ramp: By 2028, open-source models (Llama 4, Nemotron successors) may match Claude-4 quality at a tenth of the cost. Salesforce should position to flip to fine-tuned open weights plus managed inference (MosaicML, Replicate, Together AI). Build that optionality into the Agentforce architecture now: a model-agnostic inference interface (sketched after this list).
- Let OpenAI, Google compete for your money: Instead of building, wield Salesforce's $1B of spending power as leverage. OpenAI, Anthropic, and Google all want enterprise revenue; pit them against each other. Bidding wars yield better rates, priority support, and custom training runs.
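A minimal sketch of what that model-agnostic interface and quarterly rotation could look like follows. Provider names, per-token prices, and capability scores are placeholders; real adapters would wrap each vendor's SDK.

```python
# Minimal sketch of a model-agnostic inference interface with quarterly
# rotation by cost + capability. Prices and scores are placeholders.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    cost_per_mtok: float          # blended $ per million tokens (assumed)
    capability: float             # internal CRM-eval score, 0 to 1
    call: Callable[[str], str]    # adapter wrapping the vendor SDK

def score(p: Provider, cost_weight: float = 0.4) -> float:
    """Higher is better: capability, discounted by normalized cost."""
    return (1 - cost_weight) * p.capability - cost_weight * (p.cost_per_mtok / 100)

def pick_primary(providers: list[Provider]) -> Provider:
    """Re-run each quarter against fresh eval scores and negotiated rates."""
    return max(providers, key=score)

# Placeholder adapters -- real ones would call each vendor's API.
providers = [
    Provider("claude",   9.0, 0.92, lambda p: f"[claude] {p}"),
    Provider("gemini",   5.0, 0.88, lambda p: f"[gemini] {p}"),
    Provider("phi-edge", 0.5, 0.70, lambda p: f"[phi] {p}"),
]

primary = pick_primary(providers)
print(primary.name, "->", primary.call("Summarize this pipeline."))
```

The point of the abstraction is that rotating the primary model each quarter, or flipping to open weights in 2028, becomes a config change rather than an Agentforce rewrite.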
Decision Matrix
| Path | Cost | Timeline | Risk | Odds Salesforce pursues by 2027 |
|---|---|---|---|---|
| Build proprietary model | $1.2B capex + $300M/yr opex | 48+ months | High (talent, stale on launch) | 5% |
| Deepen Anthropic + license MosaicML infra | $100–150M | 18 months | Medium (vendor lock-in) | 70% |
| Acquire model-science talent + multi-vendor hedge | $300M + ongoing | Continuous | Low (option value) | 60% |
| Stay API-only, optimize routing/prompting | $50M | Continuous | Low (commodity) | 40% |
| Acquisition play (buy Hugging Face, Cohere) | $2–5B | 12 months | Very High (integration, culture clash) | 10% |
Bottom Line
Salesforce should not build a proprietary model. Instead: (1) lock in Anthropic rates at a 30% discount with a 3-year volume commitment, (2) license Databricks/MosaicML infra to fine-tune open weights for CRM inference, (3) hire frontier-lab researchers as advisors rather than rebuilding a lab in-house, (4) prepare for a 2028 flip to open-source inference via Databricks/Replicate as Llama and Nemotron converge on Claude quality. The $1B+ Salesforce saves by *not* building goes to customer acquisition, Einstein platform breadth, and Agentforce product speed. In CRM, product velocity beats model training every quarter.