In Copilot Researcher, users can now run the same prompt across providers and compare outputs inside Word, Excel, PowerPoint, Outlook, or Teams—without leaving their workflow. This enables side-by-side testing for reasoning, summarization, or safety-sensitive content.
In Copilot Studio, admins and makers can select Anthropic or OpenAI when composing automations and agents. Tenant-level controls in the Microsoft 365 admin center allow IT to enable or block Anthropic under “AI providers for other large language models.” Notably, Anthropic models are hosted on AWS, yet surfaced in Microsoft’s stack.
Why it matters: Microsoft is formalizing a portfolio approach, reducing single-vendor risk, accelerating feature parity, and letting customers pick the best model for the job—from long-context analysis to cautious reasoning. It also aligns procurement with governance demands for choice and control.
Near-term impact: Knowledge workers gain flexible prompting—try Claude or OpenAI per document or research task. Ops teams can route reasoning-heavy steps to Claude in Studio while keeping OpenAI for other tasks. Admins retain governance knobs to allowlist or revoke providers at any time.
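The routing pattern described above can be sketched in a few lines. This is a hypothetical illustration only: the names `TASK_ROUTES`, `route_model`, and the provider identifiers are invented for this sketch and are not real Copilot Studio APIs; the actual routing happens inside Studio's maker UI and tenant policy settings.

```python
# Hypothetical sketch of per-task provider routing with a tenant allowlist.
# All names here are invented for illustration, not Copilot Studio APIs.

TASK_ROUTES = {
    "deep_reasoning": "anthropic/claude",  # reasoning-heavy steps go to Claude
    "long_context": "anthropic/claude",    # long-context analysis
    "summarization": "openai/gpt",         # other tasks stay on the default
    "drafting": "openai/gpt",
}

def route_model(task_type: str, allowed_providers: set[str]) -> str:
    """Pick a model for a task, falling back to the default provider
    when tenant policy has blocked the preferred one."""
    preferred = TASK_ROUTES.get(task_type, "openai/gpt")
    provider = preferred.split("/")[0]
    if provider in allowed_providers:
        return preferred
    return "openai/gpt"  # preferred provider revoked by the admin

# Tenant allows both providers: reasoning routes to Claude.
print(route_model("deep_reasoning", {"anthropic", "openai"}))  # anthropic/claude
# Admin has revoked Anthropic: the same task falls back to OpenAI.
print(route_model("deep_reasoning", {"openai"}))  # openai/gpt
```

The fallback branch mirrors the governance knob the article describes: revoking a provider at the tenant level should degrade gracefully rather than break existing automations.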
Strategy shift: Multi-model becomes the front-line experience, not just an Azure back-end option. Shipping AWS-hosted Claude inside 365 signals “best tool wins.” OpenAI stays central, but Copilot increasingly resembles a model marketplace.
Moving forward, enterprises will seek clarity on data boundaries, logging, and retention, plus cost/latency transparency and feature roadmaps. To enable, go to Microsoft 365 admin → Copilot → Settings → AI providers, connect Anthropic, and govern access as needed.