On April 22, 2026, OpenAI launched Workspace Agents in ChatGPT and announced the deprecation of Custom GPTs for organizations. The change sounds like a feature rename. It is not. It is a structural rewrite of how OpenAI expects businesses to deploy AI inside teams, with implications for your SaaS stack, vendor strategy, and AI budget.
What OpenAI Actually Shipped
Two things happened in the same announcement, and both matter.
First, OpenAI introduced Workspace Agents as a research preview for ChatGPT Business, Enterprise, Edu, and Teachers plans. Workspace Agents are powered by Codex. They run in the cloud, persist across sessions, hold memory between runs, can be scheduled or triggered by events, and act on third-party SaaS through native connectors for Slack, Google Drive, Microsoft apps, Salesforce, Notion, and Atlassian Rovo. Multiple team members can share a single agent and refine it through normal conversation.
Second, per VentureBeat reporting on April 22, 2026, OpenAI confirmed that Custom GPTs will be deprecated for organizations on a date still to be announced. Existing Business, Enterprise, Edu, and Teachers customers will be required to migrate their GPTs into Workspace Agents. Pricing is free until May 6, 2026, after which a credit-based model begins.
These are not two adjacent product updates. They are one strategic move.
Why the Custom GPT Deprecation Matters More Than the Launch
When Custom GPTs launched in late 2023, a lot of organizations adopted them as their first formal AI integration. HR teams built onboarding GPTs. Sales teams built proposal GPTs. Compliance teams built policy GPTs. The pattern was the same everywhere: a custom system prompt, some uploaded documents, a recognizable name and avatar, and a permissioned audience inside the company. For most of those teams, the GPT was the AI rollout. It was what their users interacted with day to day.
Our take: The deprecation is more important than the new launch. A new product is optional. Migration is mandatory. If your business standardized on Custom GPTs over the last two years, the next ninety days will involve an audit, a migration, and a rethink of how AI plugs into your team workflows. That is an unplanned project that just landed on your roadmap whether you wanted it or not.
What Workspace Agents Actually Do Differently
The simplest way to describe the difference is that Custom GPTs were chat configurations, and Workspace Agents are workers.
Background execution. Workspace Agents continue running after the user closes the browser. A Custom GPT only existed inside an active chat. That is the difference between asking an assistant to do something and asking it to be on call.
Native SaaS actions. Custom GPTs could read documents and call generic Actions. Workspace Agents have first-party connectors that read and write inside Slack channels, Salesforce records, Notion docs, Microsoft 365, and Atlassian Rovo. That moves AI from a thinking layer into an acting layer in the systems your team already uses.
Team sharing and persistence. Multiple team members share an agent and the agent learns from interactions across the team. It can be guided in conversation, and improvements persist across runs. Custom GPTs were largely static after the prompt was written.
Scheduling and triggers. Agents can be scheduled, run on triggers, or wake themselves up to continue work across days. That is how you start to replace recurring human tasks rather than augmenting individual sessions, and it is the same direction we covered in our analysis of Codex computer use and workflow automation.
Admin controls. A Compliance API gives admins visibility into every agent's configuration, updates, and runs. ChatGPT Enterprise and Edu admins can control which connected tools and actions specific user groups access, plus who can build or share agents. Custom GPTs offered far less granular oversight.
The capability gap is large. The deprecation is therefore not a rebrand.
What Businesses Should Actually Do
Once a vendor announces a deprecation, the right response is rarely panic and rarely indifference. It is a structured migration plan with a few checkpoints.
- Inventory your existing Custom GPTs. Pull a list of every GPT in the workspace, who owns it, what data it touches, and how heavily it is used. Anything used daily by a team of more than five people is operationally critical and goes to the top of the migration list.
- Score migration value before lifting and shifting. Some Custom GPTs were experiments that nobody actually adopted. Do not migrate those. The deprecation window is an opportunity to retire the half-finished projects that accumulated in the early ChatGPT for business era.
- Pilot one or two agents under tight scope. Pick a workflow that benefits from background execution and SaaS actions, not a chat that summarizes a doc. Examples: a sales agent that drafts follow-ups in Salesforce, a support agent that triages tickets in Slack, a finance agent that reconciles vendor records across Notion and the ERP. Run it with a small group of users and measure cycle time and error rate against the human baseline.
- Set up admin and compliance plumbing first. Before broadly rolling out, configure Compliance API monitoring, set permission boundaries on integrations, and decide who in the organization is allowed to build versus consume agents. If you have not already operationalized the controls in our governance framework for growing companies, this is the moment.
- Forecast credit consumption. The pricing model is shifting from flat seat to credits. Background-running agents that operate across days can consume far more than an interactive chat. Pull usage telemetry from the free preview before May 6, 2026 so the budget request that goes to finance is grounded in actual data.
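The inventory-and-scoring steps above can be sketched as a small triage script. Everything here is illustrative: the record fields, the example GPT names, and the five-user threshold come from the checklist, not from any OpenAI export format or API.

```python
from dataclasses import dataclass

# Hypothetical inventory record for one Custom GPT; the field names are
# illustrative, not taken from any real OpenAI workspace export.
@dataclass
class GPTRecord:
    name: str
    owner: str
    data_sources: list[str]
    daily_users: int
    adopted: bool  # did anyone actually use it past the pilot?

def migration_priority(gpt: GPTRecord) -> str:
    """Apply the checklist rules: retire unadopted experiments, and
    fast-track anything used daily by more than five people."""
    if not gpt.adopted:
        return "retire"       # cleanup window: do not migrate dead experiments
    if gpt.daily_users > 5:
        return "critical"     # top of the migration list
    return "review"           # score value before lifting and shifting

# Placeholder inventory for illustration.
inventory = [
    GPTRecord("Onboarding GPT", "hr@", ["handbook"], daily_users=12, adopted=True),
    GPTRecord("Proposal GPT", "sales@", ["crm-export"], daily_users=3, adopted=True),
    GPTRecord("Policy GPT", "legal@", ["policies"], daily_users=0, adopted=False),
]

for gpt in sorted(inventory, key=lambda g: g.daily_users, reverse=True):
    print(f"{gpt.name}: {migration_priority(gpt)}")
```

The real version would pull the inventory from workspace admin exports and usage telemetry rather than a hand-written list, but the triage logic stays this simple.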
For organizations whose team-level workflow automation strategy was anchored in Custom GPTs, this is a meaningful re-architecture, not a configuration change. The agents replace not just a UI but also the assumptions about how AI sits inside your team operations.
How This Reshapes the AI Vendor Map
The Workspace Agents launch puts OpenAI in direct competition for the same enterprise seat that Microsoft Copilot, Google Gemini Enterprise, and Anthropic's Claude for Work are fighting for. Each is now offering the same conceptual product: shared, permissioned agents that act inside the SaaS your team already uses.
That has two implications for buyers.
Lock-in risk is rising in agent platforms. A Workspace Agent built on Codex with deep Slack and Salesforce integrations is not a portable artifact. The prompt logic might be portable, but the connector setup, permission graph, memory state, and admin telemetry are platform-specific. Choosing an agent platform in 2026 looks more like choosing an iPaaS or a CRM than choosing an LLM API.
Model strategy and agent platform strategy are now separate decisions. A year ago you could pick a model and run everything through it. With Workspace Agents, you are also choosing a workflow runtime, a permissioning model, an admin surface, and a billing meter. Some teams will end up with Workspace Agents on OpenAI for one set of jobs and Claude or Gemini agents for another. We covered the underlying vendor landscape shakeup in detail.
The decision posture is no longer "what is the best model" but "which agent platform do we standardize on, and where do we hold optionality."
What Not to Do
Do not migrate everything by default. A deprecation announcement is also a forced cleanup window. Use it.
Do not let a single owner build and approve their own agents. Workspace Agents take real actions in real systems. Separate the build role from the deploy role, and require admin sign-off on any agent that touches customer data, financial records, or external messaging.
Do not assume free preview pricing reflects production economics. Credit-based pricing for background agents has a long tail. Track usage in the preview window, model out three or four scenarios for steady-state load, and put a guardrail in your AI budget that triggers an internal review if monthly credits exceed the forecast.
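The scenario modeling and budget guardrail described above can be sketched as follows. All numbers are placeholders: the per-agent daily credit figures stand in for telemetry you would pull from the preview window, and the multipliers and approved forecast are assumptions to be replaced with your own.

```python
# Sketch of a credit-consumption forecast with a review trigger.
# preview_daily_credits stands in for telemetry pulled during the free
# preview window; agent names and figures are placeholder assumptions.
preview_daily_credits = {
    "sales-followup-agent": 180,
    "support-triage-agent": 320,
    "finance-recon-agent": 95,
}

# Three or four steady-state scenarios, per the checklist.
SCENARIOS = {
    "steady_state": 1.0,      # preview usage continues as-is
    "wider_rollout": 2.5,     # more teams onboarded
    "background_heavy": 4.0,  # agents running across days
}

MONTHLY_FORECAST = 25_000  # credits approved by finance (placeholder)

daily_total = sum(preview_daily_credits.values())
for name, multiplier in SCENARIOS.items():
    monthly = daily_total * 30 * multiplier
    flag = "REVIEW" if monthly > MONTHLY_FORECAST else "ok"
    print(f"{name}: {monthly:,.0f} credits/month [{flag}]")
```

The point is not the arithmetic but the trigger: any scenario that breaches the approved forecast should open an internal review before the bill does it for you.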
Key Takeaways
- OpenAI launched Workspace Agents in ChatGPT on April 22, 2026 for Business, Enterprise, Edu, and Teachers plans, powered by Codex.
- OpenAI is deprecating Custom GPTs for organizations and will require migration to Workspace Agents on a date still to be announced.
- Workspace Agents add background execution, native SaaS actions, team sharing, scheduling, persistent memory, and admin controls including a Compliance API.
- Pricing is free until May 6, 2026, then credit-based. Budget impact differs from flat seat Custom GPT pricing.
- The competitive picture pits OpenAI against Microsoft Copilot, Google Gemini Enterprise, and Claude for Work, raising the stakes on agent platform selection.
Not sure where Workspace Agents fit in your AI roadmap? Book a discovery call and we will help you figure that out, no strings attached.