EdgeBytes: Frontier and Cowork: The Rise of AI Workforces and the Next Enterprise Architecture Paradigm | 3.11.26
Hi everyone, welcome to another episode of EdgeBytes from The Enterprise Edge! For the past year, enterprise AI has largely been framed as a feature story.
​
Every enterprise software vendor now has AI embedded somewhere in their stack.
ERP vendors have AI copilots.
CRM vendors have AI assistants.
Collaboration platforms have AI summarization tools.
​
But something much bigger is starting to emerge.
The next wave of enterprise AI isn’t about features.
It’s about AI workforces.
​
My name is Mark Vigoroso, founder and CEO of The Enterprise Edge, and I’d like to add another few cents to the “SaaS won’t be eaten by AI” argument.
​
Two developments in particular have done a decent job of spooking the public markets regarding the future of SaaS in the enterprise AI era. These developments are, of course, OpenAI's Frontier and Anthropic's Claude Cowork.
​
Both are designed to move AI beyond answering questions…
and into actually doing work.
​
Let’s start with OpenAI’s Frontier initiative, which is fundamentally about enterprise orchestration.
Frontier provides a platform where organizations can deploy AI systems that:
analyze enterprise data, execute multi-step workflows, interact with internal tools, and operate across systems.
Think of it less like a chatbot and more like a digital operations layer.
The goal is to allow enterprises to create AI workers that can operate across the entire technology environment — not just within one application.
​
This is important because the biggest constraint in enterprise AI today isn’t model intelligence.
It’s fragmentation. Data lives in one system. Processes live in another. Decisions happen somewhere else.
​
Frontier is designed to operate above that complexity, orchestrating work across systems.
OpenAI launched Frontier with Fortune 500 launch partners including Intuit, State Farm, Uber, Thermo Fisher Scientific, HP, and Oracle, with additional pilots underway at companies like Cisco and T-Mobile.
​
An example use case is deploying Frontier agents inside financial and accounting workflows to analyze financial data, support forecasting, and assist in preparing financial insights across an ecosystem.
​
Early results from Frontier adopters include manufacturing process optimization cycles shrinking from six weeks to one day, financial services sales teams gaining 90% more time for customer engagement, and energy companies increasing output by about 5% after AI-driven process optimization.
​
Now, Anthropic’s Claude Cowork takes a slightly different approach.
Instead of focusing primarily on orchestration infrastructure, Cowork is designed to function more like a knowledge worker.
It can read documents, manipulate files, execute tasks, and plan multi-step work sequences.
In practice, Cowork looks less like an infrastructure layer and more like a digital analyst or digital associate.
You give it an outcome, like "Prepare the financial briefing," "Analyze the contract portfolio," or "Summarize the regulatory changes."
And it executes the work. Early Cowork pilot use cases include drafting internal reports, legal document review, compliance analysis, proposal generation, and product research.
​
So the distinction becomes clear. Frontier is closer to an AI operating system. Cowork is closer to an AI employee.
OK, so here's the critical enterprise question. How do these systems relate to the AI already embedded inside enterprise applications?
​
Because every major platform — SAP, Salesforce, Oracle, Workday, Microsoft — is already building AI agents directly into their products. The answer is that these two approaches operate at different architectural layers.
Embedded AI agents inside enterprise applications are domain-specific.
They operate inside a particular system.
A finance AI inside ERP. A sales AI inside CRM. A service AI inside a ticketing platform.
Frontier-style systems operate across those applications.
They orchestrate workflows that span multiple systems.
And Cowork-style systems operate above both, acting as general digital workers that interact with multiple environments the way humans do.
​
So rather than replacing embedded enterprise AI immediately, these platforms will likely sit on top of them. At least initially.
ServiceNow is arguably the only exception among the big enterprise ISVs, as they continue to position from a cross-functional workflow perspective with capabilities like their AI Control Tower.
​
This is where the Value Physics framework becomes important. It's a framework we've recently developed to help deconstruct how the best companies accelerate and sustain tangible value from their enterprise AI investments.
The fundamental premise of Value Physics is that AI value moves through organizations the same way physical systems behave.
There are forces that accelerate value. And forces that slow it down.
Friction from legacy systems.
Organizational mass from governance complexity.
And velocity — the speed at which organizations can move from investment to outcome.
​
The core equation captures that dynamic:
Enterprise AI Value = [(Acceleration − Friction) / Mass] × Velocity.
And what Frontier and Cowork are both trying to do is reduce one of the biggest sources of friction in enterprise AI: integration.
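To make the dynamic concrete, here's a minimal sketch of the equation in code. The function name and all the input numbers are purely illustrative assumptions, chosen only to show how the terms interact, not measurements from any real deployment:

```python
# Value Physics, as stated above:
#   Enterprise AI Value = ((Acceleration - Friction) / Mass) * Velocity
# All figures below are hypothetical, for illustration only.

def enterprise_ai_value(acceleration: float, friction: float,
                        mass: float, velocity: float) -> float:
    """Net value once friction and organizational mass are accounted for."""
    return ((acceleration - friction) / mass) * velocity

# Baseline: strong model capability, but heavy integration friction.
baseline = enterprise_ai_value(acceleration=10, friction=6, mass=4, velocity=2)

# Same organization after an orchestration layer cuts friction in half.
reduced_friction = enterprise_ai_value(acceleration=10, friction=3, mass=4, velocity=2)

print(baseline)          # 2.0
print(reduced_friction)  # 3.5
```

Note that nothing about the model got smarter between the two scenarios; only the friction term changed, and the delivered value still rose by 75%. That's the structural bet both platforms are making.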
​
Today, most AI pilots fail not because the models are weak.
They fail because connecting AI to real workflows inside complex organizations is extremely difficult.
These new platforms from OpenAI and Anthropic attempt to remove that friction.
​
So which approach wins? The answer is probably all of the above, but in different roles.
Frontier-style platforms will likely address enterprise orchestration. They’re designed for large-scale system integration, where AI coordinates work across dozens of applications.
​
Cowork-style systems may address knowledge work augmentation. They function more like digital professionals — analysts, researchers, operations associates.
And embedded AI agents inside enterprise applications will continue to dominate system-specific automation. Finance tasks inside ERP. Customer service tasks inside CRM. Operational workflows inside ITSM systems.
​
So the architecture that’s emerging looks something like this:
Embedded AI → Platform AI → Workforce AI.
Three layers of intelligence. Each solving a different problem.
But the most important question isn’t which model is smarter.
It’s which architecture allows organizations to move from AI investment to measurable business value faster.
​
That’s the central idea behind Value Physics.
Enterprise transformations fail not because the technology doesn’t work.
They fail because the conditions for value capture and sustainment were never established.
Too much friction. Too much organizational mass. Too little velocity.
​
Frontier and Cowork are early attempts to solve that structural problem, and to make AI deployable inside real enterprises at scale.
Let's distill this down to three rules of thumb that are directionally useful, at least for now!
​
1. Use embedded AI for system-native optimization.
Start with the AI already embedded in your enterprise platforms—ERP, CRM, ITSM, and industry applications—because it has the closest access to transactional data and process context. Embedded AI delivers the fastest initial speed-to-value for domain-specific automation.
​
2. Deploy platform AI to orchestrate cross-system workflows.
Consider platforms like Frontier when processes span multiple systems—sales-to-cash, procure-to-pay, claims processing, or incident response. Platform AI reduces integration friction by coordinating workflows across applications rather than forcing humans to stitch them together.
​
3. Consider workforce AI to amplify knowledge work.
Tools like Cowork are best used as digital analysts, researchers, and coordinators that accelerate human-led work—strategy, reporting, analysis, and planning. Workforce AI should augment decision-making, not replace the domain systems where operational execution happens.
​
We continue to enter new phases of the enterprise AI era at an unprecedented rate. As of now, we’re looking at not just software platforms. Not just automation tools.
But AI workforces operating inside digital enterprises.
Some will be embedded in applications. Some will orchestrate across systems. And some will operate like digital coworkers.
​
The organizations that win won’t be the ones experimenting with the most AI.
They’ll be the ones that understand how value actually moves through their organization — and remove the forces slowing it down.
Because in the end, as Value Physics reminds us: The technology doesn’t determine the outcome. The physics does.
​
That’s all for now. Thank you for being with us. Would love to hear your reactions, experiences, and other thoughts. Leave a like, share this video or drop a comment below. See you on the next episode of EdgeBytes. Signal over noise.
