Work as we know it is entering a new phase. AI agents are moving from experimentation to practical use, and the shift is changing how teams plan, execute, and measure work. The focus is now on reducing friction, consolidating information, and giving employees systems that support judgment rather than replace it. This was the core message in Pasquale DeMaio’s Innovation Talk at AWS re:Invent 2025.
He outlined how organizations can deploy AI assistants that improve experience, raise productivity, and operate with the security and trust large enterprises require. As an AWS Generative AI Competency Partner, GoML sees these patterns across the enterprises we work with.
The companies that succeed with agentic AI follow a few steady principles. Below is a practical framework based on what works in the field and what AWS highlighted in the session.
A New Operating Model for AI Agents
Most enterprises operate inside fragmented systems. Data lives in multiple platforms. Processes depend on institutional memory. Employees spend significant time retrieving information rather than using it.
AI agents solve this only when they are fed the right data, sit in the right workflow, and operate with the right oversight. Amazon Q Business and QuickSight follow this model, which is why they were central to the session discussion. They are not “another tool.” They function as a layer that sits across tools.
Principle 1: Use Enterprise Data with Precision
AI agents need broad and accurate data access. When they can interpret documents, tickets, dashboards, chats, and activity logs, they shift from being informational tools to operational ones.
This enables simple outcomes that matter:
- Answers that reflect the organization’s context
- Faster retrieval of relevant documents
- Consolidated insights that cut across systems
- Reduced dependency on domain experts for basic information
GoML builds most enterprise AI workflows with this principle in mind. Strong data foundations create strong agents. Our work on AI content publishing agents, SDLC agents, and decarbonisation assistants offers strong cases in point.
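To make the principle concrete, here is a minimal sketch of consolidating documents from several systems behind one retrieval function. The sources, corpus, and keyword-overlap scoring are all illustrative assumptions, not the retrieval method AWS or GoML uses; production systems would use embeddings and connectors such as those in Amazon Q Business.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str   # hypothetical systems: "wiki", "tickets", "chat"
    text: str

# Illustrative corpus spanning fragmented systems
corpus = [
    Document("wiki", "Refund policy covers refunds within 14 days of purchase"),
    Document("tickets", "Ticket 4521: customer asked about refund window"),
    Document("chat", "Reminder: quarterly review is on Friday"),
]

def retrieve(query: str, docs: list[Document], top_k: int = 2) -> list[Document]:
    """Score documents by keyword overlap with the query; return best matches."""
    terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

hits = retrieve("refund policy window", corpus)
for h in hits:
    print(h.source, "->", h.text)
```

The point of the sketch is the shape of the layer: one query, answers drawn from wiki, ticketing, and chat data at once, which is what turns an informational tool into an operational one.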
Principle 2: Automate with Clear Lines of Ownership
Automation only works when teams trust the system. That trust comes from a clear division of work:
- AI handles preparation, summarization, and routine tasks
- Humans handle judgment, decisions, and escalation
The session emphasized this balance. It keeps risk low. It also keeps adoption high. When people understand what the AI is responsible for, they use it more consistently and more confidently.
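The division of work above can be sketched as a simple routing rule: the agent executes what it owns and escalates the rest. The `risk` labels and action names are assumptions for illustration; real systems would derive them from policy, not a hand-set field.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    risk: str  # assumed labels for this sketch: "routine" or "judgment"

def route(action: Action) -> str:
    """AI handles routine preparation; anything needing judgment goes to a human."""
    if action.risk == "routine":
        return f"AI executed: {action.description}"
    return f"Escalated to human: {action.description}"

print(route(Action("summarize weekly tickets", "routine")))
print(route(Action("approve $50k refund", "judgment")))
```

Making the boundary explicit in code (or configuration) is what lets teams audit it, which is where the trust comes from.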
Principle 3: Build Workspaces People Actually Use
An AI assistant is only effective if employees use it daily. That depends on the workspace, not the model.
The modern AI workspace needs to:
- Support natural language interaction
- Bring data, tasks, and analysis into one view
- Allow actions directly from the interface
- Integrate with the systems employees already depend on
Companies like AstraZeneca, BMW, and 3M—whose examples were highlighted across multiple re:Invent sessions—benefit from this approach. Their adoption rates improved because teams did not need to change the way they worked. The AI agent simply fit into it.
In GoML deployments, this is often where the real productivity gains appear. A well-designed workspace reduces context switching and raises output without requiring new behavior. Our work on patient health assessment and AI hedge fund data analytics applied these principles to great effect.
Principle 4: Measure Productivity at the Workflow Level
The session made one point clear: AI should prove value. Organizations need reliable measurement, not abstract claims.
Productivity benchmarks that matter include:
- Time saved in information retrieval
- Reduction in manual steps
- Quality consistency in outputs
- Rate of work completion
- Impact on dependent teams
These metrics expose bottlenecks and show where scaling makes sense. They also help leadership understand which agents drive real value and which need refinement.
AWS referenced multiple organizations that have adopted AI agents at scale:
- AstraZeneca uses agents to accelerate research and reduce repetitive data gathering.
- BMW deploys agents across engineering and operational workflows to improve throughput and reduce costs.
- 3M tackles application sprawl and improves sales readiness with unified workspaces.
- Priceline improves customer service operations through guided flows, transcription, and automated call summaries.
The common pattern: targeted use cases, fast pilots, and clear measures of success before expansion.
GoML’s experience mirrors this. The strongest results come from teams that start narrow, validate impact, and scale intentionally.
A Practical Way to Start with Agentic AI
Organizations do not need a large program. They need one workflow that benefits from better context, less manual effort, and a clear chain of ownership.
A simple entry plan:
- Select a workflow with recurring manual work
- Expose the right enterprise data to the agent
- Deploy an assistant inside the team’s existing workspace
- Measure improvements in time, quality, and consistency
- Expand to adjacent workflows once the value is evident
This mirrors the guidance from the session and the AI Matic approach we follow at GoML for keeping risk low and results visible.
AI agents are entering a phase of steady, compounding improvement. The systems that succeed will be the ones built with clean data pipelines, meaningful human oversight, functional workflows, and well-defined metrics.
AWS is investing heavily in this direction. Enterprises are already validating the model. The opportunity now is to build with discipline.
GoML supports organizations moving through this transition. Not with slogans or sweeping promises, but with practical implementations grounded in workflow design, data quality, and measurable outcomes.