- Dr. Yunguo Yu, VP of AI Innovation and Prototyping
This is Part 3 of a 5-part series on Building the Orchestrated AI Foundation for Healthcare. This series addresses the $7.5-15M annual cost of AI fragmentation in healthcare. Part 1 explored the problem; Part 2 examined Microsoft’s agentic AI research. This part details the technical foundation required for production deployment.
Quick Summary: Making agentic AI work in production requires two foundational layers: Data Orchestration (unified data fabric) and Agent Orchestration (collaborative intelligence). These layers eliminate fragmentation by ensuring agents have unified data access and can coordinate like a clinical team.
Series Context: The Four-Layer Architecture
Healthcare organizations face a critical challenge: dozens of AI point solutions that don’t communicate, creating data silos, security risks, and clinician burnout. The solution is an orchestrated AI infrastructure built on four integrated layers:
- Data Orchestration – Unified data fabric connecting all sources
- Agent Orchestration – AI agents that collaborate, not compete
- Governance & Compliance – Built-in safety and auditability
- Workflow Integration – Seamless embedding into clinical workflows

Figure 1. This diagram illustrates how the five doctor personas collaborate through the orchestration layer, with each agent as an independent module coordinated by a workflow engine.
This part focuses on the first two layers—the technical foundation that makes everything else possible.
Why These Two Layers Are Foundational
Microsoft’s research validated agentic AI—multiple specialized agents collaborating like a clinical team, achieving 80% accuracy on complex diagnostic cases. If Part 2 proved that agentic AI works, Part 3 explains how to build the infrastructure that makes it work in production. Making agentic AI work in real healthcare environments requires two foundational layers that must be built together:
- Layer 1: Data Orchestration – The nervous system that connects all data sources
- Layer 2: Agent Orchestration – The brain that coordinates intelligence
These layers are interdependent. Agents can’t collaborate effectively without unified data, and unified data is useless without orchestrated agents to act on it. Build them separately, and you’ll have the same fragmentation problem you started with.
Executive Snapshot: Why This Matters Today
Agentic AI fails without unified data and orchestrated decision-making. These two layers directly reduce fragmentation costs, eliminate redundant testing, accelerate clinical decisions, and allow AI investments to scale safely across service lines. They represent the infrastructure required for AI to deliver measurable ROI in payer and provider operations.
Layer 1: Data Orchestration (The Nervous System)
The Technical Foundation
Healthcare data lives everywhere: EHRs, labs, imaging systems, social determinants of health (SDoH) platforms, wearables, genomics databases.
Fifty-plus data sources, each with its own API, schema, and latency. Some update in real time; others batch nightly. Some use FHIR R4; others use proprietary formats. Some are cloud-native; others are legacy on-premise systems.
The solution is a Unified Clinical Data Fabric: a real-time, FHIR-native orchestration layer (FHIR is the standard healthcare data-exchange format) that agents can query and update with sub-second latency. This isn't a data warehouse or a data lake; it's a live, queryable layer that agents access in real time.
Key Components:
- Real-Time Data Streaming: Sub-second FHIR streaming ensures agents always act on the most recent patient status.
- Schema Normalization: Automatically translates heterogeneous formats (HL7, FHIR, proprietary) into a unified schema in real-time.
- Identity Resolution: Resolves patient identity conflicts across systems (e.g., matching “John Smith” in EHR with “J. Smith” in labs).
- Synthetic Data Engine: Generates plausible, de-biased responses for missing data points rather than failing silently.
- Data Quality Layer: Validates completeness, accuracy, and timeliness before agents consume data.
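The identity-resolution component above can be illustrated with a minimal sketch. The `PatientRecord` type, the match keys, and the birth-date rule are all assumptions for illustration; production systems use probabilistic matching across many more attributes (address, MRN, phone).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PatientRecord:
    source: str       # e.g., "ehr", "lab" (illustrative)
    full_name: str
    birth_date: str   # ISO date

def name_keys(full_name: str) -> set[str]:
    """Generate simple match keys: full name and initial-plus-surname."""
    parts = full_name.lower().split()
    if len(parts) < 2:
        return {full_name.lower()}
    first, last = parts[0], parts[-1]
    return {f"{first} {last}", f"{first[0]}. {last}"}

def same_patient(a: PatientRecord, b: PatientRecord) -> bool:
    """Match only when birth dates agree and at least one name key overlaps."""
    return (a.birth_date == b.birth_date
            and bool(name_keys(a.full_name) & name_keys(b.full_name)))

# "John Smith" in the EHR and "J. Smith" in the lab feed resolve to one patient.
ehr = PatientRecord("ehr", "John Smith", "1970-04-02")
lab = PatientRecord("lab", "J. Smith", "1970-04-02")
```

Here `same_patient(ehr, lab)` returns `True` because the initialized key "j. smith" is shared and the birth dates match; a differing birth date would block the match even with identical names.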
Implementation Challenges
Building a unified data fabric isn’t trivial. Health systems face several challenges:
Challenge 1: API Heterogeneity
- Different EHRs expose different APIs
- Some vendors throttle API access
- Legacy systems may not have APIs at all
Solution: Build vendor-agnostic abstraction layers. Use FHIR R4 as the standard, but maintain adapters for proprietary formats. For legacy systems without APIs, consider hybrid approaches with vendor partnerships or middleware solutions.
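One way to sketch the vendor-agnostic abstraction layer is an adapter interface where every source, FHIR-native or proprietary, returns FHIR R4 resources. The class names, the single `fetch_observations` method, and the minimal legacy record shape are assumptions for illustration, not a real product API.

```python
from abc import ABC, abstractmethod

class SourceAdapter(ABC):
    """Adapter contract: every source yields FHIR R4 resource dicts."""
    @abstractmethod
    def fetch_observations(self, patient_id: str) -> list[dict]: ...

class FhirR4Adapter(SourceAdapter):
    """Source already speaks FHIR R4: pass results through unchanged."""
    def __init__(self, raw_fetch):
        self._fetch = raw_fetch  # injected transport (HTTP client, test stub, ...)
    def fetch_observations(self, patient_id):
        return self._fetch(patient_id)

class LegacyAdapter(SourceAdapter):
    """Translates a minimal proprietary lab record into a FHIR R4 Observation."""
    def __init__(self, raw_fetch):
        self._fetch = raw_fetch
    def fetch_observations(self, patient_id):
        return [
            {"resourceType": "Observation",
             "code": {"text": rec["test_name"]},
             "valueQuantity": {"value": rec["result"], "unit": rec["units"]}}
            for rec in self._fetch(patient_id)
        ]
```

Agents only ever see the `SourceAdapter` interface, so adding a new vendor means writing one adapter rather than touching every agent.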
Challenge 2: Latency Requirements
- Clinical decisions happen in seconds, not minutes
- Agents need sub-second data access
- EHR APIs may have rate limits
Solution: Implement intelligent caching with invalidation strategies. Cache frequently accessed data (patient demographics, recent labs) while streaming real-time updates. Use event-driven architectures that push updates rather than polling.
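The caching-with-invalidation idea can be sketched as a TTL cache whose entries are also evicted by update events, so agents read hot data without polling the EHR. The class name, the 30-second TTL, and the `on_update_event` hook are illustrative assumptions.

```python
import time

class PatientCache:
    """TTL cache with event-driven invalidation: cached reads serve hot data,
    and update events evict entries instead of waiting for expiry."""
    def __init__(self, fetch, ttl_seconds=30.0, clock=time.monotonic):
        self._fetch = fetch          # slow source-of-truth call (e.g., EHR API)
        self._ttl = ttl_seconds
        self._clock = clock
        self._store = {}             # patient_id -> (value, expires_at)

    def get(self, patient_id):
        entry = self._store.get(patient_id)
        if entry and entry[1] > self._clock():
            return entry[0]                      # cache hit: no API call
        value = self._fetch(patient_id)          # cache miss: go to source
        self._store[patient_id] = (value, self._clock() + self._ttl)
        return value

    def on_update_event(self, patient_id):
        """Called by the event stream when new data arrives for a patient."""
        self._store.pop(patient_id, None)        # next read refetches fresh data
```

Because invalidation is pushed by events rather than discovered by polling, the design stays within EHR API rate limits while keeping reads fresh.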
Challenge 3: Data Quality and Completeness
- Missing data is common in healthcare
- Data quality varies by source
- Timeliness matters for clinical decisions
Solution: Implement data quality scoring and confidence indicators. When data is missing or stale, flag it clearly. Use the synthetic data engine conservatively—only for non-critical decisions, and always with transparency.
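A minimal sketch of data quality scoring might combine completeness and freshness into one score with explicit flags. The required-field list, the 24-hour freshness window, the staleness penalty, and the 0.9 usability cutoff are all illustrative assumptions, not clinical policy.

```python
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = ("allergies", "medications", "recent_labs")  # illustrative

def quality_score(record: dict, max_age_hours: float = 24.0) -> dict:
    """Score completeness and freshness; flag data agents should not rely on."""
    missing = [f for f in REQUIRED_FIELDS if record.get(f) is None]
    completeness = 1.0 - len(missing) / len(REQUIRED_FIELDS)
    age = datetime.now(timezone.utc) - record["updated_at"]
    fresh = age <= timedelta(hours=max_age_hours)
    score = completeness * (1.0 if fresh else 0.5)   # stale data is penalized
    return {
        "score": round(score, 2),
        "missing": missing,
        "stale": not fresh,
        "usable_for_critical_decisions": score >= 0.9 and fresh,
    }
```

Surfacing `missing` and `stale` alongside the score is what lets downstream agents and clinicians see why a record was downgraded rather than trusting a bare number.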
What This Means for Clinicians
For Clinicians: No more logging into five systems to piece together a patient story. The AI pulls the most recent data automatically, treats missing information conservatively, and never duplicates a test simply because the last order was out of view. Clinicians can trust that the data an agent pulls is the most recent available.
Real-World Example: A sepsis agent needs to check for allergies before recommending antibiotics. Instead of the clinician manually checking the EHR, the data orchestration layer automatically pulls allergy data from multiple sources (EHR, pharmacy, previous encounters) and presents a unified view. The agent sees “penicillin allergy – confirmed in 3 sources” rather than fragmented data.
Symphony’s Data Hub Approach
Symphony is the first healthcare platform to operationalize Microsoft’s agentic architecture at enterprise scale. Zyter Symphony’s Data Hub unifies multiple data sources (EHRs, labs, imaging, SDoH, wearables) into a real-time patient 360, providing the Unified Clinical Data Fabric that agents query and update. The platform integrates with TruCare, which already supports over 44 million covered lives, demonstrating production-scale data orchestration.
Key Features:
- Real-time FHIR streaming from multiple EHRs
- Automatic identity resolution across systems
- Sub-second query latency for agent access
- Data quality scoring and confidence indicators
- Synthetic data generation for missing information (with transparency flags)
Layer 2: Agent Orchestration (The Brain)
The Technical Foundation
Managing hundreds of agents in production requires distributed systems engineering. This isn’t a simple API call—it’s coordinating multiple AI systems that need to collaborate, reach consensus, and escalate to humans when needed.
Successful implementations require three core orchestration capabilities:
1) Agent Mesh (Dynamic Service Registry)
Think of this as a service discovery layer for AI agents. The Agent Mesh dynamically discovers capabilities, routes requests based on context, and handles failover, ensuring the right agent can always be found and invoked.
Architecture Pattern: Similar to Kubernetes service discovery, but for AI agents. Each agent registers its capabilities (e.g., “I can diagnose sepsis” or “I can check drug interactions”). When a request comes in, the mesh routes it to the appropriate agent(s).
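The registry-and-routing pattern above can be sketched in a few lines. The class name, capability strings, and the first-healthy-match routing rule are assumptions for illustration; a production mesh would add load balancing and active health checks.

```python
class AgentMesh:
    """Minimal capability registry: agents register what they can do,
    and requests are routed to a healthy agent advertising that capability."""
    def __init__(self):
        self._agents = {}   # name -> {"capabilities", "healthy", "handler"}

    def register(self, name, capabilities, handler):
        self._agents[name] = {"capabilities": set(capabilities),
                              "healthy": True, "handler": handler}

    def mark_unhealthy(self, name):
        self._agents[name]["healthy"] = False   # health checks flip this flag

    def route(self, capability, request):
        """Find any healthy agent with the capability; failover is implicit
        because unhealthy agents are simply skipped."""
        for info in self._agents.values():
            if capability in info["capabilities"] and info["healthy"]:
                return info["handler"](request)
        raise LookupError(f"no healthy agent for capability: {capability}")
```

If two agents advertise "diagnose sepsis" and one is marked unhealthy, the same `route` call transparently lands on the other, which is the failover behavior the text describes.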
2) Workflow Engine (Stateful Orchestration)
The Workflow Engine manages the state and sequence of multi-agent interactions. It preserves context across steps (e.g., confirming a diagnosis before ordering treatment) and handles escalation logic when an individual agent's confidence score drops below its threshold.
Example Workflow:
- Sepsis agent flags patient
- Allergy-checker agent verifies no contraindications
- Stewardship agent evaluates resource implications
- If all agree, recommend treatment; if not, escalate to clinician
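The workflow above can be sketched as a sequential state machine that carries context between agents and escalates on disagreement or low confidence. The verdict shape, the 0.75 threshold, and the function names are illustrative assumptions.

```python
def run_sepsis_workflow(steps, threshold=0.75):
    """Run agents in sequence, preserving context across steps; escalate when
    any agent objects or its confidence falls below the threshold."""
    context = {"history": []}
    for name, agent in steps:
        verdict = agent(context)   # assumed shape: {"approve": bool, "confidence": float}
        context["history"].append((name, verdict))
        if not verdict["approve"] or verdict["confidence"] < threshold:
            # Full history travels with the escalation so the clinician
            # sees exactly which step raised the flag and why.
            return {"decision": "escalate_to_clinician", "context": context}
    return {"decision": "recommend_treatment", "context": context}
```

Because each agent receives the accumulated `context`, a downstream agent (e.g., the stewardship agent) can see what the sepsis and allergy agents already concluded, which is the context preservation the text calls for.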
3) Consensus Protocol (Decision Coordination)
The Consensus Protocol aggregates inputs from multiple agents to reach a final decision. It weighs votes by agent confidence and domain expertise, resolves conflicts (e.g., safety considerations overriding cost efficiency), and documents the reasoning for audit.
Example: Three agents evaluate a test recommendation:
- Dr. Test-Chooser: “Recommend D-dimer” (confidence: 85%)
- Dr. Stewardship: “Approve – cost-effective” (confidence: 90%)
- Dr. Challenger: “Question – consider alternative” (confidence: 60%)
The consensus protocol weighs these votes, considers confidence scores, and either approves the recommendation or escalates to the clinician.
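One simple way to realize this weighting, shown here as a sketch rather than the platform's actual protocol, is to compute the confidence-weighted share of approving votes and escalate when it falls below a threshold. The 0.7 threshold is an illustrative assumption; the three votes mirror the example above.

```python
def consensus(votes, approve_threshold=0.7):
    """Weight each approve/question vote by the agent's confidence; approve
    only when the weighted approval share clears the threshold."""
    total = sum(v["confidence"] for v in votes)
    approve_mass = sum(v["confidence"] for v in votes if v["approve"])
    share = approve_mass / total
    decision = "approve" if share >= approve_threshold else "escalate_to_clinician"
    # The share is returned alongside the decision so the reasoning is auditable.
    return {"decision": decision, "approval_share": round(share, 2)}

votes = [
    {"agent": "Dr. Test-Chooser", "approve": True,  "confidence": 0.85},
    {"agent": "Dr. Stewardship",  "approve": True,  "confidence": 0.90},
    {"agent": "Dr. Challenger",   "approve": False, "confidence": 0.60},
]
```

With these numbers the approval share is 1.75 / 2.35 ≈ 0.74, so this illustrative rule approves; a more confident objection from Dr. Challenger would push the case to the clinician instead.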
Implementation Patterns
Pattern 1: Agent Mesh Architecture
Request → Agent Mesh → Route to Agent(s) → Response
(the mesh continuously health-checks agents and fails over to alternates)
Pattern 2: Workflow Orchestration
State Machine: Start → Agent 1 → Agent 2 → Agent 3 → Consensus → Decision
(low confidence or disagreement at any step escalates to a human)
Pattern 3: Consensus Protocol
Vote Collection → Weight by Confidence → Apply Rules → Decision or Escalate
What This Means for Clinicians
For Clinicians: When multiple AI agents collaborate on a case, they coordinate like a clinical team. If one agent recommends a test and another questions it, the system escalates to the clinician for a final decision—maintaining full transparency about why each recommendation was made.
Real-World Example: A sepsis workflow involves multiple agents:
- Sepsis predictor flags patient
- Allergy-checker verifies no contraindications
- Stewardship agent evaluates antibiotic choice
- Documentation agent prepares note
If any agent disagrees or confidence is low, the system escalates to the clinician with a clear explanation: “Sepsis agent recommends vancomycin, but allergy-checker found penicillin allergy in pharmacy records. Please review.”
The power isn’t in the agents themselves—it’s in the handshakes between them.
Symphony’s Agent Orchestration
Symphony brings this architecture to life by coordinating 40+ prebuilt modular AI agents through its Agent Mesh and Workflow Engine, enabling agents to collaborate like a clinical team. The platform demonstrates production-scale agent orchestration, supporting millions of covered lives.
Key Features:
- 40+ prebuilt agents across clinical, administrative, and operational domains
- Agent Mesh for dynamic discovery and routing
- Workflow Engine for stateful orchestration
- Consensus Protocol for multi-agent decision-making
- Human-in-the-loop escalation with full transparency
How Layers 1 & 2 Work Together
Integration Insight: These layers are symbiotic. Data orchestration provides the foundation; agent orchestration provides the intelligence. Together, they create a system where:
- Agents query unified data – No more fragmented data sources
- Data updates trigger agent workflows – Real-time intelligence
- Agent decisions write back to data fabric – Closed-loop learning
- Consensus protocols use data quality scores – Confidence-aware decisions
Example Integration:
A patient arrives in the ED. The data orchestration layer streams real-time vitals, labs, and history. The agent orchestration layer:
- Routes data to sepsis agent
- Sepsis agent queries allergy data from data fabric
- Multiple agents collaborate on recommendation
- Consensus protocol reaches decision
- Decision writes back to EHR via data fabric
All of this happens in seconds, with full auditability. The orchestration layers eliminate the fragmentation problem by ensuring agents have unified data access and can coordinate seamlessly.
Common Pitfalls: What to Avoid
Pitfall 1: Building Layers Separately
- Building data orchestration without agent orchestration creates a data warehouse, not an AI platform
- Building agent orchestration without data orchestration creates agents that can’t access data
Solution: Build both layers together, with clear interfaces between them.
Pitfall 2: Ignoring Latency Requirements
- Clinical decisions happen in real-time
- Agents can’t wait minutes for data
Solution: Design for sub-second latency from day one. Use streaming, caching, and event-driven architectures.
Pitfall 3: Over-Engineering Consensus
- Complex consensus protocols can deadlock
- Too many agents voting can slow decisions
Solution: Start simple. Use clear escalation rules. Default to human judgment when uncertain.
Pitfall 4: Neglecting Agent Health
- Agents can fail, degrade, or produce errors
- No health monitoring means silent failures
Solution: Implement comprehensive observability. Monitor agent performance, error rates, and response times. Build failover mechanisms.
What This Means for Different Roles
For CTOs: Data orchestration eliminates 40-60% of integration costs. Agent orchestration enables rapid agent deployment without rebuilding infrastructure. The model-agnostic architecture means you can swap AI models without rebuilding the system.
For CMIOs: Clinicians get unified data views automatically. No more manual data gathering across systems. Agents coordinate recommendations transparently, with full escalation paths when they disagree.
For CFOs: Reduced duplicate tests = $2-4M annual savings. Better data integration = improved risk contract performance (5-10% HCC capture improvement). The orchestration layer pays for itself in 6-12 months.
For Clinical Leaders: Agents collaborate like a clinical team, maintaining context across interactions. When agents disagree, the system escalates to clinicians with full transparency—no black boxes.
Real-World Deployment Examples
Emergency Medicine: Sepsis agents + stewardship agents + allergy-checker agents coordinate in real-time during resuscitation. The data orchestration layer streams vitals and labs; the agent orchestration layer coordinates recommendations.
Oncology: Tumor board agents pull genomic data, clinical trial eligibility, and prior authorization requirements simultaneously. The data fabric unifies multiple sources; agents collaborate on treatment recommendations.
Primary Care: Chronic disease agents + medication reconciliation agents + SDoH agents close care gaps during 15-minute visits. Unified data enables rapid agent coordination.
The Path to Maturity
Organizations typically evolve through four levels of AI orchestration maturity. Where does your organization stand?
- Level 0 – Point Solutions: Disconnected AI tools solving narrow problems (e.g., a standalone sepsis alert).
- Level 1 – Siloed AI Agents: Individual agents capable of complex tasks but unable to communicate with each other.
- Level 2 – Unified Data OR Unified Agents: Either a strong data fabric or a strong agent collaboration framework, but not both.
- Level 3 – Orchestrated Data + Orchestrated Agents (Symphony): A unified data fabric powering a mesh of collaborative agents—the requirement for enterprise-scale ROI.
What’s Next: Governance and Integration
In Part 4 of this series, we’ll explore the remaining two layers: Governance & Compliance and Clinical Workflow Integration. These layers make agentic AI safe, auditable, and usable in real clinical workflows.
We’ll cover:
- Policy-as-Code for clinical guardrails
- Audit ledgers for regulatory compliance
- Medical-legal frameworks for liability
- Ambient intelligence and workflow integration
- Failure modes and risk mitigation
Want to Go Deeper?
This executive brief is Part 3 of a five-part blog series exploring how healthcare organizations can move beyond fragmented AI tools toward a fully orchestrated, AI-native foundation.
The full series:
Part 1: Beyond Point Solutions: Building the Orchestrated AI Foundation for Healthcare
Part 2: From Single Models to Agentic AI Systems That Collaborate
Part 3: Layer 1 & 2: Data and Agent Orchestration – The Foundation
Part 4: Layer 3 & 4: Governance and Workflow Integration – Making It Real (Coming Soon)
Part 5: Build vs. Buy: The Strategic Framework and 90-Day Plan (Coming Soon)
To complement this series, a comprehensive implementation guide is coming soon. This companion resource will include expanded technical detail, implementation roadmaps, failure mode analysis, and extended case studies for healthcare leaders and technical teams.
