For teams running voice AI in production, infrastructure decisions show up in subtle but important ways. How quickly a dashboard loads. How reliably a call connects. How confidently you can scale usage during peak demand.
That’s why we’ve launched US-based infrastructure for customers operating in the United States.
By moving core backend services and databases into US territory, we’ve removed cross-Atlantic dependencies that introduce unnecessary latency and variability. The result is a platform that feels faster, behaves more predictably, and better aligns with US data residency expectations—without requiring changes to how teams already work.
Why infrastructure location matters for voice AI
Voice AI runs in real time. Every request, response, and system decision happens under tight latency constraints. When services and databases are separated by continents, small delays can compound—both for callers and for the teams building and maintaining voice workflows.
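To make the compounding concrete, here is a rough back-of-the-envelope sketch. The round-trip times and call counts below are illustrative assumptions, not measured figures from the platform:

```python
# Rough illustration (assumed numbers, not measurements): how sequential
# round-trips compound latency when services sit on different continents.

TRANSATLANTIC_RTT_MS = 80   # assumed US <-> EU network round-trip time
SAME_REGION_RTT_MS = 10     # assumed round-trip time within one US region
SEQUENTIAL_CALLS = 5        # e.g. auth check, config fetch, several DB reads

def added_latency_ms(rtt_ms: float, calls: int) -> float:
    """Latency added by `calls` sequential round-trips at `rtt_ms` each."""
    return rtt_ms * calls

cross_continent = added_latency_ms(TRANSATLANTIC_RTT_MS, SEQUENTIAL_CALLS)
same_region = added_latency_ms(SAME_REGION_RTT_MS, SEQUENTIAL_CALLS)

print(f"cross-continent: {cross_continent:.0f} ms")  # 400 ms
print(f"same-region:     {same_region:.0f} ms")      # 50 ms
```

Even under these modest assumptions, five sequential round-trips across the Atlantic add hundreds of milliseconds that a same-region deployment avoids.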
There’s also the question of data location. For many US-based organizations, knowing where customer data is stored and processed is not optional. It’s part of internal policy, regulatory requirements, or customer commitments.
This update is designed to address both realities by bringing data, compute, and voice traffic closer to where they’re actually used.
What this changes for US-based teams
With US-based infrastructure now live, several practical improvements take effect immediately:
1. US data residency
Customer data can now be stored and processed entirely within the United States, making it easier to meet data residency and security requirements without changing workflows or integrations.
2. Lower latency across the platform
Removing cross-continent database round-trips results in faster dashboard load times, quicker flow publishing, and smoother navigation throughout the product.
3. Higher-performance telephony
Voice requests initiated in the US are routed through local servers, reducing call connection times and increasing concurrent call capacity during high-volume periods.
4. Improved stability under load
Localized compute resources make the system more responsive and resilient during peak traffic and sustained call volumes.
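The routing idea behind these improvements can be sketched in a few lines. The probe functions and region names below are hypothetical illustrations of latency-aware region selection in general, not a documented API of this platform:

```python
# Hypothetical sketch: pick the regional endpoint whose probe responds
# fastest. Region names and probe callables are illustrative assumptions.
import time

def measure_rtt_ms(probe) -> float:
    """Time one probe call and return the elapsed milliseconds."""
    start = time.perf_counter()
    probe()
    return (time.perf_counter() - start) * 1000

def pick_region(probes: dict) -> str:
    """Return the region whose probe completed fastest."""
    timings = {region: measure_rtt_ms(p) for region, p in probes.items()}
    return min(timings, key=timings.get)

# Example with stand-in probes: a US-based caller would see the US
# endpoint respond fastest and be routed there.
region = pick_region({
    "us": lambda: time.sleep(0.01),  # simulated fast local response
    "eu": lambda: time.sleep(0.05),  # simulated slower cross-Atlantic response
})
print(region)  # "us"
```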
Built for teams running voice AI at scale
This update isn’t about adding a new feature. It’s about strengthening the foundation for teams who rely on voice AI as part of their core operations.
If your organization operates in the United States, US-based data residency and lower latency provide a faster, more stable, and more predictable environment for running voice AI in production.
To see how US-based data residency and lower latency can benefit your workspace, book a meeting with our team.