Traditional intent-based Voice AI is rigid. Modern LLM-based Voice AI is flexible and adaptive to live conversation, but sometimes responds in unexpected ways.

Pre-populating a semantic cache is a way to get consistent Voice AI outputs when you need them, while still preserving the magical experience of modern Voice AI. We describe this approach in more detail here.
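The core idea can be sketched in a few lines: pre-load the cache with canonical question/answer pairs, then serve the fixed answer whenever a live utterance is semantically close enough to a cached prompt, falling through to the LLM otherwise. This is a minimal illustration, not the actual implementation; the toy bag-of-words embedding, the `SemanticCache` class, and the similarity threshold are all hypothetical stand-ins (a real system would use a sentence-embedding model).

```python
import math
from collections import Counter


def embed(text):
    # Toy bag-of-words "embedding" used only for illustration;
    # a production system would call a sentence-embedding model here.
    return Counter(text.lower().split())


def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class SemanticCache:
    """Hypothetical pre-populated semantic cache for canned Voice AI replies."""

    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.entries = []  # list of (prompt embedding, fixed response)

    def prepopulate(self, pairs):
        # Load canonical prompt/response pairs before any live traffic.
        for prompt, response in pairs:
            self.entries.append((embed(prompt), response))

    def lookup(self, query):
        # Return the cached response for the closest prompt above the
        # similarity threshold, or None to fall through to the LLM.
        query_emb = embed(query)
        best_score, best_response = 0.0, None
        for emb, response in self.entries:
            score = cosine(query_emb, emb)
            if score > best_score:
                best_score, best_response = score, response
        return best_response if best_score >= self.threshold else None


cache = SemanticCache(threshold=0.6)
cache.prepopulate([
    ("what are your business hours",
     "We're open 9am to 5pm, Monday through Friday."),
])

# A near-match hits the cache and gets the consistent, fixed answer.
print(cache.lookup("what are your business hours today"))
# An unrelated utterance misses, so the flexible LLM path handles it.
print(cache.lookup("tell me a joke"))
```

The design choice that matters is the threshold: set it high enough that only genuinely equivalent utterances hit the cache, so everything else keeps the adaptive LLM behavior.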