AI is moving fast—but reliability can’t be an afterthought. In this roundtable, seasoned AI leaders share hard-won insights on what it really takes to run trustworthy systems in production. Whether you’re scaling LLM apps or deploying traditional ML, you’ll walk away with practical steps you can apply today.
Key Topics for AI Teams:
- How to monitor ML and generative AI models before they break
- Proven tactics to handle edge cases and adversarial inputs in real time
- Scaling strategies that keep reliability high as workloads grow
Speakers:
- Dr. Haixun Wang, Head of AI, EvenUp; former VP of Engineering, Instacart
- Dr. Helen Gu, Founder & CEO, InsightFinder AI; Professor, North Carolina State University
- George Miranda, VP of Marketing, InsightFinder AI; co-author of Observability Engineering (O’Reilly)
Recorded September 11, 2025
If you’re building AI products, this conversation will help you design systems that stay reliable under pressure.