Hema Veeradhi
Hema Veeradhi is a Principal Data Scientist on the Emerging Technologies team, part of the Office of the CTO at Red Hat. Her work focuses on implementing innovative open source AI and machine learning solutions to help solve business and engineering problems. Hema is a staunch supporter of open source and firmly believes in its ability to propel AI advancements to new heights. She has previously spoken at Open Source Summit, KubeCon, DevConf CZ, and FOSSY.
Principal Data Scientist
Company or affiliation – Red Hat
Session
Why do so many LLM-based AI agents break down in real-world settings? Despite their increasing popularity, many are fragile, hard to scale, and tightly coupled to specific toolchains. We set out to build a modular, cloud-native agent stack using open source components—and one component that stood out was the Model Context Protocol (MCP).
MCP is an open standard that enables AI assistants and agents to seamlessly connect with real data sources—content repositories, business tools, development environments, and more. Think of it as a USB-C port for AI applications—rather than building custom integrations for every tool, MCP simplifies connectivity, authentication, and data flow, ensuring seamless interoperability between AI models and external systems.
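At the wire level, MCP exchanges JSON-RPC 2.0 messages such as `tools/call`. The sketch below shows the shape of that exchange with a toy dispatcher; the `get_weather` tool and its registry are purely illustrative, not part of the system described in this talk.

```python
import json

# Hypothetical tool registry standing in for a real MCP server's tools.
TOOLS = {
    "get_weather": lambda args: {"city": args["city"], "forecast": "sunny"},
}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC 2.0 'tools/call' request the way an MCP server would."""
    req = json.loads(raw)
    tool = TOOLS[req["params"]["name"]]
    result = tool(req["params"]["arguments"])
    # MCP tool results carry content blocks; text is the simplest kind.
    response = {
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": json.dumps(result)}]},
    }
    return json.dumps(response)

# A client-side request, as an MCP host application might send it:
request = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Brno"}},
})
print(handle_request(request))
```

Because the message format is standardized, any MCP-capable host can invoke any MCP server's tools without a bespoke integration, which is the "USB-C" property the analogy refers to.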
In this talk, we’ll walk through how we integrated MCP into an open AI agent architecture leveraging:
1) vLLM for efficient model inference
2) Llama Stack as the open source agent framework
3) MCP to handle tool invocation and data flow
4) Kubernetes for scalable, cloud-native deployment
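The control flow that ties these components together can be sketched in a few lines. Everything below is a simplified stand-in: in the deployed system, vLLM answers the model call behind its API, Llama Stack drives the loop, and an MCP server executes the tool; `stub_model` and `mcp_call` here are placeholders for those pieces.

```python
# Minimal sketch of the agent loop, with stubs replacing the real services.
def stub_model(messages):
    # Pretend the model requests a tool on the first turn,
    # then answers once it has seen the tool result.
    if any(m["role"] == "tool" for m in messages):
        return {"content": "It is sunny in Brno."}
    return {"tool_call": {"name": "get_weather", "arguments": {"city": "Brno"}}}

def mcp_call(name, arguments):
    # Placeholder for a JSON-RPC 'tools/call' round-trip to an MCP server.
    return {"forecast": "sunny", **arguments}

def run_agent(user_prompt, model=stub_model):
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = model(messages)
        if "tool_call" not in reply:  # model produced a final answer
            return reply["content"]
        call = reply["tool_call"]
        result = mcp_call(call["name"], call["arguments"])
        messages.append({"role": "tool", "content": str(result)})

print(run_agent("What's the weather in Brno?"))
# → It is sunny in Brno.
```

Keeping tool invocation behind the MCP boundary is what lets each layer be swapped or scaled independently on Kubernetes.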
We’ll break down the architecture, demo the system in action, and share lessons learned along the way. You’ll gain a solid understanding of how MCP works, its role in the AI ecosystem, and whether it’s just hype or a game-changer. Whether you're an AI researcher, open source contributor, developer, or architect, you'll walk away with practical insights on using MCP to build more dynamic and efficient AI applications.