Gaurav Kamathe
Seasoned software engineering professional.
Primary interests are AI/ML, security, Linux, and malware.
Loves working on the command line.
Sessions
As a cornerstone of production AI, Kubeflow continues to redefine how enterprises orchestrate machine learning workflows. However, the project's true strength lies in its ecosystem. In this lightning talk, we'll share Red Hat's vision for Kubeflow community engagement in 2026. Join Amita Sharma (Kubeflow Trainer lead) and the team as we discuss our roadmap for fostering contributor growth and enhancing upstream collaboration, as well as what you can expect from the Kubeflow presence at this year's community booth. Whether you're a seasoned contributor or an ML enthusiast, learn how you can help shape the future of open-source AI.
In the rush to operationalize machine learning, teams often celebrate “great benchmark results” while overlooking whether their model has truly been validated for its intended purpose. The result? Impressive numbers that crumble in real-world deployment — models that outperform baselines but underperform expectations.
This talk explores the subtle — yet crucial — difference between model validation and model benchmarking. While both rely on similar metrics, they answer fundamentally different questions.
We’ll unpack how these two processes differ in goal, methodology, and risk management, using simple mental models and relatable real-world analogies. You’ll learn how to design evaluation workflows that distinguish between proving correctness and proving competitiveness — and why this distinction is essential for reproducibility, transparency, and trust, especially in open-source and collaborative ML environments.
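A minimal sketch of that distinction, using made-up labels, a hypothetical candidate and baseline model, and arbitrary acceptance thresholds (none of this comes from the talk itself): validation asks whether the model meets criteria fixed in advance by its intended purpose, while benchmarking only asks whether it beats a reference model on the same metrics.

```python
# Illustrative only: the same metrics answer two different questions
# depending on what we compare them against.

def recall(y_true, y_pred, positive=1):
    """Fraction of actual positives the model recovered."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn) if (tp + fn) else 0.0

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)

# Hypothetical held-out labels and predictions from two models.
y_true    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
candidate = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]   # the model we want to ship
baseline  = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # an existing reference model

# Validation: does the model meet requirements derived from the task?
# The thresholds are fixed up front by the intended use, not by other models.
ACCEPTANCE = {"accuracy": 0.85, "recall": 0.90}
validated = (accuracy(y_true, candidate) >= ACCEPTANCE["accuracy"]
             and recall(y_true, candidate) >= ACCEPTANCE["recall"])

# Benchmarking: is the model better than a chosen point of comparison?
beats_baseline = accuracy(y_true, candidate) > accuracy(y_true, baseline)

print(f"validated for intended use: {validated}")
print(f"outperforms the baseline:   {beats_baseline}")
```

With these made-up numbers the candidate clearly beats the baseline yet still fails validation, which is exactly the gap between proving competitiveness and proving correctness.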