Huzaifa Sidhpurwala
I work in Red Hat's Product Security AI team, mainly doing research in the field of AI security, safety, and trustworthiness. I studied AI security at Stanford and am a certified Trusted AI Safety Expert from the Cloud Security Alliance Foundation. I have over 15 years of security experience working on open source projects across the entire ecosystem.
Session
AI systems today demonstrate impressive capabilities, but they also introduce a rapidly expanding attack surface. Modern machine learning pipelines, from data collection and training through inference, are vulnerable to a wide range of adversarial manipulations. This talk offers a practitioner-focused exploration of how attackers compromise AI systems, drawing on published research and real-world case studies. Equally important, the session outlines defensive strategies grounded in current academic and industry work.
Attendees will leave with a clear, realistic understanding of how adversarial attacks work, what defenses are actually effective today, and how to architect AI systems that remain trustworthy even under adversarial pressure.