Nikita Sanjay Patwa
Hello, I am a Software Maintenance Engineer at Red Hat. I started two years ago as an intern in Technical Support, where I built strong troubleshooting and customer support skills. I later moved to the Sustaining Engineering team, where I maintain user-space packages with a focus on stability and long-term reliability. I'm passionate about emerging technologies and about applying modern innovations to open-source and systems engineering.
Session
Generative AI might be the future, but it still runs on Python, glibc, OpenSSL, and the Linux kernel. What happens when a critical CVE drops in these foundational components? If you blindly update, you risk breaking brittle ML dependencies. If you do nothing, your AI infrastructure becomes a massive attack vector.
In this session, we will explore incident response from the perspective of an Enterprise Linux distro engineer. We will demystify how CVE severity is analyzed specifically for AI workloads and unpack the delicate engineering decisions behind backporting security fixes without triggering regressions in complex AI runtimes.
Through a hands-on live demo, we will recreate a historical CVE in a core cryptographic library, demonstrate its impact on a running AI inference service, apply a seamless system patch, and verify that the service keeps running throughout. You will leave with a practical playbook for navigating security crises without sacrificing the stability of your production AI.
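As a small taste of the triage step the demo walks through, here is a minimal sketch (the affected version range below is hypothetical, chosen only for illustration) of checking whether the OpenSSL your Python runtime links against falls inside a CVE's affected range — the kind of first-pass check you would run before deciding whether a backported fix is needed:

```python
import ssl

# Hypothetical affected range for an illustrative CVE:
# versions >= 3.0.0 and < 3.0.8 are vulnerable.
AFFECTED_SINCE = (3, 0, 0)
FIXED_IN = (3, 0, 8)

def is_affected(version, since=AFFECTED_SINCE, fixed=FIXED_IN):
    """Return True if `version` falls in the affected [since, fixed) range.

    Only the first three components (major, minor, patch) are compared.
    """
    return since <= tuple(version[:3]) < fixed

if __name__ == "__main__":
    # ssl.OPENSSL_VERSION_INFO reports the OpenSSL this interpreter
    # links against, e.g. (3, 0, 7, 0, 0).
    print(f"Linked OpenSSL: {ssl.OPENSSL_VERSION}")
    print("Affected:", is_affected(ssl.OPENSSL_VERSION_INFO))
```

One caveat central to the session's theme: on Enterprise Linux distributions, security fixes are routinely backported without bumping the upstream version number, so a version-range check like this can report a false positive. The package changelog (e.g. `rpm -q --changelog openssl`) remains the authoritative record of which CVEs are actually fixed.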