2025-06-12 – E104 (capacity 72)
The AI/ML lifecycle spans multiple stages, from data preparation to model deployment, each with its own set of challenges. This talk will explore use cases where AIOps has streamlined the AI/ML lifecycle, enabling faster experimentation and deployment. We'll present a cloud-based solution architecture using tools like MLflow, Kubernetes, and cloud-native services (e.g., AWS SageMaker, OpenShift AI). The demo will showcase an end-to-end AIOps pipeline, including data versioning, model training, deployment, and monitoring, with a focus on integrating LLMs and AI agents into production workflows.
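The pipeline stages the abstract names can be sketched as a minimal, tool-agnostic skeleton. This is only an illustration, not the demo from the talk: every function below is a hypothetical stub standing in for a real component (e.g., DVC or Git-LFS for data versioning, MLflow for experiment tracking, Kubernetes or SageMaker for serving, Prometheus for monitoring).

```python
import hashlib
import json

# Hypothetical end-to-end AIOps pipeline sketch. Each stage is a stub
# standing in for a real tool; only the stage boundaries are the point.

def version_data(records):
    """Content-address the dataset so every run is reproducible
    (stand-in for a data-versioning tool such as DVC)."""
    payload = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

def train_model(records):
    """Stand-in for training: the 'model' is just the input mean."""
    return {"mean": sum(records) / len(records)}

def deploy(model, data_version):
    """Stand-in for rolling a versioned model out to a serving platform."""
    return {"model": model, "data_version": data_version, "status": "live"}

def monitor(deployment, live_inputs, threshold=2.0):
    """Stand-in for monitoring: flag drift when live traffic strays
    too far from the training mean."""
    trained_mean = deployment["model"]["mean"]
    drift = abs(sum(live_inputs) / len(live_inputs) - trained_mean)
    return {"drift": drift, "alert": drift > threshold}

# Wire the stages together: version -> train -> deploy -> monitor.
records = [1.0, 2.0, 3.0, 4.0]
deployment = deploy(train_model(records), version_data(records))
report = monitor(deployment, live_inputs=[6.0, 7.0])
```

In a production setup each stub would be replaced by the corresponding service, but the hand-offs (a data version pinned at training time, a deployment record carrying that version, monitoring reading from the deployment) stay the same.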
Intermediate - attendees should be familiar with the subject
I am a passionate DevOps engineer currently working at Red Hat as an Associate Software Maintenance Engineer. With two years of experience, I specialize in Linux kernel maintenance, cloud infrastructure, and automation. My expertise includes hybrid cloud management (AWS, Terraform), containerization (Docker, Kubernetes, Podman), CI/CD pipelines (Jenkins, GitHub Actions, GitLab, ArgoCD), and monitoring tools (Prometheus, Grafana, ELK, Loki).
At Red Hat, I focus on maintaining and optimizing Red Hat Enterprise Linux (RHEL) by backporting critical CVE patches, managing kernel builds, troubleshooting complex kernel issues, and optimizing CI/CD workflows. My previous experience includes working on performance engineering and automating infrastructure using Ansible and AWS.