Christopher Nuland
Christopher Nuland is a Principal Technical Marketing Manager for AI at Red Hat and has been with the company for over six years. Before Red Hat, he focused on machine learning and big data analytics for companies in the finance and agriculture sectors. After joining Red Hat, he specialized in cloud-native migrations, metrics-driven transformations, and the deployment and management of modern AI platforms as a Senior Architect for Red Hat’s consulting services, working almost exclusively with Fortune 50 companies until recently moving into his current role. Christopher has spoken worldwide on AI at conferences such as KubeCon EU/US and Red Hat Summit.
Red Hat
Job title – Principal Technical Marketing Manager for AI
Sessions
In this hands-on presentation, learn how Python, Kubernetes, and practical programming came together to build an AI capable of beating the iconic 1990s arcade game, Double Dragon. Instead of complex AI theory, this talk shares an accessible story about how developers can leverage familiar tools to achieve remarkable results.
You'll see how I programmed an AI to interact directly with the game using PyBoy, a Game Boy emulator written in Python and designed for seamless integration with Python scripts. Highlights include:
- Creating and deploying the PyBoy emulator within Kubernetes, enabling scalable and repeatable AI training sessions. I'll demonstrate containerizing PyBoy, managing Kubernetes resources efficiently, and ensuring consistent, reproducible training environments.
- Designing effective reward systems in Python to guide the AI toward mastering complex game scenarios. We'll dive into how reward structures were crafted to incentivize strategic gameplay behaviors, ensuring the AI learns efficiently and effectively (a minimal sketch of this idea follows the list).
- Highlighting the AI’s victory, showcasing in real time how it outperformed human players. You'll witness live demonstrations of the trained AI conquering increasingly challenging game scenarios, demonstrating the power and practicality of integrating Python and Kubernetes.
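To make the first two bullets concrete, here is a minimal sketch of a headless PyBoy rollout with a hand-crafted reward signal. It assumes PyBoy's 2.x API (window="null" for headless operation, pyboy.button(), pyboy.memory[...]); the memory addresses, reward weights, and ROM filename are placeholders for illustration, not the values from the actual Double Dragon project.

```python
# Minimal sketch: headless PyBoy rollout with a hand-crafted reward signal.
# Assumes PyBoy 2.x; the memory addresses below are placeholders, not the
# real Double Dragon (Game Boy) addresses.
from pyboy import PyBoy

PLAYER_HP_ADDR = 0xC000   # placeholder address
PLAYER_X_ADDR = 0xC001    # placeholder address
ENEMY_HP_ADDR = 0xC002    # placeholder address


def read_state(pyboy):
    """Read the handful of RAM values the reward function cares about."""
    return {
        "hp": pyboy.memory[PLAYER_HP_ADDR],
        "x": pyboy.memory[PLAYER_X_ADDR],
        "enemy_hp": pyboy.memory[ENEMY_HP_ADDR],
    }


def compute_reward(state, prev):
    """Reward forward progress and damage dealt; penalize damage taken."""
    return (
        0.1 * (state["x"] - prev["x"])                  # progress through the level
        + 1.0 * (prev["enemy_hp"] - state["enemy_hp"])  # damage dealt to enemies
        - 1.0 * (prev["hp"] - state["hp"])              # damage taken
    )


pyboy = PyBoy("double_dragon.gb", window="null")  # "null" window runs headless in a container
prev = read_state(pyboy)
for _ in range(1000):            # one short rollout
    pyboy.button("a")            # in practice, the agent chooses this action
    pyboy.tick()                 # advance one frame
    state = read_state(pyboy)
    reward = compute_reward(state, prev)
    prev = state
pyboy.stop()
```

Because the emulator runs headless, the same script can be packaged into a container image and fanned out as parallel Kubernetes Jobs, which is what makes the training runs scalable and repeatable.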
This session offers relatable insights into how developers can practically implement AI in projects using open-source tools and accessible programming practices.
Efficient data ingestion is foundational to modern AI-driven applications, yet developers face significant challenges: unstructured data, sensitive information management, and rising costs from excessive model fine-tuning. Fortunately, cloud-native Java runtimes like Quarkus simplify this process by seamlessly bridging data ingestion and AI workflows, primarily through Retrieval-Augmented Generation (RAG). In this hands-on technical workshop tailored for developers and AI engineers, we'll explore how Quarkus empowers teams to ingest, structure, and query data, making institutional knowledge instantly available to large language model (LLM) consumers.
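To ground the RAG pattern before the Quarkus-specific tooling, here is a deliberately tiny, dependency-free sketch of the retrieve-then-augment flow, written in Python purely for brevity. The documents, scoring, and prompt format are toy stand-ins; a real pipeline would use embeddings, a vector store, and an actual LLM call.

```python
# Toy retrieval-augmented generation (RAG) flow, dependency-free for brevity.
# Real systems replace the keyword scoring with embeddings plus a vector store
# and send the final prompt to an LLM.

DOCUMENTS = [
    "Ticket INC-1042: VPN drops every 30 minutes; workaround is reconnecting.",
    "Ticket INC-0977: Laptop battery replacement process for remote employees.",
    "Ticket INC-1130: Email quota increases require manager approval.",
]


def score(query: str, doc: str) -> int:
    """Toy relevance score: number of words the query and document share."""
    return len(set(query.lower().split()) & set(doc.lower().split()))


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]


def build_prompt(query: str) -> str:
    """Augment the user's question with retrieved context before generation."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


print(build_prompt("How do I fix my VPN that keeps dropping?"))
```

The workshop applies this same ingest, retrieve, and augment pattern with Quarkus at production scale, which is what lets institutional knowledge reach the model without extensive fine-tuning.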
Participants will:
* Structure Unstructured Data: Learn to extract actionable insights from PDFs, proprietary formats, and unstructured documents using the open-source Docling project, preparing your data for seamless AI integration (a short Docling sketch follows this list).
* Deploy and Utilize RAG Effectively: Understand how RAG enables real-time retrieval and enhances generative responses without extensive fine-tuning. We’ll also cover targeted fine-tuning with InstructLab for specialized, domain-specific knowledge.
* Hands-On AI Development (Inner and Outer Loops): Practice building AI workflows locally with Podman AI Lab to keep sensitive data secure, then scale seamlessly into a Kubernetes-native environment while maintaining privacy and cost efficiency.
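As a taste of the first bullet, converting an unstructured document with Docling can be as small as the sketch below. It uses Docling's documented DocumentConverter API; the file path is a placeholder, and the chunking here is intentionally naive.

```python
# Minimal sketch: convert an unstructured PDF to Markdown with Docling,
# then split it into chunks ready for embedding and retrieval.
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
result = converter.convert("internal_runbook.pdf")   # placeholder document
markdown = result.document.export_to_markdown()

# Naive chunking by blank lines; a production pipeline would use a
# structure-aware chunker so headings and tables stay intact.
chunks = [block.strip() for block in markdown.split("\n\n") if block.strip()]
print(f"{len(chunks)} chunks ready for embedding and retrieval")
```

Each chunk can then be embedded and stored so that a RAG flow like the one sketched earlier retrieves it at query time.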
The workshop culminates in building a practical, privacy-conscious application: a searchable, AI-powered ticketing solution inspired by systems like ServiceNow. Join us and discover how easily Quarkus and RAG can transform your raw data into secure, powerful, and instantly accessible business insights.