Christopher Nuland
Christopher Nuland is a Principal Technical Marketing Manager for AI at Red Hat and has been with the company for over six years. Before Red Hat, he focused on machine learning and big data analytics for companies in the finance and agriculture sectors. After joining Red Hat, he specialized in cloud-native migrations, metrics-driven transformations, and the deployment and management of modern AI platforms as a Senior Architect in Red Hat's consulting services, working almost exclusively with Fortune 50 companies until recently moving into his current role. Christopher has spoken worldwide on AI at conferences such as KubeCon EU/US and Red Hat Summit.
Red Hat
Job title – Principal Technical Marketing Manager for AI
Sessions
In this hands-on presentation, learn how Python, Kubernetes, and practical programming came together to build an AI capable of beating the iconic 1990s arcade game, Double Dragon. Instead of complex AI theory, this talk shares an accessible story about how developers can leverage familiar tools to achieve remarkable results.
You'll see how I programmed an AI to interact directly with the game using PyBoy, a Game Boy emulator written in Python and designed for seamless integration with Python scripts. Highlights include:
- Creating and deploying the PyBoy emulator within Kubernetes, enabling scalable and repeatable AI training sessions. I'll demonstrate containerizing PyBoy, managing Kubernetes resources efficiently, and ensuring consistent, reproducible training environments (a sketch of launching training Jobs follows this list).
- Designing effective reward systems in Python to guide the AI toward mastering complex game scenarios. We'll dive into how reward structures were crafted to incentivize strategic gameplay behaviors, ensuring the AI learns efficiently and effectively (see the reward-function sketch after this list).
- Highlighting the AI's victory, showcasing in real time how it outperformed human players. You'll witness live demonstrations of the trained AI conquering increasingly challenging game scenarios, demonstrating the power and practicality of integrating Python and Kubernetes.
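To make these steps concrete, here is a minimal, hedged sketch of the emulator-plus-reward loop. It assumes the PyBoy 2.x-style API (PyBoy(rom, window="null"), pyboy.button(), pyboy.tick(), pyboy.memory[...]); the RAM addresses and the random action selection are placeholders standing in for the real Double Dragon memory map and the trained policy.

```python
import random

from pyboy import PyBoy

# Placeholder RAM addresses; not the real Double Dragon memory map.
SCORE_ADDR = 0xC0A0
HEALTH_ADDR = 0xC0A4

ACTIONS = ["left", "right", "a", "b"]


def compute_reward(pyboy, prev_score, prev_health):
    """Reward scoring progress and penalize taking damage."""
    score = pyboy.memory[SCORE_ADDR]
    health = pyboy.memory[HEALTH_ADDR]
    reward = (score - prev_score) - 0.5 * max(prev_health - health, 0)
    return reward, score, health


pyboy = PyBoy("double_dragon.gb", window="null")  # headless, container-friendly
score, health = 0, 0
for _ in range(10_000):
    action = random.choice(ACTIONS)  # stand-in for the trained policy's decision
    pyboy.button(action)             # queue a button press for the next frame
    pyboy.tick()                     # advance the emulator by one frame
    reward, score, health = compute_reward(pyboy, score, health)
pyboy.stop()
```

Because each emulator instance is self-contained, scaling training out on Kubernetes can be as simple as submitting one Job per worker. The sketch below uses the official kubernetes Python client; the container image, namespace, and arguments are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
batch = client.BatchV1Api()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="pyboy-train-worker-0"),
    spec=client.V1JobSpec(
        backoff_limit=0,
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="trainer",
                        image="quay.io/example/pyboy-trainer:latest",  # hypothetical image
                        args=["--rom", "/roms/double_dragon.gb", "--episodes", "500"],
                    )
                ],
            )
        ),
    ),
)
batch.create_namespaced_job(namespace="rl-training", body=job)  # hypothetical namespace
```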
This session offers relatable insights into how developers can practically implement AI in projects using open-source tools and accessible programming practices.
In 2022, I was diagnosed with a chronic illness, a revelation that prompted me to reevaluate my approach to open-source contributions. Instead of marking the end of my journey, it led to a rebirth, reshaping my strategies and perspectives. This talk will be a deep dive into the transformative adaptations I made and how these changes kept me active in the open-source community, drove my personal growth, and strengthened my contributions.
The session will focus on:
- The art of prioritization: the importance of focusing on a select few projects, ensuring quality contributions over quantity.
- Leaning on collaboration: how sharing the load and collaborating with peers can lead to innovative solutions and reduce the individual burden.
- The renewed emphasis on work-life balance: taking a step back to ensure mental and physical well-being while keeping the passion for open source alive.
By sharing my story, I hope to inspire others navigating similar challenges and emphasize that, with the proper adjustments, one can continue to thrive and contribute meaningfully to the open-source world.
Efficient data ingestion is foundational to modern AI-driven applications, yet developers face significant challenges: unstructured data, sensitive information management, and rising costs from excessive model fine-tuning. Fortunately, Python and cloud-native runtimes simplify this process by bridging data ingestion and AI workflows, primarily through Retrieval-Augmented Generation (RAG). In this hands-on technical workshop tailored for developers and AI engineers, we'll explore how Python empowers teams to ingest, structure, and query data, making institutional knowledge instantly available to large language model (LLM) consumers.
Participants will:
* Structure Unstructured Data: Learn to extract actionable insights from PDFs, proprietary formats, and unstructured documents using the open-source Docling project, preparing your data for seamless AI integration (a short sketch follows this list).
* Deploy and Utilize RAG Effectively: Understand how RAG enables real-time retrieval and enhances generative responses without extensive fine-tuning. We'll also cover targeted fine-tuning with InstructLab for specialized, domain-specific knowledge.
* Hands-On AI Development (Inner and Outer Loops): Practice building AI workflows locally using Podman AI Lab, ensuring sensitive data remains secure, and scale seamlessly into a Kubernetes-native environment, maintaining privacy and cost efficiency.
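As a taste of the ingestion step, here is a minimal sketch of Docling's Python API (assuming a recent docling release); the input file name is hypothetical.

```python
from docling.document_converter import DocumentConverter

# Convert a PDF (hypothetical file) into a structured Docling document.
converter = DocumentConverter()
result = converter.convert("legacy_ticket_export.pdf")

# Export to Markdown so the text can be chunked, embedded, and indexed for RAG.
markdown = result.document.export_to_markdown()
chunks = [c.strip() for c in markdown.split("\n\n") if c.strip()]
print(f"Extracted {len(chunks)} chunks ready for embedding")
```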
The workshop culminates in building a practical, privacy-conscious application: a searchable, AI-powered ticketing solution inspired by systems like ServiceNow. Join us and discover how easily Python and RAG can transform your raw data into secure, powerful, and instantly accessible business insights.
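To give a sense of where the workshop ends up, here is a hedged sketch of the retrieval-plus-generation flow behind such a ticketing assistant. The ticket texts, embedding model, endpoint URL, and model name are all assumptions; any OpenAI-compatible endpoint (for example, a model served locally from Podman AI Lab) could stand in.

```python
import numpy as np
from openai import OpenAI
from sentence_transformers import SentenceTransformer

# Hypothetical ticket knowledge base; in the workshop this text comes from
# the Docling-ingested documents.
tickets = [
    "Ticket 1042: VPN drops when switching Wi-Fi networks; fixed by updating the client profile.",
    "Ticket 1107: Laptop battery drains overnight; disable wake-on-LAN in firmware settings.",
    "Ticket 1191: Printer queue stuck; restart the CUPS service on the print server.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
ticket_vecs = embedder.encode(tickets, normalize_embeddings=True)


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most similar tickets by cosine similarity."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = ticket_vecs @ q
    return [tickets[i] for i in np.argsort(scores)[::-1][:k]]


question = "Why does my VPN keep disconnecting?"
context = "\n".join(retrieve(question))

# Send the retrieved tickets plus the question to a locally served model.
# The base_url and model name are assumptions for an OpenAI-compatible endpoint.
llm = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
response = llm.chat.completions.create(
    model="granite-7b-lab",  # hypothetical local model name
    messages=[
        {"role": "system", "content": "Answer using only the provided tickets."},
        {"role": "user", "content": f"Tickets:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```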