DevConf.CZ 2025

Cedric Clyburn

Cedric Clyburn (@cedricclyburn), Senior Developer Advocate at Red Hat, is an enthusiastic software technologist with a background in Kubernetes, DevOps, and container tools. He has experience speaking at and organizing conferences including DevNexus, WeAreDevelopers, The Linux Foundation, KCD NYC, and more. Cedric loves all things open source and works to make developers' lives easier! Based out of New York.


Company or affiliation

Red Hat

Job title

Senior Developer Advocate


Sessions

06-12
09:30
35min
Keynote: Fine-tuning our way towards openness in AI
Carol Chen, Cedric Clyburn

The rise of generative AI is reshaping how we think about collaboration, application development, and daily life. It enables more efficient solutions and better-informed decisions, though the journey can feel overwhelming. Which foundation model should I use? How do I know if the pre-training datasets are unbiased or if the tuning and inference methods ensure fairness? Can I trust the results from these models?

The key to navigating this emerging path is adopting the flexibility, transparency, and collaboration of open source that many of us are familiar with. What does this mean in practice? How can we equip ourselves and our communities with the right tools and knowledge to dispel doubts and address uncertainties? Is there a more comforting light at the end of this confusing tunnel?

Sometimes it feels like there are more questions than answers. In this session, we ask you to keep an open mind and explore possibilities with us. We will discuss approaches to making AI tangible and actionable, illustrating them with use cases and demos.

D105 (capacity 300)
06-13
14:00
35min
Bootable Containers in Action: Hands on with Deploying AI Workloads
Cedric Clyburn, Carol Chen

There’s exciting potential in bootable containers, which allow you to build and manage a full operating system just like a container image; recently, Red Hat announced its intention to donate the tool to the Cloud Native Computing Foundation (CNCF). For AI/ML workloads, which require a complicated stack of dependencies, this technology helps curate the delivery of a full stack for training and inference, for example with Red Hat Enterprise Linux AI. Join us as we put together an operating system for running an AI-enabled application with CentOS Stream, using an InstructLab fine-tuned model from our local developer workstation. With bootable containers, the deployment workflow is simplified, with flexibility for dynamic requirements and environments in building the next generation of Linux workloads.
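To make the idea concrete, here is a minimal sketch of what such a bootable container definition might look like. The base image is the public CentOS Stream bootc image; the package selection, model path, and service unit are illustrative assumptions, not the session's actual build:

```dockerfile
# Sketch: layer an AI serving stack onto a bootable CentOS Stream image.
# (Package names and file paths below are hypothetical examples.)
FROM quay.io/centos-bootc/centos-bootc:stream9

# Install runtime dependencies for model serving (illustrative choice)
RUN dnf -y install python3-pip && dnf clean all

# Bake the fine-tuned model and application into the OS image
COPY models/finetuned-model.gguf /usr/share/models/
COPY app/ /usr/lib/ai-app/
COPY ai-app.service /usr/lib/systemd/system/
RUN systemctl enable ai-app.service
```

The resulting image can be built with `podman build` like any container, and then installed or switched to on a host with the `bootc` tooling, which is what collapses the usual multi-step OS-plus-dependencies deployment into a single image workflow.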

Linux Distributions, Operating Systems, and Edge
E105 (capacity 70)
06-13
15:30
80min
Build your own language model with InstructLab & Open Source AI
Cedric Clyburn, Carol Chen

In the past year, we’ve seen open source language models meet or even exceed the capabilities of proprietary AI models. That’s fantastic, but how can you get started customizing a model of your own to be an expert in whatever your use case might be? Traditionally, this required extensive data science or ML engineering expertise, more hardware than most of us will ever have, and highly structured datasets. But now, various open source projects have emerged to simplify the fine-tuning process, where we teach a model new tricks while saving costs, improving accuracy, and ensuring data privacy.

We encourage developers, AI engineers, and anyone curious to join us in this workshop, where you’ll help 404 Airlines adopt generative AI through a domain-specific small language model. You’ll learn the fundamentals of model training and deployment, how to incorporate real-time data with retrieval-augmented generation (RAG), and how to integrate the model into a live application! Bring a laptop and let’s take flight, using open source to customize and run our own language model.
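As a flavor of how InstructLab-style customization starts, the domain knowledge is contributed as question-and-answer seed examples in a taxonomy file. The snippet below is a simplified sketch for the fictional 404 Airlines scenario; the exact schema fields vary across InstructLab versions, so treat the field names and answers as illustrative:

```yaml
# Sketch of an InstructLab taxonomy entry (qna.yaml) -- fields illustrative
version: 3
task_description: Answer questions about 404 Airlines' baggage policy
created_by: your-github-handle
seed_examples:
  - question: How many carry-on bags may a passenger bring?
    answer: One carry-on bag plus one personal item.
  - question: What is the checked-bag weight limit?
    answer: 23 kg (50 lb) for economy tickets.
```

From seed examples like these, InstructLab can generate synthetic training data and fine-tune a small model locally, which is the workflow the workshop walks through.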

Artificial Intelligence and Data Science
A218 (capacity 20)
06-14
14:00
80min
Building Intelligent Apps with Quarkus and RAG: From Raw Data to Real-Time Insights
Christopher Nuland, Cedric Clyburn

Efficient data ingestion is foundational to modern AI-driven applications, yet developers face significant challenges: unstructured data, sensitive information management, and rising costs from excessive model fine-tuning. Fortunately, cloud-native Java runtimes like Quarkus simplify this process by seamlessly bridging data ingestion and AI workflows, primarily through Retrieval-Augmented Generation (RAG). In this hands-on technical workshop tailored for developers and AI engineers, we'll explore how Quarkus empowers teams to ingest, structure, and query data, making institutional knowledge instantly available to large language model (LLM) consumers.

Participants will:
* Structure Unstructured Data: Learn to extract actionable insights from PDFs, proprietary formats, and unstructured documents using the open-source Docling project, preparing your data for seamless AI integration.
* Deploy and Utilize RAG Effectively: Understand how RAG enables real-time retrieval and enhances generative responses without extensive fine-tuning. We’ll also cover targeted fine-tuning with InstructLab for specialized, domain-specific knowledge.
* Hands-On AI Development (Inner and Outer Loops): Practice building AI workflows locally using Podman AI Lab, ensuring sensitive data remains secure, and scale seamlessly into a Kubernetes-native environment, maintaining privacy and cost efficiency.

We'll culminate the workshop by constructing a practical, privacy-conscious application: a searchable, AI-powered ticketing solution inspired by systems like ServiceNow. Join us and discover how easily Quarkus and RAG can transform your raw data into secure, powerful, and instantly accessible business insights.
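The core RAG mechanic described above — retrieve the most relevant ingested document, then hand it to the model as context — can be illustrated without any framework. This toy sketch uses plain term-frequency vectors in place of real embeddings (the workshop uses Quarkus and a proper embedding model; the ticket texts here are invented):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k ingested documents most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Hypothetical ingested ticket data for a ServiceNow-style knowledge base
docs = [
    "Ticket INC-101: VPN connection drops after laptop resumes from sleep.",
    "Ticket INC-102: Printer on floor 3 is out of toner.",
    "Ticket INC-103: Quarkus service returns 500 errors under load.",
]

# Augment the LLM prompt with the retrieved context instead of fine-tuning
question = "Why does my VPN disconnect?"
context = retrieve(question, docs, k=1)[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

In a real Quarkus application, the embedding, vector store, and prompt assembly are handled by the framework; the sketch only shows why retrieval lets the model answer from fresh institutional data without retraining.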

Application and Services Development
C228 (capacity 24)