Carol Chen
Carol Chen is a Community Architect at Red Hat, where she has supported and promoted various upstream communities such as InstructLab, Ansible, and ManageIQ over the last 9 years. She was also actively involved in open source communities while previously working for Jolla and Nokia. In addition, she has experience in software development and integration from her 12 years in the mobile industry. Carol has spoken at events around the world, including AI_Dev in France and OpenInfra Summit in China. On a personal note, Carol plays the timpani in an orchestra in Tampere, Finland, where she now calls home.
Red Hat
Job title – Principal AI Community Architect
Sessions
The rise of generative AI is reshaping how we think about collaboration, application development, and daily life. It enables more efficient solutions and better-informed decisions, though the journey can feel overwhelming. Which foundation model should I use? How do I know if the pre-training datasets are unbiased or if the tuning and inference methods ensure fairness? Can I trust the results from these models?
The key to navigating this emerging path is adopting the flexibility, transparency, and collaboration of open source that many of us are familiar with. What does this mean in practice? How can we equip ourselves and our communities with the right tools and knowledge to dispel doubts and address uncertainties? Is there a more comforting light at the end of this confusing tunnel?
Sometimes it feels like there are more questions than answers. In this session, we ask you to keep an open mind and explore possibilities with us. We will discuss approaches to making AI tangible and actionable, illustrating them with use cases and demos.
Bootable containers hold exciting potential: they allow you to build and manage a full operating system just like a container image, and Red Hat recently announced its intention to donate the tool to the Cloud Native Computing Foundation (CNCF). For AI/ML workloads, which require a complicated stack of dependencies, this technology helps curate the delivery of a full stack for training and inference, for example with Red Hat Enterprise Linux AI. Join us as we put together an operating system for running an AI-enabled application with CentOS Stream, using an InstructLab fine-tuned model from our local developer workstation. With bootable containers, our deployment workflow is simplified, with the flexibility to meet dynamic requirements and environments in building the next generation of Linux workloads.
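To make the idea concrete, a bootable container image is defined much like any other container image. The following is a minimal sketch only, assuming the centos-bootc base image and a hypothetical ai-app.service unit for the AI-enabled application; names and packages would differ in a real build:

```dockerfile
# Minimal sketch of a bootable container definition (assumptions: the
# quay.io/centos-bootc/centos-bootc:stream9 base image; ai-app.service
# is a hypothetical systemd unit for the AI-enabled application).
FROM quay.io/centos-bootc/centos-bootc:stream9

# Layer the application's dependencies directly into the OS image.
RUN dnf -y install python3 && dnf clean all

# Bake the application in as a systemd service that starts on boot.
COPY ai-app.service /etc/systemd/system/ai-app.service
RUN systemctl enable ai-app.service
```

The image is then built and pushed with standard container tooling such as `podman build`, and the whole OS, dependencies and application included, is versioned and delivered as one artifact.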
In the past year, we’ve seen open source language models meet or even exceed the capabilities of proprietary AI models. That’s fantastic, but how can you get started customizing a model of your own, to be an expert in whatever your use case might be? Traditionally, it required extensive data science or ML engineering expertise, more hardware than most of us will ever have, and highly structured datasets. Now, various open source projects have emerged to simplify the fine-tuning process, in which we teach a model new tricks while saving costs, improving accuracy, and ensuring data privacy.
We encourage developers, AI engineers, and everyone else to join us in this workshop, where you’ll help 404 Airlines adopt generative AI through a domain-specific small language model. You’ll learn the fundamentals of model training and deployment, how to incorporate real-time data with retrieval augmented generation (RAG), and how to integrate the model into a live application! Bring a laptop and let’s take flight, using open source to customize and run our own language model.