DevConf.IN 2025

RHEL AI: Best Practices And Optimization Techniques To Achieve Accurate Custom LLM
2025-02-28, Raigad Room (Chanakya Building / School of Business)

RHEL AI is a Red Hat product based on the upstream InstructLab project that allows users to customize Large Language Models (LLMs) using private data.

The accuracy of the custom LLM is critical to achieving optimal AI performance.

Best practices and optimization techniques are required at every stage of LLM customization to produce an accurate custom LLM for inference:

  1. Data Seed stage
  2. Synthetic Data Generation stage
  3. Training stage
  4. Evaluation and Re-Training stage
  5. Prompt Engineering

The session will cover best practices and optimization techniques that can be applied at each of these stages to achieve optimal and accurate LLMs.

The information shared in the session can also be applied to upstream InstructLab or to third-party LLMs in general.
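To make the Data Seed stage concrete, seed examples in InstructLab are contributed through taxonomy `qna.yaml` files. The sketch below follows the upstream compositional-skill schema (`version`, `task_description`, `created_by`, `seed_examples`); the task and the question/answer content are purely illustrative, and the username is a placeholder:

```yaml
# Illustrative InstructLab taxonomy seed file (compositional skill).
# Field names follow the upstream qna.yaml schema; the examples are made up.
version: 2
task_description: Summarize a short support ticket in one sentence.
created_by: your-github-username   # placeholder
seed_examples:
  - question: |
      Summarize: "The VM fails to boot after the latest kernel update;
      rolling back to the previous kernel works."
    answer: >
      A recent kernel update prevents the VM from booting, and reverting
      to the prior kernel restores it.
  - question: |
      Summarize: "Users report intermittent 502 errors from the web
      gateway during peak traffic hours."
    answer: >
      The web gateway intermittently returns 502 errors under peak load.
```

Once seed files like this are in place, the later stages map to upstream CLI steps such as `ilab data generate`, `ilab model train`, and `ilab model evaluate` (subcommand names here follow the upstream InstructLab project and may change between releases).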


What level of experience should the audience have to best understand your session?

Beginner - no experience needed

I have around 22 years of total IT experience.

At Red Hat, I have completed around 11 years and 8 months.