Laura Barcziova
Senior software engineer at Red Hat, working on Packit, agentic automation, and open-source projects.
Session
There is a massive gap between a flashy prototype and an AI service you can trust. While implementing an agentic service to automate RPM packaging, we learned that AI can handle the complex tasks, but your code must provide the reliability. The real challenge was building a "safety harness" that keeps a non-deterministic model from breaking deterministic systems.
This talk shares the practical engineering required to make agents production-ready:
- Sandboxing: Why you should never let an agent run free and how to use isolated environments to ensure AI commands can't damage your system.
- Validation loops: Automated checks that treat every AI suggestion as an untrusted draft that must be verified before execution.
- Observability: Tracking an agent's "train of thought" to move beyond the "black box", so you can debug AI failures like any other software bug.
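To make the sandboxing and validation ideas above concrete, here is a minimal sketch of treating an AI suggestion as an untrusted draft. All names (ALLOWED_COMMANDS, validate_suggestion, run_sandboxed) are illustrative assumptions, not Packit's actual API:

```python
import subprocess
import tempfile

# Hypothetical allow-list: only commands we have vetted may ever run.
ALLOWED_COMMANDS = {"rpmbuild", "rpmlint", "ls"}

def validate_suggestion(command: list[str]) -> bool:
    """Treat the agent's suggested command as an untrusted draft:
    reject anything whose executable is outside the allow-list."""
    return bool(command) and command[0] in ALLOWED_COMMANDS

def run_sandboxed(command: list[str]) -> subprocess.CompletedProcess:
    """Run a validated command in a throwaway working directory with a
    timeout, so a misbehaving suggestion can't touch the real system."""
    if not validate_suggestion(command):
        raise ValueError(f"rejected untrusted command: {command!r}")
    with tempfile.TemporaryDirectory() as sandbox:
        return subprocess.run(
            command, cwd=sandbox, capture_output=True, text=True, timeout=30
        )
```

In a real deployment the sandbox would be a container or VM rather than a temporary directory, but the shape of the loop is the same: validate first, execute in isolation, and inspect the result before acting on it.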
If you want to build AI tools as stable, observable, and secure as traditional code, this session provides the blueprint we used to get there.