Andrew Block
Andrew Block is a Distinguished Architect at Red Hat who helps organizations adopt Open Source solutions, with a focus on cloud native architectures, security, and emerging technologies. Andrew is the author of multiple technical publications and frequently shares his knowledge and experience on relevant industry topics. He is a recognized leader, maintainer, and contributor within the Open Source community, and he partners with organizations to help them incorporate Open Source practices and realize their benefits.
Sessions
AI models have become the next wave of cloud native applications, and in a cloud native world, containers have become the de facto method of delivery. But instead of bundling a model inside a container, what if models could be published directly while reusing many of the same technologies?
OCI artifacts have emerged as this solution, and a growing number of technologies have adopted the approach for packaging and distributing content. When it comes to applying OCI artifacts to AI models, however, several questions remain open.
Packaging machine learning models is complex, often requiring teams to use proprietary package types or cobble together open source tools. These inconsistent environments, manual processes, and proprietary formats lead to deployment failures, delays, increased operational costs, and vendor lock-in.
ModelPack, an emerging Open Source project, solves these challenges by providing a standardized, consistent, reproducible, portable, and vendor-neutral packaging format for AI/ML models. The result simplifies deployment, reduces errors, and ensures models work seamlessly across a variety of environments.
In this session, attendees will learn how to package, distribute, and run AI/ML projects as OCI artifacts like a pro. By exploring the end-to-end lifecycle of an AI/ML model, including the resources provided by the ModelPack project, attendees will not only see the benefits but also come away with a repeatable process they can reuse in their own environments.
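To make the OCI-artifact approach concrete, here is a hedged sketch of what a minimal OCI image manifest referencing a model might look like, with layer digests computed the way a registry content-addresses blobs. The model-specific media types below are illustrative placeholders, not the official ModelPack media types, and the blob contents are stand-ins for real model files.

```python
import hashlib
import json

def digest(blob: bytes) -> str:
    """Content-address a blob the way an OCI registry does: sha256 of the bytes."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

# Stand-ins for a model weights file and its configuration document.
weights = b"\x00fake-model-weights\x00"
config = json.dumps({"name": "demo-model", "format": "safetensors"}).encode()

# A minimal OCI image manifest that publishes the model as artifact layers.
# The "vnd.example.*" media types are placeholders for illustration only.
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/vnd.example.model.config.v1+json",
        "digest": digest(config),
        "size": len(config),
    },
    "layers": [
        {
            "mediaType": "application/vnd.example.model.weights.v1",
            "digest": digest(weights),
            "size": len(weights),
        }
    ],
}

print(json.dumps(manifest, indent=2))
```

Because every layer is addressed by digest, a model published this way inherits the same integrity, caching, and distribution properties as container images.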
Agentic architectures introduce new security challenges such as dynamic policies, autonomous decision loops, continuous model execution, and cross-service actions. In this talk, we unpack the full identity flow for securing these systems, from attesting compute and verifying workload lineage, to enabling cryptographic identity with SPIFFE/SPIRE, integrating OIDC federation, and enforcing fine-grained authorization using purpose-built control loops. We explore patterns for securing AI agents, vector databases, model-serving pipelines, and GPU/Confidential Compute workloads. The session includes design patterns, identity lifetime management, trust-domain boundaries, workload attestation using hardware-backed roots, and how to build a platform where every component, from the operator to the model pipeline, authenticates and authorizes seamlessly.
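As a taste of the trust-domain boundaries discussed above, here is a minimal sketch of how a service might check the SPIFFE identity of a caller. SPIFFE IDs are URIs of the form spiffe://&lt;trust-domain&gt;/&lt;workload-path&gt;; the trust domain name and workload paths below are hypothetical, and a real deployment would obtain and verify these identities via SPIRE-issued SVIDs rather than plain strings.

```python
from urllib.parse import urlparse

# Illustrative trust domain; in practice this comes from your SPIRE deployment.
ALLOWED_TRUST_DOMAIN = "prod.example.org"

def parse_spiffe_id(spiffe_id: str) -> tuple[str, str]:
    """Split a SPIFFE ID (spiffe://<trust-domain>/<workload-path>)
    into its trust domain and workload path."""
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a SPIFFE ID: {spiffe_id!r}")
    return parsed.netloc, parsed.path

def authorize(spiffe_id: str, allowed_paths: set[str]) -> bool:
    """Coarse authorization: the caller must belong to our trust domain
    and present a workload path we explicitly allow."""
    domain, path = parse_spiffe_id(spiffe_id)
    return domain == ALLOWED_TRUST_DOMAIN and path in allowed_paths

# Hypothetical policy: only the model-serving pipeline may call this service.
allowed = {"/ns/ml/sa/model-server"}
print(authorize("spiffe://prod.example.org/ns/ml/sa/model-server", allowed))
print(authorize("spiffe://staging.example.org/ns/ml/sa/model-server", allowed))
```

The point of the sketch is the boundary check itself: identity from a different trust domain is rejected even when the workload path matches, which is the property that keeps cross-service actions contained.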