DevConf.CZ 2025

Taming the Wild West of ML: Practical Model Signing with Sigstore on Kaggle
2025-06-14, D105 (capacity 300)

The rapid evolution of LLMs and the ML field has ushered in remarkable progress, but also a new wave of security threats. Model poisoning, supply chain vulnerabilities, and the challenge of verifying model and data provenance are just a few of the risks we face.

We've developed an efficient solution for signing models with Sigstore at scale. This talk covers our practical experience integrating this solution into Kaggle, a leading platform for data science and machine learning.

Attendees will learn about the benefits of model signing and best practices for securing ML workflows. By sharing actionable insights, we aim to empower other model hubs to adopt similar solutions. Protecting the integrity of all ML models through widespread adoption would prevent a significant class of ML supply chain incidents.


Experience level

Beginner - no experience needed


Mihai Maruseac is a member of the Google Open Source Security Team (GOSST), working on supply chain security, specifically for ML; he is also a GUAC maintainer. Before joining GOSST, Mihai created the TensorFlow Security team after joining Google from a startup, where he worked on incorporating Differential Privacy (DP) into Machine Learning (ML) algorithms. Mihai holds a PhD in Differential Privacy from UMass Boston.