Aleksei Turov
Expert Data Scientist (banking & telecom) focused on credit risk and production ML. He has built online lending and personalization systems, and has deployed and monitored ML services with MLflow, Airflow, Docker, and FastAPI. Speaker at DevFest Bishkek 2025 on monitoring scoring models in production.
Session
Data teams want fast, safe ways to ship models to production, but cloud budgets are limited and managed platforms are not always an option. This talk presents a simple, fully on-prem MLOps reference setup built from open-source components: training with notebooks and Airflow, experiment tracking and a model registry with MLflow backed by Postgres, artifact storage on MinIO (S3-compatible), promotion via MLflow aliases (Test/Production), and per-alias serving using MLflow Serve containers.
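A stack like the one described above can be sketched in a single Compose file. This is a hypothetical minimal configuration, not the speaker's actual setup: service names, ports, and credentials are illustrative, and the MLflow image may need extra drivers (e.g. psycopg2, boto3) in practice.

```yaml
# Illustrative on-prem MLOps stack: Postgres (MLflow backend store),
# MinIO (S3-compatible artifact store), MLflow tracking server + registry.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: mlflow
      POSTGRES_PASSWORD: mlflow
      POSTGRES_DB: mlflow
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
    ports: ["9000:9000", "9001:9001"]
  mlflow:
    image: ghcr.io/mlflow/mlflow
    depends_on: [postgres, minio]
    environment:
      AWS_ACCESS_KEY_ID: minio
      AWS_SECRET_ACCESS_KEY: minio123
      MLFLOW_S3_ENDPOINT_URL: http://minio:9000
    command: >
      mlflow server --host 0.0.0.0 --port 5000
      --backend-store-uri postgresql://mlflow:mlflow@postgres:5432/mlflow
      --default-artifact-root s3://mlflow-artifacts/
    ports: ["5000:5000"]
```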
The key idea: aliases act as stable contracts, so promotion and rollback become an instant alias switch while model versions change underneath. You’ll see a short demo of the workflow: train → register → switch alias → a new serving container starts → dashboards update.
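The alias-as-contract idea can be illustrated with a toy in-memory registry. In the real setup this is MLflow's registered-model aliases (`MlflowClient.set_registered_model_alias`, with consumers loading `models:/<name>@<alias>`); everything below is a simplified sketch, not the MLflow API.

```python
class ModelRegistry:
    """Toy registry: versions change underneath, aliases stay stable."""

    def __init__(self):
        self.versions = {}  # version number -> model artifact (here: a string)
        self.aliases = {}   # alias name -> version number

    def register(self, version, artifact):
        self.versions[version] = artifact

    def set_alias(self, alias, version):
        # Promotion and rollback are the same operation: repoint the alias.
        self.aliases[alias] = version

    def resolve(self, alias):
        # Serving always asks for the alias, never a hard-coded version.
        return self.versions[self.aliases[alias]]


registry = ModelRegistry()
registry.register(1, "scoring-model-v1")
registry.register(2, "scoring-model-v2")

registry.set_alias("Production", 1)
assert registry.resolve("Production") == "scoring-model-v1"

registry.set_alias("Production", 2)  # promote v2: one alias switch
assert registry.resolve("Production") == "scoring-model-v2"

registry.set_alias("Production", 1)  # instant rollback: same switch, reversed
assert registry.resolve("Production") == "scoring-model-v1"
```

Because consumers only ever resolve the alias, neither promotion nor rollback requires redeploying the clients, only restarting (or hot-reloading) the serving container bound to that alias.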
We’ll also cover what this setup monitors today (health checks, logs via Loki, basic Grafana panels) and what to add next (latency, RPS, and error-rate metrics; drift and model-quality monitoring).
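To make the "add next" metrics concrete, here is a hedged sketch of deriving RPS, error rate, and p95 latency from per-request records, the kind of numbers a Grafana panel would chart. The record layout and values are invented for illustration, not the talk's actual log schema.

```python
import math

# Hypothetical per-request records: (unix timestamp, latency in seconds, status).
requests = [
    (100.0, 0.020, 200), (100.2, 0.035, 200), (100.5, 0.250, 500),
    (101.1, 0.030, 200), (101.4, 0.045, 200), (101.9, 0.032, 200),
]

window = requests[-1][0] - requests[0][0]  # observed time span in seconds
rps = len(requests) / window
error_rate = sum(1 for _, _, status in requests if status >= 500) / len(requests)

# Nearest-rank p95 over the sorted latencies.
latencies = sorted(lat for _, lat, _ in requests)
p95 = latencies[math.ceil(0.95 * len(latencies)) - 1]

print(f"RPS={rps:.2f} error_rate={error_rate:.1%} p95={p95 * 1000:.0f}ms")
# → RPS=3.16 error_rate=16.7% p95=250ms
```

In production these aggregates would come from the serving layer itself (e.g. a metrics endpoint scraped into Grafana) rather than from batch log parsing, but the quantities are the same.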