BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//pretalx.devconf.info//devconf-cz-2026//talk//PCK73G
BEGIN:VTIMEZONE
TZID:CET
BEGIN:STANDARD
DTSTART:20001029T040000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000326T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-devconf-cz-2026-PCK73G@pretalx.devconf.info
DTSTART;TZID=CET:20260619T140000
DTEND;TZID=CET:20260619T152000
DESCRIPTION:With the rapid rise of AI\, organisations are eager to adopt
  AI models into their workflows\, yet model deployment remains complex\,
  resource-intensive\, and prone to security risks\, making it difficult
  to experiment and iterate with. Enter Ramalama\, an open-source tool
  that simplifies inference of AI models with the familiar approach of
  containers\, while keeping everything local.\n\nIn this workshop\,
  you'll get an in-depth introduction to Ramalama and its flexibility
  with container engines\, model registries\, and inference runtimes\,
  and learn how it abstracts underlying complexities and streamlines
  your workflow\, making AI model deployment a straightforward
  process.\n\nAttendee Takeaways:\nUnderstanding of Ramalama's role in
  integrating AI models with container technology.\nInsights into the
  security and performance benefits of running AI models in isolated
  containers.\nPractical knowledge on deploying and scaling AI workloads
  using Ramalama.
DTSTAMP:20260430T125006Z
LOCATION:C228 (capacity 24)
SUMMARY:Ramalama: Local AI Model Deployment with Containers - Carol
  Chen\, Dominik Kawka
URL:https://pretalx.devconf.info/devconf-cz-2026/talk/PCK73G/
END:VEVENT
END:VCALENDAR
