BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//pretalx.devconf.info//devconf-cz-2026//talk//3CCA7M
BEGIN:VTIMEZONE
TZID:CET
BEGIN:STANDARD
DTSTART:20001029T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000326T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-devconf-cz-2026-3CCA7M@pretalx.devconf.info
DTSTART;TZID=CET:20260618T103500
DTEND;TZID=CET:20260618T105000
DESCRIPTION:The "Bigger is Better" era of AI is hitting a physical limit. W
 hile trillion-parameter models dominate the cloud\, the real-world demand 
 for private\, low-latency\, and energy-efficient intelligence is growing a
 t the edge. Enter LFM 2.5\, the latest flagship from Liquid AI. Built on a
  hybrid "Liquid" architecture rather than standard Transformers\, LFM 2.5-
 1.2B-Thinking achieves frontier-grade reasoning in a sub-1GB RAM footprint
 .\n\nIn this 15-minute lightning talk\, we will explore the shift from "Sy
 stem 1" (probabilistic chat) to "System 2" (deliberative reasoning) on con
 sumer hardware. We will dissect how LFM 2.5 uses Linear Input-Varying (L
 IV) operators to achieve 2x CPU throughput over Llama 3.2 and Qwen\, ena
 bling 300+ tokens/sec on mobile NPUs. Finally\, we will demonstrate a "Rea
 soning Trace" running locally on Fedora using vLLM and llama.cpp\, proving
  that you don't need a data center to build a "thinking" agent.
DTSTAMP:20260430T125129Z
LOCATION:A113 (capacity 64)
SUMMARY:Beyond the Transformer Wall: Scaling Reasoning to the Edge with LFM
  2.5 - Mitul Sharma
URL:https://pretalx.devconf.info/devconf-cz-2026/talk/3CCA7M/
END:VEVENT
END:VCALENDAR
