BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//pretalx.devconf.info//devconf-cz-2026//talk//HFNTKE
BEGIN:VTIMEZONE
TZID:CET
BEGIN:STANDARD
DTSTART:20001029T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000326T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-devconf-cz-2026-HFNTKE@pretalx.devconf.info
DTSTART;TZID=CET:20260618T140000
DTEND;TZID=CET:20260618T143500
DESCRIPTION:In performance-critical domains like fintech\, gaming\, real-ti
 me analytics\, AI inference\, and edge computing\, even small inefficienci
 es can cause event loop stalls and unpredictable tail latency.\n\nThis tal
 k explores how to push Node.js beyond its perceived limits by combining mo
 dern platform capabilities: worker_threads for true parallelism\, native m
 odules via N-API (Rust or C++) for CPU-intensive workloads\, and fine-grai
 ned performance analysis of the event loop. Through a real-world API examp
 le\, we’ll show how to offload heavy computation without blocking reques
 t handling\, reduce serialization overhead\, and achieve stable\, low-late
 ncy responses under high load.\n\nAttendees will learn how to identify CPU
  bottlenecks\, design worker pools\, decide when JavaScript is “fast eno
 ugh\,” and integrate native code without sacrificing maintainability. Th
 e session includes live demos\, profiling graphs\, and before/after latenc
 y benchmarks using tools like clinic.js.
DTSTAMP:20260430T125204Z
LOCATION:E104 (capacity 72)
SUMMARY:Going Fast: Building Ultra-Low Latency APIs in Node.js with Native 
 Modules and Worker Threads - Deepesh Nair\, Prathamesh Shirsat
URL:https://pretalx.devconf.info/devconf-cz-2026/talk/HFNTKE/
END:VEVENT
END:VCALENDAR
