BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//pretalx.devconf.info//devconf-us-2025//talk//NKPGBG
BEGIN:VTIMEZONE
TZID:EST
BEGIN:STANDARD
DTSTART:20001029T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10;UNTIL=20061029T070000Z
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
END:STANDARD
BEGIN:STANDARD
DTSTART:20071104T030000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000402T030000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4;UNTIL=20060402T080000Z
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
BEGIN:DAYLIGHT
DTSTART:20070311T030000
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-devconf-us-2025-NKPGBG@pretalx.devconf.info
DTSTART;TZID=EST:20250919T164000
DTEND;TZID=EST:20250919T171500
DESCRIPTION:Traditional approaches to improving AI model performance—scal
 ing model size or training data—are increasingly constrained by cost\, l
 atency\, and diminishing returns.\n\nInference-Time Scaling (ITS) offers a
 n orthogonal solution by optimizing how computational resources are alloca
 ted during inference. By restructuring search and evaluation strategie
 s at test time\, ITS significantly enhances model output quality witho
 ut retraining or expanding model parameters.\n\nIn this talk\, we will i
 ntroduce the top ITS methods and show how you can try them on your exi
 sting models using off-the-shelf toolkits such as the reward_hub and i
 nference_time_scaling libraries.
DTSTAMP:20260315T073729Z
LOCATION:Ladd Room (Capacity 170)
SUMMARY:Unlocking Smarter AI with Inference-Time Scaling - Guangxuan Xu\, K
 ai Xu
URL:https://pretalx.devconf.info/devconf-us-2025/talk/NKPGBG/
END:VEVENT
END:VCALENDAR
