BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//pretalx.devconf.info//devconf-us-2025//talk//ZXUGWY
BEGIN:VTIMEZONE
TZID:EST
BEGIN:STANDARD
DTSTART:20001029T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10;UNTIL=20061029T070000Z
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
END:STANDARD
BEGIN:STANDARD
DTSTART:20071104T030000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000402T030000
RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4;UNTIL=20060402T080000Z
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
BEGIN:DAYLIGHT
DTSTART:20070311T030000
RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-devconf-us-2025-ZXUGWY@pretalx.devconf.info
DTSTART;TZID=EST:20250920T123000
DTEND;TZID=EST:20250920T130500
DESCRIPTION:Have you ever had to deal with training machine learning
  models where the data is very large? If the data does not fit in main
  memory\, how can you use GPUs\, whose memory is even smaller? Many of
  these cases require strategies for handling large data sets. In this
  presentation\, we will introduce DASF\, a framework that brings
  together lazy data loading techniques using Dask\, acceleration
  techniques using RAPIDS AI\, and other techniques that facilitate the
  use of large data sets in ML pipelines\, locally or in HPC
  environments. We will also present a showcase carried out with a
  company in the oil and gas sector.
DTSTAMP:20260315T085008Z
LOCATION:Ladd Room (Capacity 170)
SUMMARY:Training AI Models on Massive Datasets - Julio Faracco
URL:https://pretalx.devconf.info/devconf-us-2025/talk/ZXUGWY/
END:VEVENT
END:VCALENDAR
