2025-09-19 – Ladd Room (Capacity 170)
llm-d is a well-lit path for anyone to serve LLMs at scale, supporting any model across a diverse and comprehensive set of hardware accelerators. Come learn more about how llm-d enables distributed inference at scale!
Beginner - no experience needed
Robert is a director of engineering at Red Hat. Before joining Red Hat, Robert was senior director of engineering at Neural Magic. He is a core committer to vLLM and a maintainer of llm-d.