BEGIN:VCALENDAR
VERSION:2.0
CALSCALE:GREGORIAN
PRODID:adamgibbons/ics
METHOD:PUBLISH
X-PUBLISHED-TTL:PT1H
BEGIN:VEVENT
UID:sId3xYZCXWpX8cCaIlk5d
SUMMARY:Research @ TNG: Our roadmap to distributed training on a B200 clust
	er
DTSTAMP:20260430T144232Z
DTSTART:20260522T080500Z
DESCRIPTION:Join this talk for an inside look at TNG's AI Research Team
	 as Dr. Andreas Rabenstein and Dr. Fabian Klemm share their journey
	 setting up an 8×NVIDIA B200 GPU cluster for distributed training and
	 inference. They will walk you through the process – from hardware
	 setup and network configuration to software stack decisions and
	 performance tuning. This talk covers the technical challenges they
	 faced\, some of the experiments they ran\, and lessons learned. They
	 will discuss container orchestration\, distributed training frameworks\,
	 and inference optimization\, sharing benchmarks and results from
	 their test workloads. Whether you are planning your own GPU cluster\,
	 curious about the B200 architecture\, or interested in AI
	 infrastructure\, this session provides practical insights and honest
	 reflections on what worked and what didn't.\n--------------------------
	------\n\nSpeakers:\n- Fabian Klemm\n- Andreas Rabenstein\n\n----------
	----------------------\n\nTalk details:\n- Link to the Big Techday
	 website: https://bigtechday.com/en/talks#7LcWdO8zbqxy61ueLKy2FN\n
LOCATION:Strietzel
DURATION:PT50M
END:VEVENT
END:VCALENDAR
