BEGIN:VCALENDAR
VERSION:2.0
CALSCALE:GREGORIAN
PRODID:adamgibbons/ics
METHOD:PUBLISH
X-PUBLISHED-TTL:PT1H
BEGIN:VEVENT
UID:hNRJwNP6cN-fLw6f2Sir-
SUMMARY:Break me if you can: Evaluating safety robustness in AI models
DTSTAMP:20260430T144231Z
DTSTART:20260522T114500Z
DESCRIPTION:AI safety is often framed as an ethical question\
	, but for businesses shipping real products\, it is also a matter of secur
	ity\, trust\, and legal integrity. Unsafe model behavior can expose secret
	s\, amplify bias\, enable abuse\, and create serious reputational risk. Th
	is talk gives developers and product teams a practical introduction to AI 
	alignment and AI safety through the lens of how systems actually fail in p
	roduction. \n\nFrom there\, Áron Erdélyi\, László Balázsik\, and Benjamin 
	Balogh examine how the AI red teaming field is moving from manual probing 
	to automated adversarial evaluation at scale\, including static test sets\
	, agentic jailbreaking\, and optimizer-driven attack discovery. They close
	 with a practical threat model\, concrete mitigations\, and an overview of
	 the current AI security landscape\, giving developers a grounded framewor
	k for building safer\, more secure AI systems.\n--------------------------
	------\n\nSpeakers:\n- László Balázsik\n- Áron Erdélyi\n- Benjamin Balogh\n
	\n--------------------------------\n\nTalk details:\n- Link to the Big Tec
	hday website: https://bigtechday.com/en/talks#10nd7cyMtBmtuCJF3j45IX\n
LOCATION:Kleine Lokhalle
DURATION:PT50M
END:VEVENT
END:VCALENDAR
