
Big Techday 25
Talks
Here you can find the talks given at our Big Techday 25, which took place at Motorworld Munich on October 24th, 2025.
You can find the talks of our past Big Techdays in our "Look Back". Where speakers have given their permission, the recordings and slides are available in the details of the respective talk.
Topics
Adventure
Time
19:00 (CEST)
Location
Dampfdom
Language
EN
Speaker
Talk
Beyond the summit - Inspiration from fourteen peaks
How can climbing all 14 of the world's highest peaks transcend physical achievement and become a vehicle for inspiration?
Nima Rinji Sherpa shares his journey from Makalu village to becoming the youngest person to summit all 14 of the world’s highest peaks. He talks about his holistic philosophy, which emphasizes adaptability, mental fortitude, and symbiotic respect for nature, challenging conventional approaches to climbing. Through personal anecdotes and documentary insights, Nima reveals strategies for transforming tragedy into resilience, empowering a new generation to pursue purpose beyond the peaks.
AI - Research
Time
9:00 (CEST)
Location
Kohlebunker
Language
EN
Talk
ARTIPHISHELL Intelligence: LLM-driven bug hunting & patching in the AI Cyber Challenge
DARPA's grand challenges have consistently pushed the boundaries of autonomous systems. The AI Cyber Challenge (AIxCC) represents the latest frontier: Fully autonomous Cyber Reasoning Systems (CRS) that can discover, analyze, and patch vulnerabilities in critical open-source software without human intervention. The AIxCC tackled a crucial question: How can we most effectively leverage modern AI advances to help secure the critical open-source software underlying our digital infrastructure? The results exceeded expectations: Autonomous CRSes not only identified and fixed synthetic vulnerabilities planted by organizers, but collectively discovered 18 previously unknown vulnerabilities in widely-used open-source software and successfully patched 11 of them.
As co-captain of team Shellphish, Lukas Patrick Dresel will share his AIxCC journey and how they built their CRS, ARTIPHISHELL. The talk explores the competition's goals, its evolving framework, and how it demonstrated viable paths to automate some of software security's most challenging tasks. Central to this story is the remarkable evolution of Large Language Models: From basic autocomplete tools at the outset to indispensable reasoning engines powering sophisticated multi-agent systems at every stage of their pipeline by the finals. In ARTIPHISHELL, they deployed LLMs for tasks ranging from automatic vulnerability discovery through grammar-aware fuzzing to accurate patching of complex vulnerabilities - all problems that previously required expert human intervention.
However, LLMs were only part of the answer. He'll discuss how state-of-the-art program analysis techniques proved essential for grounding LLM outputs in reality to mitigate hallucinations and for maximizing the value extracted from their responses. Comparing approaches across teams, he'll examine both widely-adopted techniques and more niche approaches used by some teams.
Finally, he will cover the competition results and the key lessons learned about LLM strengths and limitations in software security contexts. Looking forward, he will also go over some other promising ways in which they have seen LLMs successfully help to find, demonstrate and fix software security issues.
Time
11:30 (CEST)
Location
Dampfdom
Language
EN
Speaker
Talk
Beyond ChatBots: Unleashing human creativity with cognitive cartography, glass-box self-recoding AI, and visual data science in Mantis
This talk introduces Mantis, an interactive human-AI platform that transcends conventional chatbot frameworks - which often exclude humans from creative processes and deliver oversimplified answers - by making complexity visible and blending natural language, structured data, and visual reasoning in a shared canvas for exploration and discovery. Mantis creates maps of meaning - semantic landscapes that represent ideas, documents, collaborators, molecules, genes, and patients - allowing users to navigate, cluster, annotate, and guide evolving AI agents within interactive workflows. Applied across domains, it turns hypotheses into iterative data science workflows, maps patient trajectories from symptoms to outcomes in healthcare, and enables drug discovery through structure-function analysis in biology. Each tool in Mantis is an autonomous reasoning agent, coordinated by an orchestrator that interprets user goals and manages tasks. The platform understands its own architecture and can generate new code to extend itself. Mantis is not a chatbot. It is a new paradigm for how humans and machines co-evolve through collaborative creation.
Time
14:55 (CEST)
Location
Dampfdom
Language
EN
Speaker
Talk
Chemputation
Digital chemistry is emerging as a new discipline at the interface of chemistry, computing, and automation. The concept of chemputation - the programming of chemistry using a universal chemical language, robotics, and digital infrastructure to make synthesis reproducible, scalable, and deterministic - was developed at the University of Glasgow and Chemify Ltd. This framework bridges the gap between algorithmic design and physical realization, enabling chemical processes to be encoded as programs and executed on robotic platforms. The first large-scale implementation of this vision is Chemify’s Chemifarm in Glasgow, a fully automated facility that integrates AI-driven molecular design with industrial-scale robotics and a growing digital reaction library. By unifying theory, software, and laboratory hardware, chemputation provides a reproducible foundation for accelerating discovery in drug design, materials science, and complex chemical systems, while also opening the door to decentralized and off-world chemical manufacturing. Chemputation is the future of chemistry in both academia and industry.
Time
16:15 (CEST)
Location
Strietzel
Language
EN
Speaker
Talk
Chess AI: How to explain a chess game with AI? - From traditional engine analysis to LLM agents
Board games such as chess and Go have traditionally been fertile ground for research and experiments with new AI methods. These days, chess engines like Stockfish or AlphaZero are far superior even to the best human chess players in the world. Although advances in pure playing strength are of little relevance for the average player, the educational potential of these engines remains largely unexplored.
The presented chess AI combines several approaches to make chess knowledge more accessible:
- The evaluations of state-of-the-art neural-network-based engines can be accessed by an LLM-based agent (powered by e.g. Grok-4) via tool calls.
- These tools allow the agent to thoroughly analyze possible moves, offer comprehensive overviews of positions, point out potential errors, and highlight particularly strong moves.
- The resulting agent commentary is transformed into a full chess video, with narration generated by ElevenLabs’ text-to-speech service.
The video can either be in short form, presented as a puzzle, or a full-length video providing an in-depth analysis.
In the past month, the videos uploaded to the corresponding YouTube channel @TNGAIChess have amassed more than 150k views!
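For readers curious what such an engine-backed tool might look like, here is a minimal sketch using the python-chess library and a local Stockfish binary; the function name, depth limit, and return format are illustrative assumptions, not the speakers' actual implementation.
```python
# Hypothetical engine-analysis tool for an LLM agent, built on python-chess and a local
# Stockfish binary. Illustrative only; names and return format are assumptions.
import chess
import chess.engine

def evaluate_position(fen: str, depth: int = 18) -> dict:
    """Return the engine score (centipawns, White's view) and principal variation for a FEN."""
    board = chess.Board(fen)
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # path to a local Stockfish build
    try:
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        return {
            "score_cp": info["score"].white().score(mate_score=100000),
            "best_line": [move.uci() for move in info.get("pv", [])],
        }
    finally:
        engine.quit()

# An agent would call this as a tool, e.g. evaluate_position(chess.STARTING_FEN)
```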
Time
10:10 (CEST)
Location
Rollprüfstand
Language
EN
Speaker
Talk
Deeptech for depression: Mobile therapy with magnetic stimulation
Mext aims to revolutionize mental health care by making depression therapy accessible anywhere, anytime – independent of where or how people live. Using mobile, user-friendly devices, Mext enables patients to receive effective neuromodulation treatments outside clinical settings, lowering barriers and empowering self-directed therapy. In parallel, Machine Learning and AI are used to collect and analyze vital data in real time, with the goal of ensuring safety and exploring how these insights could eventually inform and adapt therapeutic protocols. Together, this integration of mobile health technologies and intelligent data analysis lays the foundation for more personalized, scalable, and safe mental health treatments.
Time
10:10 (CEST)
Location
Strietzel
Language
EN
Speaker
Talk
Finally understand embeddings - And you will never have to search for the right emoji again
Embeddings are the foundation of how Large Language Models understand the world. Yet, when you try to read about them, you are bombarded with scary words like tensors, matrices, and multi-head attention. No need! Let's try and actually get a grip on how embeddings work. Join Elias Schecke and Dennis Schulz to understand how powerful embeddings can be as a tool for Natural Language Processing. Of course, they will bring their own demo application that adds a correct, fitting, and tasteful emoji to whatever chat message you feed it with - so your parents will finally be able to pick the correct emoji when writing to you.
Elias and Dennis will explain why they translate language into high-dimensional vectors. Then, they will build a sample application for emoji search, explaining semantic search, finetuning, and multi-modal embeddings along the way. They will also give an overview of benchmarks and show how to scale your semantic search application.
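As a rough illustration of the idea (not the speakers' demo code), the sketch below uses the sentence-transformers library, an assumed off-the-shelf model, and three hand-written emoji descriptions to match a chat message to an emoji via cosine similarity.
```python
# Minimal semantic-search sketch with sentence-transformers; model choice and emoji
# descriptions are assumptions, not the speakers' demo application.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed small general-purpose embedding model

emoji_descriptions = {
    "🎉": "celebration, party, congratulations",
    "😂": "something very funny, laughing hard",
    "☕": "coffee, tired morning, need caffeine",
}

# Embed the candidate descriptions once, then match incoming messages against them.
labels = list(emoji_descriptions)
corpus = model.encode(list(emoji_descriptions.values()), normalize_embeddings=True)

def suggest_emoji(message: str) -> str:
    query = model.encode([message], normalize_embeddings=True)
    scores = query @ corpus.T  # cosine similarity, since the vectors are normalized
    return labels[int(np.argmax(scores))]

print(suggest_emoji("we shipped the release, time to celebrate"))
```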
Time
10:10 (CEST)
Location
Dampfdom
Language
EN
Speaker
Talk
How to get into the top ten model providers on the world's largest LLM aggregator platform
Henrik Klagges and Dr. Robert Dahlke explain how their exploration of DeepSeek’s open-weight Mixture-of-Experts architecture led to a new way of constructing and optimizing large language models, called Assembly of Experts. Using AoE, they created R1T-Chimera and R1T2-Chimera - 671B child models derived from multiple DeepSeek parents. The construction process can be performed in less than an hour and does not require weeks of expensive large-cluster pretraining or finetuning, while yielding surprising results: faster inference, smarter responses, and tunable LLM personalities. Through engagement with the AI community via Hugging Face and X, the research and model releases achieved viral success: The Chimera model family is currently processing over 100 billion tokens per week on OpenRouter, the world’s largest LLM aggregator. It earned eighth place in the OpenRouter total market share rankings by model author. In the category of AI-based roleplaying, R1T2 is even the second most popular model, with over 10% market share.
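To make the idea of building a child model in weight space more tangible, here is a toy sketch that linearly interpolates matching tensors from two parent checkpoints. This is only a conceptual stand-in, not the actual Assembly-of-Experts procedure, which selects and combines tensors in a more targeted way.
```python
# Toy illustration of weight-space merging between two parent checkpoints.
# Conceptual stand-in only - not TNG's Assembly-of-Experts construction.
import torch

def merge_state_dicts(parent_a: dict, parent_b: dict, weight_a: float = 0.5) -> dict:
    """Linearly interpolate all tensors that the two parents share by name and shape."""
    child = {}
    for name, tensor_a in parent_a.items():
        tensor_b = parent_b.get(name)
        if tensor_b is not None and tensor_b.shape == tensor_a.shape:
            child[name] = weight_a * tensor_a + (1.0 - weight_a) * tensor_b
        else:
            child[name] = tensor_a  # fall back to one parent where shapes differ
    return child

# Usage: child = merge_state_dicts(torch.load("parent_a.pt"), torch.load("parent_b.pt"), 0.6)
```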
Time
10:10 (CEST)
Location
Kohlebunker
Language
EN
Talk
Proxima Fusion's ConStellaration challenge: Datasets to drive the future of fusion energy
Europe's fastest-growing fusion company, Proxima Fusion, is working to build the world's first stellarator fusion power plant, delivering virtually limitless clean energy. Getting there requires tackling one of the world’s most complex technical challenges: Quasi-isodynamic (QI) stellarator optimization. Proxima believes machine learning has an outsized role to play in solving that challenge, which is why they've teamed up with Hugging Face to launch a collaborative challenge inviting the global machine learning community to help solve three different stellarator design problems. Proxima engineers Santiago Cadena and Veronika Siska present an open dataset of diverse QI-like stellarator plasma boundary shapes and introduce three optimization benchmarks of increasing complexity to show how learned models trained on Proxima's dataset can lower the entry barrier for optimization and machine learning researchers to engage in stellarator design and to accelerate cross-disciplinary progress toward bringing commercial fusion energy to the grid.
Time
13:45 (CEST)
Location
Strietzel
Language
EN
Speaker
Talk
Tales from the machinery room - Customizing LLMs @ TNG AI research
"The Germans just Frankensteined DeepSeek's R1 and V3 into something called R1T Chimera".
Way beyond this X post, the R1T and R1T2 Chimera models published by TNG gained considerable attention, with a daily usage of more than 10 billion tokens via OpenRouter. So, what kind of "Frankensteining" is going on there? How can a small software consultancy such as TNG produce its own models?
At TNG, an internal AI research group has been experimenting and publishing results with a focus on Mixture-of-Experts Large Language Models and adaptations of the DeepSeek model family. First, they manipulated the way experts work within a model, published under the name "Mixture of Tunable Experts". They then continued with the "Assembly-of-Experts" merging process, resulting in the Chimera models.
This talk aims to give some technical insights into the work of TNG AI research. Dr. Fabian Klemm and David Reiss recall the theoretical basics and the most important results. They provide technical details on how things were done, as well as some anecdotes about the successes and setbacks along the journey.
AI - Solutions
Time
14:55 (CEST)
Location
Rollprüfstand
Language
EN
Speaker
Talk
(On-site only) AI-driven e-commerce: Scaling direct digital customer relationships
Hilti is bringing AI into the heart of its e-commerce experience to deliver more effective and tailored customer journeys. Building on Hilti’s proven direct customer relationships, the aim is to scale this experience globally in the digital space. In this talk, our speakers will showcase their first real-time AI use cases live on hilti.com: search re-ranking powered by CatBoost to adapt product results to user intent, and recommendations built with Graph Neural Networks to suggest related products. Another highlight is the AWS-based infrastructure that makes this possible: a simple yet efficient solution, built in a short time and performing up to expectations, ensuring scalable, low-latency delivery of AI models directly into the digital customer journey.
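To give a flavour of the re-ranking building block, a learning-to-rank model in CatBoost can be trained on grouped query-product rows roughly as sketched below; the features, data, and parameters are invented placeholders, not Hilti's production pipeline.
```python
# Hedged learning-to-rank sketch with CatBoost; all data and features are placeholders.
from catboost import CatBoostRanker, Pool

# Each row is one (query, product) candidate; group_id ties rows from the same search together.
train_pool = Pool(
    data=[[0.9, 120, 1], [0.4, 15, 0], [0.7, 80, 1]],  # e.g. text-match score, sales, in-stock flag
    label=[1, 0, 1],                                    # click/purchase used as relevance signal
    group_id=[0, 0, 0],
)

ranker = CatBoostRanker(loss_function="YetiRank", iterations=200, verbose=False)
ranker.fit(train_pool)

# At request time, score the candidates for one query and sort products by the model output.
scores = ranker.predict([[0.8, 60, 1], [0.3, 10, 0]])
```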
Time
13:45 (CEST)
Location
Rollprüfstand
Language
EN
Speaker
Talk
(On-site only) From data to decisions: Democratizing insight generation with Agentic AI in BMW’s Cloud Data Hub
The team's vision is to empower every BMW employee to independently derive insights from data while ensuring compliance with governance, security, and quality standards. Bringing this vision to life within a complex and enterprise-scale data environment required a fundamental shift in how they approach data accessibility. Join the team on their journey as they share how they leveraged agentic AI to democratize data analysis across BMW’s central data lake, the Cloud Data Hub. They present the design of a platform solution that makes big data analysis available to all users, including those without technical expertise, through an intuitive and AI-driven interface.
Discover how they evolved from static SQL-based workflows to a dynamic multi-agent system that orchestrates data discovery, querying, visualization, and documentation through a coordinated network of purpose-built AI agents. Their architecture is built to thrive in heterogeneous data environments with a strong focus on explainability, security, and robustness. Within this talk, they will also share practical lessons from deploying agentic systems at scale and provide actionable insights into enabling enterprise-scale self-service analytics powered by AI.
Time
16:15 (CEST)
Location
Rollprüfstand
Language
DE
Speaker
Talk
AI hits different - Ein Musikalbum aus dem Maschinenraum
What happens if, during a sabbatical, you don’t end up writing a book but instead create a music album - with artificial intelligence as a co-producer? This talk demonstrates how a complete album can be produced in just a few weeks using various AI tools - from the very first lines of lyrics and the arrangement to the mastering process.
The language models employed generate song lyrics and inherit stylistic traits through targeted in-context learning with the creator’s own songs. Suno AI provides initial musical ideas, Splitter AI separates vocals, drums, and instruments, and Basic Pitch converts audio signals into MIDI data. Additionally, MidJourney, Sora, and AI-based mastering engines are used. The workflow combines creative control with efficient automation - far from one-click solutions.
Music producer Ivo Moring (known for DJ Ötzi’s “Ein Stern”) will also offer insights into professional music production in the mainstream sector. He explains how established production processes work in major labels, what role AI already plays, and the challenges it presents.
This talk delivers a practical overview of the tools, workflows, and creative potential of modern, AI-supported music production - offering concrete takeaways for developers, creatives, and tech enthusiasts alike.
Time
9:00 (CEST)
Location
Rollprüfstand
Language
EN
Speaker
Talk
Hey, washing machine? Natural language meets appliance interaction at BSH
What if your washing machine could explain error codes or suggest stain removal? BSH’s LLM-powered chatbot makes this real - blending diagnostics, care advice, and control into natural dialogue. Join the talk to unpack the why (user needs) and how (technologies) behind this innovative leap in appliance interaction.
Time
16:15 (CEST)
Location
Stellwerk
Language
EN
Speaker
Talk
Leveraging confidential data: Implementing a scalable RAG solution for insurance and finance
The insurance and finance industries rely heavily on extracting structured insights from unstructured data while maintaining strict data confidentiality standards. Certain transactions generate large volumes of documentation that small expert teams must process under tight timelines - making team capacity, not deal flow, the main bottleneck. Munich Re Markets, in collaboration with TNG and other partners, has developed a specialized Retrieval-Augmented Generation (RAG) application to address these needs. This solution enables querying complex documents - including table extraction - using the best available AI models, selected based on confidentiality requirements.
The speakers will walk through the journey of designing this workflow, including document parsing and indexing, data retrieval, and answer generation in a scalable multi-tenant architecture. The talk addresses challenges such as handling documents and queries in different languages, extracting information from tables, and enabling users to search by document category. Test suites for quality evaluation of both the retrieval and the answer generation step ensure consistent quality, which is incrementally improved by adopting newly developed AI techniques. The speakers also share their insights on user experience (UX) design for the needs of non-technical users.
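As a minimal, hedged picture of the retrieval-and-generation loop such a system is built around (the component choices below are assumptions, not Munich Re Markets' actual stack):
```python
# Minimal retrieval-augmented generation loop, sketched with assumed components.
from sentence_transformers import SentenceTransformer
import numpy as np

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

def build_index(chunks: list[str]) -> np.ndarray:
    """Embed parsed document chunks once at indexing time."""
    return embedder.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, chunks: list[str], index: np.ndarray, k: int = 4) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = embedder.encode([question], normalize_embeddings=True)
    top = np.argsort(-(q @ index.T)[0])[:k]
    return [chunks[i] for i in top]

def build_prompt(question: str, context: list[str]) -> str:
    """Ground the answer-generation step in the retrieved context."""
    return ("Answer using only the context below.\n\n"
            + "\n---\n".join(context)
            + f"\n\nQuestion: {question}")
```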
Time
13:45 (CEST)
Location
Dampfdom
Language
EN
Speaker
Talk
State of the Open LLMs in 2025
In this talk, Vaibhav Srivastav from Hugging Face will share the latest trends shaping open large language models in 2025 - from new model releases and adoption patterns to the challenges of scaling and regulation. The session will highlight where open-source is thriving, what hurdles remain, and what engineers should expect next.
Time
14:55 (CEST)
Location
Strietzel
Language
EN
Speaker
Talk
The next wave of disruption in engineering
In the era of LLMs, the question keeps coming up of where we stand with the applicability of large-scale deep learning models in scientific and engineering domains. Put slightly differently: what type of scaling is needed to disrupt verticals beyond language and vision? The discussion starts by revisiting recent triumphs in weather and climate modeling, which culminated in the first foundation model for Earth systems. It then turns to the challenges and conceptual barriers that need to be overcome for the next wave of disruption in engineering.
Time
9:00 (CEST)
Location
Strietzel
Language
DE
Speaker
Talk
Von der Idee bis zum Rollout - der Betrieb von KI-Lösungen
Artificial intelligence promises enormous potential - but the path from the initial idea to productive rollout and sustainable operation is complex. In this presentation, Mark Brinkmann shares his experiences from recent years. He shows what typical success factors and stumbling blocks look like throughout the entire life cycle - from conception to development to operation. A special focus will be on STACKIT, the cloud platform that serves as the foundation for sovereignty and security.
AI - Tools
Time
9:00 (CEST)
Location
Röhrl / Geistdörfer
Language
EN
Speaker
Talk
Can AI see you? Your introduction to Computer Vision and GenAI
Your computer can understand the visual world around you - or even create a new one. From FaceID and Snapchat Filters to Stable Diffusion, visual AI has become part of many everyday applications - and it's easier to integrate than ever. Let's discover the secrets behind computer vision and visual GenAI! Starting from the very basics of computer vision, the speakers will explore how we can leverage deep learning techniques to process, analyze, and alter visual information.
They will kick things off with a brief introduction to how images and videos are processed in computers. After that, they will do a deep dive into Convolutional Neural Networks (CNNs): the type of neural network that took the world of computer vision by storm. They will show you how different CNN architectures help master different disciplines of computer vision such as classification, segmentation, or pose detection. Finally, they will look into GenAI and explore how diffusion models like Stable Diffusion generate and complete images. Throughout this learning experience, the theoretical concepts will be visualized with a plethora of live demos, and their application will be shown in a selection of real-world business cases. Can't wait to get started on your own? They will introduce you to some open source vision frameworks like MediaPipe or YOLO which you can try out at home.
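As a small taste of the CNN building blocks the speakers cover, here is a self-contained toy classifier in PyTorch; it is purely didactic and unrelated to the live demos.
```python
# A minimal convolutional classifier illustrating convolution, pooling, and a classification head.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample, keep strongest responses
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

logits = TinyCNN()(torch.randn(1, 3, 32, 32))  # one 32x32 RGB image -> class scores
```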
Time
11:30 (CEST)
Location
Kohlebunker
Language
EN
Speaker
Talk
Chutes.ai & the decentralisation hack: Rival AI Cloud titans with a handful of coders
Chutes, a decentralised AI-inference network, proves that a finely tuned incentive layer can replace billions in data-centre spend. Independent operators compete to supply GPU power and are rewarded for reliability and alignment with network goals. Daily token emissions are algorithmically distributed by a public score. The result: industry-low prices paired with unrivalled stability. Usage has exploded to 150 billion tokens per day from 500k+ users, matching volumes of leaders like OpenAI. This talk dissects the growth metrics, transferable economic design and societal upside of community-owned AI - from affordable access in under-served or censorship-restricted regions to compute that migrates toward renewable-energy surpluses. The takeaway: Get the incentives right, and global-scale AI builds itself.
Time
13:45 (CEST)
Location
Kohlebunker
Language
EN
Speaker
Talk
How to run your LLM and how to benchmark it
Talk 1: Hosting AI in-house: The TNG GPU cluster story
Self-hosted Large Language Models (LLMs) enable companies to deeply integrate AI into company processes and reap maximum productivity gains while still respecting the privacy of sensitive business data. But what are the requirements and pitfalls when you want to run LLMs on your own hardware?
TNG owns and operates an in-house GPU cluster with more than 50 GPUs, powering AI services such as chat, coding agents, transcription, and image generation, and serving a multitude of TNG-internal applications. This talk provides an overview of how to run your own AI data center. It addresses challenges in cluster management when you grow from a single machine to multiple nodes with different hardware, reviews hardware requirements for modern LLMs, and outlines performance optimizations that are necessary when hundreds of users want to access LLMs simultaneously with different load patterns.
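For orientation on the self-hosting topic, a single-node deployment of an open-weights model with vLLM might look like the sketch below; the model name and parallelism degree are placeholders rather than TNG's actual cluster configuration.
```python
# Minimal sketch of serving a self-hosted model with vLLM across several GPUs on one node.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct",  # placeholder: any open-weights model the hardware can hold
          tensor_parallel_size=4)            # shard the weights across 4 GPUs

outputs = llm.generate(["Summarize our deployment checklist in three bullet points."],
                       SamplingParams(max_tokens=200, temperature=0.2))
print(outputs[0].outputs[0].text)
```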
Talk 2: Beyond the score: Your guide to benchmarking LLMs
How do you select LLMs to deploy on a GPU cluster for hundreds of engineers? LLM benchmark results are proudly presented for every new model, but what's behind the numbers? Lennard Schiefelbein and Jonathan du Mesnil de Rochemont guide you through designing benchmarks that actually measure the real-world problem-solving skills of LLMs.
In this talk, they discuss methods for measuring business-relevant LLM skills such as coding, tool calling and performance in agentic workflows. After exposing some of the issues with popular benchmarks, they show you different methods to evaluate LLMs. Discover techniques such as evaluations with ground truth, LLM-as-a-judge and testing LLM-generated code. Additionally, they highlight common pitfalls, and show you how prompt variations and generation parameters can silently impact your benchmark. They provide a decision framework balancing latency, cost, and output quality for selecting LLMs.
You will be equipped to better understand the advertised performance of LLMs - beyond the score. Get ready to evaluate LLMs like an engineer!
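As a hedged sketch of two of the evaluation styles mentioned (ground-truth comparison and LLM-as-a-judge), assuming an OpenAI-compatible client and a placeholder judge model:
```python
# Two simple evaluation styles: exact match against ground truth, and LLM-as-a-judge.
# The judge call assumes an OpenAI-compatible endpoint; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # assumes an API key and endpoint are configured in the environment

def exact_match(prediction: str, reference: str) -> bool:
    """Ground-truth evaluation: normalize and compare."""
    return prediction.strip().lower() == reference.strip().lower()

def judge(question: str, answer: str, judge_model: str = "gpt-4o-mini") -> int:
    """LLM-as-a-judge: ask a second model to grade the answer from 1 to 5."""
    prompt = (f"Rate the following answer to the question on a scale of 1-5. "
              f"Reply with a single digit.\n\nQuestion: {question}\nAnswer: {answer}")
    response = client.chat.completions.create(
        model=judge_model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # fix generation parameters so the benchmark itself stays stable
    )
    return int(response.choices[0].message.content.strip()[0])
```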
Time
14:55 (CEST)
Location
Röhrl / Geistdörfer
Language
EN
Speaker
Talk
Reason and conquer: How AI agents automate the web
What if a digital teammate could click, read, and type across the web and your internal tools – so you don’t have to? Meet TWAIN, TNG’s Web AI Navigator: a multi-agent system that actually gets work done. From filing and updating Jira issues, digesting Confluence pages, and booking travel to triaging and replying to emails, TWAIN autonomously navigates real workflows instead of stopping at suggestions.
Dr. Maxim Stykow and Alexander zur Bonsen will share how they built it and what they learned the hard way. Under the hood, TWAIN combines browser automation, internal API access, and LLM planning, with the Model Context Protocol (MCP) enabling a plug-and-play ecosystem of tools for “infinite” extensibility.
Join the talk for:
• Live, end-to-end demos of TWAIN handling tasks that usually take multiple tabs and many clicks.
• Architecture patterns for robust AI agents at work, not just in labs.
• Battle-tested solutions to messy realities: SSO/auth flows, flaky DOMs, context limits, rate limits, and error recovery.
• Insights into how they bake in privacy-first principles while keeping the experience conversational – and a little bit magical.
You’ll leave with practical patterns, gotchas to avoid, and checklists you can reuse to bring similar agents into your own environment. This talk is for engineers, architects, and product folks who want AI that moves the needle from “chat” to “ship”. Bring your questions and your toughest workflows.
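To illustrate the kind of atomic browser step an agent tool might wrap (purely illustrative; TWAIN's actual tools, selectors, and MCP wiring are not shown here), a Playwright call can look like this:
```python
# One atomic browser "tool": open a page, wait for it to load, return its title.
from playwright.sync_api import sync_playwright

def read_page_title(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="domcontentloaded")
        title = page.title()
        browser.close()
        return title

print(read_page_title("https://example.com"))
```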
Time
11:30 (CEST)
Location
Röhrl / Geistdörfer
Language
EN
Speaker
Talk
Rise of the AI testers: Generating unit and end-to-end tests with AI agents
How can you automatically generate high-quality tests for existing codebases? In this talk, Dr. Marie Bieth and Dr. Michael Oberparleiter demonstrate AI-powered approaches to creating both unit and end-to-end tests using Large Language Models (LLMs), achieving high code coverage and comprehensive business logic validation across diverse projects.
Unit Test Generation: They employ a divide-and-conquer strategy that provides models with targeted repository context for deep codebase understanding. A containerized feedback loop enables iterative test refinement through execution results, continuously improving test quality. In the talk, the speakers compare performance across leading commercial and open-weights LLMs to identify optimal solutions.
End-to-End Test Generation: Starting from simple natural language descriptions of user flows, they demonstrate how combining specialized LLMs for visual application understanding and test code generation produces comprehensive, maintainable Playwright test suites.
This approach delivers tests that follow best practices, reducing manual testing effort while maintaining quality standards.
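A conceptual sketch of such a generate-run-refine loop is shown below; `generate_tests` stands in for the LLM call and is hypothetical, and the execution step simply shells out to pytest rather than the speakers' containerized setup.
```python
# Conceptual generate -> run -> refine loop for test generation; `generate_tests` is a
# hypothetical stand-in for the LLM call with repository context and execution feedback.
import pathlib
import subprocess

def run_pytest(test_path: str) -> subprocess.CompletedProcess:
    """Execute the generated tests and capture the report for the next iteration."""
    return subprocess.run(["pytest", test_path, "-q"], capture_output=True, text=True)

def refine_tests(source_file: str, generate_tests, max_iterations: int = 3) -> str:
    tests = generate_tests(source_file)                    # first draft from repository context
    for _ in range(max_iterations):
        pathlib.Path("test_generated.py").write_text(tests)
        result = run_pytest("test_generated.py")
        if result.returncode == 0:                         # all tests pass: stop iterating
            break
        tests = generate_tests(source_file,                # feed failures back into the model
                               feedback=result.stdout + result.stderr)
    return tests
```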
Time
16:15 (CEST)
Location
Kohlebunker
Language
EN
Speaker
Talk
Sparse Models are the future: A deep dive into Mixture-of-Experts
The limits of scalability have been reached. AI training compute has increased by 10^21 since AlexNet, but these models can’t just get bigger forever. The most powerful language models today use less than 10% of their parameters for any given token, achieving significant computational savings while maintaining high quality. This efficiency comes from Mixture-of-Experts (MoE) architectures, which route different inputs to specialized expert networks instead of activating all parameters, saving compute. Drawing on the latest trillion-parameter model design choices, this talk will cover why sparse architectures built on MoE represent the most viable path for efficient AI scaling in production systems.
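For readers who want to see the routing idea in code, here is a minimal top-k MoE layer in PyTorch; it is a didactic sketch of the mechanism, not any specific production architecture.
```python
# Minimal top-k Mixture-of-Experts routing layer: each token is sent to k of N experts.
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    def __init__(self, dim: int = 512, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # gating network produces per-expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        weights, idx = self.router(x).softmax(dim=-1).topk(self.k, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):                        # only k experts run per token
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

y = TopKMoE()(torch.randn(4, 512))  # 4 tokens, each routed to 2 of 8 experts
```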
Time
11:30 (CEST)
Location
Rollprüfstand
Language
EN
Speaker
Talk
Why I let AI agents write my code (and why your teams should too)
Six months ago, the team was skeptical about AI-assisted development. Today, AI agents handle the boilerplate while the developers focus on architecture and innovation - and their productivity has been fundamentally transformed.
This isn't another AI hype talk. It's an honest journey from skeptics to advocates, with real data and live demonstrations showing how agent-assisted coding transforms daily workflows. You'll see their actual before/after productivity measurements and learn why early-adopting teams are gaining significant competitive advantages in delivery speed and code quality.
Through live coding sessions and real project examples, they will demonstrate how AI agents excel at routine tasks while developers focus on higher-value problem-solving. They will address the real barriers keeping organizations hesitant - legacy codebases, massive repositories, limited AI knowledge - with practical solutions from their extensive production experience across multiple enterprise clients.
Architecture & Design
Time
9:00 (CEST)
Location
Stellwerk
Language
EN
Speaker
Talk
Secure bare-metal Kubernetes at idgard: Zero internal access to customer data by design
Idgard, a security-first file-sharing application, comes with stringent operational requirements to guarantee user data privacy. Its core promise in particular - no unauthorized access, not even by developers or operations personnel - drastically restricts the operational possibilities. This talk explores these challenges and the technical intricacies of establishing a secure, bare-metal Kubernetes environment to meet these requirements, covering network bootstrap, immutable cluster setup, multi-layer access control, and robust monitoring and alerting mechanisms.
Cloud
Time
10:10 (CEST)
Location
Stellwerk
Language
EN
Speaker
Talk
(On-site only) Dirty trenches & clean code - What to learn from a telco’s Cloud-first strategy
OXG was founded in 2023 as a subsidiary of Vodafone to build a fiber-network telco as cleanly as possible: pulling fiber cables while using a modern Cloud-first approach for the tech stack.
This worked - mostly. Today, OXG employs a mixture of self-built and third-party components. While the glue components holding the proprietary business logic come as shiny, serverless, Cloud-native, fully scripted one-click-deployment artifacts, OXG - like every other company - is still bound by reality.
Setting up some of the not-so-Cloud-native applications felt like smuggling a dinosaur onto a spaceship – complete with hard-coded passwords, expensive extra luggage, suspicious security findings, and a boarding process that involved three deployment pipelines and a lot of coffee.
This talk will give an overview of the ups and downs of the last two years including the reason why holding a gym membership is the same as having a backup & recovery plan.
Time
11:30 (CEST)
Location
Stellwerk
Language
EN
Speaker
Talk
BMW's software development dashboard - built with AWS Glue & QuickSight
Your data lake is filled – but how do you extract concrete results? Using the example of the BMW software development dashboard, Ulf Richter and Péter Kaposvári show step by step how to develop a serverless dashboard in AWS with Glue and QuickSight. The solution offers cost efficiency and high scalability while requiring minimal operational effort.
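As a rough sketch of the Glue side of such a pipeline (the database, table, and bucket names are hypothetical), a job can read a catalogued table and write curated Parquet for QuickSight to pick up:
```python
# Sketch of a Glue PySpark job: read a catalogued table, write curated Parquet to S3.
# Database, table, and bucket names are hypothetical placeholders.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue = GlueContext(SparkContext.getOrCreate())

# Read the raw table registered in the Glue Data Catalog.
frame = glue.create_dynamic_frame.from_catalog(database="dev_metrics", table_name="builds")

# Write a curated copy that a QuickSight dataset can point at.
glue.write_dynamic_frame.from_options(
    frame=frame,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/builds/"},
    format="parquet",
)
```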
Time
14:55 (CEST)
Location
Stellwerk
Language
EN
Speaker
Talk
From field to cloud
Farmers still wake up with the sun, but now their first stop isn’t the field. It’s the dashboard. From fertilizer application backed by satellite imagery to real-time monitoring of sprayer flow rates, agriculture is being redefined by data. The challenge is turning these insights into executable field plans. Discover how the digital farming platform bridges the gap between field and office by leveraging a microfrontend and microservice architecture. Learn how the team combines mobile and web applications to empower farmers, distributors, and technicians with seamless data flow and user experiences. They will walk through the technical challenges they faced and address issues around consistency, cross-team alignment, and complexity. Join the talk to see how Yara Farming Solutions Europe transforms digital farming.
Computer & Games
Time
14:55 (CEST)
Location
Kohlebunker
Language
EN
Talk
How to make games in 2025 - How AI and emerging technologies are transforming game development
For decades, jokes were made in the gaming industry whenever someone mentioned AI. Not out of arrogance but out of frustration. The topic was discussed, researched, and dismissed too many times without ever being taken seriously. It took a former game developer like Demis Hassabis to show the world what AI is truly capable of - like how to win at Go. But now, in 2025, we live in a completely different world. What does that mean for game developers today? What new tools and approaches are emerging, what are we already using, and what opportunities are opening up? And most importantly: How can we shape this new era of game development together?
Time
13:45 (CEST)
Location
Kleine Lokhalle
Language
EN
Speaker
Talk
enigame - The joy of solving tricky tasks: Europe's largest code and cipher cracking contest
Humans are innate problem solvers. The desire to discover secrets, unravel mysteries, and solve puzzles runs deep in human nature. Many of us even make a living out of it: Professionals in STEM fields tackle complex puzzles every day. But what makes a problem feel like a joyful challenge rather than a tedious task? This talk highlights the core of enigame's daily work - codes, ciphers, patterns, logic, and creativity - through the lens of puzzle hunts.
The talk explores the core elements of designing puzzles that truly engage people, using examples from Europe's largest puzzle hunt, enigame. It shows how well-crafted puzzles can foster creativity, build mental flexibility, and create a strong sense of community and collaboration.
Hardware & Reality Hacking
Time
16:15 (CEST)
Location
Dampfdom
Language
DE
Speaker
Talk
(On-site only) Elektrofahrzeug CONCEPT AMG GT XX - in acht Tagen um die Welt
Electromobility poses unique challenges for engineers - especially when it comes to achieving high efficiency at high speeds and enabling ultra-fast charging. Using the example of the Mercedes-AMG GT XX Concept, this talk illustrates how technical innovation, team spirit, and a focused methodology can set new benchmarks in the industry. In the summer of 2025, the vehicle set no fewer than 25 records in Southern Italy, impressively demonstrating the vast potential of the next generation of electric vehicles. Among its achievements, it covered a distance of 40,075 kilometers in under eight days, proving its ability to meet the toughest demands even in long-distance scenarios. The presentation will also highlight how teamwork between Formula 1, research departments, and marketing can be structured in a highly targeted and effective way.
Time
9:00 (CEST)
Location
Kleine Lokhalle
Language
EN
Speaker
Talk
From concept to launch: Challenges in rocket design
At the 2025 European Rocketry Challenge in Portugal, BME Suborbitals became the first Hungarian team to compete with a rocket powered by a self-developed solid motor. That rocket was Aurora powered by evosoft, the team's most complex project yet, featuring distributed avionics, a new internal frame structure, and an actively controlled airbrake system. Designed to carry a 3U CubeSat and a CanSat payload to an altitude of 3,000 meters, Aurora represents years of student-driven innovation and hard work.
In this talk, you can gain a closer insight into the team's road to EuRoC and the challenges and triumphs of turning Aurora from concept to launch, and you’ll even have the opportunity to view the rocket in person.
Time
14:55 (CEST)
Location
Kleine Lokhalle
Language
DE
Speaker
Talk
Ready for Liftoff: KI, Resilienz und der Wille zum Erfolg
Artificial intelligence accelerates development cycles and opens up new avenues even for highly complex systems such as rocket engines. But anyone who has ever experienced a rocket launch up close knows that success does not come about through perfect planning or algorithms alone. A sensor error, a sudden change in weather, or an unexpected system reaction – everything can change in a matter of seconds. In moments like these, it's not the obvious that counts, but the ability to find unconventional solutions, adapt plans flexibly, and still act with determination. This is precisely where the decisive difference lies that distinguishes humans despite increasingly powerful AI: perseverance, resilience, and the unconditional will to succeed. This “special something” is invaluable to companies – and it will continue to be so in the future. The talk brings this difference to life, supported by direct insights into the development cycle of a student-built rocket, right up to launch.
Robotics
Time
10:10 (CEST)
Location
Kleine Lokhalle
Language
EN
Speaker
Talk
(On-site only) Robot Foundation Models
Physical intelligence – the ability of machines to perceive, reason, and act in the physical world – has improved immensely in the past two years. Building on LLMs and Vision Language Models, Robot Foundation Models learn to control robots based on position, vision, and language input. Valentin Bertle, Daniel Klingmann and Til Theunissen give an overview of the advances and what they've learned by applying Robot Foundation Models to industrial tasks.
Time
9:00 (CEST)
Location
Dampfdom
Language
DE
Speaker
Talk
Autonomie, Robotik, KI - und der Mensch
According to robotics research, robots are able to perform tasks autonomously in more and more contexts. With the help of artificial intelligence, robots can learn to adapt to different environments and challenges. We then speak of artificially intelligent autonomous robots. How does this adaptive, technically autonomous behaviour of a robot relate to human autonomy, which we regard as a constitutive element of personhood?
This talk examines this question from the perspective of technology assessment. In which contexts does it make sense to let autonomous robots act instead of humans? When is collaboration between humans and robots the method of choice? And in what way does human autonomy meet technical autonomy in this collaboration? The different levels of autonomy in autonomous driving serve as an example here.
Time
11:30 (CEST)
Location
Kleine Lokhalle
Language
EN
Speaker
Talk
Humanoids driving go-karts
Robots driving go-karts at 30 km/h, guiding visitors through the Deutsches Museum, and cycling - what could possibly go wrong when you teach robots to take over machines made for humans? This talk explores what worked, what failed, and which safety precautions are needed when humanoids work with people.
Time
16:15 (CEST)
Location
Kleine Lokhalle
Language
EN
Speaker
Talk
Robots among us: Advances in AI for everyday androids
Robotics is transitioning from factory floors to our everyday lives, particularly through humanoid robots. This session features a live demonstration of our Unitree G1 android, exploring how advances in AI, increasing data availability, and edge computing hardware are transforming science fiction into reality.
The session begins with an overview of robotics and its market potential. Next, robot training is examined, demonstrating how integrated hardware and software bridge the gap between simulation and real-world deployment. Finally, Vision-Language-Action (VLA) models are explored for enabling meaningful human-robot interaction and future applications.
Tools & Methods
Time
11:30 (CEST)
Location
Strietzel
Language
EN
Speaker
Talk
Conversational, generative, contextual: A framework for AI-era product design
The traditional user experience (UX) design paradigm is no longer applicable in the context of AI. It is not possible to design deterministic user flows when systems generate different outputs from identical inputs. Most AI failures aren’t model failures - they’re context failures.
This talk outlines pragmatic methods for designing AI-powered products, encompassing four pivotal dimensions: non-deterministic UX patterns, generative UI systems, context engineering workflows, and enterprise risk assessment. Learn about concrete methods for testing unpredictable systems, measuring effectiveness without traditional metrics, and communicating the design intent that makes or breaks AI implementations.
You’ll leave with an actionable framework, assessment tools, and design & communication patterns - moving beyond AI hype to systemic design practice.
Time
13:45 (CEST)
Location
Stellwerk
Language
EN
Speaker
Talk
Education as a noisy signal: How to build robust predictions from grades using context data, econometrics, and AI
TNG Technology Consulting proudly advertises an academic staff ratio of 99.9% on its website. But what is the real value of academic education? Can grades truly predict later success in the labour market? And are physics students really the smartest of them all? Time for an education-economics perspective. This talk explores the “return” of education and the predictive validity of grades. Expect latent abilities, selection effects, instrumental variables, plenty of noise, a touch of AI, and a few fun facts from the world of higher education.
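As a hedged pointer to the kind of model the abstract alludes to (illustrative only, not the speaker's actual specification), the textbook Mincer-style wage regression and its instrumental-variable fix look like this:
```latex
% Textbook Mincer-style wage regression (illustrative; not the speaker's model)
\ln w_i = \alpha + \beta S_i + \gamma_1 X_i + \gamma_2 X_i^2 + \varepsilon_i
% S_i: years of schooling, X_i: labour-market experience, \beta: the "return" to education.
% If unobserved ability makes S_i endogenous, an instrument Z_i with
% \operatorname{Cov}(Z_i, \varepsilon_i) = 0 (e.g. a policy change affecting schooling)
% identifies \beta via two-stage least squares.
```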