17–23 January 2026

Part 1: Applied AI in HCI Research
Part 1 explores how applied AI is reshaping HCI research, from AI-supported paper writing and engagement with theory to the ethical and emotional entanglements of AI content creation. Participants gain a grounding in reinforcement learning, see it applied in avalanche search-and-rescue research, and discuss real-time interaction with intelligent agents in user research. The block also covers practical research acceleration: systematically sourcing grey literature with GenAI and modern search engines, improving experiment design with AI, and using GenAI for coding and data analysis.
Schedule
| Time | Saturday, 17.01.2026 | Sunday, 18.01.2026 | Monday, 19.01.2026 |
|---|---|---|---|
| 9:00-10:30 | | TBD (Bastian P., Florian A.) | |
| Break | | | |
| 11:00-12:30 | | | Closing (Albrecht S.) |
| Break | | | |
| 13:00-15:00 | | Realtime Interaction with Intelligent Agents in User Research (Matthias S.) | |
| Coffee | | | |
| 15:30-16:45 | Introduction & Organisation (Albrecht S., WS-Orga Team) | Systematically Sourcing Grey Literature with GenAI and Modern Search Engines (Marco G.) | |
| Break | | | |
| 17:00-18:00 | Writing Papers with AI support – How is AI changing current research? (Albrecht S., Hans G.) | Improving your experiment design with AI (Fiona D.) | |
| Dinner | | | |
| 19:30-21:00 | Basics of Reinforcement Learning (Sven M.); Avalanche Search & Rescue Research (Pascal K.) | Coding / Data analysis with GenAI (Maxi W., Jan L.) | |
| 21:00-21:30 | Life outside Academia: Differences in Research, How to Write a CV and Apply for Jobs (Luke H., Markus F., Sarah V.) | Engagement with Theory in HCI / Emotional and Ethical Entanglements in AI Content Creation (Jasmin N.) | |
Part 2: Multimodal Generative Models – Text, Images, Videos
Part 2 examines multimodal generative models across text, images, and video, with an emphasis on building, adapting, and responsibly deploying these systems in interactive contexts. Sessions span AI-enabled hardware, AR-focused pipelines such as LLM-generated shader code and cognitive augmentation, and multimodal human-agent interaction that integrates sensor data. Hands-on tutorials cover finetuning LLMs via system prompts, Stable Diffusion, and RAG. The block also addresses conversational agents with voice and gaze awareness, how guardrails in LLMs work at the source-code level, and methodological shifts such as QDA in the age of LLMs. It closes with wellbeing applications, PhD expectations, mentoring, and discussions of scientific integrity.
Schedule
| Time | Monday, 19.01.2026 | Tuesday, 20.01.2026 | Wednesday, 21.01.2026 | Thursday, 22.01.2026 | Friday, 23.01.2026 |
|---|---|---|---|---|---|
| 13:00-15:00 | | AI Enabled Hardware (Philipp T., Boris K.) | Reprogramming your vision: using LLMs to generate shader code for adjusting vision in Augmented Reality (Yanni M., Florian M.) | How do guardrails in LLMs work? Looking at the source code and removing them (Katharina B., Oliver H.) | Departure (Checkout 10 am) |
| Coffee | | | | | |
| 15:30-16:45 | | Tutorial: LLMs & Finetuning LLMs via System Prompts (Chris L., Yannick W.) | Tutorial: Stable Diffusion (Steeven V.) | Tutorial on RAG (Jesse G.) | |
| Break | | | | | |
| 17:00-18:00 | Implementing Conversational Agents with Voice and Gaze Awareness (Heiko D.) | QDA in the age of LLMs: does it even make sense any more? (Pawel W.) | AI for Wellbeing (Nadine W., Evropi S.) | | |
| Dinner | | | | | |
| 19:30-20:30 | Introduction & Organisation (Albrecht S., WS-Orga Team) | Cognitive Augmentation and Manipulation with AR (Jan G.) | How to integrate sensor data models: multimodal human-agent interaction (Thomas K.) | Self-organized Dinner | |
| 20:30-21:30 | Expectations for a PhD (Albrecht S.) | Mentoring (Jasmin N.) | Scientific Integrity (Albrecht S., Jasmin N., Pawel W.) | | |
Organizers:







