Merry Christmas 2016


Look inside your envelope,

the Christmas card should have them all:

Wooden wheel, a snowflake, too.

Copper things, we have a few.

Resistor, light, they’re all brand new,

a paper cut out with a view.


To bring the gift of light,

for every winter night.

You will need the following pieces for the first assembly part:

four copper strips (two long, two short), LED, resistor, wooden rim, wooden plug and the black piece of paper.


Slide in the LED,

into the holes that be.

Long leg positioned through the oak,

on the left side of the thickest spoke.

First, insert the LED into the two holes on the wooden rim. The longer leg of the LED goes into the left hole when the thickest spoke points away from you.


Resistor needs to be installed,

’cause, otherwise, it will explode.

Remember the longer diode leg,

it connects to the pole that adds (+).

Bend the legs towards the other side of the spoke and attach the resistor to one of the legs. Make sure it is tightly connected. Remember the position of the longer leg for later on. 
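The rhyme is only slightly dramatic: without a current-limiting resistor, the LED would draw far too much current and burn out. As a rough sanity check – a sketch with assumed values, since the kit's actual component ratings are not stated here – Ohm's law gives the required resistance:

```python
# Current-limiting resistor for an LED, via Ohm's law: R = (Vs - Vf) / I.
# All values below are assumptions for illustration, not the kit's specs.
v_supply = 5.0    # USB supply voltage (volts)
v_forward = 2.0   # typical forward voltage drop of a standard LED (assumed)
i_target = 0.020  # 20 mA target current through the LED (assumed)

r_ohms = (v_supply - v_forward) / i_target
print(r_ohms)  # 150.0 -> a 150-ohm resistor would be a sensible choice
```

Any nearby standard value (e.g. 150 Ω or 220 Ω) keeps the LED safely below its rated current.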


Onto the wooden ring,

attach with copper string.

Wrap around the leggy bit,

the extra strings will make it fit.

Now it is time to attach the copper strips to the rim. Carefully place the two longer copper strips on both sides of the thick spoke using the adhesive side. Twist them around the LED legs and use the short strips to fix them in place. CAUTION: the adhesive side does not conduct electricity, so make sure the other side touches the LED legs.


The gluey side just won’t conduct,

hence wires must be nicely tucked.

Repeat the same step for the second leg. When finished, the result should look similar to the picture above.


Plus pole connects to long diode leg,

make sure this is correct.

Conductive strips for the plug,

need to be edge-snug.

Connect the plug to the two strips, using the small piece of black paper to keep the strips separated. Make sure to connect the longer LED leg to the plus pole (marked on the plug). If you have followed all steps up to now, you can simply connect it as shown above. The strips attached to the plug should be spaced as far apart as possible (on both sides), but must not overlap the edge.


Attach the plug you just might,

and you have a working light.

You may want to test your Christmas lantern now. Plug it into a USB charger and check whether the LED lights up. If not, check the wiring, especially the connection between the LED legs and the copper strips, and consider checking for correct polarity. Do not despair if your light does not work; you can still proceed and build the final lantern.


The lantern stands on wooden feet,

and then you join the paper sheets.

Be aware of text alignment,

else you prolong the assignment.

Now it is time to make it look like a lantern. Attach the feet to the wooden rim and connect the two paper ornaments. Check the text orientation before connecting both pieces. Be sure to insert the paper hooks as shown in the picture for a stable and round lantern.


And now it’s time to mount the screen,

it wraps around the wooden rim.

Slide the transparent paper into the lantern. For best visual results, position the fold in front of the overlap.


The snowflake goes on top,

ensure that room lights are now off.

Position the quadratic transparent paper on top of the lantern and add the snowflake ornament.


Now it’s time to plug it in,

and pour yourself a Christmas drink.


Highlights 2016!

Congratulations to our PhDs

This year, 6 people finished their PhD – the highest number in a single year so far – congratulations!


Exciting new Projects

In 2016 we started a number of new projects:


Event Highlight

50 Jahre mit der Maus


The Human-Computer Interaction group at the University of Stuttgart, the Informatik-Forum Stuttgart (infos e.V.), and the GI/ACM regional group Stuttgart/Böblingen invite you to the celebration "50 Jahre Computer mit der Maus" (50 years of computing with the mouse) on

Monday, 5 December 2016, 17:30 – 19:00.

Everyone interested is warmly invited to this event. Attendance is free of charge. Please register at:

From the clunky ball to the ergonomic high-end mouse: the handiest input device in human-computer interaction is celebrating its birthday. We celebrate with the mouse and take a look at the history of the first rodent from Germany. The computer mouse – who invented it? Today, the American Douglas C. Engelbart is regarded as the inventor of the first computer mouse. However, as early as October 1968 – a few months before Engelbart's mouse was made public – the team around Rainer Mallebrein at the German company Telefunken presented the Rollkugelsteuerung (RKS), a rolling-ball control that served as an input device attached to a display. Unfortunately, the engineers could not claim the fame of having invented the computer mouse: the device was deemed not worth patenting, as it was merely a modest accessory to the new TR440 mainframe. But mice stayed, and even 50 years later they are an indispensable part of our everyday lives. Together with Rainer Mallebrein, we look back at the origins of the mouse, learn about its first use case, and trace its evolution up to the present. Will the reliable rodent still be around in the future? Join the discussion afterwards and share your memories.

17:30 – 17:40: Welcome / admission

17:40 – 18:10: Prof. Dr. Horst Oberquelle, "A Short History of Input Devices – A Look into the Hamburg Mouse-oleum"

18:10 – 18:30: Valentin Schwind, Clemens Krause: Input devices from the Computermuseum Stuttgart and the 3D reconstruction of the RKS 100-86 (student project)

18:30 – 19:10: In conversation with Rainer Mallebrein and Prof. Rul Gunzenhäuser about the Rollkugel (moderated by Albrecht Schmidt)

19:10 – 19:30: Impulse talk: What comes after the mouse? The future of human-computer interaction (Albrecht Schmidt)

19:30: Conversations over snacks and drinks

We look forward to seeing you!

Merry Christmas 2015!

Dear colleagues and friends,

we hope you received your exclusive hcilab xmas construction kit, version 4.0. The design is inspired by the traditional German Christmas pyramid (you can find the instructions for last year's Christmas kit here). To fully enjoy the hcilab Christmas experience, here are quick steps for a successful assembly:


Click here to see the instructions

This is the first construction kit that – if assembled correctly – will move. We are curious about your result and would like to see if it really moves. Please take a picture, or better yet a video, of your final version; we would be happy if you emailed it to us, shared your accomplishment with us on Facebook, or linked to it in a blog comment.

It is now – almost to the day – 5 years since our group started at the University of Stuttgart. The group is much bigger now than we anticipated, and hence we work closer together (= more people have to share a room).

Absence leads to productivity?

While Albrecht enjoyed his time in Cambridge working with Per Ola Kristensson at the University and with Steve Hodges’ Sensor and Devices group at Microsoft Research, the group was very productive.


CHI2015 in Korea

The ACM SIGCHI conference was held in Asia for the first time this year, and we participated, presenting research results in various formats.


Postdoc++ == Professor

At the moment we are without postdocs, as Katrin Wolf and Oliver Korn have left us. Katrin accepted an offer from the University of Art & Design in Berlin (btk) and is now Professor for Media Informatics. Oliver Korn joined Offenburg University of Applied Sciences and is now Professor for Human-Computer Interaction. We are looking forward to 2016 and to the new postdocs we have hired.

German HCI conference with 750+ People in Stuttgart

In September we ran the German HCI conference Mensch und Computer, which had more than 750 attendees. Organizing such a conference at the University is a major undertaking and it kept many of us very busy.

Book with some of our research highlights

Our institute is growing, and we have a book highlighting ongoing projects and research in Computer Graphics, Visualization, Computer Vision, and Human-Computer Interaction. The book is in German and available online as a PDF. For the projects in Human-Computer Interaction and Socio-Cognitive Systems, see page 53 onwards.


New and Ongoing Projects

This year four new larger projects started, and we have several ongoing projects – too many to describe them all. If you are in Europe, Stuttgart is really close – please take the chance to visit us. Here are some pointers to the larger projects:

New projects in 2015

SFB-TRR161 (DFG funded). Collaborative Research Center on Quantitative Methods for Visual Computing with University of Konstanz and the Max Planck Institute in Tübingen.
CIMPLEX (EU H2020 funded). Bringing CItizens, Models and Data together in Participatory, Interactive SociaL EXploratories
DAAN (BMBF funded). Design of ambient adaptive notification environments.
FeuerWeRR (BMBF funded). An augmented-reality thermal camera for firefighters.

Ongoing projects

MotionEAP (BMWi funded and extended, as everyone is excited by the outcome so far): increasing efficiency in manual production through projected augmented reality.

Recall (European project): Re-thinking and re-defining memory augmentation. Augmenting the human mind has many facets.

SimpleSkin (European Project): Cheap, textile based whole body sensing systems for interaction, physiological monitoring, and activity recognition.

meSch (European Project): Material EncounterS with digital cultural Heritage.

SimTech (DFG funded): Interaction with simulation systems.

We are looking forward to a set of projects starting in 2016 and would like to wish all colleagues, friends, acquaintances, and our families the very best for 2016!

— Albrecht Schmidt

Some Publications in 2015

Florian Alt, Stefan Schneegass, Alireza Sahami Shirazi, Mariam Hassib, and Andreas Bulling. 2015. Graphical Passwords in the Wild: Understanding How Users Choose Pictures and Passwords in Image-based Authentication Schemes. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’15). ACM, New York, NY, USA, 316-322.

Katrin Wolf, Stefan Schneegass, Niels Henze, Dominik Weber, Valentin Schwind, Pascal Knierim, Sven Mayer, Tilman Dingler, Yomna Abdelrahman, Thomas Kubitza, Markus Funk, Anja Mebus, and Albrecht Schmidt. 2015. TUIs in the Large: Using Paper Tangibles with Mobile Devices. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15). ACM, New York, NY, USA, 1579-1584.

Yomna Abdelrahman, Alireza Sahami Shirazi, Niels Henze, and Albrecht Schmidt. 2015. Investigation of Material Properties for Thermal Imaging-Based Interaction. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 15-18.

Luis A. Leiva, Alireza Sahami, Alejandro Catala, Niels Henze, and Albrecht Schmidt. 2015. Text Entry on Tiny QWERTY Soft Keyboards. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 669-678.

Max Pfeiffer, Tim Dünte, Stefan Schneegass, Florian Alt, and Michael Rohs. 2015. Cruise Control for Pedestrians: Controlling Walking Direction using Electrical Muscle Stimulation. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 2505-2514.

Niels Henze, Thomas Olsson, Stefan Schneegass, Alireza Sahami Shirazi, and Kaisa Väänänen-Vainio-Mattila. 2015. Augmenting food with information. In Proceedings of the 14th International Conference on Mobile and Ubiquitous Multimedia (MUM ’15). ACM, New York, NY, USA, 258-266.

Markus Funk, Alireza Sahami Shirazi, Sven Mayer, Lars Lischke, and Albrecht Schmidt. 2015. Pick from here!: an interactive mobile cart using in-situ projection for order picking. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’15). ACM, New York, NY, USA, 601-609.

Lars Lischke, Sven Mayer, Katrin Wolf, Alireza Sahami Shirazi, and Niels Henze. 2015. Subjective and Objective Effects of Tablet’s Pixel Density. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 2769-2772.

Sven Mayer, Katrin Wolf, Stefan Schneegass, and Niels Henze. 2015. Modeling Distant Pointing for Compensating Systematic Displacements. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 4165-4168.

Martin Pielot, Tilman Dingler, Jose San Pedro, and Nuria Oliver. 2015. When attention is not scarce – detecting boredom from mobile phone usage. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’15). ACM, New York, NY, USA, 825-836.

Tilman Dingler and Martin Pielot. 2015. I’ll be there for you: Quantifying Attentiveness towards Mobile Messaging. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’15). ACM, New York, NY, USA, 1-5.

UNH IRES research interns on tour


Miriam and the UNH IRES research interns from the HCI lab went to Saarbrücken to visit the HCI group at Saarland University. The first stop was the DFKI (the German Research Center for Artificial Intelligence), where researcher Dr. Sven Gehring gave them a tour. The tour included demonstrations such as possible future grocery-store technology, a collaborative reading environment on a single screen, an augmented system for developing floor plans and similar layouts, and a poster voting system. In addition to the demos, they were walked through posters on various other projects underway at the HCI group in Saarbrücken, such as mobile projection to facilitate learning the guitar, large interactive displays, and even a mobile application for charting rock-climbing routes of various difficulties.


The second lab visited was the Cluster of Excellence on Multimodal Computing and Interaction (MMCI), where doctoral researcher Martin Weigel gave a tour and went into detail about his project on wearable touch sensors worn directly on the skin.

Chloe Eghtebas

Reading List: Developing Ubiquitous Computing Devices


Together with Thomas Kubitza I was teaching a class in the UBI summer school on Developing Ubiquitous Computing Devices. The summer school was held in Oulu and organized by Timo Ojala.

In total, the summer school included the following 4 courses:

  • EXPERIENCE-DRIVEN DESIGN OF UBIQUITOUS INTERACTIONS IN URBAN SPACES Prof. Kaisa Väänänen-Vainio-Mattila, Tampere University of Technology, Finland & Dr. Jonna Häkkilä, University of Oulu, Finland
  • DESIGNING MOBILE AUGMENTED REALITY INTERFACES Prof. Mark Billinghurst, University of Canterbury, New Zealand
  • DEVELOPING UBIQUITOUS COMPUTING DEVICES Prof. Albrecht Schmidt, University of Stuttgart, Germany
  • URBAN RESOURCE NETWORKS Prof. Malcolm McCullough, University of Michigan, USA

There was more than just work… if you are curious, have a look at Flickr for photos and more photos.

As some people asked for the reading list for our course on Developing Ubiquitous Computing Devices, I thought I would post it here. The reading list is also available as a PDF for download.

The reading list comprises 4 areas that are relevant to our course. We expect that you have come across the original paper by Mark Weiser introducing the concept of ubiquitous computing [1].

In the first part we have included papers that provide an overview of interaction concepts relevant in the context of ubiquitous computing: in particular, tangible interaction [2a] [2b], reality-based interaction [3], and embedded interaction [4]. The concept of informative art [5] is introduced, as well as the notion of persuasive technologies [16]. This part is concluded with an overview of interaction with computers in the 21st century [6].

In the second part we have included a paper on how to create smart devices [7], which gives an overview of sensors that may be useful for creating novel and reactive devices. In [8], sensing is extended to context and context-awareness. In the third part we introduce the .NET Gadgeteer platform [9] and show some trends in the development of ubiquitous computing devices: how we can create new products once we can fabricate things [10] and enclosures [10b], and how ubicomp technologies enable new devices and device concepts [11].

The final part provides some ideas for application scenarios that we plan to assess during the course. In [12], a concept for turning a bed into a communication medium is presented, and in [13] a social alarm clock is presented. A recent study [14] shows the impact of technology on communication, and [15] gives an overview of novel alarm clocks and sleep-monitoring devices.

[1] Weiser, M. (1991). The computer for the 21st century. Scientific american,265(3), 94-104.
[2a] Ishii, H., & Ullmer, B. (1997, March). Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the ACM SIGCHI Conference on Human factors in computing systems (pp. 234-241). ACM.
[2b] Ishii, H. (2008, February). Tangible bits: beyond pixels. In Proceedings of the 2nd international conference on Tangible and embedded interaction (pp. xv-xxv). ACM.
[3] Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., & Zigelbaum, J. (2008, April). Reality-based interaction: a framework for post-WIMP interfaces. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 201-210). ACM.
[4] Kranz, M., Holleis, P., & Schmidt, A. (2010). Embedded interaction: Interacting with the internet of things. Internet Computing, IEEE, 14(2), 46-53.
[5] Ferscha, A. (2007). Informative art display metaphors. In Universal Access in Human-Computer Interaction. Ambient Interaction (pp. 82-92). Springer Berlin Heidelberg.
[6] Schmidt, A., Pfleging, B., Alt, F., Sahami, A., & Fitzpatrick, G. (2012). Interacting with 21st-Century Computers. Pervasive Computing, IEEE, 11(1), 22-31.
[7] Schmidt, A., & Van Laerhoven, K. (2001). How to build smart appliances? Personal Communications, IEEE, 8(4), 66-71.
[8] Schmidt, A. (2013). Context-Aware Computing: Context-Awareness, Context-Aware User Interfaces, and Implicit Interaction. The Encyclopedia of Human-Computer Interaction, 2nd Ed.
[9] Villar, N., Scott, J., Hodges, S., Hammil, K., & Miller, C. (2012). .NET Gadgeteer: a platform for custom devices. In Pervasive Computing (pp. 216-233). Springer Berlin Heidelberg.
[10] Schmidt, A., Doring, T., & Sylvester, A. (2011). Changing How We Make and Deliver Smart Devices: When Can I Print Out My New Phone?. Pervasive Computing, IEEE, 10(4), 6-9.
[10b] Weichel C., Lau M., Gellersen,H. (2013). Enclosed: A Component-Centric Interface for Designing Prototype Enclosures. Tangible, embedded, and embodied interaction conference (TEI 2013)
[11] Hodges, S., Villar, N., Scott, J., & Schmidt, A. (2012). A New Era for Ubicomp Development. Pervasive Computing, IEEE, 11(1), 5-9.
[12] Dodge, C. (1997, March). The bed: a medium for intimate communication. InCHI’97 extended abstracts on Human factors in computing systems: looking to the future (pp. 371-372). ACM.
[13] Schmidt, A., Shirazi, A. S., & van Laerhoven, K. (2012). Are You in Bed with Technology?. Pervasive Computing, IEEE, 11(4), 4-7.
[14] Schmidt, A. (2006). Network alarm clock (The 3AD International Design Competition). Personal and Ubiquitous Computing, 10(2-3), 191-192.
[15] Shirazi, A. S., Clawson, J., Hassanpour, Y., Tourian, M. J., Schmidt, A., Chi, E. H., Borazio, M., & Van Laerhoven, K. (2013). Already Up? Using Mobile Phones to Track & Share Sleep Behavior. International Journal of Human-Computer Studies.
[16] Fogg, B. J. (2009, April). A behavior model for persuasive design. In Proceedings of the 4th international conference on persuasive technology (p. 40). ACM.

Appendix: .NET Gadgeteer Links (optional)

Keynote at PerDis2013: Proxemic Interactions by Saul Greenberg

Saul Greenberg presented the opening keynote at PerDis2013, the second international symposium on pervasive displays, held at Google in Mountain View, US.

Saul gave a brief history motivating the challenges that arise from the move to interactive ubiquitous computing environments. The degrees of freedom for interaction, when moving from graphical user interfaces to ubiquitous computing environments, are massively increased and the social context becomes central.

The other line of motivation Saul used is the notion of proxemics as studied in social science. The primary element is the distance between people: physical proximity determines a great deal of interpersonal interaction. Interpersonal relationships are at the heart of the theory by Edward Hall, who explored this as early as the 1960s ([1]; for a short overview and introduction see the Wikipedia pages on Edward Hall and on proxemics). It is interesting (and not undisputed) that people in computer science have extended the notion of proxemics beyond human-to-human interaction to include technologies.

Saul outlined the dimensions of proxemic interactions:

  • Distance 
  • Movement 
  • Location 
  • Orientation 
  • Identity 

In a paper in ACM Interactions, Saul provides a really good and easy-to-read introductory text on proxemic interactions, which is also well suited for teaching [2]. There is more on the dimensions, the overall concept of proxemic interactions, and potential applications in a 2010 paper they presented at ITS [3]. One aspect they have looked into in their work is supporting proxemic interactions through a toolkit [4]. For more details, we can look forward to the PhD thesis of Nicolai Marquardt, who worked in Saul's group and who will defend in a few weeks.

Proxemic interaction is a hot topic, and several researchers have started to explore this space. There is also a Dagstuhl Seminar on the topic later this year, organized by Saul Greenberg, Kasper Hornbæk, Aaron Quigley, and Harald Reiterer.

[1] Hall, E. T. (1969). The hidden dimension. New York: Anchor Books.
[2] Greenberg, S., Marquardt, N., Ballendat, T., Diaz-Marino, R., & Wang, M. (2011). Proxemic interactions: the new ubicomp?. interactions, 18(1), 42-50.
[3] Ballendat, T., Marquardt, N., & Greenberg, S. (2010, November). Proxemic interaction: designing for a proximity and orientation-aware environment. In ACM International Conference on Interactive Tabletops and Surfaces (pp. 121-130). ACM.
[4] Marquardt, N., Diaz-Marino, R., Boring, S., & Greenberg, S. (2011, October). The proximity toolkit: prototyping proxemic interactions in ubiquitous computing ecologies. In Proceedings of the 24th annual ACM symposium on User interface software and technology (pp. 315-326). ACM.

Our lab says “Merry Christmas” 2012!

Dear colleagues and friends,

we hope you received your exclusive hcilab ornament construction kit. In order to fully enjoy the hcilab Christmas experience, 7 quick steps will guide you through the rather intuitive assembly:

The target result:

The Christmassy ingredients:

Step 1a: Free the tree!

Step 1b: Bolden the golden!

Step 1c: Take a breath!

Step 2: Assemble the tree!

Step 3a: Assemble the globe (1st stay)!

Step 3b: Assemble the globe (2nd stay)!

Step 3c: Connect the stays using the disks!

Step 4a: Put the tree in the middle!

Step 5a: Put in the remaining stays (3rd stay)!

Step 5b: Put in the remaining stays (4th stay)!

Step 6: Hook it!

Step 7: That’s it. Celebrate!

We are curious about your end result and are keen to receive a picture of the final version of your ornament. Feel free to email it to us or to link it in a blog comment.

The year 2012 was very exciting, and we truly appreciate your involvement with us! As an additional treat, we have attached to this Christmas packet a quick overview listing several projects and topics in the field of human-computer interaction we have been working on.
In that sense, we have continued to work on public display networks. The following publications give an overview of some of the directions we took this year:

  1. Davies, N., Langheinrich, M., José, R., & Schmidt, A. (2012). Open display networks: A communications medium for the 21st century. Computer, 45(5), 58-64.
  2. Alt, F., Schneegaß, S., Schmidt, A., Müller, J., & Memarovic, N. (2012, June). How to evaluate public displays. In Proceedings of the 2012 International Symposium on Pervasive Displays (p. 17). ACM.
  3. Alt, F., Schmidt, A., & Müller, J. (2012). Advertising on Public Display Networks. Computer, 45(5), 50-56.

Automotive User interfaces was another area where we continued our research. We moved more towards multimodality and included speech input in a prototype:

  1. Pfleging, B., Schneegass, S., & Schmidt, A. (2012, October). Multimodal interaction in the car: combining speech and gestures on the steering wheel. In Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 155-162). ACM.
  2. Pfleging, B., Kern, D., Döring, T., & Schmidt, A. (2012). Reducing Non-Primary Task Distraction in Cars Through Multi-Modal Interaction. it-Information Technology, 54(4), 179-187.

We ventured into new domains this year. In particular, we looked at usable security and brain-computer interaction. The following two papers show some examples of this work. We are particularly proud of the BCI paper, as it is the first one with our students in Stuttgart.

  1. Bulling, A., Alt, F., & Schmidt, A. (2012, May). Increasing the security of gaze-based cued-recall graphical passwords using saliency masks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (pp. 3011-3020). ACM.
  2. Shirazi, A. S., Funk, M., Pfleiderer, F., Glück, H., & Schmidt, A. MediaBrain: Annotating Videos based on Brain-Computer Interaction.

Finally, this paper may be an interesting read when you are tired …

  1. Schmidt, A., Shirazi, A. S., & van Laerhoven, K. (2012). Are You in Bed with Technology?. Pervasive Computing, IEEE, 11(4), 4-7.


Silvia Miksch talking about time oriented visual analytics

It seems this term we picked a good slot for the lecture. On Thursday we had Prof. Silvia Miksch from Vienna University of Technology visiting our institute. We took this chance for another guest lecture in my advanced HCI class. Silvia presented a talk with the title “A Matter of Time: Interactive Visual Analytics of Time-Oriented Data and Information”. She first introduced the notion of interactive visual analytics and then systematically showed how time oriented data can be visually presented.

I really liked how Silvia motivated visual analytics and could not resist adapting it with a Christmas theme. The picture shows three representations: (1) numbers, always grouped in threes; (2) a plot of the numbers, where the first is the label and the second and third are coordinates; and (3) a line connecting the labels in order. Her example was much nicer, but I failed to take a photo. And obviously you would not put all of it on the same slide… Nevertheless, I think even this simple Christmas tree example shows the power of visual analytics. This will go into my slide set for presentations in schools 😉
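The three representations can be sketched in a few lines of code. The numbers below are made up for illustration, since the actual figures from the slide are not reproduced in the post:

```python
# (1) raw numbers, always grouped in threes: (label, x, y)
numbers = [(1, 0, 4), (2, -1, 3), (3, 1, 3), (4, -2, 2), (5, 2, 2), (6, 0, 0)]

# (2) a plot: the first value is the label, the second and third are coordinates
points = {label: (x, y) for label, x, y in numbers}

# (3) a line connecting the labelled points in label order
path = [points[label] for label in sorted(points)]
print(path)  # [(0, 4), (-1, 3), (1, 3), (-2, 2), (2, 2), (0, 0)]
```

Fed into any plotting library, the same triples go from an opaque list of numbers to a recognizable shape – which is exactly the point of the example.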

If you are interested in more details on the visualization of time-oriented data, please have a look at the following book: Visualization of Time-Oriented Data, by Wolfgang Aigner, Silvia Miksch, Heidrun Schumann, and Christian Tominski. Springer, 2011 [2]. After the talk there was an interesting discussion about the relationship and fundamental difference between time and space. I think this is worth further discussion.

Another direction to follow up on is tangible (visual) analytics. It would be interesting to assess how further modalities, e.g. haptics and sound, contribute to understanding when interactively exploring data. Some years back, Martin Schrittenloher (one of my students in Munich) visited Morten Fjeld for his project thesis and experimented with force-feedback sliders [1] … perhaps we should have this as a project topic again! One approach would be to look specifically at the understanding of data when force feedback is presented on certain dimensions.

[1] Jenaro, J., Shahrokni, A., Schrittenloher, M., and Fjeld, M. 2007. One-Dimensional Force Feedback Slider: Digital platform. In Proc. Workshop at the IEEE Virtual Reality 2007 Conference: Mixed Reality User Interfaces: Specification, Authoring, Adaptation (MRUI07), 47-51.
[2] Wolfgang Aigner, Silvia Miksch, Heidrun Schumann, and Christian Tominski. Visualization of Time-Oriented Data. Springer, 2011.

A proposal to replace non-archival publications

In the CHI community we have the notion of non-archival publications. Some years back this concept may have been good, but I find it harder and harder to understand. Over the past months I have talked to several people about this concept, and in Paris I discussed it with several colleagues who are involved in SIGCHI. Here are some of the thoughts – hopefully as a starting point for further discussion.

First, a short introduction to the concept: non-archival publications in the “CHI world” are papers that are published and shown at the conference, but that must not be held against a later publication. In essence, these papers are considered not published when an extended version of the paper is reviewed. A typical example is to publish a work-in-progress (WIP) paper in one year, outlining the concept, the research path you have started to take, and some initial findings. Then, in the following year, you publish a full paper that includes all the data and a solid analysis. In principle this is a great way of doing research, getting feedback from the community along the way, and then publishing the larger piece of work. Elba in our group did this very well: a WIP at CHI 2010 [1] and then the full paper at CHI 2011 [2]. This shows there is value to it, and I understand the motivation why the concept of non-archival publications was created.

Over the last years, however, I have seen a number of points highlighting that the concept of non-archival publication is anything but straightforward to deal with. The following points are from experience in my group over the last years.

1) Non-archival publications are in fact archival. Once you assign a document a DOI and include it in a digital library (DL), the publication is archived. The purpose of a digital library and the DOI is that things live on, even if the authors’ websites are gone. The fact that the authors keep the copyright and can publish the work again does not change the fact that the paper is archived. It is hard to explain to someone from another community (e.g. during a TPC meeting of Percom) that there is a paper which has a DOI, is in the ACM DL, counts towards the author’s download and citation statistics in the ACM DL, and is indexed by Google Scholar, and yet has to be considered as not published when assessing a new submission.

2) Non-archival publications may be the only publication on a topic. Sometimes people have a really cool idea and some initial work, and they publish it as a WIP (non-archival). Then, over the years, the authors do not get around to writing the full paper, e.g. because they did not get the funding to do it. Hence the non-archival work-in-progress paper is the only publication the authors have about this work. As they believe it is interesting, they – and probably other people – will reference this work. Referencing something non-archival is questionable, but not in this case, as it is in fact archival: it is in the DL with a DOI. Here is an example from our own experience: some time back we had, in our view, a cool idea to change the way smart objects can be created [3] – we did initial prototypes but did not have funding for the full project (we are still working on getting it). The WIP is the only “paper” we “published” on it, and hence we keep it in our CVs.

3) Changes in authorship between non-archival and full paper. Academia is a dynamic environment, and hence things are started in one place and continued somewhere else. In this process the people doing the research are very likely to change. To account for this, we typically include a reference to the first non-archival publication to acknowledge the earlier contributions made. We have one example where we had an idea for a navigation system that we explored in Munich only superficially and wrote up as a WIP [4]. Enrico then moved on to Lancaster and did a serious system and study – and as he is a nice person he references the WIP to acknowledge that some other people were involved in the initial phase of creating the idea [5]. And by doing so he increased Antonio’s and my citation counts, as we list the WIP paper on our Google Scholar pages.

4) Non-archival publications are part of people’s citation counts and h-index. When assessing the performance of individuals, academia seems to move more and more towards “measurable” data, hence we see that citation counts and the h-index may play a role. I have one “publication”, a poster at ISWC 2000 about a wearable RFID reader [6], that has 50+ citations and hence impacts my h-index (on Google). For ISWC 2000, posters were real publications in the IEEE DL, but this could equally have been a WIP at CHI. Hence there is the question: should non-archival material be part of the quantitative assessment of impact?

I have some further hypothetical points (inspired by the real world) that highlight some of the issues I see with the concept of non-archival publications:

Scenario A) Researcher X has a great idea for a new device and publishes a non-archival paper at Conf201X, including the idea, details about the implementation, some initial results, and a plan for how she will do the study. She has a clear plan to complete the study and publish the full paper at Conf201X+1. She falls short of time due to being ill for a few months and manages to submit only a low-quality full paper. Researcher Y talks to Researcher X at the conference, is impressed, and reads the non-archival version of the paper. He likes it and has some funds available, hence he decides to do a follow-up building on this research. He hires 3 interns for the summer, gets 20 of the devices built, does a great study, and submits a perfect paper. The paper of Researcher Y is accepted and the paper of Researcher X is not. My feeling would be that in this case Y should at least reference the non-archival paper of X; hence non-archival papers should be seen as previous work.

Scenario B) A researcher starts a project, creates a system, and does an initial qualitative study. He publishes the results as a non-archival paper (e.g. WIP), including a description of the quantitative study to be conducted. Over the next months he does the quantitative study – it does not provide new insights, but confirms the initial findings. He decides to write a 4-page note in two-column format that is over 95% the same text as the 6-page previously published WIP, just with the addition of one paragraph stating that a quantitative study was conducted which confirmed the results. In this case, having both papers in the digital library does not feel right. The obvious solution would be to replace the work in progress by the note.

Here is a proposal for how the concept of non-archival publications could be replaced:

  • Everything that is published in the (ACM) digital library and has a DOI is considered an archival publication (as it in fact is)
  • Publications carry labels such as WIP, Demo, Note, Full paper, etc.
  • Scientific communities can decide to have certain venues that can be evolutionary; for SIGCHI this would be, to my current understanding, WIP, Interactivity, and workshops.
  • Evolutionary publications can be replaced by “better” publications by the authors; e.g. an author of a WIP can replace this WIP in the next year with a Full paper or a Note, and the DOI stays the same
  • To ensure accountability (with regard to the DOI), the replaced version remains in the appendix of the new version; e.g. the full paper then has the WIP it replaces as an appendix
  • If evolutionary publications are not replaced by the authors, they stay as they are and other people have to consider them as previous work
  • Citations accumulated along the evolutionary path are attributed to the latest version.
  • Authors can decide (e.g. when the project team changes, when the results are contradictory to the initial publication, when significant parts of the system change, or when the authors change) not to go the evolutionary path. In this case they are measured against the state of the art, which includes their own work.
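The mechanics of the proposal above can be sketched as a small data model (a hypothetical sketch; the class and field names are my own invention, not part of any ACM DL API):

```python
# Sketch of the "evolutionary publication" proposal: a DOI stays fixed
# while the document it resolves to is replaced by a newer version;
# replaced versions are kept as appendices and citations accumulate
# on the latest version.

class Publication:
    def __init__(self, title, label, citations=0):
        self.title = title          # e.g. "The rotating compass"
        self.label = label          # "WIP", "Note", "Full paper", ...
        self.citations = citations
        self.appendix = []          # the versions this one replaced

class DOIRecord:
    """One DOI that always resolves to the latest version."""
    def __init__(self, doi, publication):
        self.doi = doi
        self.current = publication

    def replace(self, newer):
        # The DOI stays the same; the replaced version moves into the
        # appendix of the new one, and its citations carry over.
        newer.appendix.append(self.current)
        newer.citations += self.current.citations
        self.current = newer

record = DOIRecord("10.1145/example", Publication("Cool idea", "WIP", citations=12))
record.replace(Publication("Cool idea, full study", "Full paper", citations=3))
print(record.current.label)      # Full paper
print(record.current.citations)  # 15 -- the WIP citations carried over
```

The same `replace` call would cover the WIP-to-full-paper and full-paper-to-journal steps described below; choosing not to call it corresponds to leaving the evolutionary path.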

In the CHI context this could work as follows: you have a WIP in year X; in year X+1 you decide to replace the WIP by the accepted full paper that extends this WIP; in year X+3 you decide to replace the full paper with your accepted ToCHI paper. When people download the ToCHI paper, they will have the full conference paper and the WIP in the appendix. The citations of the WIP and of the full paper are included in the citations of the journal paper. In a case where you combine several conference papers into a consolidated journal paper, you would create a new instance, not replacing any of them, or you may replace one of the conference papers.

This approach does not solve all the problems but I hope it is a starting point for a new discussion.

Just claiming that stuff which is in the ACM DL and has a DOI is not archival feels like creating our own little universe in which we decide that gravity is not relevant…

UPDATE – Discussion on Facebook (2012-12-11):

Comment by Alan Dix:
It seems there are three separate notions of ‘archival’:
(i) doesn’t count as prior publication for future, say, journals
(ii) is recorded in some stable way to allow clear citation
(iii) meets some minimum level against some set of quality criteria

In the days before people treated conferences as if they were journal publications, it was common to have major publications in university or industrial lab ‘internal’ report series. These were often cited, and if they made it to journals, it was years later. The institutions distributed and maintained the repositories, hence they were archival by defn (ii). Conference and workshop papers likewise were and have always been cited widely, whether or not they were officially declared ‘archival’.

Conference papers, even if from prestigious conferences such as CHI, are NOT usually archival by defn (iii) – or at least cannot be guaranteed to be – as it is not a minimal standard in all criteria, more a balance between criteria. If something is really novel and important, but maybe not 100% solid, it would and *should* be conference publishable, but should not be journal publishable until *everything* hits a minimum standard – it may not be fantastic against any criterion though – faultless != best

As for (i) that is about venue, politics and random rubbish rules. For a conference the issue is “is there enough new for the delegates to see?” (unless the conference is pretending to be ‘archival’ meaning (ii), but we should ignore such disingenuous venues).

For a journal, it would be quite valid to publish a paper absolutely identical (copyright issues notwithstanding) to one that had previously been published (and is archival by (i)), as its job is to ensure (ii).

This was common in the past with internal reports and common again now with eprints services providing pre-prints during submission as well as pre-publication.

In a web world *all* conference contributions are archival by defn (i) and *none* are by def. (ii).

Conferences are news channels, journals and quality agencies … and when the two get confused the discipline is in crisis.

Comment by Eva Hornecker
Reading Alan’s response, I am reminded that I used to learn the distinction between ‘grey’ literature (citable, e.g. technical reports) and white/black (not sure anymore which is which) that is either informal and not archived (e.g. workshop position papers) or fully published and peer reviewed. The difference with WIPs etc. is that they are peer reviewed (although only gently)

Comment by Rod Murray-Smith
I guess there is also a question about whether WIPs are really still being used as “works in progress”, or more frequently as a way to attend the conference despite the paper not being lucky enough to get in. Do we have any stats on the % of papers which are recycled from the main conference? By submitting them to that, authors are claiming that these are ready for archival. Similar issues for many workshop papers.

Comment by Alan Dix

Of course workshop position papers are often web ‘archived’ (my criterion (ii)), and some even heavily refereed … indeed many people would prefer a CHI workshop paper on their CV to a more heavily refereed conference paper elsewhere … I guess it is about brand, like a Nike holdall.

There is another orthogonal issue too, which is about the level and surety of the process, which is pretty independent of the clarity and kind of criteria. You may have a poor-quality journal that is using similar criteria to a better journal, but simply having a lower bar and perhaps, because of the quality of reviewing, a lower level of confidence. I’m sure both Fiat and Ferrari have quality control, just at different levels.

In some ways I am happier with low-quality journals that you know are low quality (and therefore readers apply caveat emptor) than high-quality conferences, where it is easy for readers to assume high quality = all OK.

This is why I always feel that all reviewing processes should have a non-blind point, as a paper with a fantastic idea but a major methodological flaw is fine if produced by an unknown person in an unknown institution (as readers will take it with a pinch of salt), but should be rejected if from a major name in the field (as it is more likely to be taken as a pattern of how to do it by readers).

Alternatively anonymous refereeing + anonymous publication

… and none of this is about the absolute value, significance, etc. of the work, quality control is about stopping the bad apples, not making good ones.

Comment by Susanne Boll

I fully agree. Coming from the Multimedia community initially, I never understood this concept. SIGMM and the annual conferences will publish anything that undergoes peer review. Full papers are the most prestigious ones; short papers (4 pages) are for smaller contributions or more focused work. Workshops are THE platform to start new topics in the field, and of course the work is peer-reviewed and published. For example, the Multimedia Information Retrieval workshop ran for several years and gained more and more interest in the field until it finally became a conference of its own.

I also found it strange this year that I reviewed a full paper and had a déjà vu, as the work had already been shown in the Interactivity session the year before. This not only makes it difficult to judge novelty but also contradicts the blind review. Maybe have a look at how other SIG conferences such as Multimedia handle it.

Comment Amanda Marisa Williams

I’m intrigued — no time at the moment but it’s bookmarked for later today. Def wanna have this conversation with some CHI veterans since I have some concerns about the archival/non-archival distinction as well.

Comment by Bo Begole
I think the crux of the issue is simply that we shouldn’t use the term “archival” at all – as you point out, anything published in the DL with a DOI is “archived”. It’s an archaic term. More properly, we should use accurate terms to describe the level of review. CHI uses the terms “refereed”, “juried”, and “curated” for different levels, which map to ACM categories: CHI refereed is roughly equivalent to “refereed, formally reviewed”, CHI juried is equivalent to “reviewed”, and CHI curated is roughly equivalent to “unreviewed”. CHI also uses the ACM criteria regarding republishability of content.

Comment by Chris Schmandt
What Bo says is good, but this distinction is lost on the masses. It’s a “CHI paper” no matter what venue. And even in the old days when we had the separate “abstracts” volume, only the few in the know could recognize the difference between the short …

[1] Elba del Carmen Valderrama Bahamóndez and Albrecht Schmidt. 2010. A survey to assess the potential of mobile phones as a learning platform for panama. In CHI ’10 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’10). ACM, New York, NY, USA, 3667-3672. DOI=10.1145/1753846.1754036
[2] Elba del Carmen Valderrama Bahamondez, Christian Winkler, and Albrecht Schmidt. 2011. Utilizing multimedia capabilities of mobile phones to support teaching in schools in rural panama. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11). ACM, New York, NY, USA, 935-944. DOI=10.1145/1978942.1979081
[3] Tanja Doering, Bastian Pfleging, Christian Kray, and Albrecht Schmidt. 2010. Design by physical composition for complex tangible user interfaces. In CHI ’10 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’10). ACM, New York, NY, USA, 3541-3546. DOI=10.1145/1753846.1754015
[4] Enrico Rukzio, Albrecht Schmidt, and Antonio Krüger. 2005. The rotating compass: a novel interaction technique for mobile navigation. In CHI ’05 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’05). ACM, New York, NY, USA, 1761-1764. DOI=10.1145/1056808.1057016
[5] Enrico Rukzio, Michael Müller, and Robert Hardy. 2009. Design, implementation and evaluation of a novel public display for pedestrian navigation: the rotating compass. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’09). ACM, New York, NY, USA, 113-122. DOI=10.1145/1518701.1518722
[6] Albrecht Schmidt, Hans-W. Gellersen, and Christian Merz. 2000. Enabling Implicit Human Computer Interaction: A Wearable RFID-Tag Reader. In Proceedings of the 4th IEEE International Symposium on Wearable Computers (ISWC ’00). IEEE Computer Society, Washington, DC, USA, 193-194.

Congratulations to Dr. Florian Alt (No. 6)

Florian Alt defended his PhD thesis “A Design Space for Pervasive Advertising on Public Displays” at the University of Stuttgart. Over the last years Florian worked at the crossroads of interactive public displays and pervasive advertising. His research output during the last years and while working on the project was amazing; see his DBLP entry.

The dissertation will soon be available online. If you are curious about his work right now, there are a few papers and a book you should read. A high-level description of the findings is given in a paper published in IEEE Computer on advertising on public display networks [1]. The initial paper that paved the way towards understanding the design space of public displays [2] provides a comprehensive description of ways of interacting with public displays. One of the highlights of the experimental research is the paper “Looking glass: a field study on noticing interactivity of a shop window” [3], which was done during Florian’s time at Telekom Innovation Laboratories in Berlin (it received a best paper award at CHI 2012). Towards the end of a thesis everyone realizes that evaluation is a most tricky thing, hence there is one paper on “How to evaluate public displays” [4]. If you are more interested in the advertising side, have a look at the book he co-edited with Jörg Müller and Daniel Michelis: Pervasive Advertising, Springer, 2011, also available as a Kindle version on Amazon.

Florian joined my research group back in Munich as a student researcher, where we explored ubiquitous computing technologies in a hospital environment [5]. He moved on to Fraunhofer IAIS to do his MSc thesis, where he created a web annotation system that allowed parasitic applications on the WWW [6]. I had nearly given him up for lost when he moved to New York – but he came back to start his PhD in Duisburg-Essen… and after one more move in 2011, to the University of Stuttgart, he graduated last week! Congratulations! He is no. 6, following Dagmar Kern, Heiko Drewes, Paul Holleis, Matthias Kranz, and Enrico Rukzio. The photo shows the current team in Stuttgart – looking at the picture, it seems there are soon more to come 😉

[1] Alt, F.; Schmidt, A.; Müller, J. “Advertising on Public Display Networks,” Computer, vol. 45, no. 5, pp. 50-56, May 2012. DOI: 10.1109/MC.2012.150
[2] Jörg Müller, Florian Alt, Daniel Michelis, and Albrecht Schmidt. 2010. Requirements and design space for interactive public displays. In Proceedings of the international conference on Multimedia (MM ’10). ACM, New York, NY, USA, 1285-1294. DOI=10.1145/1873951.1874203
[3] Jörg Müller, Robert Walter, Gilles Bailly, Michael Nischt, and Florian Alt. 2012. Looking glass: a field study on noticing interactivity of a shop window. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 297-306. DOI=10.1145/2207676.2207718
[4] Florian Alt, Stefan Schneegaß, Albrecht Schmidt, Jörg Müller, and Nemanja Memarovic. 2012. How to evaluate public displays. In Proceedings of the 2012 International Symposium on Pervasive Displays (PerDis ’12). ACM, New York, NY, USA, , Article 17 , 6 pages. DOI=10.1145/2307798.2307815
[5] A. Schmidt, F. Alt, D. Wilhelm, J. Niggemann,  and H. Feussner,  Experimenting with ubiquitous computing technologies in productive environments. Journal Elektrotechnik und Informationstechnik. 2006, 135-139.
[6] Florian Alt, Albrecht Schmidt, Richard Atterer, and Paul Holleis. 2009. Bringing Web 2.0 to the Old Web: A Platform for Parasitic Applications. In Proceedings of the 12th IFIP TC 13 International Conference on Human-Computer Interaction: Part I (INTERACT ’09). Springer-Verlag, Berlin, Heidelberg, 405-418. DOI=10.1007/978-3-642-03655-2_44

Call for Papers: Augmented Human Conference 2013 (AH2013)

In 2013 the 4th Augmented Human Conference will take place in Stuttgart, Germany. The submission deadline is January 8, 2013, and the conference is in cooperation with ACM SIGCHI. The papers will be published in the ACM digital library. Andreas Bulling and Christian Holz are the program chairs and there is a fabulous technical program committee.

With AH2013 we continue a conference that over the last years has ventured beyond the usual topics in human-computer interaction and pervasive computing. Improving and augmenting human abilities is at the core of the conference, ranging from navigation systems, to actuators that support human movement, to improved or novel senses. This may include hardware, sensors, actuators, and software, such as web-based applications or mobile apps.

We are curious about technologies and solutions that make humans smarter and augment human capabilities. Over the last years the conference has highly valued novel contributions, inspiring ideas, forward-thinking applications, and new concepts. Originality, ingenuity, creativity, and novelty come in this context before rigorous evaluations and flawless statistical analysis of the study data. We are looking forward to your contributions. Please see the web page.

Thanks to Patrick Lühne for the great designs!

3DUI Technologies for Interactive Content by Prof. Yoshifumi Kitamura

In the context of multimodal interaction in ubiquitous computing, Professor Yoshifumi Kitamura presented a SimTech guest lecture on 3D user interface technologies. His research goal is to create 3D display technologies that allow multi-user direct interaction. Users should be able to move in front of the display, and different users should have different perspectives according to their location in front of the display. He showed a set of rotating (volumetric) displays that allow for visual presentation, but not for interaction.

His approach is based on the IllusionHole, which allows for multiple users and direct manipulation. The idea is to have different projections for different users that are not visible to the others but create the illusion of interacting with a single object. It uses a display mask that physically limits the view of each user. Have a look at their SIGGRAPH paper for more details [1]. More recent work on this can be found on Yoshifumi Kitamura’s web page [2].

Example of the IllusionHole from [2].

Over 10 years ago they worked on tangible user interfaces based on blocks. Their system is based on a set of small electronic components with input and output functionality that can be connected and used to create larger structures. See [3] and [4] for details and applications of Cognitive Cubes and Active Cubes.

He showed examples of interaction with a map based on the concept of elastic materials. Elastic scroll and elastic zoom allow navigating maps in an apparently intuitive way. The mental model is straightforward, as users can imagine the surface as an elastic material; see [5].

One really cool new display technology, presented at last year’s ITS, is a furry multi-touch display [6]. This is a must-read paper!

The furry display prototype – from [6].

[1] Yoshifumi Kitamura, Takashige Konishi, Sumihiko Yamamoto, and Fumio Kishino. 2001. Interactive stereoscopic display for three or more users. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’01). ACM, New York, NY, USA, 231-240. DOI=10.1145/383259.383285
[3] Ehud Sharlin, Yuichi Itoh, Benjamin Watson, Yoshifumi Kitamura, Steve Sutphen, and Lili Liu. 2002. Cognitive cubes: a tangible user interface for cognitive assessment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’02). ACM, New York, NY, USA, 347-354. DOI=10.1145/503376.503438
[4] Ryoichi Watanabe, Yuichi Itoh, Masatsugu Asai, Yoshifumi Kitamura, Fumio Kishino, and Hideo Kikuchi. 2004. The soul of ActiveCube: implementing a flexible, multimodal, three-dimensional spatial tangible interface. Comput. Entertain. 2, 4 (October 2004), 15-15. DOI=10.1145/1037851.1037874
[5] Kazuki Takashima, Kazuyuki Fujita, Yuichi Itoh, and Yoshifumi Kitamura. 2012. Elastic scroll for multi-focus interactions. In Adjunct proceedings of the 25th annual ACM symposium on User interface software and technology (UIST Adjunct Proceedings ’12). ACM, New York, NY, USA, 19-20. DOI=10.1145/2380296.2380307
[6] Kosuke Nakajima, Yuichi Itoh, Takayuki Tsukitani, Kazuyuki Fujita, Kazuki Takashima, Yoshifumi Kitamura, and Fumio Kishino. 2011. FuSA touch display: a furry and scalable multi-touch display. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS ’11). ACM, New York, NY, USA, 35-44. DOI=10.1145/2076354.2076361

SIGCHI Rebuttals – Some suggestions to write them

ACM SIGCHI has in its review process the opportunity for the authors to respond to the comments of the reviewers. I find this a good thing, and to me it has two main functions:

  1. The reviewers are usually more careful in what they write, as they know they have to face a response from the authors
  2. Authors can clarify points that they did not get across in the original submission.

We usually write a rebuttal for all submissions with an average score over 2.0. For lower-ranked submissions it may be worthwhile if we think we have a chance to counter some of the arguments which we believe are wrong or unfair.

For the rebuttal it is most critical to address the meta-review as well as possible. The primary will be in the PC meeting, and if the rebuttal wins this person over, the job is well done. The other reviews should be addressed, too.

For all the papers where we write a rebuttal, I suggest the following steps (a table may be helpful):

  1. read all reviews in detail
  2. copy out all statements that have questions, criticism, suggestions for improvement from each review
  3. for each of these statements make a short version (bullet points, a short sentence) in your own words
  4. sort all the extracted statements by topic
  5. combine all statements that address the same issue
  6. order the combined statements according to priority (highest priority to primary reviewer)
  7. for each combined statement decide if the criticism is justified, misunderstood, or unjustified
  8. make a response for each combined statement
  9. create a rebuttal that addresses as many points as possible (there is a trade-off between the number of issues addressed and the detail one can give for each)
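The extracting, grouping, and ordering steps (2–6) above can be sketched as a small script (a hypothetical illustration; the reviewer IDs, topics, and comments are invented examples, with "1AC" standing for the primary/meta reviewer):

```python
from collections import defaultdict

# Hypothetical review statements: (reviewer, topic, short paraphrase).
# "1AC" is the primary/meta reviewer, whose points get highest priority.
statements = [
    ("1AC", "study size", "sample of 12 too small"),
    ("R2",  "related work", "missing citation of system X"),
    ("R3",  "study size", "unclear if results generalize"),
    ("1AC", "statistics", "why a t-test on this data?"),
]

# Steps 4-5: group all statements that address the same issue.
by_topic = defaultdict(list)
for reviewer, topic, text in statements:
    by_topic[topic].append((reviewer, text))

# Step 6: order combined statements, primary reviewer ("1AC") first.
def priority(item):
    topic, entries = item
    return 0 if any(r == "1AC" for r, _ in entries) else 1

for topic, entries in sorted(by_topic.items(), key=priority):
    print(topic, "->", [r for r, _ in entries])
```

Each printed line then corresponds to one combined statement to decide on in step 7 (justified, misunderstood, or unjustified) and to answer in step 8.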

Point 8 is the core…
There are three basic options:

  • if justified: acknowledge that this is an issue and propose how to fix it
  • if misunderstood: explain again and promise to improve the explanation in the final version
  • if unjustified: explain that this point may be disputed and provide additional evidence why you think it should be as it is

The unjustified ones are the trickiest. We had cases where reviewers stated that the method we used is not appropriate. Here a response could be to cite other work that used this method in the same context. Similarly, we had reviewers arguing that the statistical tests we used cannot be applied to our data; here we explained in more detail the distribution of the data and why the test is appropriate. Sometimes it may be better to ignore cases where the criticism is unjustified – especially if it is not from the primary.

Some additional points

  • be respectful to the reviewers – they put work in to review the papers
  • if the reviewers did not understand – we probably did not communicate well
  • do not promise unrealistic things in the rebuttal
  • try to answer direct questions with precise and direct answers
  • if you suspect that one reviewer did not read the paper – do not write this directly – try to address the points (and perhaps add a hint that it is in the paper, e.g. “ANSWER: as we already outline in Section X”)

Karin Bee has defended her dissertation.

Karin Bee (née Leichtenstern) has defended her dissertation at the University of Augsburg. In her dissertation she worked on methods and tools to support a user-centered design process for mobile applications that use a variety of modalities. There are some papers that describe her work, e.g. [1] and [2]. To me it was particularly interesting that she revisited the experiment done in her master’s thesis in a smart home in Essex [3] and reproduced some of it in her hybrid evaluation environment.

It is great to see that by now most of our students (HiWis and project students) who worked with us in Munich on the Embedded Interaction Project have finished their PhDs (there are some who still need to hand in – Florian? Raphael? Gregor? You have enough papers – finish it 😉).

In the afternoon I got to see some demos. Elisabeth André has a great team of students. They work on various topics in human-computer interaction, including public display interaction, physiological sensing and emotion detection, and gesture interaction. I am looking forward to a joint workshop of both groups. Elisabeth has an impressive set of publications, which is always a good starting point for affective user interface technologies.

[1] Karin Leichtenstern, Elisabeth André, and Matthias Rehm. Tool-Supported User-Centred Prototyping of Mobile Applications. IJHCR. 2011, 1-21.

[2] Karin Leichtenstern and Elisabeth André. 2010. MoPeDT: features and evaluation of a user-centred prototyping tool. In Proceedings of the 2nd ACM SIGCHI symposium on Engineering interactive computing systems (EICS ’10). ACM, New York, NY, USA, 93-102. DOI=10.1145/1822018.1822033

[3] Enrico Rukzio, Karin Leichtenstern, Vic Callaghan, Paul Holleis, Albrecht Schmidt, and Jeannette Chin. 2006. An experimental comparison of physical mobile interaction techniques: touching, pointing and scanning. In Proceedings of the 8th international conference on Ubiquitous Computing (UbiComp’06), Paul Dourish and Adrian Friday (Eds.). Springer-Verlag, Berlin, Heidelberg, 87-104. DOI=10.1007/11853565_6

MobiSys 2012, Keynote by Paul Jones on Mobile Health Challenges

This year’s ACM MobiSys conference is in the Lake District in the UK. I really love this region. Already 15 years back, when I studied in Manchester, I often came up over the weekend to hike in the mountains here. The setting of the conference hotel, overlooking Lake Windermere, is brilliant.
The opening keynote of MobiSys 2012 was presented by Dr. Paul Jones, the NHS Chief Technology Officer, who talked about “Mobile Challenges in Health”. Health is very dear to people, and the approach to health care around the world is very different.

The NHS is a unique institution that provides healthcare to everyone in the UK. It is funded by taxation, and with its 110 billion pounds per year budget it is one of the cheaper (and yet efficient) health care systems in the world. The UK spends about 7% of its gross national product on health care, whereas the US and Germany spend nearly double this percentage. Besides its economic size, the NHS is also one of the biggest employers in the world, similar in size to the US Department of Defense and the Chinese People’s Army. The major difference to other large employers is that most of the staff in the NHS are highly educated (e.g. doctors) and do not easily take orders.

Paul started out with the statement: technology is critical to providing health care in the future. Doing healthcare as it is currently done will not work in the future; carrying on will not work, as the cost would not be payable by society. In general, information technology in the health sector is helping to create more efficient systems. He had some examples showing that often very simple systems help to make a difference. In one case he explained that changing a hospital’s scheduling practice from paper-based diaries to a computer-based system reduced waiting times massively (from several months to weeks, without additional personnel). In another case laptops were provided to community nurses. This saved 6 hours per week and freed nearly an extra day of work per week, as it reduced their need to travel back to the office. Paul argued that this is only a starting point and not the best we can do. Mobile computing has the potential to create better solutions than a laptop, solutions that better fit the real working environment of the users and patients. One further example he used was dealing with the vital signs of a patient. Traditionally these are measured, and when they degrade, a nurse calls a junior doctor, who has to respond within a certain time. In reality, nurses have to call more often and doctors may be delayed. Here they introduced a system and mobile device to page/call the doctors and document the call (instead of nurses calling the doctors). It improved the response times of doctors – and the main reason is that actions are tracked and performance is measured (and in the medical field nobody wants to be the worst).

Paul shared a set of challenges and problems with the audience – in the hope that researchers take inspiration and solve some of the problems 😉

One major challenge is the fragmented nature of the way health care is provided. Each hospital has established processes, and doctors have a way they want to do certain procedures. These processes differ from each other – not a lot in many cases, but different enough that the same software is not going to work. It is not easy to streamline this, as doctors usually know best, and many of them make a case why their solution is the only one that does the job properly. Hence general solutions are unlikely to work, and solutions need to be customizable to specific needs.

Another interesting point was about records and paper. Paul argued that the amount of paper records in hospitals is massive, and they are less reliable and safe than many think. It is common that a significant portion of the paper documentation is lost or misplaced. Here a digital solution (even if imperfect) is most certainly better. From our own experience I agree with the observation, but I would think it is really hard to convince people of it.

The common element throughout the talk was that it is key to create systems that fit the requirements. To achieve this, having multidisciplinary teams that understand the user and patient needs seems inevitable. Paul's examples were based on his experience of seeing the users and patients in context. He observed firsthand that real-world environments often do not permit the use of certain technologies, or lead to sub-optimal solutions. It is crucial that the needs are understood by the people who design and implement the systems. It may be useful to go beyond the multidisciplinary team and have each developer spend one day in the environment they design for.

Some further problems he discussed are:

  • How to move the data around to the places where it is needed? Patients are transferred (e.g. ambulance to ER, ER to surgeons, etc.) and hence data needs to be handed over. This handover has to work across time (from one visit to the next) as well as across departments and institutions.
  • Personal mobile devices (“bring your own device”) are a major issue. It seems easy for an individual to use them (e.g. a personal tablet to make notes), but on a system level they create huge problems, from back-up to security. In the medical field another issue arises: the validity of the data is not guaranteed, and hence the data gathered is not useful in the overall process.

A final and very interesting point was: if you are not seriously ill, being in a hospital is a bad idea. Paul argued that the care you get at home or in the community is likely to be better, and you are less likely to be exposed to additional risks. From this arises the main challenge for the MobiSys community: it will be crucial to provide mobile and distributed information systems that work in the context of home care and within the community.

PS: I liked one of the side comments: can we imagine doing a double-blind study on jumbo jet safety? This argument hinted that some of the approaches to research in the medical field are not always the most efficient way to prove the validity of an approach.

If you do not research it – it will not happen?

Over the last days, plans to do research on the use of public data from social networks to calculate someone’s credit risk made big news (e.g. DW). The public (as voiced by journalists) and politicians showed strong opposition and declared that something like this should not be done – or, more specifically, that such research should not be done.

I am astonished and a bit surprised by the reaction. Do people really think that without research at universities this will not (or does not) happen? If you look at the value of Facebook (even after the last few weeks), it must be very obvious that there is value in social network data which people hope to extract over time…

Personal credit risk assessment (in Germany, the Schufa) is widely used – from selling you a phone contract to lending you money when buying a house. If you believe that we need personal credit risk assessment, why would you insist that it work on very incomplete data? Will that make it better? I think the logical consequence of the discussion would be to prohibit pricing based on personal credit risk ratings – but this, too, would be very unfair (at least to the majority). Hence the consequence we see now (the research is not done at universities) is probably not doing much good… it just pushes the work into places where the public sees little of it (and the companies will not publish it in a few years…).

Keynote at the Pervasive Displays Symposium: Kenton O’Hara

Kenton O’Hara, a senior researcher in the Socio-Digital Systems group at Microsoft Research in Cambridge, presented the keynote at the Pervasive Displays Symposium in Porto on the topic “Social context and interaction proxemics in pervasive displays“. He highlighted the importance of the spatial relationship between users and interactive displays, and the different opportunities for interaction that are available when looking at the interaction context.

Using examples from the medical field (the operating theatre), he showed the issues that arise from the need for sterile interaction, and hence from avoiding touch interaction and moving towards a touchless interaction mode. A prototype that uses a Microsoft Kinect sensor allows the surgeon to interact with information (e.g. an X-ray image) while working on the patient. It was interesting to see that gestural interaction in this context is not straightforward, as surgeons use tools (and hence do not have their hands free) or gesture as part of the communication in the team.

Another example is a public space game: there are many balls on a screen and a camera looking at the audience. Users can move the balls by body movement, based on a simple edge-detection video tracking mechanism, and when two balls touch they form a bigger ball. Kenton argues that “body-based interaction becomes a public spectacle”: the interactions of an individual are clearly visible to others. This visibility can lead to inhibition and may reduce users’ motivation to interact. For the success of this game, the design of the simplistic tracking algorithm is one major factor. By tracking edges/blobs, users can play together (e.g. holding hands, or parents with kids in their arms), and hence a wide range of interaction proxemics is supported. He presented some further examples of public display games on BBC large screens, also showing that the concept of interaction proxemics can be used to explain interaction.
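The ball-merging mechanic is simple enough to sketch in a few lines of code. The snippet below is my own illustration, not the implementation from the talk; the names and the merge rule (keeping the combined area and placing the new ball at the area-weighted centre) are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Ball:
    x: float
    y: float
    r: float  # radius

def touching(a: Ball, b: Ball) -> bool:
    # Two balls touch when the distance between centres is at most
    # the sum of their radii (compared squared, to avoid a sqrt).
    return (a.x - b.x) ** 2 + (a.y - b.y) ** 2 <= (a.r + b.r) ** 2

def merge(a: Ball, b: Ball) -> Ball:
    # Form a bigger ball: keep the combined area and place the result
    # at the area-weighted centre of the two originals.
    area = a.r ** 2 + b.r ** 2  # proportional to the combined area
    x = (a.x * a.r ** 2 + b.x * b.r ** 2) / area
    y = (a.y * a.r ** 2 + b.y * b.r ** 2) / area
    return Ball(x, y, area ** 0.5)

def step(balls: list[Ball]) -> list[Ball]:
    # One update: repeatedly merge touching pairs until none are left.
    changed = True
    while changed:
        changed = False
        for i in range(len(balls)):
            for j in range(i + 1, len(balls)):
                if touching(balls[i], balls[j]):
                    merged = merge(balls[i], balls[j])
                    balls = [b for k, b in enumerate(balls) if k not in (i, j)]
                    balls.append(merged)
                    changed = True
                    break
            if changed:
                break
    return balls
```

In the installation, the ball positions would be driven each frame by the tracked edges/blobs of the audience; the sketch only covers the merging rule.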

TVs have changed eating behavior. In contrast, more recent research on displays in the context of food consumption has been mainly pragmatic (corrective, problem solving). Kenton argued that we should look at the cultural values of meals and see shared eating as a social practice. Using the example of eating in front of the television (even as a family), he discussed the implications for communication and interaction (basically, the communication is not happening). Looking at more recent technologies such as phones, laptops and tablets and their impact on social dynamics, probably many of us have realized that this already affects our daily lives (or who does not take their phone to the table?). It is very obvious that social relationships and culture change with these technologies. He showed “4Photos” [1], a designed piece of technology to be put at the center of the table, showing four photographs. Users can interact with it from all sides. It is designed to stimulate rather than inhibit communication and to provide opportunities for conversation. It introduces interaction with technology as a social gesture.

Interested in more? Kenton published a book on public displays in 2003 [2] and has a set of relevant publications in the space of the symposium.


[1] Martijn ten Bhömer, John Helmes, Kenton O’Hara, and Elise van den Hoven. 2010. 4Photos: a collaborative photo sharing experience. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries (NordiCHI ’10). ACM, New York, NY, USA, 52-61. DOI=10.1145/1868914.1868925

[2] Kenton O’Hara, Mark Perry, Elizabeth Churchill, Dan Russell. Public and Situated Displays: Social and Interactional Aspects of Shared Display Technologies. Kluwer Academic, 2003

Visiting the Culture Lab in Newcastle

While in the north of England I stopped by the Culture Lab in Newcastle. If the CHI conference is a measure for quality of research in Human Computer Interaction, the Culture Lab is currently one of the places to be – if you are not convinced, have a look at Patrick Olivier’s publications. The lab is one of the few places where I think a real ubicomp spirit is left – people develop new hardware and devices (e.g. mini data acquisition boards, specific wireless sensors, embedded actuators), and interdisciplinary research plays a central role. This is very refreshing to see, especially as so many others in ubicomp have moved to mainly creating software on phones and tablets…

Diana, one of our former students from Duisburg-Essen, is currently working on her master’s thesis in Newcastle. She looks into new tangible forms of interaction on tabletop UIs; in particular, the actuation of controls is a central question. The approach she uses for moving things is, compared to other approaches, e.g. [1], very simple but effective – I am looking forward to reading the paper on the technical details (I promised not to tell any details here). The example application she has developed is in chemistry education.

Some years back, on a visit to the Culture Lab, I had already seen some of the concepts and ideas for the kitchen. Over the last years this has progressed, and the current state is very appealing. I really think the screens behind glass in the black design make a huge difference. Using a set of small sensors, they have implemented a set of aware kitchen utensils [2]. Matthias Kranz (back in our group in Munich) worked on a similar idea and created a knife that knows what it cuts [3]. It seems worthwhile to explore the aware-artifacts vision further…

[1] Gian Pangaro, Dan Maynes-Aminzade, and Hiroshi Ishii. 2002. The actuated workbench: computer-controlled actuation in tabletop tangible interfaces. In Proceedings of the 15th annual ACM symposium on User interface software and technology (UIST ’02). ACM, New York, NY, USA, 181-190. DOI=10.1145/571985.572011 

[2] Wagner, J., Ploetz, T., Halteren, A. V., Hoonhout, J., Moynihan, P., Jackson, D., Ladha, C., et al. (2011). Towards a Pervasive Kitchen Infrastructure for Measuring Cooking Competence. Proc Int Conf Pervasive Computing Technologies for Healthcare (pp. 107-114). PDF

[3] Matthias Kranz, Albrecht Schmidt, Alexis Maldonado, Radu Bogdan Rusu, Michael Beetz, Benedikt Hörnler, and Gerhard Rigoll. 2007. Context-aware kitchen utilities. In Proceedings of the 1st international conference on Tangible and embedded interaction (TEI ’07). ACM, New York, NY, USA, 213-214. DOI=10.1145/1226969.1227013 (PDF)

Media art, VIS Excursion to ZKM in Karlsruhe

This afternoon we (over 40 people from VIS and VISUS at the University of Stuttgart) went to Karlsruhe to visit the ZKM. We got guided tours of the panorama laboratory, the historic video laboratory, the SoundArt exhibition, and some parts of the regular exhibition. Additionally, Prof. Gunzenhäuser gave a short introduction to the Zuse Z22 that is on show there, too.

The ZKM is a leading center for digital and media art that includes a museum for media art and modern art, several research institutes, and an art and design school. The approach is to bring media artists, works of art, research in media art, and teaching in this field close together (within a single large building). The exhibitions include major media art works from the last 40 years.

The panorama laboratory is a 360-degree (minus a door) projection. Even though the resolution of the powerwall at VISUS [1] is higher and its presentation is in 3D, the 360-degree 10-megapixel panorama screen results in an exciting immersion. Without 3D, being surrounded by media creates a feeling of being in the middle of something that happens around you. Vivien described the sensation of movement as similar to sitting in a train: the moment another train pulls out of the station, you have a hard time telling which one is moving. I think such immersive environments could become very common once we have digital display wallpaper.

The historic video laboratory is concerned with “rescuing” old artistic video material. We sometimes complain about the variety of video codecs, but looking at the many different formats for tapes and cassettes, this problem has a long tradition. Looking at historic split-screen videos that were created using analog technologies, one appreciates the virtues of digital video editing… There are two amazing films by Zbigniew Rybczyński: Nowa Książka (New Book) and Tango.

The current SoundArt exhibition is worthwhile. There are several indoor and outdoor sound installations. In the yard there is a monument built of speakers (in analogy to the oracle of Delphi) that you can call from anywhere (+49 721 81001818) to get 3 minutes of time to talk to whomever is in the vicinity of the installation. Another exhibit, an installation called The Cloud, sonified electromagnetic fields from different environments.

[1] Powerwall at VISUS at the University of Stuttgart (6 m by 2.20 m, 88 million pixels, 44 million pixels per eye for 3D).

Golden Doctorate – 50 years since Prof. Gunzenhäuser completed his PhD

It is now 50 years since Prof. Rul Gunzenhäuser, my predecessor in human computer interaction and interactive systems at the University of Stuttgart, defended his PhD. Some months back I came across his PhD thesis “Ästhetisches Maß und ästhetische Information“ (aesthetic measure and aesthetic information) [1], supervised by Prof. Max Bense, and I was seriously impressed.

He is one of the few truly interdisciplinary people I know. And in contrast to modern interpretations of interdisciplinarity (people from different disciplines working together), he is interdisciplinary in his own education and work. He studied math, physics and philosophy; worked, while he studied, in a company making (radio) tubes; completed a teacher training; did his PhD in philosophy, but thematically very close to the then emerging field of computer science; and later became a post-doc in the computing center. He taught didactics of mathematics at a teacher training university, was a visiting professor at the State University of New York, and in 1973 finally became professor for computer science at the University of Stuttgart, starting the department of dialog systems. This unique educational path shaped his research and, I would expect, his whole person. Seeing this career path, I have even more trouble accepting the streamlining of our educational system, and find it easier to relate to a renaissance educational ideal.

Yesterday evening we had a small seminar and gathering to mark the 50th anniversary of his PhD. Our colleague Prof. Catrin Misselhorn, a successor to the chair of philosophy once held by Max Bense, talked about “Aesthetic as Science?” (with a question mark) and started with the statement that what people did in this area 50 years ago is completely dated, if not largely wrong. I found the analysis very interesting and enlightening, as it highlights that scientific results do not have to be non-transient to be relevant. For a mathematician this may be hard to grasp, but for someone in computing, and especially in human computer interaction, this is a relief. It shows that scientific endeavors have to be relevant in their time, and that their lasting value may lie specifically in the fact that they take a single step forward. Looking back at human computer interaction, a lot of the research of the 70s, 80s, and 90s now looks really dated, but we should not be fooled: without this work we would not be where we are now in interactive systems.

Prof. Frieder Nake, one of the pioneers of generative art and a friend and colleague of Prof. Gunzenhäuser, reflected on the early work on computers and aesthetics and on computer-generated art. He too argued that the original approach is ‘dead’, but the spirit of computer-generated art is stronger now than ever, with many new tools available. He described early and heated discussions between philosophers, artists, and people who made computer-generated art. One interesting approach to settling the dispute is to call computer-generated art “artificial art” (künstliche Kunst).

The short take away message from the event is:
If you do research in HCI, do something that is fundamentally new. Question the existing approaches and create new ideas and concepts. Don’t worry about whether it will last forever; accept that your research will likely be ‘only’ one step along the way. It has to be relevant when it is done; it matters less that it may have little relevance some 20 or 50 years later.

[1] Rul Gunzenhäuser. Ästhetisches Maß und ästhetische Information. 1962.

Share your digital activities on Android – AppTicker

If you share an apartment with a friend, you know what they do. There is no need to communicate “I am watching TV” or “I am cooking”, as this is pretty obvious. In the digital space this is much more difficult. Sharing what we engage with, and peripherally perceiving what others do, is not yet trivial.

Niels Henze and Alireza Sahami in our group have made a new attempt to research how to bridge this gap. With the AppTicker for Android they have released software that offers a means to share the usage of applications on your phone with your friends on Facebook. You can choose that whenever you start a certain app (e.g. the web browser, the camera, or the public transport app), this is shared in your activities on Facebook. In the middle screen you can see the means for control.

The app additionally provides a personal log (left screen) of all the apps that were used. I found that feature quite interesting, and when looking at it I really started to reflect on my app usage patterns. If you are curious, have an Android phone, and use Facebook, please have a go and try it out.

The App homepage on our server:
Get it directly from Google Play or search for AppTicker in Google Play.

To access it directly you can scan the following QR-Code:

Our Research at CHI2012 – usable security and public displays

This year we have the chance to share some of our research with the community at CHI2012. The work focuses on usable security ([1] and [2]) and public display systems [3]. Florian, together with the researchers from T-Labs, received a best paper award for [3].

Please have a look at the papers… I think it is really worthwhile.

Increasing the security of gaze-based graphical passwords [1]
“With computers being used ever more ubiquitously in situations where privacy is important, secure user authentication is a central requirement. Gaze-based graphical passwords are a particularly promising means for shoulder-surfing-resistant authentication, but selecting secure passwords remains challenging. In this paper, we present a novel gaze-based authentication scheme that makes use of cued-recall graphical passwords on a single image. In order to increase password security, our approach uses a computational model of visual attention to mask those areas of the image that are most likely to attract visual attention. We create a realistic threat model for attacks that may occur in public settings, such as filming the user’s interaction while drawing money from an ATM. Based on a 12-participant user study, we show that our approach is significantly more secure than a standard image-based authentication and gaze-based 4-digit PIN entry.” [1]
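As a rough illustration of the masking idea from the abstract: given an already computed saliency map, one can block the top fraction of the most salient cells from being used as password points. This sketch is my own simplification; the top-fraction threshold rule and the `fraction` parameter are assumptions, not details from the paper:

```python
def saliency_mask(saliency, fraction=0.25):
    """Return a boolean mask of the same shape as `saliency` that is True
    for the cells whose saliency is among the top `fraction` of all values.
    These high-attention regions would be masked out, so password points
    cannot be placed where everyone's gaze naturally lands."""
    flat = sorted((v for row in saliency for v in row), reverse=True)
    k = max(1, int(len(flat) * fraction))   # number of cells to mask
    threshold = flat[k - 1]                 # k-th highest saliency value
    return [[v >= threshold for v in row] for row in saliency]
```

In the actual system the saliency map would come from a computational model of visual attention; here it is simply assumed as input.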

Assessing the vulnerability of magnetic gestural authentication [2]

“Secure user authentication on mobile phones is crucial, as they store highly sensitive information. Common approaches to authenticate a user on a mobile phone are based either on entering a PIN, a password, or drawing a pattern. However, these authentication methods are vulnerable to the shoulder surfing attack. The risk of this attack has increased since means for recording high-resolution videos are cheaply and widely accessible. If the attacker can videotape the authentication process, PINs, passwords, and patterns do not even provide the most basic level of security. In this project, we assessed the vulnerability of a magnetic gestural authentication method to the video-based shoulder surfing attack. We chose a scenario that is favourable to the attacker. In a real world environment, we videotaped the interactions of four users performing magnetic signatures on a phone, in the presence of HD cameras from four different angles. We then recruited 22 participants and asked them to watch the videos and try to forge the signatures. The results revealed that with a certain threshold, i.e., th=1.67, none of the forging attacks was successful, whereas at this level all eligible login attempts were successfully recognized. The qualitative feedback also indicated that users found the magnetic gestural signature authentication method to be more secure than PIN-based and 2D signature methods.“ [2] There is also a YouTube video.
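The post does not describe the matching algorithm behind the magnetic signatures, but the general pattern of threshold-based signature verification can be sketched as follows. Dynamic time warping (DTW) as the distance metric and the length normalisation are my assumptions; the quoted threshold th=1.67 refers to the paper's own metric, not necessarily to this one:

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D gesture traces.
    A common choice for comparing signatures of different lengths."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three possible warping steps.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def accept(template, attempt, threshold=1.67):
    """Accept the login attempt if its (length-normalised) distance to
    the enrolled template signature stays below the threshold."""
    dist = dtw_distance(template, attempt) / max(len(template), len(attempt))
    return dist < threshold
```

Lowering the threshold makes forging harder but also risks rejecting legitimate attempts; the paper's result is that at th=1.67 both error rates were zero in their study.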

Looking glass: a field study on noticing interactivity of a shop window [3]
“In this paper we present our findings from a lab and a field study investigating how passers-by notice the interactivity of public displays. We designed an interactive installation that uses visual feedback to the incidental movements of passers-by to communicate its interactivity. The lab study reveals: (1) Mirrored user silhouettes and images are more effective than avatar-like representations. (2) It takes time to notice the interactivity (approx. 1.2s). In the field study, three displays were installed during three weeks in shop windows, and data about 502 interaction sessions were collected. Our observations show: (1) Significantly more passers-by interact when immediately showing the mirrored user image (+90%) or silhouette (+47%) compared to a traditional attract sequence with call-to-action. (2) Passers-by often notice interactivity late and have to walk back to interact (the landing effect). (3) If somebody is already interacting, others begin interaction behind the ones already interacting, forming multiple rows (the honeypot effect). Our findings can be used to design public display applications and shop windows that more effectively communicate interactivity to passers-by.” [3]

[1] Andreas Bulling, Florian Alt, and Albrecht Schmidt. 2012. Increasing the security of gaze-based cued-recall graphical passwords using saliency masks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 3011-3020. DOI=10.1145/2208636.2208712
[2] Alireza Sahami Shirazi, Peyman Moghadam, Hamed Ketabdar, and Albrecht Schmidt. 2012. Assessing the vulnerability of magnetic gestural authentication to video-based shoulder surfing attacks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 2045-2048. DOI=10.1145/2208276.2208352
[3] Jörg Müller, Robert Walter, Gilles Bailly, Michael Nischt, and Florian Alt. 2012. Looking glass: a field study on noticing interactivity of a shop window. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 297-306. DOI=10.1145/2207676.2207718

Introduction to the special issue on interaction beyond the desktop

After coming back from CHI2012 in Austin, I found my paper copy of the April 2012 issue of IEEE Computer magazine in my letter box. This is our special issue on interaction beyond the desktop. Having the physical copy is always nice (probably because I grew up with paper magazines ;-).

This guest editors’ introduction [1] is an experiment, as we include photos from all papers on the theme. The rationale is that most people will probably never hold the paper copy in their hands, and in the digital version getting an overview of the papers is harder. We therefore think that including the photos helps to make readers curious enough to look at the papers in the issue. Please let us know if you think this is a good idea…

[1] Albrecht Schmidt and Elizabeth Churchill. Interaction Beyond the Keyboard. IEEE Computer, April 2012, pp. 21–24. (PDF). Link to the article in Computing Now.

Book launch: Grounded Innovation by Lars Erik Holmquist

At the Museum of the Weird in Austin, Lars Erik Holmquist hosted a launch party for his book Grounded Innovation: Strategies for Creating Digital Products. The book uses a good number of research examples to highlight the challenges of and approaches to digital products. It has two parts, Methods and Materials, and shows how both play together in the design of digital products. There is a preview of the book on Amazon.

Over 10 years back I worked together with Lars Erik on the European project Smart-Its, where we created sensor-augmented artifacts. The book also features some of this work. To get an overview of the project, have a look at [1] and [2]. The concept of Smart-Its Friends is presented in [3]. Smart-Its Friends proposed the idea that products can be linked by sharing the same context (e.g. connecting a phone and a wallet by shaking them together).

[1] Lars Erik Holmquist, Hans-Werner Gellersen, Gerd Kortuem, Albrecht Schmidt, Martin Strohbach, Stavros Antifakos, Florian Michahelles, Bernt Schiele, Michael Beigl, and Ramia Mazé. 2004. Building Intelligent Environments with Smart-Its. IEEE Comput. Graph. Appl. 24, 1 (January 2004), 56-64. (PDF) DOI=10.1109/MCG.2004.1255810

[2] Hans Gellersen, Gerd Kortuem, Albrecht Schmidt, and Michael Beigl. 2004. Physical Prototyping with Smart-Its. IEEE Pervasive Computing 3, 3 (July 2004), 74-82. (PDF) DOI=10.1109/MPRV.2004.1321032

[3] Lars Erik Holmquist, Friedemann Mattern, Bernt Schiele, Petteri Alahuhta, Michael Beigl, and Hans-Werner Gellersen. 2001. Smart-Its Friends: A Technique for Users to Easily Establish Connections between Smart Artefacts. In Proceedings of the 3rd international conference on Ubiquitous Computing (UbiComp ’01), Gregory D. Abowd, Barry Brumitt, and Steven A. Shafer (Eds.). Springer-Verlag, London, UK, 116-122. (PDF)

CHI2012 opening Keynote by Margaret Gould Stewart – Empowerment, Disruption, Magic

Margaret Gould Stewart, a highly regarded user experience designer currently leading UX design at YouTube, presented the opening keynote at CHI2012. She started her talk by reminding us that humans are storytellers – they always have been and probably always will be. What is not constant is the medium: as technologies change, so do the means for storytelling and sharing.

The talk started out with how video connects the world and extended to a larger view: changing the world through experience design (in the context of video). I often wonder what designers are, and she added another quite interesting explanation: designers are humanists. By putting up the definition of humanism she made her point clear that this can apply to good people in design; essentially it comes down to caring for humans in their work.

To show the power of video in connecting people she used the following example: the film “Life in a Day”, which, as it says in the credits, is “a movie filmed by you”. I have not seen it yet, but the trailer made me curious to look at this one (see the film on YouTube).

By asking what makes sites like YouTube have impact, she introduced three principles. Sites have to be:

  • Empowering
  • Disruptive
  • Magical

She outlined what these three principles mean for user experience design.

For empowering she had very strong examples of how photo sharing, video sharing, and social networks changed what we see of natural disasters and their effect on people. They also changed the way we see them and how we can respond. The concrete example was the information coverage of Hurricane Katrina in 2005 (the pre-video-sharing age) compared to the recent floods in Asia. Empowering = helping people to share their stories.

Disruption is, in this context, the change in the use of media, and especially how it changes our perception of the ubiquitous technology of TV. The capabilities that video sharing platforms have are very different from those of TV – and at the same time they are disrupting TV massively. She had a further example of how such technology can disrupt: the Khan Academy (basically sharing educational videos) is challenging the education system. As a further step she had an example where a teacher encourages students to make their own instructional videos as a means for them to learn. Disruption = finding new ways that challenge or overthrow the old approach.

Magic is what makes technology exciting. There is a quote by Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic”. The term “magic” has a long tradition in human computer interaction; Alan Kay talked about it with regard to graphical user interfaces, and some years back we had a paper on magic beyond the screen [1]. In the talk, Margaret Gould Stewart used Instagram as another example: software that provides magical capabilities for the person using it. Another example of magic she discussed is the GPS-based “moving dot” on a map that makes navigation in mobile maps easy. Even without navigational skills, people can “magically” find their way. Her advice is “do not get in the way of magic” – focus on the experience, not on the technology in the background. In short she summarized: “Magic disrupts the notion of reality”.

She combined the principles in one example: the design of YouTube. She discussed the page design using an analogy to a plate. A great plate makes all the food presented on it look more attractive, and the design goal of the YouTube page is to be such a plate for video: it should make all videos look better.

Another example used to highlight how to empower, disrupt, and create magic was a collaborative video project: each participant can manipulate one frame of the video (within given limits), and the outcome of the whole video is amazing. It cannot be described; you have to watch it.

Related to the example above, an interesting question comes up: how much control is required, and what type of control is applied? One example here is Twitter, which limits how much you can write but does not limit what you post (limiting the form but not the content). She made an interesting argument about control: if you believe that democracy works and is good, you can assume that people in general will make the right decisions. A further indicator is that positive things go viral much more often than negative things. One of the takeaway messages is to believe in people and empower them.

To sum up, there are three questions to be asked when designing an experience:

  • How to empower people?
  • How to disrupt?
  • How to create magic?

A final and important point is that there are things that cannot be explained, and she argued that we should value this.

[1]  Albrecht Schmidt, Dagmar Kern, Sara Streng, and Paul Holleis. 2008. Magic Beyond the Screen. IEEE MultiMedia 15, 4 (October 2008), 8-13. DOI=10.1109/MMUL.2008.93

Keynote at Percom 2012: Andy Hopper from Cambridge on Computing for the Future of the Planet

In his keynote “Computing for the Future of the Planet” Andy Hopper brought up four topics and touched briefly on each of them: (1) optimal digital infrastructure – green computing, (2) sense and optimize – computing for green, (3) predict and react – assured computing, and (4) digital alternatives to physical activities.

At the beginning of his talk he discussed an interesting (and, once he said it, very obvious) option for green computing: move computing towards the energy source, as it is easier to transmit data than to transmit power. Thinking about this, I could imagine Google’s server farms being moved to a sunny desert, doing the calculations while the sun is shining… and using the cold of the night to cool down… This could be extended to storage: storing data is easier than storing energy – this should open up some opportunities.

As a sample of an embedded sensing system, Andy Hopper presented a shoe with built-in force sensing (FSR) that allows measuring contact time, which helps to work out speed. Their initial research was targeted towards athletes; see Rob Harle’s page for details. It is, however, easy to imagine the potential if regular shoes could sense movement in everyday use. He hinted at the options if one could go to the doctor and analyze the change in one’s walking pattern over the last year.
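A minimal sketch of how ground-contact time could be derived from FSR samples: detect the spans where the measured force stays above a threshold. The threshold value and the data format are my assumptions, not details of the actual system; mapping contact time to running speed would additionally need an empirically calibrated model.

```python
def contact_times(samples, threshold=0.5):
    """Given (timestamp, force) samples from a force-sensitive resistor
    in the sole, return the duration of each ground-contact phase, i.e.
    each span where the force stays above the threshold."""
    times = []
    start = None
    for t, force in samples:
        if force >= threshold and start is None:
            start = t                   # foot strikes the ground
        elif force < threshold and start is not None:
            times.append(t - start)     # foot leaves the ground
            start = None
    return times
```

Shorter contact times generally correspond to faster running, which is why this simple measurement helps to work out speed.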

In various examples, Andy showed how Ubisense is used in commercial applications, production, and training. It seems that medium-resolution tracking (e.g. below 1 meter accuracy) can be reliably achieved with such an off-the-shelf system, even in harsh environments. He mentioned that the university installations of the system at an early product stage were helpful to improve the product and grow the company. This is interesting advice, and could be a strategy for other pervasive computing products, too. For close observers of the slides there were some interesting insights into the different production methods of BMW and Aston Martin and the required quality 😉

Power usage is a central topic in his lab’s work, and he showed several examples of how to monitor it in different scenarios. One example is monitoring power usage on the phone, implemented as an app that looks at how power is consumed and how re-charging is done. This data is then collected and shared – currently over 8,000 people are participating. For more details see Daniel T. Wagner’s page. A further example is the global personal energy meter. He envisions that infrastructure, e.g. trains and buildings, will broadcast information about its energy use and provide information about an individual’s share of it.
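To make the energy-meter idea concrete, here is a back-of-the-envelope sketch (my own illustration, not from the talk) of how a personal meter could accumulate an individual’s share from such broadcasts, naively splitting each infrastructure’s power draw evenly among its occupants:

```python
# Hypothetical sketch of a "global personal energy meter".
# The even-split policy and all numbers are illustrative assumptions.

def personal_share_wh(broadcasts):
    """broadcasts: iterable of (power_w, occupants, duration_h) tuples,
    one per piece of infrastructure used. Returns one occupant's
    energy share in watt-hours, splitting each draw evenly."""
    return sum(power_w / occupants * duration_h
               for power_w, occupants, duration_h in broadcasts)

# E.g. 30 minutes on a train drawing 200 kW with 400 passengers,
# then 2 hours in an office drawing 10 kW with 50 people present:
share = personal_share_wh([(200_000, 400, 0.5), (10_000, 50, 2.0)])
print(share)  # 650.0 (Wh)
```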

With the increasing proliferation of mobile phones, users’ privacy becomes a major issue. In his talk he showed an example where privacy is protected by faking data. In this approach fake data, e.g. for calendar events, location data, and the address book, is provided to apps on the phone. By these means you can alter what an application sees (e.g. the location accuracy).
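As a minimal illustration of the idea (the concrete mechanism here is my assumption, not the one presented in the talk), degrading the location accuracy an app sees can be as simple as rounding the coordinates before handing them over:

```python
# Illustrative sketch: hand apps a coarsened view of the real location
# instead of denying access outright. Rounding to two decimal places
# (roughly km-level accuracy) is an arbitrary choice for this example.

def coarsen_location(lat, lon, decimals=2):
    """Round coordinates so an app only sees coarse-grained accuracy."""
    return round(lat, decimals), round(lon, decimals)

print(coarsen_location(48.78232, 9.17702))  # (48.78, 9.18)
```

The same pattern extends to calendar and address book data: the platform substitutes plausible but less sensitive values rather than blocking the request.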

For more details and papers see the website of the Digital Technology Group:

Opening talk at the Social Media for Insurances Symposium

I was invited to Leipzig to talk about social networks in the context of insurance companies. The main focus of the talk was to show what people currently do in social networks and to speculate about why they do it (I used a picture of the seven deadly sins as an illustration…). Additionally, I discussed some prototypes of activity recognition and their potential once integrated into social media.

My talk was entitled “500 Freunde (auf Facebook): Wozu noch eine Versicherung?” (“500 friends (on Facebook) – is there still a need for insurance?”) and discussed how ubiquitous capture and social media may shape the next community [1]. The slides are in German.

The event was very interesting, and I would expect that there is great potential out there for insurance companies to tap into. Looking back at the original idea of insurance (e.g. the old fire insurance communities, or farming communities sharing the risk of hail) can give interesting inspiration for peer-to-peer insurance models. It will be exciting to see whether new products and services come out of the “big players” or whether new players will enter the game. To me, the central issue to address is how to make insurance products more visible – and I think a user-centered design approach could be very interesting…

In the future I would expect that finding the right value mix (privacy, price, safety, etc.) will be essential, as we argued for other services in [2]. Some years back, in an article about RFID [3], we wrote “privacy is sacred but cheap”, and the more services we see, the more I am convinced that this is more than a slogan. If you can create a service that is of immediate value to the user, I would expect that privacy will be a lesser concern to most. On the other hand, if you reduce privacy without any value in exchange, there is always an outcry…

[1] “500 Freunde (auf Facebook): Wozu noch eine Versicherung?” – Ermöglichen allgegenwärtige Aufzeichnungstechnologien und digitale soziale Netze die nächste Solidargemeinschaft? (Do ubiquitous capture technologies and digital social networks enable the next solidarity community?) Slides as PDF (in German)
[2] Albrecht Schmidt, Marc Langheinrich, Kristian Kersting, “Perception beyond the Here and Now”, Computer, vol. 44, no. 2, pp. 86-88, Feb. 2011, doi:10.1109/MC.2011.54 (final version at IEEE, free draft version)
[3] Schmidt, A.; Spiekermann, S.; Gershman, A.; Michahelles, F., “Real-World Challenges of Pervasive Computing”, IEEE Pervasive Computing, vol. 5, no. 3, pp. 91-93, July-Sept. 2006, doi:10.1109/MPRV.2006.57

Facebook – a platform to spot when companies go bankrupt? A real-world example.

In Germany, the drug store chain Schlecker announced that it is insolvent; see the Reuters news post. If you look at the company’s Facebook page and scan the comments from the last four weeks, it is apparent that some people in the crowd, including employees, already expected it last year.
Schlecker is a large drug store chain with probably over 10,000 outlets in Europe and more than 30,000 employees.

The following screenshots show some selected examples I took from the following page:
The posts are in German – the minimal summaries should give you some idea…

In this one, the company wishes a happy Christmas and reminds people of a chance to win a car. The first replies echo the holiday greetings, but then one person complains that they let their shops bleed out (run empty) and that ordered goods do not arrive (probably posted by an employee). Another speculates that the company is close to bankruptcy. (Over 3 weeks before the official note of insolvency.)

The company announces a 2-euro discount on a product. Then employees post that they would like to sell the goods to the customers but do not get the goods for their shops. Additionally, they complain that the goods they get from other closed-down shops are not what they need. One says “we want to work but we can’t” (as they are running out of stock). (Over 2 weeks before the official note of insolvency.)

The company announces price reductions on some goods. Someone says that is great – but that it would be much better if these goods were actually in the shops to buy. (9 days before the official note of insolvency.)

Overall I think this is an instructive real-world example of the information that can be found in social networks about the health/value of companies. In particular, the mix of customers and employees posting makes it a good example to study. I would expect that companies will learn lessons from this with regard to guidelines for their employees… and about transparency/openness… To understand how reliable such posts are, we probably need to do some more research – let us know if you are interested in working on this with us.

Congratulations to Frau Doktor Dagmar Kern for a great PhD defense (No. 5)

Dagmar Kern has successfully defended her PhD on “Supporting the Development Process of Multimodal and Natural Automotive User Interfaces” in Essen. The external examiner was Antonio Krüger from Saarland University in Saarbrücken. Her dissertation will be available online soon. The core contribution of the thesis is an investigation of how to improve the user-centered design process for automotive user interfaces. In order to systematically assess user interface designs in cars, she developed a design space (inspired by Card et al. [5]). In various case studies she created novel in-car user interfaces and experimentally explored the implications for driver distraction.

Dagmar started working with me as a student of Media Informatics at the LMU Munich in 2005, then joined my group at Fraunhofer IAIS/BIT in Bonn and moved with the group to Essen in 2007. She spent short research stays in Saarbrücken and Milton Keynes and was extremely productive over the last years – 18 publications she co-authored are listed in DBLP. Here are some highlights of her research:

  • an exploration of how to present navigation information (e.g. a vibro-tactile steering wheel) [1]
  • Gazemarks – an approach to aid attention switching between the road and an in-car display using eye-gaze data [2]
  • a multi-touch steering wheel that reduces driver distraction [3]
  • a design space for automotive user interfaces [4]

In addition to the publications, one of the side products of her thesis is the CARS open-source driving simulator. It is a configurable, low-cost simulator that can be used to measure driver distraction, e.g. as an alternative to the LCT.

Dagmar’s defense brought us back to Essen and it was great to meet many colleagues again. We finally managed to have a group photo taken with nearly all of the team (Elba is missing in the photo).

The doctoral hat may look strange to non-Germans, but it is a fun tradition. It is hand-crafted by the colleagues, and each of the items on the hat tells a story – usually known to the group, but ideally hard to guess for outsiders. Among other things, Dagmar’s hat included a scrap heap of cars, a giraffe, a personal vibration device, a yoyo, a railway station building site, and a steering wheel cover.

[1] Dagmar Kern, Paul Marshall, Eva Hornecker, Yvonne Rogers, and Albrecht Schmidt. 2009. Enhancing Navigation Information with Tactile Output Embedded into the Steering Wheel. In Proceedings of the 7th International Conference on Pervasive Computing (Pervasive ’09). Springer-Verlag, Berlin, Heidelberg, 42-58. DOI=10.1007/978-3-642-01516-8_5 (free PDF)

[2] Dagmar Kern, Paul Marshall, and Albrecht Schmidt. 2010. Gazemarks: gaze-based visual placeholders to ease attention switching. In Proceedings of the 28th international conference on Human factors in computing systems (CHI ’10). ACM, New York, NY, USA, 2093-2102. DOI=10.1145/1753326.1753646 (free PDF)

[3] Tanja Döring, Dagmar Kern, Paul Marshall, Max Pfeiffer, Johannes Schöning, Volker Gruhn, and Albrecht Schmidt. 2011. Gestural interaction on the steering wheel: reducing the visual demand. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 483-492. DOI=10.1145/1978942.1979010 (free PDF)

[4] Dagmar Kern and Albrecht Schmidt. 2009. Design space for driver-based automotive user interfaces. In Proceedings of the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’09). ACM, New York, NY, USA, 3-10. DOI=10.1145/1620509.1620511 (free PDF)

[5] Stuart K. Card, Jock D. Mackinlay, and George G. Robertson. 1991. A morphological analysis of the design space of input devices. ACM Trans. Inf. Syst. 9, 2 (April 1991), 99-122. DOI=10.1145/123078.128726


Welcome to the VIS(US) Doctoral Colloquium!

The Institute for Visualization and Interactive Systems (VIS) invites its doctoral students to the 2012 doctoral colloquium (Doktorandenkolleg) for scientific exchange and for information about post-PhD perspectives in industry and academia.

When? 6–8 February 2012
Where? Waldhotel Zollernblick, Freudenstadt
Who? Doctoral students of VIS(US)
Organizers: Tom Ertl, Martin Fuchs, Albrecht Schmidt, Daniel Weiskopf
Institute for Visualization and Interactive Systems (VIS)
Visualization Institute of the University of Stuttgart (VISUS)



Preliminary Program

Day 1 – 6 February 2012
Skiing, if there is enough snow (= vacation day 😉)
Travel to Freudenstadt (organized by arrangement)
6 pm: Joint dinner
8 pm:


Prof. Dr. Rul Gunzenhäuser:
Theses and predictions from the field of computer science

Albrecht Schmidt: “The World in 100 Years”
– a look back at a book written by scientists in 1910 –
We will develop scenarios for the next 100 years

Day 2 – 7 February 2012
08:45 Introduction/opening
09:00 Talk: Andrés Bruhn
Presentation of his research area and the new group
10:00 “FastForward” poster session 1
Short presentations (elevator talks) – 90 seconds (strict!) per person
20 presentations, thematically mixed
Goal: explain your dissertation topic and research area so that any computer scientist can understand it, and make people curious about your poster
10:30 – 11:30 Coffee break, poster exhibition, and discussions at the posters
11:30 – 12:30 Track A: Session 1
3 talks @ 10 minutes
Track B: Session 1
3 talks @ 10 minutes
Track C: Session 1
3 talks @ 10 minutes
12:30 – 14:00 Lunch
14:00 – 15:00 Track A: Session 2
3 talks @ 10 minutes
Track B: Session 2
3 talks @ 10 minutes
Track C: Session 2
3 talks @ 10 minutes
15:30 – 16:00 “FastForward” poster session 2
Short presentations (elevator talks) – 90 seconds (strict!) per person
20 presentations, thematically mixed
Goal: explain your dissertation topic and research area so that any computer scientist can understand it, and make people curious about your poster
16:00 – 17:00 Coffee break, poster exhibition, and discussions at the posters
17:00 – 18:00 Free 🙂
6 pm: Joint dinner
8 pm: Studying computer science – what makes it attractive?
How should we design our degree programs?
How do we attract the best students?
Discussion and group work
Day 3 – 8 February 2012
08:30 – 10:30 Career paths after the PhD

  • Profiles and requirements
  • Academic career abroad (e.g. USA, UK)
  • Consulting
  • Developer (e.g. at Google)
  • Management
  • Professor at a university of applied sciences
  • Professor at a university
  • Founding a company
  • Researcher in an industrial research lab

Discussion

10:30 – 11:00 Coffee break
11:00 – 12:00 Track A: Session 3
2 talks @ 10 minutes
Track B: Session 3
2 talks @ 10 minutes
Track C: Session 3
2 talks @ 10 minutes
12:00 Lunch
Departure, return to Stuttgart
Possibly skiing (given snow and interest…)

Submission of Contributions

Starting now: registration by e-mail (subject: “DOKO-2012”; please include your address, the working title of your dissertation, and your advisor)
By 24 Jan 2012: submission of a short abstract of your contribution
(max. 1 page, following the guidelines below)
Note: The number of places at the doctoral colloquium is limited. Should the number of registrations exceed the available capacity, the organizers will decide on the acceptance of contributions!
By 2 Feb 2012: feedback
By 5 Feb 2012: submission of the final version of your contribution

Registration of Contributions

With the doctoral colloquium we want to motivate everyone doing their PhD at VIS and VISUS to report on and discuss their dissertation topic. Each participant should write a contribution of about one page by 24 Jan 2012 (templates below) containing the following sections:

Problem Statement and Research Question

  • What problem do you want to solve with your research?
  • Why is it important to solve this problem?
  • Why should anyone pay for research on this question?
  • What is the central research question, and what concretely do you want to find out?
  • What is the expected gain in knowledge?

Approach and Method

  • How do you carry out your research? Is your research theoretical, experimental, or empirical?
  • How do you verify or evaluate your results?
  • How do you ensure the correctness and quality of your results?
  • Briefly explain your approach and justify why it is appropriate for your research. What alternative approaches would be possible, and why do you not use them?
  • Which methods do you employ?

Related Work

  • What are the three most important works by other research groups that your research relates to?
  • How have these works influenced you?
  • What do you do better than previous work? Where does your work contribute something new?

Preliminary Results

  • What have you found out so far? Describe your preliminary results.
  • Why should we trust these results? How have you verified them?
  • What further results do you expect?

Next Steps

  • What are the next steps in your work? What is still missing for the work to become a dissertation?
  • Where do you need further (external) expertise? Where would collaborations be helpful?

Template and Submission
Please use the following template for your submission. Please send your contribution as a PDF (subject: “DOKO-2012-Beitrag”)

Example: PDF
LaTeX template: ZIP archive
MS Word 97-2003 template: DOC
MS Word 2007 template: DOCX