Dear colleagues and friends,

We hope you received your exclusive hcilab ornament construction kit. To fully enjoy the hcilab Christmas experience, the following seven quick steps will guide you through the rather intuitive assembly:

The target result:
The target result.

The Christmassy ingredients:

Step 1a: Free the tree!

Step 1b: Bolden the golden!

Step 1c: Take a breath!

Step 2: Assemble the tree!

Step 3a: Assemble the globe (1st stay)!

Step 3b: Assemble the globe (2nd stay)!

Step 3c: Connect the stays using the disks!

Step 4a: Put the tree in the middle!

Step 5a: Put in the remaining stays (3rd stay)!

Step 5b: Put in the remaining stays (4th stay)!

Step 6: Hook it!

Step 7: That’s it. Celebrate!

We are curious about your end result and would love to receive a picture of your finished ornament. Feel free to email it to us or link it in a blog comment.

The year 2012 was very exciting, and we greatly appreciate your involvement with us! As an additional treat, we have attached to this Christmas packet a quick overview of several projects and topics in the field of human-computer interaction that we have been working on.
In particular, we have continued our work on public display networks. The following publications give an overview of some of the directions we took this year:

  1. Davies, N., Langheinrich, M., José, R., & Schmidt, A. (2012). Open display networks: A communications medium for the 21st century. Computer, 45(5), 58-64.
  2. Alt, F., Schneegaß, S., Schmidt, A., Müller, J., & Memarovic, N. (2012, June). How to evaluate public displays. In Proceedings of the 2012 International Symposium on Pervasive Displays (p. 17). ACM.
  3. Alt, F., Schmidt, A., & Müller, J. (2012). Advertising on Public Display Networks. Computer, 45(5), 50-56.

Automotive user interfaces were another area where we continued our research. We moved further towards multimodality and included speech input in a prototype:

  1. Pfleging, B., Schneegass, S., & Schmidt, A. (2012, October). Multimodal interaction in the car: combining speech and gestures on the steering wheel. In Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 155-162). ACM.
  2. Pfleging, B., Kern, D., Döring, T., & Schmidt, A. (2012). Reducing Non-Primary Task Distraction in Cars Through Multi-Modal Interaction. it-Information Technology, 54(4), 179-187.

We also ventured into new domains this year. In particular, we looked at usable security and brain-computer interaction. The following two papers show examples of this work. We are particularly proud of the BCI paper, as it is the first one with our students in Stuttgart.

  1. Bulling, A., Alt, F., & Schmidt, A. (2012, May). Increasing the security of gaze-based cued-recall graphical passwords using saliency masks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (pp. 3011-3020). ACM.
  2. Shirazi, A. S., Funk, M., Pfleiderer, F., Glück, H., & Schmidt, A. MediaBrain: Annotating Videos based on Brain-Computer Interaction.

Finally, this paper may be an interesting read when you are tired …

  1. Schmidt, A., Shirazi, A. S., & van Laerhoven, K. (2012). Are You in Bed with Technology?. Pervasive Computing, IEEE, 11(4), 4-7.