Our Research at CHI 2012 – usable security and public displays

This year we have the chance to share some of our research with the community at CHI 2012. The work focuses on usable security ([1] and [2]) and public display systems [3]. Together with the researchers from T-Labs, Florian received a best paper award for [3].

Please have a look at the papers… I think they are really worthwhile.

Increasing the security of gaze-based graphical passwords [1]
“With computers being used ever more ubiquitously in situations where privacy is important, secure user authentication is a central requirement. Gaze-based graphical passwords are a particularly promising means for shoulder-surfing-resistant authentication, but selecting secure passwords remains challenging. In this paper, we present a novel gaze-based authentication scheme that makes use of cued-recall graphical passwords on a single image. In order to increase password security, our approach uses a computational model of visual attention to mask those areas of the image that are most likely to attract visual attention. We create a realistic threat model for attacks that may occur in public settings, such as filming the user’s interaction while drawing money from an ATM. Based on a 12-participant user study, we show that our approach is significantly more secure than a standard image-based authentication and gaze-based 4-digit PIN entry.” [1]
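The saliency-masking idea can be sketched roughly as follows. This is only an illustrative approximation under assumed inputs (any visual-attention model producing a per-pixel saliency map) and an assumed masking rule; it is not the authors' implementation:

```python
import numpy as np

def mask_salient_regions(image, saliency, percentile=90):
    """Hide the most visually attractive regions of an image.

    image:      H x W x 3 uint8 array
    saliency:   H x W float array from some visual-attention model;
                higher values = more likely to draw gaze
    percentile: pixels with saliency strictly above this percentile
                are blacked out (placeholder rule, not the paper's)
    """
    threshold = np.percentile(saliency, percentile)
    masked = image.copy()
    masked[saliency > threshold] = 0  # remove attention hot spots
    return masked

# Toy example: a uniform 4x4 image with one highly salient corner.
img = np.full((4, 4, 3), 200, dtype=np.uint8)
sal = np.zeros((4, 4))
sal[0, 0] = 1.0
out = mask_salient_regions(img, sal)
```

With the salient corner masked, the remaining visible regions are the ones an attacker's gaze (or saliency-guided guessing) is less likely to target, which is the intuition behind forcing password points into less salient areas.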

Assessing the vulnerability of magnetic gestural authentication [2]

“Secure user authentication on mobile phones is crucial, as they store highly sensitive information. Common approaches to authenticate a user on a mobile phone are based either on entering a PIN, a password, or drawing a pattern. However, these authentication methods are vulnerable to the shoulder surfing attack. The risk of this attack has increased since means for recording high-resolution videos are cheaply and widely accessible. If the attacker can videotape the authentication process, PINs, passwords, and patterns do not even provide the most basic level of security. In this project, we assessed the vulnerability of a magnetic gestural authentication method to the video-based shoulder surfing attack. We chose a scenario that is favourable to the attacker. In a real-world environment, we videotaped the interactions of four users performing magnetic signatures on a phone, in the presence of HD cameras from four different angles. We then recruited 22 participants and asked them to watch the videos and try to forge the signatures. The results revealed that with a certain threshold, i.e., th=1.67, none of the forging attacks was successful, whereas at this level all eligible login attempts were successfully recognized. The qualitative feedback also indicated that users found the magnetic gestural signature authentication method to be more secure than PIN-based and 2D signature methods.” [2] There is also a YouTube video: http://www.youtube.com/watch?v=vhwURyTp_jY
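The role of the threshold can be illustrated with a simple distance-based matcher: an attempt is accepted only if it is close enough to the enrolled template. The distance measure, the feature space, and the reuse of the value 1.67 below are assumptions for illustration; the paper's actual matcher is not reproduced here:

```python
import numpy as np

def verify_signature(template, attempt, th=1.67):
    """Accept a gestural signature if its mean per-sample distance
    to the enrolled template is below the threshold th.
    Illustrative sketch only; not the matcher from the paper.
    """
    template = np.asarray(template, dtype=float)
    attempt = np.asarray(attempt, dtype=float)
    # Mean Euclidean distance between corresponding 3D field samples.
    distance = np.mean(np.linalg.norm(template - attempt, axis=-1))
    return bool(distance < th)

# Toy traces: sequences of 3D magnetic-field samples over time.
enrolled = np.array([[0.0, 1.0, 0.5], [0.2, 0.9, 0.4]])
genuine  = enrolled + 0.05   # small natural variation -> accepted
forgery  = enrolled + 2.0    # visually copied but off -> rejected
```

Lowering th makes forgeries harder to pass but risks rejecting genuine attempts; the reported result is that at th=1.67 the method sat at the point where all forgeries failed while all eligible logins succeeded.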

Looking glass: a field study on noticing interactivity of a shop window [3]
“In this paper we present our findings from a lab and a field study investigating how passers-by notice the interactivity of public displays. We designed an interactive installation that uses visual feedback to the incidental movements of passers-by to communicate its interactivity. The lab study reveals: (1) Mirrored user silhouettes and images are more effective than avatar-like representations. (2) It takes time to notice the interactivity (approx. 1.2s). In the field study, three displays were installed during three weeks in shop windows, and data about 502 interaction sessions were collected. Our observations show: (1) Significantly more passers-by interact when immediately showing the mirrored user image (+90%) or silhouette (+47%) compared to a traditional attract sequence with call-to-action. (2) Passers-by often notice interactivity late and have to walk back to interact (the landing effect). (3) If somebody is already interacting, others begin interaction behind the ones already interacting, forming multiple rows (the honeypot effect). Our findings can be used to design public display applications and shop windows that more effectively communicate interactivity to passers-by.” [3]

References
[1] Andreas Bulling, Florian Alt, and Albrecht Schmidt. 2012. Increasing the security of gaze-based cued-recall graphical passwords using saliency masks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 3011-3020. DOI=10.1145/2208636.2208712 http://doi.acm.org/10.1145/2208636.2208712
[2] Alireza Sahami Shirazi, Peyman Moghadam, Hamed Ketabdar, and Albrecht Schmidt. 2012. Assessing the vulnerability of magnetic gestural authentication to video-based shoulder surfing attacks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 2045-2048. DOI=10.1145/2208276.2208352 http://doi.acm.org/10.1145/2208276.2208352
[3] Jörg Müller, Robert Walter, Gilles Bailly, Michael Nischt, and Florian Alt. 2012. Looking glass: a field study on noticing interactivity of a shop window. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 297-306. DOI=10.1145/2207676.2207718 http://doi.acm.org/10.1145/2207676.2207718

Paper and demo in Salzburg at Auto-UI-2011

At the automotive user interface conference in Salzburg we presented some of our research. Salzburg is a really nice place and Manfred and his team did a great job organizing the conference!

Based on the bachelor's thesis of Stefan Schneegaß and some follow-up work, we published a full paper [1] that describes a KLM model for the car and a prototyping tool that makes use of the model. In the model we look at the specific needs in the car, model rotary controllers, and cater for the limited attention while driving. The prototyping tool provides means to quickly estimate interaction times. It supports visual prototyping using images of the UI and tangible prototyping using Nic Villar's VoodooIO. Looking forward to having Stefan on our team full-time :-)
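The core of a keystroke-level model is simply summing per-operator times over an interaction sequence. The sketch below shows that idea; the operator set and all timing values are placeholders, not the in-vehicle operators and timings from the paper:

```python
# Illustrative KLM-style time estimate. Operator times (seconds)
# are assumed placeholder values, not those from the automotive model.
OPERATOR_TIMES = {
    "reach": 0.40,   # reach for a control
    "rotate": 0.50,  # one detent turn of a rotary controller
    "press": 0.20,   # button press
    "glance": 1.10,  # visual glance at the display
}

def estimate_task_time(sequence):
    """Sum the operator times for a sequence such as
    ['reach', 'glance', 'rotate', 'rotate', 'press']."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# E.g. selecting a radio station via a rotary controller:
t = estimate_task_time(["reach", "glance", "rotate", "rotate", "press"])
```

A prototyping tool built on such a model can attach operator sequences to UI elements and report the estimated interaction time as the designer assembles the interface.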

We additionally had a demo based on a recently completed thesis by Michael Kienast. Here we looked at how speech and gestures can be combined for controlling functions in the car, such as mirror adjustments or windscreen wipers. This multimodal approach combines the strengths of gestural interaction and speech interaction [2].

The evening event of the conference was at Festung Hohensalzburg – with a magnificent view over the town!

[1] Stefan Schneegaß, Bastian Pfleging, Dagmar Kern, and Albrecht Schmidt. 2011. Support for modeling interaction with in-vehicle interfaces. In Proceedings of the 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications (http://auto-ui.org). Salzburg, 30.11.-2.12.2011.

[2] Bastian Pfleging, Michael Kienast, and Albrecht Schmidt. 2011. DEMO: A multimodal interaction style combining speech and touch interaction in automotive environments. In Adjunct Proceedings of the 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications (http://auto-ui.org). Salzburg, 30.11.-2.12.2011.

CHI 2011 in Vancouver, Keynote and Papers

In the opening keynote Howard Rheingold proclaimed that we are in a time for learners, and he outlined the possibilities that arise from the interactive media available to us. In particular he highlighted the fact that people share and link content, and to him this is at the heart of learning. One example was learning as a joint process in which contributions by students – in different forms of media – become a major resource.

I liked best his analogy about how little innovation there is in teaching: “If you take a warrior from 1000 years ago and put them on a battlefield today, they will die – quickly. If you take a surgeon from 1000 years ago and put them in a modern hospital, they will be lost. If you take a professor from 1000 years ago and put them in a university today, they will know exactly what to do.” I am not sure about the 1000 years, but with 100 years the story works just as well. In essence he argued that there is a lot of potential for new approaches to teaching and learning.

After initially agreeing, I gave it some more thought: perhaps the little change in learning and teaching shows that learning is very fundamental and technology is overrated in this domain? What is more effective than a teacher discussing an exciting topic face to face with a small group of students – perhaps even while on a walk? It reminds me of things I have read about the Greek teachers and their practices several thousand years ago … and it makes me look forward to our summer school in the Italian Alps (http://www.ferienakademie.de/).

I found the SIGCHI Lifetime Achievement Award lectures very exciting and educational. Especially the talk by Larry Tesler provided deep insight into how innovation works in user interfaces – beyond the academic environment. He talked about the “invention” of cut and paste – very enjoyable!

This year we had a number of papers describing our research in CHI:

  • Elba reported on the field study in Panama using mobile phones to enhance teaching and learning [1],
  • Ali presented work on how to increase the connectedness between people by simple means of iconic communication in the context of a sports game [2],
  • Tanja showed how touch and gestural input on a steering wheel can reduce the visual distraction for a driver [3], and
  • Gilbert (from LMU Munich) presented work on interaction with cylindrical screens [4].

The most inspiring and at the same time most controversial paper for me was PossessedHand by Tamaki, Miyaki, and Rekimoto [5]. They reported their results on using electrical muscle stimulation to move the fingers of a hand.

Bill Buxton showed his collection of input and output devices (the Buxton Collection) throughout the conference. Seeing the collection physically is really exciting, but for all who did not have the chance, there is a comprehensive online version with photos and details available at Microsoft Research: http://research.microsoft.com/en-us/um/people/bibuxton/buxtoncollection/

[1] Elba del Carmen Valderrama Bahamondez, Christian Winkler, and Albrecht Schmidt. 2011. Utilizing multimedia capabilities of mobile phones to support teaching in schools in rural panama. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 935-944. DOI=10.1145/1978942.1979081 http://doi.acm.org/10.1145/1978942.1979081

[2] Alireza Sahami Shirazi, Michael Rohs, Robert Schleicher, Sven Kratz, Alexander Müller, and Albrecht Schmidt. 2011. Real-time nonverbal opinion sharing through mobile phones during sports events. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 307-310. DOI=10.1145/1978942.1978985 http://doi.acm.org/10.1145/1978942.1978985

[3] Tanja Döring, Dagmar Kern, Paul Marshall, Max Pfeiffer, Johannes Schöning, Volker Gruhn, and Albrecht Schmidt. 2011. Gestural interaction on the steering wheel: reducing the visual demand. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 483-492. DOI=10.1145/1978942.1979010 http://doi.acm.org/10.1145/1978942.1979010

[4] Gilbert Beyer, Florian Alt, Jörg Müller, Albrecht Schmidt, Karsten Isakovic, Stefan Klose, Manuel Schiewe, and Ivo Haulsen. 2011. Audience behavior around large interactive cylindrical screens. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 1021-1030. DOI=10.1145/1978942.1979095 http://doi.acm.org/10.1145/1978942.1979095

[5] Emi Tamaki, Takashi Miyaki, and Jun Rekimoto. 2011. PossessedHand: techniques for controlling human hands using electrical muscles stimuli. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 543-552. DOI=10.1145/1978942.1979018 http://doi.acm.org/10.1145/1978942.1979018

>CHI 2011 in Vancouver, Keynote and Papers

>In the opening keynote Howard Rheingold proclaimed that we are in a time for learners and he outlined the possibilities that arise from the interactive media that is available to us. In particular he highlighted the fact that people share and link content and to him this is at the heart of learning. Learning as a joined process where contributions by students – in different forms of media – become major a resource was one example.

I best liked his analogy on how little innovation there is in teaching. “If you take a warrior from 1000 years ago on a battlefield today – they will die – quickly. If you take a surgeon from a 1000 years ago and put them in a modern hospital – they will be lost. If you take a professor from 1000 years ago and put them in a University today he will exactly know what to do. ” I am not sure about the 1000 years but it by 100 years the story works just as well. In essence he argued that there is a lot of potential for new approaches for teaching and learning.

After initially agreeing I gave it some more thoughts and perhaps the little change in learning and teaching shows that learning is very fundamental and technology is overrated in this domain? What is more effective than a teachers discussing in an exciting topic face to face with a small set of students – perhaps even while on a walk? Reminds me about things I read about the Greek teachers and there practices several thousand years ago … and it makes me looking forward to our summer school in the Italian Alps (http://www.ferienakademie.de/).

I found the SIGCHI Lifetime Achievement Award lectures very exciting and educational. Especially the talk by Larry Tesler provided deep insight into how innovation works in user interfaces – beyond the academic environment. He talked about the “invention” of cut and paste – very enjoyable!

This year we had a number of papers describing our research in CHI:

  •  Elba reported on the field study in Panama using mobile phones to enhance teaching and learning [1]
  • Ali presented work on how to increase the connectedness between people by simple means of iconic communication in the context of a sports game [2]
  • Tanja showed how touch and gestural input on a steering wheel can reduce the visual distraction for a driver [3], and
  • Gilbert (from LMU Munich) presented work on interaction with cylindrical screens [4].

The most inspiring and at the same time the most controversial paper for me was the possessed hand by Jun Rekimoto et al. [5]. He reported their results in using electro stimulation in order to move fingers of a hand.

Bill Buxton showed throughout the conference his collection of input and output devices (Buxton Collection). Seeing the collection physically is really exciting, but for all who did not have the chance there is a comprehensive online version with photos and details available at micosoft research: http://research.microsoft.com/en-us/um/people/bibuxton/buxtoncollection/

[1] Elba del Carmen Valderrama Bahamondez, Christian Winkler, and Albrecht Schmidt. 2011. Utilizing multimedia capabilities of mobile phones to support teaching in schools in rural panama. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 935-944. DOI=10.1145/1978942.1979081 http://doi.acm.org/10.1145/1978942.1979081

[2] Alireza Sahami Shirazi, Michael Rohs, Robert Schleicher, Sven Kratz, Alexander Müller, and Albrecht Schmidt. 2011. Real-time nonverbal opinion sharing through mobile phones during sports events. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 307-310. DOI=10.1145/1978942.1978985 http://doi.acm.org/10.1145/1978942.1978985

[3] Tanja Döring, Dagmar Kern, Paul Marshall, Max Pfeiffer, Johannes Schöning, Volker Gruhn, and Albrecht Schmidt. 2011. Gestural interaction on the steering wheel: reducing the visual demand. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 483-492. DOI=10.1145/1978942.1979010 http://doi.acm.org/10.1145/1978942.1979010

[4] Gilbert Beyer, Florian Alt, Jörg Müller, Albrecht Schmidt, Karsten Isakovic, Stefan Klose, Manuel Schiewe, and Ivo Haulsen. 2011. Audience behavior around large interactive cylindrical screens. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 1021-1030. DOI=10.1145/1978942.1979095 http://doi.acm.org/10.1145/1978942.1979095

[5] Emi Tamaki, Takashi Miyaki, and Jun Rekimoto. 2011. PossessedHand: techniques for controlling human hands using electrical muscles stimuli. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 543-552. DOI=10.1145/1978942.1979018 http://doi.acm.org/10.1145/1978942.1979018

CHI 2011 in Vancouver, Keynote and Papers

In the opening keynote Howard Rheingold proclaimed that we are in a time for learners and he outlined the possibilities that arise from the interactive media that is available to us. In particular he highlighted the fact that people share and link content and to him this is at the heart of learning. Learning as a joined process where contributions by students – in different forms of media – become major a resource was one example.

I liked best his analogy about how little innovation there is in teaching: “If you take a warrior from 1000 years ago and put them on a battlefield today – they will die – quickly. If you take a surgeon from 1000 years ago and put them in a modern hospital – they will be lost. If you take a professor from 1000 years ago and put them in a university today, he will know exactly what to do.” I am not sure about the 1000 years, but with 100 years the story works just as well. In essence he argued that there is a lot of potential for new approaches to teaching and learning.

After initially agreeing, I gave it some more thought: perhaps the little change in learning and teaching shows that learning is very fundamental and technology is overrated in this domain? What is more effective than a teacher discussing an exciting topic face to face with a small group of students – perhaps even while on a walk? It reminds me of things I have read about the Greek teachers and their practices several thousand years ago … and it makes me look forward to our summer school in the Italian Alps (http://www.ferienakademie.de/).

I found the SIGCHI Lifetime Achievement Award lectures very exciting and educational. The talk by Larry Tesler in particular provided deep insight into how innovation in user interfaces works – beyond the academic environment. He talked about the “invention” of cut and paste – very enjoyable!

This year we had a number of papers describing our research at CHI:

  • Elba reported on the field study in Panama using mobile phones to enhance teaching and learning [1],
  • Ali presented work on how to increase the connectedness between people by simple means of iconic communication in the context of a sports game [2],
  • Tanja showed how touch and gestural input on a steering wheel can reduce the visual distraction for the driver [3], and
  • Gilbert (from LMU Munich) presented work on interaction with cylindrical screens [4].

The most inspiring and at the same time most controversial paper for me was PossessedHand by Emi Tamaki, Takashi Miyaki, and Jun Rekimoto [5]. They reported their results on using electrical muscle stimulation to move the fingers of a hand.

Bill Buxton showed his collection of input and output devices (the Buxton Collection) throughout the conference. Seeing the collection physically is really exciting, but for everyone who did not have the chance, there is a comprehensive online version with photos and details available at Microsoft Research: http://research.microsoft.com/en-us/um/people/bibuxton/buxtoncollection/

[1] Elba del Carmen Valderrama Bahamondez, Christian Winkler, and Albrecht Schmidt. 2011. Utilizing multimedia capabilities of mobile phones to support teaching in schools in rural panama. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 935-944. DOI=10.1145/1978942.1979081 http://doi.acm.org/10.1145/1978942.1979081

[2] Alireza Sahami Shirazi, Michael Rohs, Robert Schleicher, Sven Kratz, Alexander Müller, and Albrecht Schmidt. 2011. Real-time nonverbal opinion sharing through mobile phones during sports events. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 307-310. DOI=10.1145/1978942.1978985 http://doi.acm.org/10.1145/1978942.1978985

[3] Tanja Döring, Dagmar Kern, Paul Marshall, Max Pfeiffer, Johannes Schöning, Volker Gruhn, and Albrecht Schmidt. 2011. Gestural interaction on the steering wheel: reducing the visual demand. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 483-492. DOI=10.1145/1978942.1979010 http://doi.acm.org/10.1145/1978942.1979010

[4] Gilbert Beyer, Florian Alt, Jörg Müller, Albrecht Schmidt, Karsten Isakovic, Stefan Klose, Manuel Schiewe, and Ivo Haulsen. 2011. Audience behavior around large interactive cylindrical screens. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 1021-1030. DOI=10.1145/1978942.1979095 http://doi.acm.org/10.1145/1978942.1979095

[5] Emi Tamaki, Takashi Miyaki, and Jun Rekimoto. 2011. PossessedHand: techniques for controlling human hands using electrical muscles stimuli. In Proceedings of the 2011 annual conference on Human factors in computing systems (CHI ’11). ACM, New York, NY, USA, 543-552. DOI=10.1145/1978942.1979018 http://doi.acm.org/10.1145/1978942.1979018

How will computing change the world? Our view in Computing Now.

Together with Marc Langheinrich and Kristian Kersting I wrote an article on how computing is going to change our world [1], which is featured in Computing Now. We discuss how upcoming technologies will change our perception. Among other things, we make the bold statement: “By the middle of this century, the boundaries between direct and remote perception will become blurred.”


We discuss how our perception is extended and augmented by technical means, and how this will eventually lead to a new, augmented sense of ubiquitous perception. We expect this will radically change the way we live, and hence ethical considerations are central. We make the argument that ethics becomes a major design factor. We look forward to feedback on this vision – even if you disagree.

[1] Albrecht Schmidt, Marc Langheinrich, Kristian Kersting, “Perception beyond the Here and Now,” Computer, vol. 44, no. 2, pp. 86-88, Feb. 2011, doi:10.1109/MC.2011.54 (PDF)