Our Paper and Note at CHI 2010

Over the last year we looked more closely into the potential of eye gaze for implicit interaction. Gazemarks is an approach in which the user’s gaze is continuously monitored; when the gaze leaves a screen or display, the last active gaze area is determined and stored [1]. When the user looks back at this display, that region is highlighted. In our study this reduced the time for attention switching between displays from about 2000 ms to about 700 ms. See the slides or the paper for details. This could make the difference that lets people read safely in the car… but more studies are needed before that 🙂
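To make the basic mechanism concrete, here is a minimal sketch of the Gazemarks idea (not our actual implementation): the eye-tracker callback, the region size, and the highlight function are all assumptions for illustration.

```python
# Minimal sketch of the Gazemarks idea: store the last gaze area when
# attention leaves the display, highlight it when the gaze returns.
# The tracker callback, REGION_SIZE, and the highlight callable are
# hypothetical stand-ins, not the implementation from the paper.

REGION_SIZE = 80  # assumed size (px) of the stored gaze area

def region_around(x, y, size=REGION_SIZE):
    """Rectangle centred on the last fixation before gaze left the display."""
    half = size // 2
    return (x - half, y - half, size, size)

class Gazemarks:
    def __init__(self, highlight):
        self.highlight = highlight   # callable that draws the visual placeholder
        self.last_region = None      # last active gaze area on the display
        self.on_display = True

    def on_gaze_sample(self, x, y, gaze_on_display):
        """Feed every gaze sample from the (hypothetical) eye tracker in here."""
        if self.on_display and not gaze_on_display:
            # Gaze just left the display: remember the last active gaze area.
            self.last_region = region_around(x, y)
        elif not self.on_display and gaze_on_display and self.last_region:
            # Gaze came back: show the placeholder to ease re-orientation.
            self.highlight(self.last_region)
        self.on_display = gaze_on_display

if __name__ == "__main__":
    gm = Gazemarks(highlight=lambda r: print("highlight", r))
    gm.on_gaze_sample(400, 300, True)    # looking at the display
    gm.on_gaze_sample(400, 300, False)   # glances away (e.g. at the road)
    gm.on_gaze_sample(10, 10, True)      # looks back: placeholder is shown
```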

Together with Nokia Research Center in Finland we looked at how the basic message of an incoming SMS can already be conveyed by the notification tone [2]. Try the Emodetector application for yourself or see the previous post.
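The rough idea of such an abstracted audio preview can be sketched in a few lines: scan the incoming message for cues (here, simply emoticons) and pick the notification tone accordingly. The emoticon set and tone file names below are assumptions, not the Emodetector code.

```python
# Toy sketch of an abstracted audio preview for SMS: the notification tone
# hints at the message content before the message is read.
# Emoticon sets and tone names are illustrative assumptions.

HAPPY = {":)", ":-)", ":D"}
SAD = {":(", ":-(", ":'("}

def pick_notification_tone(sms_text):
    """Return a tone that hints at the content of the incoming SMS."""
    tokens = sms_text.split()
    if any(t in HAPPY for t in tokens):
        return "tone_happy.wav"
    if any(t in SAD for t in tokens):
        return "tone_sad.wav"
    return "tone_neutral.wav"   # default tone when no cue is found

print(pick_notification_tone("See you at 8 :)"))   # -> tone_happy.wav
```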

[1] Kern, D., Marshall, P., and Schmidt, A. 2010. Gazemarks: gaze-based visual placeholders to ease attention switching. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ’10. ACM, New York, NY, 2093-2102. DOI= http://doi.acm.org/10.1145/1753326.1753646

[2] Sahami Shirazi, A., Sarjanoja, A., Alt, F., Schmidt, A., and Häkkilä, J. 2010. Understanding the impact of abstracted audio preview of SMS. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ’10. ACM, New York, NY, 1735-1738. DOI= http://doi.acm.org/10.1145/1753326.1753585

PS: the social event was at the aquarium in Atlanta – amazing creatures! I was again surprised how well the N95 camera works, even under difficult light conditions…