Our Research at CHI2012 – usable security and public displays

This year we have the chance to share some of our research with the community at CHI 2012. The work focuses on usable security ([1] and [2]) and public display systems [3]. Together with the researchers from T-Labs, Florian received a best paper award for [3].

Please have a look at the papers… I think they are really worthwhile.

Increasing the security of gaze-based graphical passwords [1]
“With computers being used ever more ubiquitously in situations where privacy is important, secure user authentication is a central requirement. Gaze-based graphical passwords are a particularly promising means for shoulder-surfing-resistant authentication, but selecting secure passwords remains challenging. In this paper, we present a novel gaze-based authentication scheme that makes use of cued-recall graphical passwords on a single image. In order to increase password security, our approach uses a computational model of visual attention to mask those areas of the image that are most likely to attract visual attention. We create a realistic threat model for attacks that may occur in public settings, such as filming the user’s interaction while drawing money from an ATM. Based on a 12-participant user study, we show that our approach is significantly more secure than a standard image-based authentication and gaze-based 4-digit PIN entry.” [1]
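The masking step described in the abstract can be sketched in a few lines. The paper assumes a full computational model of visual attention; in this toy version a simple gradient-magnitude proxy stands in for the saliency model, and the mask fraction is an invented parameter.

```python
# Toy sketch of saliency masking for a password image: the most
# "salient" pixels (here approximated by gradient magnitude) are
# blacked out so they cannot be chosen as password points.
import numpy as np

def mask_salient_regions(image, fraction=0.2):
    """Set the `fraction` most salient pixels (by gradient magnitude) to 0."""
    gy, gx = np.gradient(image.astype(float))
    saliency = np.hypot(gx, gy)
    threshold = np.quantile(saliency, 1.0 - fraction)
    masked = image.copy()
    masked[saliency >= threshold] = 0
    return masked

# Example: a flat image with one high-contrast edge; the edge gets masked.
img = np.zeros((8, 8))
img[:, 4:] = 255
print(mask_salient_regions(img).sum() < img.sum())
```

A real implementation would use an established saliency model (such as Itti-Koch) instead of raw gradients, but the thresholding logic is the same.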

Assessing the vulnerability of magnetic gestural authentication [2]

“Secure user authentication on mobile phones is crucial, as they store highly sensitive information. Common approaches to authenticate a user on a mobile phone are based either on entering a PIN, a password, or drawing a pattern. However, these authentication methods are vulnerable to the shoulder surfing attack. The risk of this attack has increased since means for recording high-resolution videos are cheaply and widely accessible. If the attacker can videotape the authentication process, PINs, passwords, and patterns do not even provide the most basic level of security. In this project, we assessed the vulnerability of a magnetic gestural authentication method to the video-based shoulder surfing attack. We chose a scenario that is favourable to the attacker. In a real world environment, we videotaped the interactions of four users performing magnetic signatures on a phone, in the presence of HD cameras from four different angles. We then recruited 22 participants and asked them to watch the videos and try to forge the signatures. The results revealed that with a certain threshold, i.e., th=1.67, none of the forging attacks was successful, whereas at this level all eligible login attempts were successfully recognized. The qualitative feedback also indicated that users found the magnetic gestural signature authentication method to be more secure than PIN-based and 2D signature methods.” [2] There is also a YouTube video: http://www.youtube.com/watch?v=vhwURyTp_jY
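The threshold-based verification mentioned in the abstract can be illustrated with a minimal sketch. Note that the actual distance measure used in [2], and whether acceptance is below or above the threshold, are assumptions here; normalized Euclidean distance with acceptance below th is used purely for illustration.

```python
# Illustrative threshold-based signature verification: a login attempt
# is accepted when its distance to the enrolled template is below th.
import math

def distance(template, attempt):
    """Normalized Euclidean distance between two equally sampled signals."""
    n = len(template)
    return math.sqrt(sum((t - a) ** 2 for t, a in zip(template, attempt)) / n)

def verify(template, attempt, th=1.67):
    return distance(template, attempt) < th

template = [0.0, 1.0, 2.0, 1.0, 0.0]
genuine  = [0.1, 1.1, 1.9, 0.9, 0.1]   # small deviations -> accepted
forgery  = [2.0, 0.0, 4.0, 3.0, 2.0]   # large deviations -> rejected
print(verify(template, genuine), verify(template, forgery))
```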

Looking glass: a field study on noticing interactivity of a shop window [3]
“In this paper we present our findings from a lab and a field study investigating how passers-by notice the interactivity of public displays. We designed an interactive installation that uses visual feedback to the incidental movements of passers-by to communicate its interactivity. The lab study reveals: (1) Mirrored user silhouettes and images are more effective than avatar-like representations. (2) It takes time to notice the interactivity (approx. 1.2s). In the field study, three displays were installed during three weeks in shop windows, and data about 502 interaction sessions were collected. Our observations show: (1) Significantly more passers-by interact when immediately showing the mirrored user image (+90%) or silhouette (+47%) compared to a traditional attract sequence with call-to-action. (2) Passers-by often notice interactivity late and have to walk back to interact (the landing effect). (3) If somebody is already interacting, others begin interaction behind the ones already interacting, forming multiple rows (the honeypot effect). Our findings can be used to design public display applications and shop windows that more effectively communicate interactivity to passers-by.” [3]

[1] Andreas Bulling, Florian Alt, and Albrecht Schmidt. 2012. Increasing the security of gaze-based cued-recall graphical passwords using saliency masks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 3011-3020. DOI=10.1145/2208636.2208712 http://doi.acm.org/10.1145/2208636.2208712
[2] Alireza Sahami Shirazi, Peyman Moghadam, Hamed Ketabdar, and Albrecht Schmidt. 2012. Assessing the vulnerability of magnetic gestural authentication to video-based shoulder surfing attacks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 2045-2048. DOI=10.1145/2208276.2208352 http://doi.acm.org/10.1145/2208276.2208352
[3] Jörg Müller, Robert Walter, Gilles Bailly, Michael Nischt, and Florian Alt. 2012. Looking glass: a field study on noticing interactivity of a shop window. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 297-306. DOI=10.1145/2207676.2207718 http://doi.acm.org/10.1145/2207676.2207718

Paper and demo in Salzburg at Auto-UI-2011

At the automotive user interface conference in Salzburg we presented some of our research. Salzburg is a really nice place and Manfred and his team did a great job organizing the conference!

Based on the Bachelor's thesis of Stefan Schneegaß and some follow-up work we published a full paper [1] that describes a KLM model for the car and a prototyping tool that makes use of the model. In the model we look at the specific needs in the car, model rotary controllers, and cater for the limited attention while driving. The prototyping tool provides means to quickly estimate interaction times. It supports visual prototyping using images of the UI and tangible prototyping using Nic Villar's VoodooIO. Looking forward to having Stefan on our team full-time 🙂
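The core of any KLM-style estimate is summing per-operator times for a given interaction sequence. The sketch below shows that idea; the operator set, the timing values, and the distraction factor are illustrative assumptions, not the values from the paper [1].

```python
# Rough sketch of a KLM-style interaction time estimate for in-car UIs.
# Hypothetical operator times in seconds (classic KLM uses similar ones).
OPERATOR_TIMES = {
    "K": 0.28,   # keystroke / button press
    "P": 1.10,   # pointing to a target
    "R": 0.50,   # turning a rotary controller by one detent
    "M": 1.35,   # mental preparation
}

def estimate_time(sequence, distraction_factor=1.0):
    """Sum operator times; a factor > 1 models reduced attention while driving."""
    return sum(OPERATOR_TIMES[op] for op in sequence) * distraction_factor

# Example: mentally prepare, point at a knob, three rotary detents, press.
print(estimate_time(["M", "P", "R", "R", "R", "K"], distraction_factor=1.5))
```

A prototyping tool like the one in the paper would attach such sequences to UI elements so designers can compare alternatives without building the full interface.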

We additionally had a demo on a recently completed thesis by Michael Kienast. Here we looked at how speech and gestures can be combined for controlling functions, such as mirror adjustments or windscreen wipers, in the car. This multimodal approach combines the strengths of gestural and speech interaction [2].

The evening event of the conference was at Festung Hohensalzburg – with a magnificent view over the town!

[1] Stefan Schneegaß, Bastian Pfleging, Dagmar Kern, Albrecht Schmidt. Support for modeling interaction with in-vehicle interfaces. (PDF) Proceedings of 3rd international conference on Automotive User Interfaces and Vehicular Applications 2011 (http://auto-ui.org). Salzburg. 30.11-2.12.2011

[2] Bastian Pfleging, Michael Kienast, Albrecht Schmidt. DEMO: A Multimodal Interaction Style Combining Speech and Touch Interaction in Automotive Environments. Adjunct proceedings of 3rd international conference on Automotive User Interfaces and Vehicular Applications 2011 (http://auto-ui.org). Salzburg. 30.11-2.12.2011

Our Paper and Note at CHI 2010

Over the last year we looked more closely into the potential of eye gaze for implicit interaction. Gazemarks is an approach where the user's gaze is continuously monitored; when the user leaves a screen or display, the last active gaze area is determined and stored [1]. When the user looks back at this display, this region is highlighted. In our study this reduced the time for attention switching between displays from about 2000 ms to about 700 ms. See the slides or paper for details. This could make the difference that enables people to safely read in the car… but before that, more studies are needed 🙂
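The store-and-highlight logic behind Gazemarks [1] can be sketched as a tiny state machine. Display names and the region representation below are illustrative; the real system works on continuous gaze coordinates from an eye tracker.

```python
# Minimal sketch of the Gazemarks idea: remember the last gaze region
# when attention leaves a display, highlight it when the user returns.
class Gazemarks:
    def __init__(self):
        self.last_region = {}   # display id -> last active gaze region
        self.current = None     # display the user is currently looking at

    def on_gaze(self, display, region):
        """Called continuously with the current gaze position."""
        highlight = None
        if display != self.current:
            # Attention switch: retrieve the stored mark for the display
            # the user returns to (None on the first visit).
            highlight = self.last_region.get(display)
            self.current = display
        self.last_region[display] = region
        return highlight  # region to highlight, or None

gm = Gazemarks()
gm.on_gaze("screen-A", (3, 2))        # looking at A
gm.on_gaze("screen-B", (0, 0))        # switch to B, no mark yet
print(gm.on_gaze("screen-A", (5, 5))) # back to A -> highlights (3, 2)
```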

Together with Nokia Research Center in Finland we looked at how we can convey the basic message of an incoming SMS already with the notification tone [2]. Try the Emodetector application for yourself or see the previous post.

[1] Kern, D., Marshall, P., and Schmidt, A. 2010. Gazemarks: gaze-based visual placeholders to ease attention switching. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ’10. ACM, New York, NY, 2093-2102. DOI= http://doi.acm.org/10.1145/1753326.1753646

[2] Sahami Shirazi, A., Sarjanoja, A., Alt, F., Schmidt, A., and Häkkilä, J. 2010. Understanding the impact of abstracted audio preview of SMS. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ’10. ACM, New York, NY, 1735-1738. DOI= http://doi.acm.org/10.1145/1753326.1753585

PS: the social event was at the aquarium in Atlanta – amazing creatures! Again surprised how well the N95 camera works even under difficult light conditions…

MUM 2009 in Cambridge, no technical solution for privacy

The 8th International Conference on Mobile and Ubiquitous Multimedia (MUM 2009) was held in Cambridge, UK. The conference is fairly specific and had an acceptance rate of about 33% – have a look at the table of contents for an overview. Florian Michahelles presented our paper on a design space for ubiquitous product recommendation systems [1]. Our work contributes a comprehensive design space that outlines design options for product recommendation systems using mobile and ubiquitous technologies. We think that over the next years mobile recommendation systems have the potential to change the way we shop in the real world. It will probably be normal to have access to in-depth information and price comparisons while browsing in physical stores. The idea has been around for a while, e.g. the Pocket Bargain Finder presented at the first Ubicomp conference [2]. In Germany we also see a reaction from some electronics stores that have asked users NOT to use a phone or camera in the shop.

The keynote on Tuesday morning was by Martin Rieser on the Art of Mobility. He blogs on this topic on http://mobileaudience.blogspot.com/.
The examples he presented in his keynote concentrated on locative and pervasive media. He characterized locative media as media created through social interaction that is linked to a specific place. He raised the awareness that mapping is very important for our perception of the world, using several different subjective maps – I particularly liked the map encoding travel times to London. A further interesting example was a project by Christian Nold: Bio Mapping – emotional mapping of journeys. QR or other bar code markers on clothes (large and on the outside) have a potential … I see this now.

In the afternoon there was a panel on “Security and Privacy: Is it only a matter of time before a massive loss of personal data or identity theft happens on a smart mobile platform?” with David Cleevely, Tim Kindberg, and Derek McAuley. I found the discussion very inspiring, but in the end I doubt more and more that technical solutions alone will solve the problem. I think it is essential to consider the technological, social, and legal framework in which we live. If I had to live in a house that provides absolute safety (without a social and legal framework), it would probably not be a very nice place… hence I think we really need interdisciplinary research in this domain.

[1] von Reischach, F., Michahelles, F., and Schmidt, A. 2009. The design space of ubiquitous product recommendation systems. In Proceedings of the 8th international Conference on Mobile and Ubiquitous Multimedia (Cambridge, United Kingdom, November 22 – 25, 2009). MUM ’09. ACM, New York, NY, 1-10. DOI= http://doi.acm.org/10.1145/1658550.1658552

[2] Brody, A. B. and Gottsman, E. J. 1999. Pocket Bargain Finder: A Handheld Device for Augmented Commerce. In Proceedings of the 1st international Symposium on Handheld and Ubiquitous Computing (Karlsruhe, Germany, September 27 – 29, 1999). H. Gellersen, Ed. Lecture Notes In Computer Science, vol. 1707. Springer-Verlag, London, 44-51.

Tangible, Embedded, and Reality-Based Interaction

Together with Antonio’s group we looked at new forms of interaction beyond the desktop. The journal paper Tangible, Embedded, and Reality-Based Interaction [1] gives an overview and examples of recent trends in human-computer interaction and is a good starting point for learning about these topics.

Abstract: Tangible, embedded, and reality-based interaction are among novel concepts of interaction design that will change our usage of computers and be part of our daily life in coming years. In this article, we present an overview of the research area of tangible, embedded, and reality-based interaction as an area of media informatics. Potentials and challenges are demonstrated with four selected case studies from our research work.

[1] Tanja Döring, Antonio Krüger, Albrecht Schmidt, Johannes Schöning: Tangible, Embedded, and Reality-Based Interaction. it – Information Technology 51 (2009) 6 , S. 319-324. (pdf)

Our PERCI Article in IEEE Internet Computing

Based on work we did together with DoCoMo Eurolabs in Munich we have published the article “Perci: Pervasive Service Interaction with the Internet of Things” in the IEEE Internet Computing special issue on the Internet of Things edited by Frédéric Thiesse and Florian Michahelles.

The paper discusses the linking of digital resources to the real world. We investigated how to augment everyday objects with RFID and Near Field Communication (NFC) tags to enable simpler ways for users to interact with services. We aim at creating digital identities of real-world objects, thereby integrating them into the Internet of Things and associating them with digital information and services. In our experiments we explore how these objects can facilitate access to digital resources and support interaction with them – for example, through mobile devices that feature technologies for discovering, capturing, and using information from tagged objects. See [1] for the full article.
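At its simplest, the tag-to-service linking described above is a registry lookup: a tag ID read from an object resolves to the object's digital identity and its associated services. The tag IDs, object names, and URLs below are invented for illustration; Perci's actual architecture (with semantic service descriptions) is considerably richer.

```python
# Hedged sketch: resolving a physical tag to its digital identity.
# All identifiers and URLs are hypothetical examples.
REGISTRY = {
    "tag:04a2b3": {
        "object": "movie-poster-42",
        "services": ["https://example.org/tickets", "https://example.org/trailer"],
    },
}

def resolve(tag_id):
    """Return the services linked to a tagged physical object, if known."""
    entry = REGISTRY.get(tag_id)
    return entry["services"] if entry else []

print(resolve("tag:04a2b3"))
```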

[1] Gregor Broll, Massimo Paolucci, Matthias Wagner, Enrico Rukzio, Albrecht Schmidt, and Heinrich Hußmann. Perci: Pervasive Service Interaction with the Internet of Things. IEEE Internet Computing. November/December 2009 (vol. 13 no. 6). pp. 74-81

Workshop on Pervasive Advertising at Informatik 2009 in Lübeck

Following our first workshop on this topic in Nara during Pervasive 2009 earlier this year we had on Friday the 2nd Pervasive Advertising Workshop in Lübeck as part of the German computer science conference Informatik 2009.

The program was interesting and very diverse. Daniel Michelis discussed in his talk how we move from an attention economy towards an engagement economy. He argued that marketing has to move beyond the AIDA(S) model and consider engagement as a central issue. In this context he introduced the notion of Calm Advertising, an interesting analogy to Calm Computing [1]. Peter van Waart talked about meaningful advertising and introduced the concept of meaningful experience – to stay with the economic terminology, consider advertising in an experience economy. For more details see the workshop webpage – the proceedings will be online soon.

Jörg Müller talked about contextual advertising and he had a nice picture of the steaming manhole coffee ad – apparently from New York – but it is not clear if it was ever deployed.

If you are interested in getting sensor data on the web – and having them also geo-referenced – you should have a look at http://www.52north.org. This is an interesting open source software system that appears quite powerful.

Florian Alt presented our work on interactive and context-aware advertising inside a taxi [2].

[1] Weiser, M., Brown, J.S.: The coming age of calm technology. (1996)

[2] Florian Alt, Alireza Sahami Shirazi, Max Pfeiffer, Paul Holleis, Albrecht Schmidt. TaxiMedia: An Interactive Context-Aware Entertainment and Advertising System (Workshop Paper). 2nd Pervasive Advertising Workshop @ Informatik 2009. Lübeck, Germany 2009.

Best papers at MobileHCI 2009

At the evening event of MobileHCI 2009 the best paper awards were presented. The best short paper was “User expectations and user experience with different modalities in a mobile phone controlled home entertainment system” [1]. There were two full papers that got a best paper award: “Sweep-Shake: finding digital resources in physical environments” [2] and “PhotoMap: using spontaneously taken images of public maps for pedestrian navigation tasks on mobile devices” [3]. We often look at the best papers of a conference to better understand what makes a good paper for this community. All three papers above are really well done and worthwhile to read.

PhotoMap [3] is a simple but very cool idea. Many of you have probably taken photos of public maps with your mobile phone (e.g. a park or city map); PhotoMap explores how to link them to real-time location data from the GPS on the device. The goal is that as you move around in the real space, a dot marks where you are on the photo you took. The implementation, however, seems not entirely simple… There is a YouTube movie on PhotoMap (there would be more movies from the evening event – but I do not link them here – the photo above gives you an idea…)

Since last year there has also been a history best paper award (most influential paper from 10 years ago). Being at the beginning of a new field sometimes pays off… I got this award for the paper on implicit interaction [4] that I presented in Edinburgh at MobileHCI 1999.

[1] Turunen, M., Melto, A., Hella, J., Heimonen, T., Hakulinen, J., Mäkinen, E., Laivo, T., and Soronen, H. 2009. User expectations and user experience with different modalities in a mobile phone controlled home entertainment system. In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 – 18, 2009). MobileHCI ’09. ACM, New York, NY, 1-4. DOI= http://doi.acm.org/10.1145/1613858.1613898

[2] Robinson, S., Eslambolchilar, P., and Jones, M. 2009. Sweep-Shake: finding digital resources in physical environments. In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 – 18, 2009). MobileHCI ’09. ACM, New York, NY, 1-10. DOI= http://doi.acm.org/10.1145/1613858.1613874

[3] Schöning, J., Krüger, A., Cheverst, K., Rohs, M., Löchtefeld, M., and Taher, F. 2009. PhotoMap: using spontaneously taken images of public maps for pedestrian navigation tasks on mobile devices. In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 – 18, 2009). MobileHCI ’09. ACM, New York, NY, 1-10. DOI= http://doi.acm.org/10.1145/1613858.1613876

[4] Albrecht Schmidt. Implicit human computer interaction through context. Personal and Ubiquitous Computing Journal, Springer Verlag London, ISSN:1617-4909, Volume 4, Numbers 2-3 / June 2000. DOI:10.1007/BF01324126, pp. 191-199 (initial version presented at MobileHCI 1999). http://www.springerlink.com/content/u3q14156h6r648h8/

Papers are all similar – Where are the tools to make writing more effective?

Yesterday we discussed (again during the evening event of MobileHCI 2009) how hard it would be to support the process of writing a high-quality research paper. In many conferences there is a well-defined style that you need to follow, specific things to include, and certain ways of presenting information. This obviously depends on the type of contribution, but within one contribution type a lot of help could probably be provided to create the skeleton of the paper… Sounds like another project idea 😉

Ethics as material for innovation – German HCI conference – Mensch und Computer

On Tuesday I was at the German human-computer interaction conference Mensch und Computer. The keynote by Alex Kirlik was on Ethical Design (slides from his talk); he showed how ethics extends beyond action to technology, leading to the central question: why should we build certain systems? His examples and the following discussion made me wonder whether ethics becomes the next material for innovation. Take his example of 9/11, where old technology (airplanes) and a different view on ethics were used to strike – in contrast to previous/typical warfare, where new technologies (e.g. gunpowder, the nuclear bomb) changed the way wars are conducted.

Considering ethics as a material for innovation is obviously risky, but looking at successful businesses of the last decade such a trend can be argued for (e.g. Google collecting information about the user to provide new services, YouTube allowing users to share content with limited assurance that it is not copyrighted). It would be interesting to have a workshop on this topic sometime in the future…

Grace, who left our group after finishing her Master’s degree (to work in the real world outside of university 😉), presented her paper on how to aid communication in the car between driver and passenger [1].

In the afternoon the working group on tangible interaction in mixed realities (in German Be-greifbare Interaktion in Gemischten Wirklichkeiten) had a workshop and a meeting. We will host the next workshop of the working group in Essen early next year (probably late February or early March).

PS: the next Mensch & Computer conference is at the University of Duisburg-Essen 🙂

[1] Grace Tai, Dagmar Kern, Albrecht Schmidt. Bridging the Communication Gap: A Driver-Passenger Video Link. Mensch und Computer 2009. Berlin.

Interact 2009, Never been to Uppsala

Uppsala in Sweden is still one of the places I have never been to – and this year I missed another chance: Interact 2009.

From our group Florian was there and presented his paper on parasitic applications for the web [1]. We also published joint work with ETH Zürich on a comparison of product identification techniques on mobile devices [2]. Heiko Drewes has submitted his PhD thesis on eye tracking for interaction, and one of the early projects he did was now published at Interact. The idea is that the mouse cursor is positioned where your eye gaze is at the moment you touch the mouse [3]. Interact 2009 was quite competitive, with an acceptance rate of 29% for research papers.

[1] Alt, F., Schmidt, A., Atterer, R., Holleis, P. 2009. Bringing Web 2.0 to the Old Web: A Platform for Parasitic Applications. Human-Computer Interaction – INTERACT 2009. 12th IFIP TC 13 International Conference, Uppsala, Sweden, August 24-28, 2009. Springer LNCS 5726. pp 405-418.

[2] von Reischach, F., Michahelles, F., Guinard, D., Adelmann, R., Fleisch, E., Schmidt, A. 2009. An Evaluation of Product Identification Techniques for Mobile Phones. Human-Computer Interaction – INTERACT 2009. 12th IFIP TC 13 International Conference, Uppsala, Sweden, August 24-28, 2009. Springer LNCS 5726. pp 804-816.

[3] Drewes, H., Schmidt, A. 2009. The MAGIC Touch: Combining MAGIC-Pointing with a Touch-Sensitive Mouse. Human-Computer Interaction – INTERACT 2009. 12th IFIP TC 13 International Conference, Uppsala, Sweden, August 24-28, 2009. Part II. Springer LNCS 5727. pp 415-428.

Maps – still the tool for navigation in the mountains

On Saturday we went to Garmisch and walked up to Höllentalklamm (a nice canyon) and had lunch at Höllentalangerhütter. Our GPS tracking data from the canyon was pretty poor (as one would expect as the canyon is in parts only a few meters wide).

Observing other hikers (especially people who did the larger tours) it was very interesting to see how maps are used in social situations – planning, discussion, reflection, and storytelling (this time n>10). It is hard to imagine how this experience can be replaced by an implementation on a mobile device.

Will we have to wait till we have 1 meter by 1 meter foldable e-ink displays with 200 dpi? Or are there other means to implement a good hiking map on a mobile phone screen? There is a lot of ongoing research in this domain. For driving I would guess the paper map has been largely replaced by electronic devices – when will it happen for hiking?

My guess is that traditional hiking maps will be the standard tool for another 10 years – obviously combined with a mobile device with GPS (e.g. phone, watch, or a specific hiking GPS). There are many ideas on how to do this – Johannes Schöning and Michael Rohs have worked on that for a while. The WIP they had at CHI is an interesting example [1] – or see the video on YouTube.

Projector phones are a hot topic – Enrico had some interesting work on interaction with projector phones at MobileHCI 2008 [2] & [3]. I would expect that in a year's time we will see quite a number of those devices on the market.

[1] Schöning, J., Rohs, M., Kratz, S., Löchtefeld, M., and Krüger, A. 2009. Map torchlight: a mobile augmented reality camera projector unit. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI EA ’09. ACM, New York, NY, 3841-3846. DOI= http://doi.acm.org/10.1145/1520340.1520581

[2] Hang, A., Rukzio, E., and Greaves, A. 2008. Projector phone: a study of using mobile phones with integrated projector for interaction with maps. In Proceedings of the 10th international Conference on Human Computer interaction with Mobile Devices and Services (Amsterdam, The Netherlands, September 02 – 05, 2008). MobileHCI ’08. ACM, New York, NY, 207-216. DOI= http://doi.acm.org/10.1145/1409240.1409263

[3] Greaves, A. and Rukzio, E. 2008. Evaluation of picture browsing using a projector phone. In Proceedings of the 10th international Conference on Human Computer interaction with Mobile Devices and Services (Amsterdam, The Netherlands, September 02 – 05, 2008). MobileHCI ’08. ACM, New York, NY, 351-354. DOI= http://doi.acm.org/10.1145/1409240.1409286

DFG Emmy Noether Meeting in Potsdam, Art, Ceilings

Meeting with other researchers that run or have run Emmy Noether research groups is very different from normal conferences and meetings. The participants are across all disciplines – from art history to zoology 😉 The meeting focuses mainly on strategic, political, personal, administrative and organizational issues when starting a research career. This year we had child care organized during the meeting and Vivien came with me to Potsdam.

On Saturday night I learned that we (our galaxy) will eventually collide with the Andromeda Galaxy (but only after our sun is out of fuel – so I do not worry too much). Vivien found this fascinating, too. Learning more about astrophysics (which looks definitely more complicated than the things I usually do) teaches me to worry less about the immediate usefulness and direct utility of research results – also in our domain.

I am fascinated by how different research can be and at the same time how similar the enthusiasm is that people have for their research. By now – being one of the old guys – I co-organized two workshops: one together with Dr. Hellfeier from the DHV on how to negotiate for a professorship, and one with Stefanie Scheu and Rainer Hirsch-Luipold on teaching and PhD supervision.

I talked to Riko Jacob (CS at TU Munich) about teaching computer science in school, and he showed me a picture of a tangible shortest-path calculator (I took a photo of the photo ;-)). Perhaps I will at some point have time to play with the installation in Munich.

On Sunday morning we took the water taxi – directly from the hotel pier – to the central train station in Potsdam. Christian Scholl from Göttingen (he does art history) took some time to show us around Sanssouci palace. After our discussion I wondered if we should consider a joint seminar between computer science/media informatics and art history – in particular, ideas related to ambient media, interactive facades, and robotic buildings would benefit from more historical awareness. There is an interesting PhD thesis on ceiling displays [1] – for a shorter version see [2]. I met Martin Tomitsch at a Ubicomp doctoral colloquium and was impressed with the idea and its grounding in history.

[1] Tomitsch M. (2008). Interactive Ceiling – Ambient Information Display for Architectural Environments. PhD Thesis, Vienna University of Technology, Austria.

[2] Tomitsch, M., Grechenig, T., Vande Moere, A. & Sheldon, R. (2008). Information Sky: Exploring Ceiling-based Data Representations. International Conference on Information Visualisation (IV08), London, UK, 100-105. http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?tp=&arnumber=4577933&isnumber=4577908

Auto-UI Conference accepts 12 full papers and 10 notes

For the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2009) we received many high-quality submissions. The review process is now complete and we accepted 12 full papers and 10 notes for oral presentation at the conference. The list of accepted contributions is online at auto-ui.org.

As a number of people have asked if they can still submit to the program, and as many of the rejected papers raise interesting aspects, we decided to add posters as a further submission category. We have a continuous submission process for poster abstracts until Sept 1st, 2009. Earlier submissions receive feedback within 2 weeks. For details see the poster call for AutomotiveUI 2009.

If you submit your poster abstract during the next week, you will get the notification before the early registration deadline, which is August 6, 2009.

The registration is open and the conference will be held in Essen, Mon/Tue 21–22 September 2009 – right after MobileHCI 2009 (which is in Bonn, just 100 km away).

Morten Fjeld visiting

On his way from Eindhoven to Zurich, Morten Fjeld visited our group. It was great to catch up and talk about a number of exciting research projects and ideas. Some years ago one of my students from Munich did his final project with Morten, working on haptic communication ideas, see [1]. Last year at TEI Morten had a paper on a related project – also using actuated sliders, see [2].

In his presentation Morten gave an overview of the research he does, and we found a joint interest in capacitive sensing. Raphael Wimmer did his final project in Munich on capacitive sensing for embedded interaction, which was published at PerCom 2007, see [3]. Raphael has continued the work; for more details and the open-source hardware and software see http://capsense.org. Morten has a cool paper (combining a keyboard and capacitive sensing) at Interact 2009 – so check the program when it is out.
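A common pattern in capacitive sensing toolkits is to derive touch events from noisy raw readings by tracking a slowly adapting baseline and reporting a touch when a reading deviates from it by more than a threshold. The sketch below illustrates that pattern only; the values and the exact processing in the toolkit from [3] are assumptions.

```python
# Baseline-plus-threshold touch detection over raw sensor readings.
def detect_touches(readings, threshold=10.0, adapt=0.05):
    """Return a 'touch'/'idle' label per reading."""
    baseline = readings[0]
    events = []
    for r in readings:
        if r - baseline > threshold:
            events.append("touch")
        else:
            events.append("idle")
            baseline += adapt * (r - baseline)  # adapt only when idle
    return events

raw = [100, 101, 100, 125, 126, 102, 100]
print(detect_touches(raw))
```

Adapting the baseline only while idle keeps a long touch from being absorbed into the baseline and disappearing.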

We talked about interaction and optical tracking, and that reminded me that we wanted to see how useful the Touchless SDK (http://www.codeplex.com/touchless) could be for final projects and exercises. Matthias Kranz had used it successfully with students in Linz in the unconventional user interfaces class.

[1] Jenaro, J., Shahrokni, A., Schrittenloher, M., and Fjeld, M. 2007. One-Dimensional Force Feedback Slider: Digital platform. In Proc. Workshop at the IEEE Virtual Reality 2007 Conference: Mixed Reality User Interfaces: Specification, Authoring, Adaptation (MRUI07), 47-51.

[2] Gabriel, R., Sandsjö, J., Shahrokni, A., and Fjeld, M. 2008. BounceSlider: actuated sliders for music performance and composition. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 – 20, 2008). TEI ’08. ACM, New York, NY, 127-130. DOI= http://doi.acm.org/10.1145/1347390.1347418

[3] Wimmer, R., Kranz, M., Boring, S., and Schmidt, A. 2007. A Capacitive Sensing Toolkit for Pervasive Activity Detection and Recognition. In Proceedings of the Fifth IEEE international Conference on Pervasive Computing and Communications (March 19 – 23, 2007). PERCOM. IEEE Computer Society, Washington, DC, 171-180. DOI= http://dx.doi.org/10.1109/PERCOM.2007.1

Some Interesting Papers and random Photos from Pervasive 2009

Pervasive 2009 had a really exciting program and provided a good overview of current research in pervasive and ubiquitous computing. Have a look at the proceedings of the pervasive 2009 conference. The Noh theater in Nara was a very special and enjoyable venue and it was organized perfectly – as one would expect when travelling to Japan.

The idea of having short and long papers together in the main track worked very well in my view. The number of demos and posters was much higher than in the years before – and that was great and very inspiring. Have a look at the photos for some of the posters and demos.
The program consisted of 20 full papers (18 pages) and 7 notes (8 pages), which were selected in a peer review process out of 147 submissions (113 full papers, 34 notes) – an acceptance rate of 18%.

John Krumm presented his paper Realistic Driving Trips for Location Privacy – again having a good idea for making the presentation interesting beyond its content (review snippets in the footer of the slides – including a fake review). The paper explores the difficulties that arise when creating fake GPS tracks. He argued that probabilities need to be taken into account (e.g. you are usually on a road). I liked the approach and the paper is worthwhile to read. I think it could be interesting to compare this with an approach that does not create the tracks but just shares them between users (e.g. other people can use parts of my track as a fake track and in return I get some tracks that I can use as fake tracks). http://dx.doi.org/10.1007/978-3-642-01516-8_4
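The "you are usually on a road" constraint can be illustrated with a simple geometric check – a fake GPS fix is only plausible if it lies near some road segment. This is my own toy sketch (made-up function names and threshold), not code from the paper:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the road segment a-b (all (x, y) tuples)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def is_plausible(point, road_segments, max_offroad=15.0):
    """A fake fix is plausible if it is within max_offroad metres of a road."""
    return min(point_segment_distance(point, a, b)
               for a, b in road_segments) <= max_offroad
```

With one straight road from (0, 0) to (100, 0), a point 5 m off the road passes, a point 40 m off fails.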

If your phone knows where you are, you can use this information to control your heating system. This was the basic idea of the research presented by Stephen Intille. They explored using the GPS location of users to automate control of the heating / air conditioning in a house. It seems there is quite some potential for saving energy with the technology typically used in the US (one temperature control for the whole house). In Europe, where heating systems typically offer finer control (e.g. at room level), the potential is probably larger.

James Scott presented a paper that showed how you can use force gestures to interact with a device. In contrast to previous research (e.g. Gummi) the approach works with a rigid device and could be used with current screen technologies.

What do you need to figure out who is holding and using the remote control? This question is addressed in the paper “Inferring Identity Using Accelerometers in Television Remote Controls” that was presented by Jeff Hightower. They looked at how well button press sequences and accelerometer data give you information about which person is using the device.

Geo-fencing: Confining Wi-Fi Coverage to Physical Boundaries is an example of creating technological solutions that fit a user’s conceptual model of the world. People have experience with the physical world and have mechanisms to negotiate and use space; linking technologies that typically have other characteristics (e.g. wireless radio coverage) to this known concept is really interesting.

Situvis, a tool for visualizing sensor data, was presented by Adrian Clear from Aaron’s group in Dublin. The software, papers and a video are available at: http://situvis.com/. The basic idea is a parallel coordinates visualization of the different sensor readings, together with mechanisms for interacting with the data.
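The core of such a parallel coordinates view is simply normalizing each sensor axis to a common range and drawing one polyline per sample across the axes. A minimal sketch of that mapping step (my own illustration, not Situvis code):

```python
def parallel_coordinates(samples, axes):
    """Map each sample (dict sensor -> value) to a polyline of
    (axis_index, normalized_value) vertices, one vertex per sensor axis."""
    # Per-axis min/max over all samples, for normalization to [0, 1].
    lo = {a: min(s[a] for s in samples) for a in axes}
    hi = {a: max(s[a] for s in samples) for a in axes}
    polylines = []
    for s in samples:
        line = []
        for i, a in enumerate(axes):
            span = hi[a] - lo[a]
            # Constant axes are centred at 0.5 to avoid division by zero.
            y = (s[a] - lo[a]) / span if span else 0.5
            line.append((i, y))
        polylines.append(line)
    return polylines
```

The resulting polylines can then be handed to any 2D drawing backend; clusters of similar sensor situations show up as bundles of near-parallel lines.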

Nathan Eagle presented the paper “Methodologies for continuous cellular tower data analysis”. He talked about the opportunities that arise when we have massive amounts of information from users – e.g. tracks from 200 million mobile phone users. It really is interesting that based on such methods we may get completely new insights into human behavior and social processes.

If you have seen a further interesting paper in the conference (and there are surely some) that I have missed feel free to give a link to them in the comments to this post.

Teaching, Technical Training Day at the EPO

Together with Rene Mayrhofer and Alexander De Luca I organized a technical training at the European Patent Office in Munich. In the lectures we made the attempt to give a broad overview of recent advances in this domain – and when preparing such a day one realizes how much there is to it…. We covered the following topics:
  • Merging the physical and digital (e.g. sentient computing and dual reality [1])
  • Interlinking the real world and the virtual world (e.g. Internet of things)
  • Interacting with your body (e.g. implants for interaction, brain computer interaction, eye gaze interaction)
  • Interaction beyond the desktop, in particular sensor based UIs, touch interaction, haptics, and Interactive surfaces
  • Device authentication with focus on spontaneity and ubicomp environments
  • User authentication, with a focus on authentication in public 
  • Location-Awareness and Location Privacy
Overall we covered probably more than 100 references – here are just a few nice ones to read: computing tiles as basic building blocks for smart environments [2], a bendable computer interface [3], a touch screen you can also touch on the back side [4], and ideas on phones as a basis for people-centric sensing [5].
[1] Lifton, J., Feldmeier, M., Ono, Y., Lewis, C., and Paradiso, J. A. 2007. A platform for ubiquitous sensor deployment in occupational and domestic environments. In Proceedings of the 6th International Conference on Information Processing in Sensor Networks (Cambridge, Massachusetts, USA, April 25 – 27, 2007). IPSN ’07. ACM, New York, NY, 119-127. DOI= http://doi.acm.org/10.1145/1236360.1236377
[2] Naohiko Kohtake, et al. u-Texture: Self-organizable Universal Panels for Creating Smart Surroundings. The 7th Int. Conference on Ubiquitous Computing (UbiComp2005), pp.19-38, Tokyo, September, 2005. http://www.ht.sfc.keio.ac.jp/u-texture/paper.html
[3] Schwesig, C., Poupyrev, I., and Mori, E. 2004. Gummi: a bendable computer. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ’04. ACM, New York, NY, 263-270. DOI= http://doi.acm.org/10.1145/985692.985726 
[4] Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., and Shen, C. 2007. Lucid touch: a seethrough mobile device. InProceedings of the 20th Annual ACM Symposium on User interface Software and Technology (Newport, Rhode Island, USA, October 07 – 10, 2007). UIST ’07. ACM, New York, NY, 269-278. DOI= http://doi.acm.org/10.1145/1294211.1294259 
[5] Campbell, A. T., Eisenman, S. B., Lane, N. D., Miluzzo, E., Peterson, R. A., Lu, H., Zheng, X., Musolesi, M., Fodor, K., and Ahn, G. 2008. The Rise of People-Centric Sensing. IEEE Internet Computing 12, 4 (Jul. 2008), 12-21. DOI= http://dx.doi.org/10.1109/MIC.2008.90  

HotMobile09: history repeats – shopping assistance on mobile devices

Comparing prices and finding the cheapest item has been a favorite application example over the last 10 years. I first saw the idea of scanning product codes and comparing them to prices in other shops (online or in the neighborhood) demonstrated in 1999 at the HUC conference. The Pocket BargainFinder [1] was a mobile device with an attached barcode reader with which you could scan books and get an online price comparison. Since then I have seen a number of examples that take this idea forward, e.g. a paper here at HotMobile [2] or the Amazon Mobile App.

The idea of making a bargain is certainly very attractive; however I think many of these applications do not take enough into account how price building works in the real world. If the consumer gets more power to compare, it can go two ways: (1) shops will become more uniform in pricing, or (2) shops will again make it harder to compare. Version (2) is more interesting 😉 and can range from not allowing the use of mobile devices in the shop (which we see in some areas at the moment) to more sophisticated pricing options (e.g. prices are lowered when you buy combinations of products or when you visit the same shop repeatedly). I am really curious how this develops – I would guess such systems will penetrate the market over the next 3 years…

[1] Adam B. Brody and Edward J. Gottsman. Pocket BargainFinder: A Handheld Device for Augmented Commerce. First International Symposium on Handheld and Ubiquitous Computing (HUC ’99), 27-29 September 1999, Karlsruhe, Germany

[2] Linda Deng, Landon Cox. LiveCompare: Grocery Bargain Hunting Through Participatory Sensing. HotMobile 2009.

Demo day at TEI in Cambridge

What is a simple and cheap way to get from Saarbrücken to Linz? It’s not really obvious, but going via Stansted/Cambridge makes sense – especially when there is the conference on Tangible and Embedded Interaction (www.tei-conf.org) and Ryanair offers a 10€ flight (not sure about sustainability, though). Sustainability, from a different perspective, was also at the center of the Monday keynote by Tom Igoe, which I missed.

Nicolas and Shahram did a great job and the choice to do a full day of demos worked out great. The large set of interactive demos presented captures and communicates a lot of the spirit of the community. To get an overview of the demos one has to read through the proceedings (I will post a link as soon as they are online in the ACM DL) as there are too many to discuss here.
Nevertheless here is my random pick:
One big topic is tangible interaction on surfaces. Several examples showed how interactive surfaces can be combined with physical artifacts to make interaction more graspable. Jan Borchers’ group showed a table with passive controls that are recognized when placed on the table and provide tangible means for interaction (e.g. keyboard keys, knobs, etc.). An interesting effect is that the labeling of the controls can be done dynamically.
Microsoft Research showed an impressive novel tabletop display that allows two images to be projected – one on the interactive surface and one on the objects above it [1]. It was presented at last year’s UIST but I have now tried it out for the first time – and it is a stunning effect. Have a look at the paper (and before you read the details, make a guess how it is implemented – at the demo most people guessed wrong 😉).
Embedding sensing into artifacts to create a digital representation has always been a topic in tangible interaction – going back to the early work of Hiroshi Ishii on Triangles [2]. One interesting example in this year’s demos was a set of cardboard pieces held together by hinges. Each hinge is technically realized as a potentiometer, and by measuring the hinge positions the structure can be determined. It is really interesting to think this further.
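The reconstruction idea fits in a few lines: read the bend angle at each hinge from its potentiometer and accumulate the angles along the chain to get each segment's pose. A 2D sketch of my own (the demo's actual code and sensor mapping are of course unknown to me):

```python
import math

def chain_positions(segment_lengths, hinge_angles_deg):
    """Compute 2D endpoint positions of a chain of flat segments joined
    by hinges; each hinge angle is the bend relative to the previous
    segment, as it would be read from that hinge's potentiometer."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    # The first segment has no preceding hinge, so its bend is 0.
    for length, bend in zip(segment_lengths, [0.0] + list(hinge_angles_deg)):
        heading += math.radians(bend)
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((round(x, 6), round(y, 6)))
    return points
```

Two unit-length pieces with a 90° hinge between them yield an L-shape – exactly the kind of digital counterpart of the folded cardboard the demo showed.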
Conferences like TEI let you inevitably think about the feasibility of programmable matter – and there is ongoing work in this in the robotics community. The idea is to create micro-robots that can create arbitrary shapes – for a starting point see the work at CMU on Claytronics.
[1] Izadi, S., Hodges, S., Taylor, S., Rosenfeld, D., Villar, N., Butler, A., and Westhues, J. 2008. Going beyond the display: a surface technology with an electronically switchable diffuser. In Proceedings of the 21st Annual ACM Symposium on User interface Software and Technology (Monterey, CA, USA, October 19 – 22, 2008). UIST ’08. ACM, New York, NY, 269-278. DOI= http://doi.acm.org/10.1145/1449715.1449760
[2] Gorbet, M. G., Orth, M., and Ishii, H. 1998. Triangles: tangible interface for manipulation and exploration of digital information topography. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 – 23, 1998). C. Karat, A. Lund, J. Coutaz, and J. Karat, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 49-56. DOI= http://doi.acm.org/10.1145/274644.274652

Two basic references for interaction beyond the desktop

Following the workshop I got a few questions on what the important papers are that one should read to start on the topic. There are many (e.g. search Google Scholar for tangible interaction, physical interaction, etc. and you will see) and there are conferences dedicated to it (e.g. Tangible and Embedded Interaction, TEI – next week in Cambridge).

But if I have to pick two, here is my choice:

[1] Ishii, H. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 – 20, 2008). TEI ’08. ACM, New York, NY, xv-xxv. DOI= http://doi.acm.org/10.1145/1347390.1347392

[2] Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., and Zigelbaum, J. 2008. Reality-based interaction: a framework for post-WIMP interfaces. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 201-210. DOI= http://doi.acm.org/10.1145/1357054.1357089

Tactile interfaces, Visit from Gordon Bolduan

This afternoon Gordon Bolduan from Technology Review was visiting the lab. We talked about haptic and tactile interfaces and showed some demos (e.g. navigation with tactile cues). 
When preparing for the visit I looked for some good examples of tactile interaction – and interestingly there is more and more work out there that has the potential to change future interfaces and means of communication. 
Recent work on connecting people [1] and [2] at the boundary between computing and design shows new options for emotional communication. 

In our work we used multiple vibration motors and explored the potential for mobile devices [3]. What to use for tactile interaction beyond vibration is one obvious question, and I find the paper by Kevin Li [4] a good starting point for more ideas.
When talking about human computer interaction that includes stroking, tapping and rubbing, an association with erotic and sexual interactions seems obvious; and there is more to that – if you are curious just search for teledildonics and you will find interesting commercial products as well as a lot of DIY ideas.
[1] Eichhorn, E., Wettach, R., and Hornecker, E. 2008. A stroking device for spatially separated couples. In Proceedings of the 10th international Conference on Human Computer interaction with Mobile Devices and Services (Amsterdam, The Netherlands, September 02 – 05, 2008). MobileHCI ’08. ACM, New York, NY, 303-306. DOI= http://doi.acm.org/10.1145/1409240.1409274 
[2] Werner, J., Wettach, R., and Hornecker, E. 2008. United-pulse: feeling your partner’s pulse. In Proceedings of the 10th international Conference on Human Computer interaction with Mobile Devices and Services (Amsterdam, The Netherlands, September 02 – 05, 2008). MobileHCI ’08. ACM, New York, NY, 535-538. DOI= http://doi.acm.org/10.1145/1409240.1409338 
[3] Alireza Sahami, Paul Holleis, Albrecht Schmidt, Jonna Häkkilä: Rich Tactile Output on Mobile Devices. European Conference on Ambient Intelligence (Ami’08). Springer LNCS Nürnberg 2008, S. 210-221. DOI= http://dx.doi.org/10.1007/978-3-540-89617-3_14
[4] Li, K. A., Baudisch, P., Griswold, W. G., and Hollan, J. D. 2008. Tapping and rubbing: exploring new dimensions of tactile feedback with voice coil motors. In Proceedings of the 21st Annual ACM Symposium on User interface Software and Technology (Monterey, CA, USA, October 19 – 22, 2008). UIST ’08. ACM, New York, NY, 181-190. DOI= http://doi.acm.org/10.1145/1449715.1449744

Privacy – will our understanding change radically?

As one issue this morning we came across questions related to privacy. In particular it seems that social network analysis based on behavior in the real world (e.g. the reality mining project [1]) is creating serious interest beyond the technology people. Beyond measuring the frequency of encounters, qualifying the way people interact (dominance, emotion, …) will reveal even more about social networks… 

In our discussion I made a reference to a book: “The Transparent Society” by David Brin. Even though it is now nearly 10 years since it was first published, I still think it is an interesting starting point for a privacy discussion.

[1] Eagle, N. and (Sandy) Pentland, A. 2006. Reality mining: sensing complex social systems. Personal Ubiquitous Comput. 10, 4 (Mar. 2006), 255-268. DOI= http://dx.doi.org/10.1007/s00779-005-0046-3 

[2] The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom? David Brin, Basic Books (June 1, 1999). At Amazon

Trip to Dublin, Aaron’s Display Project

Visiting Dublin is always a pleasure – even if the weather is rainy. Most of the day I was at Trinity College reading master theses (which is the second best part of being external examiner, best part is to have lunch at the 1592 😉
In the evening I met with Aaron Quigley and we talked about some ongoing display and advertising projects in our groups. He told me about one of their recent workshop papers [1] on public displays in which they investigated what people take in and what people remember of the content on displays in an academic environment. It is available online in the workshop proceedings of AIS08 [2]. I found it worthwhile to browse the whole workshop proceedings.
[1] Rashid U. and Quigley A., “Ambient Displays in Academic Settings: Avoiding their Underutilization”, Ambient Information Systems Workshop at UbiComp 2008, September 21, Seoul, South Korea (download [2], see page 26 ff)

My Random Papers Selection from Ubicomp 2008

Over the last days there were a number of interesting papers presented and so it is not easy to pick a selection… Here is my random paper selection from Ubicomp 2008 that link to our work (the conference papers link into the ubicomp 2008 proceedings in the ACM DL, our references are below):

Don Patterson presented a survey on using IM. One of the findings surprised me: people seem to ignore “busy” settings. In some work we did in 2000 on mobile availability and sharing context, users indicated that they would respect this, or at least explain themselves when interrupting someone who is busy [1,2] – perhaps it is a cultural difference or people have changed. It may be interesting to run a similar study in Germany.

Woodman and Harle from Cambridge presented a pedestrian localization system for large indoor environments. Using an XSens device they combine dead reckoning with knowledge gained from a 2.5D map. In their experiment they seem to get results similar to an Active Bat system – by only putting the device on the user (which, for large buildings, is much cheaper than putting up infrastructure).
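The dead-reckoning core of such a system boils down to integrating an assumed step length along the sensed heading for each detected step; the map knowledge then prunes impossible tracks (e.g. steps through walls). A toy sketch of the integration step only – my own illustration with an assumed fixed step length, not the authors' implementation:

```python
import math

def dead_reckon(start, step_headings_deg, step_length=0.7):
    """Integrate one fixed-length step per detected footfall along the
    sensed heading (degrees clockwise from north) into a 2D track."""
    x, y = start
    track = [(x, y)]
    for heading_deg in step_headings_deg:
        h = math.radians(heading_deg)
        x += step_length * math.sin(h)  # east component
        y += step_length * math.cos(h)  # north component
        track.append((round(x, 3), round(y, 3)))
    return track
```

The drift this accumulates is exactly why fusing it with a 2.5D map (or an infrastructure system like Active Bat) matters for large buildings.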
Andreas Bulling presented work in which he explored the use of EOG goggles for context awareness and interaction. The EOG approach is complementary to video-based systems. The use of gestures for context-awareness follows a similar idea as our work on eye gestures [3]. We had an interesting discussion about further ideas and perhaps there is a chance in the future to directly compare the approaches and work together.
The paper “On using existing time-use study data for ubiquitous computing applications” gave links to interesting public data sets (e.g. the US time-use survey). The time-use survey data covers the US and gives detailed data on how people use their time.
The University of Salzburg presented initial work on an augmented shopping system that builds on the idea of implicit interaction [4]. In the note they report a study in which they used 2 cameras to observe a shopping area and calculated the “busy spots” in the area. Additionally they used sales data to find the best-selling products. Everything was displayed on a public screen; an interesting result was that people seemed not really interested in other shoppers’ behavior… (in contrast to what we observe in e-commerce systems).
Researchers from Hitachi presented a new idea for browsing and navigating content based on the metaphor of using a book. It is based on the concept of a bendable surface. It complements, in an interesting way, previous work in this domain called Gummi, presented at CHI 2004 by Schwesig et al.
[1] Schmidt, A., Takaluoma, A., and Mäntyjärvi, J. 2000. Context-Aware Telephony Over WAP. Personal Ubiquitous Comput. 4, 4 (Jan. 2000), 225-229. DOI= http://dx.doi.org/10.1007/s007790070008
[2] Albrecht Schmidt, Tanjev Stuhr, Hans Gellersen. Context-Phonebook – Extending Mobile Phone Applications with Context. Proceedings of Third Mobile HCI Workshop, September 2001, Lille, France.
[3] Heiko Drewes, Albrecht Schmidt. Interacting with the Computer using Gaze Gestures. Proceedings of INTERACT 2007.
[4] Albrecht Schmidt. Implicit Human Computer Interaction Through Context. Personal Technologies, Vol 4(2), June 2000

Some random papers from Mobile HCI 2008

During mobile HCI I came across many interesting things (that is why one goes to conferences 😉 here is a selection of papers to look at – if you have more time it is worthwhile to look at the whole proceedings of mobile HCI 2008 in the ACM DL.

Gauntlet: a wearable interface for ubiquitous gaming – exploring a new gaming UI for gestures.

Mobile phones as artifacts children use in their games are discussed. Shows again how creative children are 😉

An Investigation into round touch screen Wristwatch interaction – interesting topic and good example how to do a small study. Ideas to create a tactile rim, e.g. 2 parts moving to have different tactile cues, were brought up in the discussion.

Programming with children – taking programming into the environment, away from the computer; relates to Tangible User Interfaces

Projector phone: a study of using mobile phones with integrated projector for interaction with maps

Interaction based on speech seems possible – even in a noisy environment – the paper reports interesting preliminary results in the context of a fishing boat. Interesting in-situ tests (e.g. platform in a wave tank)

Wearable computing user interfaces: where should we put the controls and what functions do users expect?

Learning-oriented vehicle navigation systems: a preliminary investigation in a driving simulator

Enrico Rukzio followed up the work from Munich pushing the idea of touch interaction with NFC devices further.

Color matching using a mobile phone. The idea is to use a color chart: take a photo of the face together with the color chart and send it via MMS to a server; the server processes the image and looks up the color match, replying by SMS; no software installation, only MMS and SMS are used. Applications in cosmetics are discussed.
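The server-side lookup step could be as simple as a nearest-neighbor match in RGB space over a product chart; a hedged sketch of my own (made-up chart values – the real system's matching, with the reference chart in the photo for calibration, is surely more sophisticated):

```python
def match_color(sample_rgb, chart):
    """Return the name of the chart entry whose RGB value is closest to
    the sampled tone (squared Euclidean distance in RGB space)."""
    def dist2(name):
        return sum((a - b) ** 2 for a, b in zip(chart[name], sample_rgb))
    return min(chart, key=dist2)
```

Given a small chart, the matched name is what the server would send back by SMS.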

Using Second Life to demonstrate a concept automobile heads up display (A-HUD)

Paul Holleis presented our paper on Wearable Controls

Last year Paul did an internship at Nokia in Finland. He worked there on the integration of capacitive sensors in phones and clothing. After Paul was back we jointly followed up on the topic, which resulted in an interesting set of guidelines for placing wearable controls [1].

The paper gives a good overview of wearable computing and interaction with wearable computers. In the work we focused on integrating touch-sensitive controls into garments and accessories for operating the music player integrated in a phone. The study showed that there are prime locations for placing controls on the body: the right hip and above the right knee (for more details see the paper [1]). It furthermore showed that users have no clear expectations of how functions (e.g. forward, backward, volume up/down) map to controls laid out on the clothes.

During his internship he also did research on integrating touch into buttons, which was published at Tangible and Embedded Interaction 2008 [2].

[1] Holleis, P., Schmidt, A., Paasovaara, S., Puikkonen, A., and Häkkilä, J. 2008. Evaluating capacitive touch input on clothes. In Proceedings of the 10th international Conference on Human Computer interaction with Mobile Devices and Services (Amsterdam, The Netherlands, September 02 – 05, 2008). MobileHCI ’08. ACM, New York, NY, 81-90. DOI= http://doi.acm.org/10.1145/1409240.1409250

[2] Paul Holleis, Jonna Häkkilä, Jussi Huhtala. Studying Applications for Touch-Enabled Mobile Phone Keypads. Proceedings of the 2nd Tangible and Embedded Interaction Conference TEI’08. February 2008.

Andrew Greaves presents a study on photo browsing using projector phones

Since Enrico Rukzio (my first PhD student) went to Lancaster he has discovered and advanced a very exciting topic for mobile interaction: mobile projectors / projector phones. His group has a great presence at this year’s Mobile HCI (3 demonstrations, 2 short papers, 2 full papers, a workshop). In time for the conference the first projector phone appeared on the market (Cking Epoq EGP-PP01) – as if to highlight the timeliness of the work.

The mobile projector study [1] revealed several interesting aspects: 1) it is faster to browse on the phone screen than using the projector, 2) users do a lot of context switches between projection and device – even when nothing is displayed on the screen, 3) users see great value in it (even if they may be slower). I am really looking forward to further results in this area. It may significantly change the way we use mobile phones!

PS: seeing Enrico watch his student present, I remember how exciting it is for a supervisor to just watch…

[1] Andrew Greaves, Enrico Rukzio. Evaluation of Picture Browsing using a Projector Phone. 10th International Conference on Human-Computer Interaction with Mobile Devices and Services (Mobile HCI 2008). Amsterdam, Netherlands. 2-5 September 2008.

GIST, Gwangju, Korea

Yesterday I arrived in Gwangju for the ISUVR-2008. It is my first time in Korea and it is an amazing place. Together with some of the other invited speakers and PhD students we went for a Korean style dinner (photos from the dinner). The campus (photos from the campus) is large and very new.

This morning we had the opportunity to see several demos from Woontack’s students in the U-VR lab. There is a lot of work on haptics and mobile augmented reality going on. See the pictures of the open lab demo for yourself…

In the afternoon we had some time for culture and sightseeing – the country side parks are very different from Europe. Here are some of the photos of the trip around Gwangju and see http://www.damyang.go.kr/

In 2005 Yoosoo Oh, a PhD student with Woontack Woo at GIST, was a visiting student in our lab in Munich. We worked together on issues related to context awareness and published a paper together discussing the whole design cycle and in particular the evaluation (based on a heuristic approach) of context-aware systems [1].

[1] Yoosoo Oh, Albrecht Schmidt, Woontack Woo: Designing, Developing, and Evaluating Context-Aware Systems. MUE 2007: 1158-1163

Photos – ISUVR2008 – GIST – Korea

Visual aid for navigation – using human image processing

While browsing the Equator website I came again across an interesting publication – which I had seen two years ago at MobileHCI – in the domain of pedestrian navigation [1]. The basic idea is to use a collection of geo-tagged photos to provide visual cues about the direction in which people should go, e.g. “walk towards this building”. This is an interesting application linking two concepts we discussed in the part on location in my lecture on pervasive computing. It follows the approach of augmenting the user such that the user does what he does well (e.g. matching visual images) and the computer does what it does well (e.g. acquiring GPS location, finding pictures related to a location in a DB).
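The computer's half of that split – finding the picture related to the current location – can be sketched as a nearest-neighbor lookup over geo-tagged photos. My own illustration (made-up photo records; real systems would also consider heading and visibility):

```python
import math

def nearest_photo(lat, lon, photos):
    """Pick the geo-tagged photo closest to the user's GPS fix, using an
    equirectangular approximation (fine over the short distances involved
    in pedestrian navigation)."""
    def dist2(p):
        dlat = math.radians(p['lat'] - lat)
        dlon = math.radians(p['lon'] - lon) * math.cos(math.radians(lat))
        return dlat * dlat + dlon * dlon
    return min(photos, key=dist2)
```

The chosen photo is then shown to the user, who does the visual matching ("walk towards this building") that humans do well.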

[1] Beeharee, A. K. and Steed, A. 2006. A natural wayfinding exploiting photos in pedestrian navigation systems. In Proceedings of the 8th Conference on Human-Computer interaction with Mobile Devices and Services (Helsinki, Finland, September 12 – 15, 2006). MobileHCI ’06, vol. 159. ACM, New York, NY, 81-88. DOI= http://doi.acm.org/10.1145/1152215.1152233

Impressions from Pervasive 2008

Using electrodes to detect eye movement and to detect reading [1] – relates to Heiko’s work but uses a different sensing technique. If the system can really be implemented in goggles this would be a great technology for eye gestures as suggested in [2].

Utilizing infrastructures that are already in place for activity sensing – the example is a heating/air conditioning/ventilation system [3]. I wondered and put forward the question how well this would work in active mode – where one actively creates an airflow (using the already installed system) to detect the state of the environment.

Further interesting ideas:

  • Communicate while you sleep? Air pillow communication… Vivien loves the idea [4].
  • A camera with additional sensors [5] – really interesting! We had in Munich a student project that looked at something similar [6]
  • A cool vision video of the future is S-ROOM – everything gets a digital counterpart. It communicates the idea of ubicomp in a great and fun way [7] – not sure if the video is online – it is on the conference DVD.

[1] Robust Recognition of Reading Activity in Transit Using Wearable Electrooculography. Andreas Bulling, Jamie A. Ward, Hans-W. Gellersen and Gerhard Tröster. Proc. of the 6th International Conference on Pervasive Computing (Pervasive 2008), pp. 19-37, Sydney, Australia, May 2008. http://dx.doi.org/10.1007/978-3-540-79576-6_2

[2] Heiko Drewes, Albrecht Schmidt. Interacting with the Computer using Gaze Gestures. Proceedings of INTERACT 2007. http://murx.medien.ifi.lmu.de/~albrecht/pdf/interact2007-gazegestures.pdf

[3] Shwetak N. Patel, Matthew S. Reynolds, Gregory D. Abowd: Detecting Human Movement by Differential Air Pressure Sensing in HVAC System Ductwork: An Exploration in Infrastructure Mediated Sensing. Proc. of the 6th International Conference on Pervasive Computing (Pervasive 2008), pp. 1-18, Sydney, Australia, May 2008. http://shwetak.com/papers/air_ims_pervasive2008.pdf

[4] Satoshi Iwaki et al. Air-pillow telephone: A pillow-shaped haptic device using a pneumatic actuator (Poster). Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/LBR/lbr11.pdf

[5] Katsuya Hashizume, Kazunori Takashio, Hideyuki Tokuda. exPhoto: a Novel Digital Photo Media for Conveying Experiences and Emotions. Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/Demo/d4.pdf

[6] P. Holleis, M. Kranz, M. Gall, A. Schmidt. Adding Context Information to Digital Photos. IWSAWC 2005. http://www.hcilab.org/documents/AddingContextInformationtoDigitalPhotos-HolleisKranzGallSchmidt-IWSAWC2005.pdf

[7] S-ROOM: Real-time content creation about the physical world using sensor network. Takeshi Okadome, Yasue Kishino, Takuya Maekawa, Kouji Kamei, Yutaka Yanagisawa, and Yasushi Sakurai. Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/Video/v2.pdf