My Random Paper Selection from Ubicomp 2008

Over the last few days a number of interesting papers were presented, so it is not easy to pick a selection… Here is my random selection of Ubicomp 2008 papers that link to our work (the conference papers link into the Ubicomp 2008 proceedings in the ACM DL; our references are below):

Don Patterson presented a survey on using IM. One of the findings surprised me: people seem to ignore “busy” settings. In some work we did in 2000 on mobile availability and sharing context, users indicated that they would respect this setting, or at least explain themselves when interrupting someone who is busy [1,2] – perhaps it is a cultural difference, or people have changed. It may be interesting to run a similar study in Germany.

Woodman and Harle from Cambridge presented a pedestrian localization system for large indoor environments. Using an XSens device, they combine dead reckoning with knowledge gained from a 2.5D map. In the experiment they seem to get results similar to an Active Bat system – by only putting the device on the user (which is, for large buildings, much cheaper than putting up infrastructure).
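
To illustrate the basic principle (just a sketch, not the authors' implementation): dead reckoning advances a position estimate step by step from heading and step length, and the map knowledge is then used to correct the drift that inevitably accumulates. The step length and headings below are made up.

```python
import math

# Minimal dead-reckoning sketch: each detected step advances the position
# estimate along the current heading. An inertial unit such as the XSens
# would supply the step events and headings; map constraints would correct
# the accumulating drift (not shown here).

def dead_reckon(position, heading_rad, step_length_m=0.7):
    """Advance a 2D position estimate by one step along the heading."""
    x, y = position
    return (x + step_length_m * math.cos(heading_rad),
            y + step_length_m * math.sin(heading_rad))

# Example: three steps walking roughly north-east.
pos = (0.0, 0.0)
for heading in (0.78, 0.80, 0.79):  # headings in radians, from the IMU
    pos = dead_reckon(pos, heading)
print(pos)
```
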
Andreas Bulling presented work in which he explored the use of EOG goggles for context awareness and interaction. The EOG approach is complementary to video-based systems. The use of gestures for context awareness follows a similar idea as our work on eye gestures [3]. We had an interesting discussion about further ideas and perhaps there is a chance in the future to directly compare the approaches and work together.
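
To sketch the shared idea (a toy example with made-up signal conventions and thresholds, not Andreas' algorithm): saccades show up as step changes in the horizontal and vertical EOG channels, and a sequence of saccade directions can then be matched as a gesture.

```python
# Toy EOG gesture sketch: map changes in the two EOG channels to direction
# tokens and collapse them into a gesture string. Sign conventions, units
# and the threshold are arbitrary here.

def saccade_direction(dh, dv, threshold=1.0):
    """Map a change in the horizontal/vertical channels to a direction."""
    if abs(dh) < threshold and abs(dv) < threshold:
        return None  # below threshold: treat as fixation noise
    if abs(dh) >= abs(dv):
        return "R" if dh > 0 else "L"
    return "U" if dv > 0 else "D"

def to_gesture(samples):
    """Collapse consecutive identical direction tokens into a string."""
    tokens = []
    for dh, dv in samples:
        d = saccade_direction(dh, dv)
        if d and (not tokens or tokens[-1] != d):
            tokens.append(d)
    return "".join(tokens)

# A clockwise rectangle, e.g. around a dialog box, reads as "RDLU".
print(to_gesture([(2.0, 0.1), (0.1, -2.0), (-2.2, 0.0), (0.0, 2.1)]))
```
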
In one paper, “On using existing time-use study data for ubiquitous computing applications”, links to interesting public data sets were given (e.g. the US time-use survey). The time-use survey data covers the US and gives detailed data on how people use their time.

Researchers from the University of Salzburg presented initial work on an augmented shopping system that builds on the idea of implicit interaction [4]. In the note they report a study in which they used two cameras to observe a shopping area and calculated the “busy spots” in the area. Additionally, they used sales data to determine the best-selling products. Everything was displayed on a public screen; an interesting result was that people seemed not really interested in other shoppers’ behavior… (in contrast to what we observe in e-commerce systems).
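
The “busy spots” computation could look roughly like this (a hypothetical sketch; the note does not spell out their algorithm): divide the observed area into grid cells and count how often people are detected in each cell.

```python
from collections import Counter

def busy_spots(detections, cell_size=1.0, top_n=3):
    """detections: (x, y) positions of people seen by the cameras;
    returns the most frequently visited grid cells."""
    counts = Counter((int(x // cell_size), int(y // cell_size))
                     for x, y in detections)
    return counts.most_common(top_n)

# Example: positions accumulated over time; cell (0, 0) comes out busiest.
print(busy_spots([(0.2, 0.4), (0.5, 0.1), (0.8, 0.7), (3.1, 2.2)]))
```
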
Researchers from Hitachi presented a new idea for browsing and navigating content based on the metaphor of using a book. It is based on the concept of a bendable surface. It complements in an interesting way previous work in this domain, Gummi, presented at CHI 2004 by Schwesig et al.
[1] Schmidt, A., Takaluoma, A., and Mäntyjärvi, J. 2000. Context-Aware Telephony Over WAP. Personal Ubiquitous Comput. 4, 4 (Jan. 2000), 225-229. DOI= http://dx.doi.org/10.1007/s007790070008
[2] Albrecht Schmidt, Tanjev Stuhr, Hans Gellersen. Context-Phonebook – Extending Mobile Phone Applications with Context. Proceedings of Third Mobile HCI Workshop, September 2001, Lille, France.
[3] Heiko Drewes, Albrecht Schmidt. Interacting with the Computer using Gaze Gestures. Proceedings of INTERACT 2007.
[4] Albrecht Schmidt. Implicit Human Computer Interaction Through Context. Personal Technologies, Vol. 4(2), June 2000.

Impressions from Pervasive 2008

Using electrodes to detect eye movement and to detect reading [1] – this relates to Heiko’s work but uses a different sensing technique. If the system can really be implemented in goggles, this would be a great technology for eye gestures as suggested in [2].

Utilizing infrastructure that is already in place for activity sensing – the example is a heating/air-conditioning/ventilation (HVAC) system [3]. I wondered, and put forward the question, how well this would work in active mode – where you actively create an airflow (using the already installed system) to detect the state of an environment.
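
In passive mode the idea of [3] is roughly the following (a toy sketch with made-up numbers, not their classifier): events such as a door opening or a person passing a doorway show up as transient spikes in the differential air pressure measured in the ductwork.

```python
def detect_events(pressure_samples, threshold_pa=0.5):
    """Flag sample indices where the pressure change exceeds a threshold."""
    events = []
    for i in range(1, len(pressure_samples)):
        if abs(pressure_samples[i] - pressure_samples[i - 1]) > threshold_pa:
            events.append(i)
    return events

# The spike at index 3 shows up as two crossings (rise and fall).
print(detect_events([0.0, 0.05, 0.02, 0.9, 0.1, 0.04]))  # -> [3, 4]
```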

Further interesting ideas:

  • Communicate while you sleep? Air pillow communication… Vivien loves the idea [4].
  • A camera with additional sensors [5] – really interesting! We had a student project in Munich that looked at something similar [6].
  • A cool vision video of the future is S-ROOM – everything gets a digital counterpart. It communicates the idea of ubicomp in a great and fun way [7] – not sure if the video is online; it is on the conference DVD.

[1] Robust Recognition of Reading Activity in Transit Using Wearable Electrooculography. Andreas Bulling, Jamie A. Ward, Hans-W. Gellersen and Gerhard Tröster. Proc. of the 6th International Conference on Pervasive Computing (Pervasive 2008), pp. 19-37, Sydney, Australia, May 2008. http://dx.doi.org/10.1007/978-3-540-79576-6_2

[2] Heiko Drewes, Albrecht Schmidt. Interacting with the Computer using Gaze Gestures. Proceedings of INTERACT 2007. http://murx.medien.ifi.lmu.de/~albrecht/pdf/interact2007-gazegestures.pdf

[3] Shwetak N. Patel, Matthew S. Reynolds, Gregory D. Abowd: Detecting Human Movement by Differential Air Pressure Sensing in HVAC System Ductwork: An Exploration in Infrastructure Mediated Sensing. Proc. of the 6th International Conference on Pervasive Computing (Pervasive 2008), pp. 1-18, Sydney, Australia, May 2008. http://shwetak.com/papers/air_ims_pervasive2008.pdf

[4] Satoshi Iwaki et al. Air-pillow telephone: A pillow-shaped haptic device using a pneumatic actuator (Poster). Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/LBR/lbr11.pdf

[5] Katsuya Hashizume, Kazunori Takashio, Hideyuki Tokuda. exPhoto: a Novel Digital Photo Media for Conveying Experiences and Emotions. Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/Demo/d4.pdf

[6] P. Holleis, M. Kranz, M. Gall, A. Schmidt. Adding Context Information to Digital Photos. IWSAWC 2005. http://www.hcilab.org/documents/AddingContextInformationtoDigitalPhotos-HolleisKranzGallSchmidt-IWSAWC2005.pdf

[7] S-ROOM: Real-time content creation about the physical world using sensor network. Takeshi Okadome, Yasue Kishino, Takuya Maekawa, Kouji Kamei, Yutaka Yanagisawa, and Yasushi Sakurai. Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/Video/v2.pdf

Our Papers at Interact 2007

Heiko Drewes and Richard Atterer, colleagues from the University of Munich, have travelled to Interact 2007. Their emails indicate that the conference is at a most interesting place this year: Rio de Janeiro, directly at the Copacabana. The conference was highly competitive and we are happy to have two papers to present there.

Heiko presents a paper that shows that eye gestures can be used to interact with a computer. In his experiments he shows that users can learn gestures performed with the eyes (basically moving the eyes in a certain pattern, e.g. following the outline of a dialog box). The paper is part of his PhD research on eye tracking for interaction. More details are in:

Heiko Drewes, Albrecht Schmidt. Interacting with the Computer using Gaze Gestures. Proceedings of INTERACT 2007.
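
To give a flavour of the gesture idea (a minimal sketch, not the recognition algorithm from the paper; coordinates and the threshold are made up): successive gaze positions are turned into direction strokes once the eye has moved far enough, and the resulting string can be compared against known gestures.

```python
import math

def strokes(points, min_dist=50.0):
    """Convert gaze points (screen pixels) into a string of U/D/L/R strokes."""
    out = []
    sx, sy = points[0]
    for x, y in points[1:]:
        dx, dy = x - sx, y - sy
        if math.hypot(dx, dy) < min_dist:
            continue  # still within the same fixation region
        if abs(dx) >= abs(dy):
            stroke = "R" if dx > 0 else "L"
        else:
            stroke = "D" if dy > 0 else "U"  # screen y grows downwards
        if not out or out[-1] != stroke:
            out.append(stroke)
        sx, sy = x, y
    return "".join(out)

# Following a dialog outline clockwise yields "RDLU".
print(strokes([(0, 0), (200, 0), (200, 150), (0, 150), (0, 0)]))
```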

Richard’s paper is on collaboration support with a proxy-based approach. Building on our previous work on UsaProxy, we extended the functionality to support synchronous communication while using the Web:

Richard Atterer, Albrecht Schmidt, and Monika Wnuk. A Proxy-Based Infrastructure for Web Application Sharing and Remote Collaboration on Web Pages. Proceedings of INTERACT 2007.
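
The principle is easy to sketch (UsaProxy itself is a Java HTTP proxy; this is only an illustration, and the script URL is made up): every HTML page that passes through the proxy gets a small script injected, which can then report events and keep the collaborating browsers in sync.

```python
def inject_script(html, script_url="/collab/client.js"):
    """Insert a <script> tag so the page loads the collaboration client."""
    tag = '<script src="%s"></script>' % script_url
    if "<head>" in html:
        return html.replace("<head>", "<head>" + tag, 1)
    return tag + html  # fallback: prepend if no <head> is present

page = "<html><head><title>Demo</title></head><body>Hi</body></html>"
print(inject_script(page))
```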