Keynote by Pertti Huuskonen: Ten Views to Context Awareness

Pertti Huuskonen from Nokia presented his keynote at Percom in Mannheim. I worked with Pertti in 1999 on the European project TEA, creating context-aware phones [1].

After telling us about CERN and some achievements in physics, he raised the issue that an essential human skill is context-awareness. Basically, culture is context-awareness – learning how to behave appropriately in life is essential to being accepted. We do this by looking at other people, learning how they act and how others react. By “knowing how to behave” we become fit for social life – and this questions the notion of intuitive use, as it seems that most of it is learned or copied from others.

He gave a nice overview of where context-awareness is useful. One very simple example he showed is that people typically establish context at the start of a phone call.

One example of a future to come may be ubiquitous spam – where context may be the enabler, but also the enabler for blocking adverts. He also showed the potential of context in the large, see Nokoscope. His keynote was refreshing – and as was clearly visible, he has a good sense of humor 😉

[1] Schmidt, A., Aidoo, K. A., Takaluoma, A., Tuomela, U., Laerhoven, K. V., and Velde, W. V. 1999. Advanced Interaction in Context. In Proceedings of the 1st international Symposium on Handheld and Ubiquitous Computing (Karlsruhe, Germany, September 27 – 29, 1999). H. Gellersen, Ed. Lecture Notes In Computer Science, vol. 1707. Springer-Verlag, London, 89-101.

Sensor modules for acceleration, gyro, and magnetic field

I came across two sensor modules recently released by STMicroelectronics.

In the future there will probably be very few mobile devices without such sensors. When we worked on the TEA project in 1999, this seemed far away.
What can you do with sensors on the mobile? There are a few papers to read: using them for context awareness [1], for interaction [2], [3], and for creating smart devices [4].
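To make this concrete, here is a minimal sketch of the kind of thing a 3-axis accelerometer enables – mapping a raw sample to a coarse device orientation, in the spirit of the sensing-for-interaction work in [2]. The threshold and the example readings are purely illustrative:

```python
def classify_orientation(ax, ay, az, threshold=0.8):
    """Map a 3-axis accelerometer sample (in g) to a coarse orientation.

    A simple rule-based sketch: whichever axis dominates (close to
    +/- 1 g under gravity) determines how the device is held. The
    0.8 g threshold is an illustrative choice, not a tuned value.
    """
    if az > threshold:
        return "face-up"
    if az < -threshold:
        return "face-down"
    # Gravity mostly in the screen plane: compare the x and y axes.
    if abs(ay) > abs(ax):
        return "portrait" if ay > 0 else "portrait-inverted"
    return "landscape"

# Device lying flat on a table: gravity is on the z axis.
print(classify_orientation(0.02, -0.05, 0.99))  # face-up
```

On a real device one would low-pass filter the samples first, since hand tremor and motion make single readings noisy.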

Last week in Finland I met Antti Takaluoma (one of the co-authors of [1]), who now works for offcode.fi – I saw impressive Linux hardware and I expect cool stuff to come 🙂

[1] Schmidt, A., Aidoo, K. A., Takaluoma, A., Tuomela, U., Laerhoven, K. V., and Velde, W. V. 1999. Advanced Interaction in Context. In Proceedings of the 1st international Symposium on Handheld and Ubiquitous Computing (Karlsruhe, Germany, September 27 – 29, 1999). H. Gellersen, Ed. Lecture Notes In Computer Science, vol. 1707. Springer-Verlag, London, 89-101.

[2] Hinckley, K., Pierce, J., Sinclair, M., and Horvitz, E. 2000. Sensing techniques for mobile interaction. In Proceedings of the 13th Annual ACM Symposium on User interface Software and Technology (San Diego, California, United States, November 06 – 08, 2000). UIST ’00. ACM, New York, NY, 91-100. DOI= http://doi.acm.org/10.1145/354401.354417

[3] Albrecht Schmidt. Implicit human computer interaction through context. Personal and Ubiquitous Computing, 4(2):191-199, June 2000

[4] A. Schmidt and K. Van Laerhoven. How to Build Smart Appliances?, IEEE Personal Communications, p.66 – 71, (2001)

Workshop at MobileHCI: Context-Aware Mobile Media and Mobile Social Networks

Together with colleagues from Nokia, VTT, and CMU we organized a workshop on Context-Aware Mobile Media and Mobile Social Networks at MobileHCI 2009.

The topic came up in discussions some time last year. It is very clear that social networks have moved towards mobile scenarios and that utilizing context and contextual media adds a new dimension. The workshop program is very diverse, ranging from studies of usage practices to novel technological solutions for contextual media and applications.

One topic that is interesting to look at further is using (digital) social networks for health care. Taking an analogy from history, it is evident that the direct social group you were in was the set of people that helped you in case of illness or accident. Looking at conditions and illnesses that cause a loss of mobility or memory, it could be interesting to build applications on top of digital social networks to provide help. This could be a project topic.

In one discussion we explored what would happen if we changed our default communication behavior from closed/secret (e.g. email and SMS) to public (e.g. bulletin boards). I took the example of organizing this workshop: our communication has been largely over email and has not been public. Had it been open (e.g. a public forum), we probably would have organized the workshop in the same way, but at the same time provided an example of how one can organize a workshop, and by this perhaps provided useful information for future workshop chairs. In this case there are few privacy concerns – but imagine all communication were public? We would learn a lot about how the world works.


About 10 years ago we published a paper arguing that there is more to context than location [1]. However, looking at our workshop it seems location is still the dominant context people think of. Many of the presentations and discussions included the term context, but the examples focused on location. Perhaps we only need location? Or perhaps we should look more closely to find the benefit of other contexts?

[1] A. Schmidt, M. Beigl, H.W. Gellersen (1999) There is more to context than location, Computers & Graphics, vol. 23, no. 6, pp. 893-901.

New project on ambient visualization – kick-off meeting in Munich

We met in Munich at Docomo Euro Labs to start a new project that is related to context and ambient visualizations. And everyone already got bunnies 😉

Related to this there is a large and very interesting project: IYOUIT. Besides other things it can record and share your context – if you have a Nokia Series 60 phone you should try it out. As far as I remember it was voted best mobile experience at MobileHCI 2008.

My Random Papers Selection from Ubicomp 2008

Over the last days a number of interesting papers were presented, so it is not easy to pick a selection… Here is my random paper selection from Ubicomp 2008 that links to our work (the conference papers link into the Ubicomp 2008 proceedings in the ACM DL; our references are below):

Don Patterson presented a survey on using IM. One of the findings surprised me: people seem to ignore “busy” settings. In some work we did in 2000 on mobile availability and sharing context, users indicated that they would respect this, or at least explain themselves when interrupting someone who is busy [1,2] – perhaps it is a cultural difference, or people have changed. It may be interesting to run a similar study in Germany.

Woodman and Harle from Cambridge presented a pedestrian localization system for large indoor environments. Using an XSens device they combine dead reckoning with knowledge gained from a 2.5D map. In their experiment they seem to get results similar to an Active Bat system – by only putting the device on the user (which, for large buildings, is much cheaper than putting up infrastructure).
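The dead-reckoning part of such a system can be sketched roughly as follows – a toy version with a fixed stride length, whereas the actual system learns the stride and additionally constrains the path with the 2.5D building map:

```python
import math

def dead_reckon(step_headings, stride_m=0.7, start=(0.0, 0.0)):
    """Integrate detected steps into a 2D path.

    `step_headings` holds one heading (degrees, 0 = north) per detected
    step; each step advances the position by a fixed stride. The 0.7 m
    stride is an illustrative assumption.
    """
    x, y = start
    path = [(x, y)]
    for heading in step_headings:
        rad = math.radians(heading)
        x += stride_m * math.sin(rad)  # east component
        y += stride_m * math.cos(rad)  # north component
        path.append((x, y))
    return path

# Ten steps heading due north, then ten due east.
path = dead_reckon([0.0] * 10 + [90.0] * 10)
print(path[-1])  # roughly (7.0, 7.0)
```

The weakness is visible even in this sketch: heading and stride errors accumulate without bound, which is exactly why fusing with a map (or infrastructure) is needed.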
Andreas Bulling presented work where he explored the use of EOG goggles for context awareness and interaction. The EOG approach is complementary to video-based systems. The use of gestures for context-awareness follows a similar idea as our work on eye gestures [3]. We had an interesting discussion about further ideas and perhaps there is a chance in the future to directly compare the approaches and work together.
One paper, “On using existing time-use study data for ubiquitous computing applications”, gave links to interesting public data sets (e.g. the US time-use survey). The time-use survey data covers the US and gives detailed data on how people use their time.
The University of Salzburg presented initial work on an augmented shopping system that builds on the idea of implicit interaction [4]. In the note they report a study where they used 2 cameras to observe a shopping area and calculated the “busy spots” in the area. Additionally they used sales data to get the best-selling products. Everything was displayed on a public screen; an interesting result is that people were apparently not really interested in other shoppers’ behavior (in contrast to what we observe in e-commerce systems).
Researchers from Hitachi presented a new idea for browsing and navigating content based on the metaphor of using a book. It is based on the concept of a bendable surface. It interestingly complements previous work in this domain, Gummi, presented at CHI 2004 by Schwesig et al.
[1] Schmidt, A., Takaluoma, A., and Mäntyjärvi, J. 2000. Context-Aware Telephony Over WAP. Personal Ubiquitous Comput. 4, 4 (Jan. 2000), 225-229. DOI= http://dx.doi.org/10.1007/s007790070008
[2] Albrecht Schmidt, Tanjev Stuhr, Hans Gellersen. Context-Phonebook – Extending Mobile Phone Applications with Context. Proceedings of Third Mobile HCI Workshop, September 2001, Lille, France.
[3] Heiko Drewes, Albrecht Schmidt. Interacting with the Computer using Gaze Gestures. Proceedings of INTERACT 2007.
[4] Albrecht Schmidt. Implicit Human Computer Interaction Through Context. Personal Technologies, Vol 4(2), June 2000

Which way did you fly to Korea?

We got a new USB GPS tracker (GT100 from Mobile Action) and had to try it out on the trip to Korea. It worked very well compared to the other devices we have had so far. It got the bus trip at Düsseldorf airport right and the entire flight from Amsterdam to Seoul. Tracking worked well in the taxi from the airport to the hotel. While walking in downtown Seoul it still performed OK (given the urban canyons), with some outliers.

It did not get any signal while we were on the Fokker 50 from Düsseldorf to Amsterdam 🙁 I slept a few hours on the flight to Seoul, but I think someone took a photo (probably of me) over Mongolia.
If you wonder whether you are allowed to use your GPS on the plane – it is, at least with KLM (according to a random website, http://gpsinformation.net/airgps/airgps.htm) 🙂
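Trackers like this typically log standard NMEA 0183 sentences; a minimal parser for the position in a $GPRMC sentence might look like this (a sketch only: no checksum verification, and the example coordinates are made up):

```python
def parse_gprmc(sentence):
    """Extract (lat, lon) in decimal degrees from a $GPRMC sentence.

    NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm;
    returns None unless the sentence reports a valid fix ('A').
    """
    fields = sentence.split(",")
    if fields[0] != "$GPRMC" or fields[2] != "A":
        return None

    def to_degrees(value, hemisphere, deg_digits):
        degrees = float(value[:deg_digits]) + float(value[deg_digits:]) / 60.0
        return -degrees if hemisphere in ("S", "W") else degrees

    lat = to_degrees(fields[3], fields[4], 2)
    lon = to_degrees(fields[5], fields[6], 3)
    return lat, lon

# Made-up sentence with roughly Seoul-like coordinates:
print(parse_gprmc("$GPRMC,123519,A,3733.00,N,12658.00,E,0.0,0.0,010108,,"))
```

Feeding a whole log file through this, line by line, gives the track that mapping tools then draw.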

Back in Korea, Adverts, Driving and Entertainment

On the way into town we got a really good price for the taxi (just make a mental note never to negotiate something with Florian and Alireza at the same time 😉 It seems taxi driving is sort of boring – our driver, too, watched television while driving (like the taxi driver some weeks ago in Amsterdam). I think we should seriously think more about entertainment for micro-breaks, because I still think it is for good reason that watching TV while driving is not allowed.

Seoul is an amazing place. There are many digital signs and electronic adverts. Walking back to the hotel I saw a large digital display on a rooftop (I would guess about 10 meters by 6 meters). When working it is probably nice. But now it is malfunctioning, and the experience of walking down the road is worsened, as one inevitably looks at it. I wonder if in 10 years we will be used to broken large-screen displays.

Thermo-imaging camera at the border – useful for Context-Awareness?

When we re-entered South Korea I saw a guard looking at all arriving people with an infrared camera. It was very hot outside, so people’s heads appeared very red. My assumption is that this is used to spot people who have a fever – however, I could not verify this.

Looking at the images created as people moved around, I realized that this may be an interesting technology for many tasks in activity recognition, home health care, and wellness. For several tasks in context-awareness it seems straightforward to get this information from an infrared camera. In the computer vision domain there have been several papers addressing this problem over recent years.
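A first step in such a system would be to pick out warm regions in a thermal frame; a toy sketch of this (the 37 °C threshold and the frame values are purely illustrative):

```python
def warm_regions(frame, threshold=37.0):
    """Find pixels at or above a temperature threshold in a thermal frame.

    `frame` is a 2D list of temperatures in degrees Celsius. A real
    system would work on calibrated radiometric data and cluster the
    hot pixels into blobs (e.g. faces); this just lists their positions.
    """
    return [(row, col)
            for row, values in enumerate(frame)
            for col, temp in enumerate(values)
            if temp >= threshold]

frame = [
    [21.0, 22.5, 21.5],
    [22.0, 38.2, 37.5],   # a warm face in the middle of the scene
    [21.0, 23.0, 21.5],
]
print(warm_regions(frame))  # [(1, 1), (1, 2)]
```

Tracking such warm blobs over time is then the raw material for activity recognition.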

We could think of an interesting project topic related to infrared activity recognition or interaction, to be integrated in our new lab. There are probably some fairly cheap thermo-sensing cameras around to use in research – for home-brew use you can find hints on the internet, e.g. how to turn a digital camera into an IR cam – pretty similar to what we did with the webcams for our multi-touch table.

The photo is from http://en.wikipedia.org/wiki/Thermography

ISUVR 2008, program day2

Norbert Streitz – Trade-off for creating smartness

Norbert gave an interesting overview of research in the domain of ubicomp based on his personal experience – from Xerox PARC to the disappearing computer. He motivated the transition from information design to experience design. Throughout the work we see a trade-off between providing “smart support” to the user and “privacy” (or control over privacy). One of the questions is whether we will re-invent privacy or whether it will become a commodity.

As one of the concrete examples, Norbert introduced the Hello.Wall, done in the context of the Ambient Agoras project [1]. This again brought up the discussion of public vs. private with regard to the patterns that are displayed. (photos of some slides from Norbert’s talk)

[1] Prante, T., Stenzel, R., Röcker, C., Streitz, N., and Magerkurth, C. 2004. Ambient agoras: InfoRiver, SIAM, Hello.Wall. In CHI ’04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ’04. ACM, New York, NY, 763-764. DOI= http://doi.acm.org/10.1145/985921.985924 (Video Hello.Wall)
Albrecht Schmidt – Magic Beyond the Screen
I gave a talk on “Human Interaction in Ubicomp – Magic beyond the screen”, highlighting work on user interfaces beyond the screen that we did over the last years. It is motivated by the fact that classical limitations in computer science (e.g. frame rate, processing, storage) are becoming less and less important for many application areas, and that human-computer interaction is in many areas becoming the critical part of the system.
In my talk I suggested using “user illusion” as a design tool for user interfaces beyond the desktop. This involves two steps: 1) describe precisely the user illusion the application will create, and 2) investigate what parameters have an influence on the quality of the created user illusion for the application. (photos of some slides from Albrecht’s talk, Slides in PDF)
Jonathan Gratch – Agents with Emotions

His talk addressed the domain of virtual reality, with a focus on learning/training applications. One central thing I learned is that the timing of non-verbal cues (e.g. nodding) is crucial to producing engagement when speaking with an agent. This may also be interesting for other forms of computer-created feedback.
He gave a specific example of how assigning blame works. It was really interesting to see that there are solid theories in this domain that can be concretely used to design novel interfaces. He argues that appraisal theory can explain people’s emotional states, and this could improve context-awareness.

He showed an example of emotional dynamics and it is amazing how fast emotions happen. One way of explaining this is to look at different dynamics: dynamics in the world, dynamics in the perceived world relationship, and dynamics through action. (photos of some slides from Jonathan’s talk)
Daijin Kim – Vision based human robot interaction
Motivated by the vision that after the personal computer we will see the “personal robot”, Daijin investigates natural ways to interact with robots. For vision-based interaction with robots he named a set of difficulties, in particular: people are moving, robots are moving, and illumination and distances are variable. The proposed approach is to generate a pose-, expression-, and illumination-specific active appearance model.
He argues that face detection is a basic requirement for vision-based human-robot interaction. The examples he showed in the demo movie were very robust with regard to movement, rotation, and expression, and it works for very variable distances. The talk contained further examples of fast face recognition and recognition of simple head gestures. Related to our research, it seems such algorithms could be really interesting for creating context-aware outdoor advertisement. (photos of some slides from Daijin’s talk)

Steven Feiner – AR for prototyping UIs

Steven showed some work on mobile projector and mobile device interaction, where they used augmented reality for prototyping different interaction methods. He introduced spotlight (position-based), orientation-based, and widget-based interaction for an arm-mounted projector. Using the Synaptics touchpad and projection may also be an option for our car-UI related research. For interaction with a wrist device (e.g. a watch) he introduced string-based interaction, which is a simple but exciting idea: you pull a string out of a device, and the distance as well as the direction are the resulting input parameters [2].
In a further example, Steven showed a project that supports field work on plant identification: capture an image of the real leaf, compare it with a database, and match it against a subset that shares its features. Their prototype was done on a tablet, and he showed ideas for improving this with AR; it is very clear that this may also be an interesting application (for the general user) on the mobile phone.

New interfaces, and in particular gestures, are hard to explore if you have no idea what is supported by the system. In his example on visual hints for tangible gestures using AR [3], Steven showed interesting options in this domain. One approach follows a “preview style” visualization – they call it ghosting. (photos of some slides from Steven’s talk)

[2] Blasko, G., Narayanaswami, C., and Feiner, S. 2006. Prototyping retractable string-based interaction techniques for dual-display mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). R. Grinter, T. Rodden, P. Aoki, E. Cutrell, R. Jeffries, and G. Olson, Eds. CHI ’06. ACM, New York, NY, 369-372. DOI= http://doi.acm.org/10.1145/1124772.1124827
[3] White, S., Lister, L., and Feiner, S.Visual Hints for Tangible Gestures in Augmented Reality.Proc. ISMAR 2007 IEEE and ACM Int. Symp. on Mixed and Augmented Reality, Nara Japan, November 13-16, 2007. (youtube video)

If you are curious about the best papers, please see the photos from the closing 🙂

Finally some random things to remember:

  • Richard W. DeVaul did some work on subliminal user interfaces – working towards the vision of zero-attention UIs [4]
  • Jacqueline Nadel (development psychologist) did studies on emotions between parents and infants using video conferencing
  • V2 – Toward a Universal Remote Console Standard http://myurc.org/whitepaper.php
  • iCat and Gaze [5]

[4] Richard W. DeVaul. The Memory Glasses: Wearable Computing for Just-in-Time Memory Support. PhD Thesis. MIT 2004. http://devaul.net/~rich/DeVaulDissertation.pdf

[5] Poel, M., Breemen, A.v., Nijholt, A., Heylen, D.K., & Meulemans, M. (2007). Gaze behavior, believability, likability and the iCat. Proceedings Sixth Workshop on Social Intelligence Design: CTIT Workshop Proceedings Series (pp. 109–124). http://www.vf.utwente.nl/~anijholt/artikelen/sid2007-1.pdf

ISUVR 2008, program day1

The first day of the symposium was exciting and we saw a wide range of contributions, from context-awareness to machine vision. In the following I have a few random notes on some of the talks.


Thad Starner, new idea on BCI
Thad Starner gave a short history of his experience with wearable computing. He argued that common mobile keyboards (e.g. mini-QWERTY, multi-tap, T9) are fundamentally not suited to real mobile tasks. He showed the studies of typing with the Twiddler – the data is impressive. He argues for chording keyboards, and generally he suggests that “typing while walking is easier than reading while walking”. I buy the statement, but I still think that the cognitive load created by the Twiddler does not make it generally suited. He also showed a very practical idea of how errors on mini-keyboards can be reduced using text prediction [1] – that relates to the last exercise we did in the UIE class. (photos of some slides from Thad’s talk)
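The keypress-timing idea behind [1] can be sketched roughly like this: flag a keypress as a likely accidental insertion when it arrives implausibly fast on a physically adjacent key. The adjacency map and the 40 ms gap below are illustrative; the actual system learns such patterns from typing data:

```python
# Partial, hypothetical adjacency map for a mini-QWERTY layout.
ADJACENT = {"e": {"w", "r", "d"}, "r": {"e", "t", "f"}, "t": {"r", "y", "g"}}

def flag_likely_insertions(keystrokes, min_gap_ms=40):
    """Return indices of keypresses that look like off-by-one insertions.

    `keystrokes` is a list of (key, timestamp_ms) pairs. Heuristic: a key
    pressed within `min_gap_ms` of its predecessor, on a physically
    adjacent key, is suspicious (humans rarely type that fast on purpose).
    """
    flagged = []
    for i in range(1, len(keystrokes)):
        key, t = keystrokes[i]
        prev_key, prev_t = keystrokes[i - 1]
        if t - prev_t < min_gap_ms and key in ADJACENT.get(prev_key, set()):
            flagged.append(i)
    return flagged

# "the" typed with an accidental 'r' landing 15 ms after the 'e'.
print(flag_likely_insertions([("t", 0), ("h", 120), ("e", 250), ("r", 265)]))  # [3]
```

A corrector would then drop (or offer to drop) the flagged keys before the text reaches the application.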

He suggested a very interesting approach to “speech recognition” using EEG. The basic idea is that people use sign language (either really moving their hands or just imagining moving them) and that the signals of the motor cortex are measured using a brain interface. This is so far the most convincing idea for a human-computer brain interface that I have seen. I am really curious to see Thad’s results of the study! He also suggested an interesting idea for sensors – using a similar approach as in hair replacement technology (I have no idea about this so far, but I probably should read up on it).

[1] Clawson, J., Lyons, K., Rudnick, A., Iannucci, R. A., and Starner, T. 2008. Automatic whiteout++: correcting mini-QWERTY typing errors using keypress timing. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 573-582. DOI= http://doi.acm.org/10.1145/1357054.1357147

Anind Dey – intelligible context
Anind provided an introduction to context-awareness. He characterized context-aware applications as situationally appropriate applications that adapt to context and eventually increase the value to the user. Throughout the talk he made a number of convincing cases that context has to be intelligible to users, otherwise problems arise when the systems guess wrong (and they will get it wrong sometimes).

He showed an interesting example of how data collected from a community of drivers (in this case cab drivers) is useful to predict the destination and the route. These examples are very interesting and show a great potential for learning and context prediction from community activity. I think sharing information beyond location may have many new applications.
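At its simplest, such community-based prediction is just frequency counting over pooled trips – a toy sketch (the actual system presumably uses much richer route and time-of-day features):

```python
from collections import Counter, defaultdict

class DestinationPredictor:
    """Predict likely trip destinations from pooled community trips.

    A deliberately simple frequency model: for each origin, remember how
    often each destination was reached, and predict the most common one.
    """
    def __init__(self):
        self.counts = defaultdict(Counter)

    def record_trip(self, origin, destination):
        self.counts[origin][destination] += 1

    def predict(self, origin):
        if origin not in self.counts:
            return None  # never seen this origin
        return self.counts[origin].most_common(1)[0][0]

p = DestinationPredictor()
for dest in ["airport", "airport", "station"]:
    p.record_trip("hotel", dest)
print(p.predict("hotel"))  # airport
```

The community aspect is that trips from many drivers feed the same counts, so an individual benefits from routes they have never driven themselves.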
In one study they used a windscreen projection display (probably a HUD – I have to follow up on this). We should find out more about it, as we are looking into such displays ourselves for one of the ongoing master projects. (photos of some slides from Anind’s talk)

Vincent Lepetit – object recognition is the key for tracking
Currently most work in computer vision uses physical sensors or visual markers. The vision, however, is really clear: just do the tracking based on natural features. In his talk he gave an overview of how close we are to this vision. He showed examples of markerless visual tracking based on natural features. One is a book – which really looks like a book, with normal content and no markers – which has an animated overlay.
His take-away message was “object recognition is the key for tracking” and it is still difficult. (photos of some slides from Vincent’s talk)

Jun Park – bridge the tangibility gap
In his talk he discussed the tangibility gap in design – in different stages of design and design evaluation it is important to feel the product. He argues that rapid prototyping using 3D printing is not well suited, especially as it is comparably slow and it is very difficult to render material properties. His alternative approach is augmented foam: a visually non-realistic but tangible foam mock-up combined with augmented reality techniques. Basically, the CAD model is rendered on top of the foam.

The second part of the talk was concerned with e-commerce. The basic idea is that users can overlay a product onto their own environment, to experience its size and how well it matches the place. (photos of some slides from Jun’s talk)

Paper Session 1 & 2

For the paper sessions see the program and some photos from the slides.
photos of some slides from paper session 1
photos of some slides from paper session 2

Context-Aware adverts, google patent search

This evening I went to Münster to meet with Antonio Krüger and Lucia Terrenghi (who is now with Vodafone), who was visiting there. Advertisement is a hot topic, and it was interesting that we shared an observation: “If the advert/information is the least boring thing to look at, people will read it ;-)”. Each of us had their favorite anecdotal evidence: my favorites are people reading the same map every day at their U-station and the advertising flyers in the Munich S-Train. For context-aware advertisement the major challenge is to find the time/location where people are bored and happy to see an advert 😉

We currently have an ongoing master thesis that looks into this topic – context-aware advertising with cars. There are several interesting examples that this concept could work: e.g. taxis that show location-based ads (you can hire the area where your ad is shown, see [1], [2]). We think it gets really interesting if there are many cars that form an in-town canvas you can paint on. On the way back we checked out the screen adverts (included in the public phones) Jörg Müller works on – even with a navigation feature.

Looking for some more on the topic I realized that Google Patent search works quite well by now: http://www.google.de/patents

Tutorial from Sensor to Context and Activity at Pervasive 2008

Pervasive 2007 introduced a new form of tutorials – having a number of experts talk one hour each about their special topic. Last year I was a participant and liked it a lot. This year Pervasive 2008 repeated this approach, and I contributed a tutorial on how to get context and activity from sensors (tutorial slides in PDF).

Abstract. Intelligent environments, sensor networks and smart objects are inherently connected to building systems that sense phenomena in the real world and make the perceived information available to applications. The first part of the tutorial gives an overview of sensors and sensor systems commonly used in pervasive computing applications. In addition to sensor properties, means for connecting sensors to systems (e.g. ADC, PWM, I2C, serial line) are explained. The second part discusses how to create meaningful information in the application domain. Some basic features, calculated in the time and frequency domain, are introduced to provide basic means for processing and abstraction of raw sensor data. This part is complemented by a brief overview of mechanisms and methods for relating (abstracted) sensor information to context, activity and situations. Additionally, general problems associated with sensing context and activity are addressed in this tutorial.
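To illustrate the second part, here is a minimal sketch of such features computed over a window of raw samples – a mean/standard-deviation pair in the time domain, and a naive DFT scan for the dominant frequency (a real pipeline would use a windowed FFT):

```python
import math

def time_domain_features(samples):
    """Mean and standard deviation of a raw sensor window."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, math.sqrt(var)

def dominant_frequency(samples, sample_rate_hz):
    """Find the strongest frequency component via a naive DFT scan.

    O(n^2) and illustrative only; skips the DC bin and the Nyquist bin.
    """
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate_hz / n

# A 2 Hz sine sampled at 32 Hz for one second.
window = [math.sin(2 * math.pi * 2 * i / 32) for i in range(32)]
print(dominant_frequency(window, 32))  # 2.0
```

For accelerometer data, features like these over short windows (a second or two) are the typical input to activity classifiers, e.g. to separate walking from standing.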

Have Not Changed Profession – Hospitals are complex

This morning we had the great opportunity to observe and discuss workflows and work practice in the operating area of the Elisabeth hospital in Essen. It was amazing how much time we got from the (really busy) personnel, and this provided us with many new insights.

The complexity of scheduling patients, operations, equipment and consumables in a very dynamic environment poses great challenges, and it was interesting to see how well it works with current technologies. However, looking at the systems used and considering upcoming pervasive computing technologies, a great potential for easing tasks and processes is apparent. Keeping track of things and people, as well as documentation of actions, are central areas that could benefit.

From a user interface perspective it is very clear that paper and phone communication play an important role, even in such a high-tech environment. We should look a bit more into the Anoto pen technology – perhaps this could be an enabler for some ideas we discussed. Several ideas that relate to implicit interaction and context awareness (already partly discussed in the context of a project in Munich [1]) re-surfaced. Similarly, questions related to data access and search tools seem to play an interesting role. With all the need for documentation, it is relevant to re-think in what ways data is stored and when to analyze it (at storage time or at retrieval time).

One general message from such a visit is to appreciate people’s insight into these processes, which clearly indicates that a user-centered design process is the only suitable way to move innovation in such environments forward, and thereby create ownership and acceptance.

[1] A. Schmidt, F. Alt, D. Wilhelm, J. Niggemann, H. Feussner. Experimenting with ubiquitous computing technologies in productive environments. e & i Elektrotechnik und Informationstechnik, Springer Verlag. Volume 123, Number 4 / April, 2006. pages 135-139

Reminded of the Ubicomp Vision

Today I was reminded of a discussion in 1998 on the implications of computing technologies becoming cheaper and cheaper. Even then it seemed inevitable that many artifacts would include computational and perceptual qualities. The discussion was in the context of the European project TEA (Technology for Enabling Awareness), where we built a context-aware phone [1]. Walter van de Velde suggested imagining that processors, sensors, and communication will only cost cents (or will be virtually free as part of the production process), and we worked on the question: what products and services will emerge? One generic answer then was that any product valued at $20 and above will include computing and sensing capabilities, if there is any (even a minimal) advantage achieved by this.

Michael Beigl made it more concrete: he took coffee mugs (which cost more than $20 each) and attached a processor, communication and sensors. The MediaCup [2] showed several interesting results and underlined that such an approach makes sense if there is an advantage.

Today, in the office of a former colleague in Munich, I saw two objects that had perceptual qualities and output (not really processing yet). One is a plastic toad that makes a noise when you move, and the other is a rubber pig that makes a noise when you open the fridge (it reacts to a change in level, but did not work). This made me wonder if we were only partially right – yes, objects will have sensors included; yes, there will be processing; but no, it does not need to make sense. Or perhaps having it as a gadget is advantage enough…

[1] Schmidt, A., Aidoo, K. A., Takaluoma, A., Tuomela, U., Laerhoven, K. V., and Velde, W. V. 1999. Advanced Interaction in Context. In Proceedings of the 1st international Symposium on Handheld and Ubiquitous Computing (Karlsruhe, Germany, September 27 – 29, 1999). H. Gellersen, Ed. Lecture Notes In Computer Science, vol. 1707. Springer-Verlag, London, 89-101. DOI= http://dx.doi.org/10.1007/3-540-48157-5_10

[2] Gellersen, H. W., Schmidt, A., and Beigl, M. 2002. Multi-sensor context-awareness in mobile devices and smart artifacts. Mob. Netw. Appl. 7, 5 (Oct. 2002), 341-351. DOI= http://dx.doi.org/10.1023/A:1016587515822

UbiLog Workshop in Bremen

This afternoon our UbiLog workshop was held in Bremen as part of the Informatik 2007 conference. We selected four papers for presentation and had a lively and interesting discussion.

Following the talk of Nikolai Krambrock, we discussed the use of context, and in particular location, to restrict or allow access to information. My favourite example is an online-banking appliance that only works in predefined areas (e.g. at home and in my car). Using context appears to be one option for creating human-understandable solutions for secure systems. People have developed means to protect physical objects and valuables; perhaps we should draw more on this experience in the design of secure systems.
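The online-banking example could be sketched as a simple zone check: access is granted only within predefined areas. The coordinates, the radius, and the flat-earth distance approximation below are all illustrative assumptions:

```python
import math

def within_zone(lat, lon, zone, radius_m=100.0):
    """Check whether a position lies within an allowed circular zone.

    Uses an equirectangular distance approximation, which is fine at
    the ~100 m scale assumed here (not for long distances).
    """
    zlat, zlon = zone
    dlat = math.radians(lat - zlat)
    dlon = math.radians(lon - zlon) * math.cos(math.radians(zlat))
    distance_m = 6371000.0 * math.hypot(dlat, dlon)  # Earth radius in m
    return distance_m <= radius_m

HOME = (51.45, 7.01)  # hypothetical "home" coordinates

def allow_banking(lat, lon):
    return within_zone(lat, lon, HOME)

print(allow_banking(51.4501, 7.0101))  # True: a few metres from home
print(allow_banking(51.5, 7.1))        # False: several kilometres away
```

The hard part, of course, is not the geometry but trusting the reported position – a spoofed GPS fix would defeat this check, so a real system would need an attested location source.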

Article in the Economist

Some weeks ago Ben Sutherland from The Economist called. He was researching an article discussing the computing revolution over the last 25 years. In his research he talked to many different people (from different countries, different fields, with different views) and was particularly interested in applications that will come in the future – and, with me in particular, in the concept of context-awareness.

The article “The trouble with computers” appeared on the 6th of September and discusses a mix of ideas and viewpoints. We talked about 30 minutes on the phone, and I am quite surprised by what statement he picked from me (I said many things that were more interesting ;-). However, I think it is great that people start trying to understand the radical changes computers introduce – everywhere.

Automotive User Interface Workshop

At the German HCI conference (Mensch und Computer) I organized, together with Paul Holleis and Klaus Bengler (BMW Group), a workshop on automotive user interfaces. We were surprised how many people work and do research in this area in Germany and Austria.

The 9 talks showed a wide range of research results and questions, ranging from activity recognition, search interfaces, and cultural issues to research methods. Dagmar Kern presented our work on a new method for interviewing drivers at the gas station. Stefan Graf from BMW Group had an interesting demo on object-oriented interaction and in-car text input.

In the final session we discussed future challenges of automotive user interfaces, and it seems to be a great challenge, as cars are very emotional products. One interesting point was that the user interface may not be central to the decision which car to buy – but if drivers are not satisfied with it, it will influence their decision not to buy such a car again.

Context and context-awareness (e.g. based on user activity, driving parameters, and location) seem to provide a great opportunity for future interfaces and in-car applications. One nice example was presented by Susanne Boll from a joint project with VW (C3World, connected cars in a connected world).

Navigation by calories – New insights useful for next generation navigation systems?

In a German science news ticker I saw an inspiring post reporting an experiment on orientation in relation to food. It describes an experiment where men and women were asked to visit a set of market stalls to taste food and afterwards were asked where each stall was.

The result, surprising to me, was that women performed better than men (which, to my knowledge, is not often the case in typical orientation experiments) and that, independent of gender, the amount of calories contained in the tasted food influenced the performance. Basically, the more calories in the tasted food, the better people could remember where it was. I have had no chance yet to read the original paper (Joshua New, Max M. Krasnow, Danielle Truxaw and Steven J. C. Gaulin. Spatial adaptations for plant foraging: women excel and calories count. August 2007, Royal Society Publishing, http://www.journals.royalsoc.ac.uk) and my assessment is only based on the post in the news ticker.

This makes me think about future navigation systems and in particular landmark-based navigation. Which landmarks are appropriate to use (e.g. places where you get rich food), and how gender-dependent is this (e.g. the route for men is explained via car dealers and computer shops whereas for women via references to shoe shops – is this politically correct?).

Apropos landmark-based navigation: there is an interesting short paper from last year's UIST conference that looks into this issue in the context of personalized routes:
Patel, K., Chen, M. Y., Smith, I., and Landay, J. A. 2006. Personalizing routes. In Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology (Montreux, Switzerland, October 15 – 18, 2006). UIST ’06. ACM Press, New York, NY, 187-190. DOI= http://doi.acm.org/10.1145/1166253.1166282

Perhaps these ideas could be useful for a future navigation system.
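One way to imagine such personalization (this is my own toy sketch, not the approach of the cited paper): score candidate landmarks along a route by per-user category preferences, so the same route is explained via bakeries for one user and via sights for another. All names and weights below are invented.

```python
# Toy sketch of preference-weighted landmark selection.
# candidates: list of (name, category); preferences: category -> weight.
def pick_landmarks(candidates, preferences, top_n=3):
    """Return the top_n candidate names with the highest preference weight."""
    scored = [(preferences.get(cat, 0.0), name) for name, cat in candidates]
    scored.sort(reverse=True)  # highest preference first
    return [name for score, name in scored[:top_n] if score > 0]

places = [("Bakery Mueller", "food"), ("CarMax", "car_dealer"),
          ("Shoe Corner", "shoes"), ("Old Church", "sight")]
prefs = {"food": 0.9, "sight": 0.5}   # this user likes food landmarks
print(pick_landmarks(places, prefs, top_n=2))
# → ['Bakery Mueller', 'Old Church']
```

A real system would of course also need to take visibility and position along the route into account, not only preference.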


Ubiquitous, Pervasive and Ambient Computing – Clarification of Terms

In recent months the question about ubiquitous, pervasive, and ambient computing came up several times. An email by Jos Van Esbroeck motivated me to write down my view on it.


Clarifying the terms seems an ongoing process as various communities and individuals use each of those terms for new things they are doing.

For me the best way to discriminate the terms ubiquitous computing, pervasive computing, and ambient intelligence is by their origin, history and research communities.

The first term (ubiquitous computing, ubicomp) is linked to Mark Weiser and his vision of computing in the 21st century [1]. In the research community it is very much connected to ubiquitous and pervasive systems that have the user somewhere in the loop. The ubicomp conference [2] seems more focused on user experience than on pure technology.

Pervasive computing was pushed in the mid 1990s, more by industry and in particular by IBM. Pervasive computing seems from its origin more focused on technologies and solutions than on a particular vision. The two major conferences related to this topic, pervasive [3] and percom [4], are more systems- and network-focused, while always keeping some attention on the user experience perspective. Here, in particular with percom, many in the research community have their origin in the networking and distributed systems world. To me pervasive computing seems more technical than ubiquitous computing and includes systems that do not have direct human users involved.

The term ambient intelligence was introduced by the European funding agencies in the Framework 5 vision, around the same time as the Philips HomeLab, which drives the term, too. Here, similar to ubicomp, the vision of a new quality of user experience is a driving factor. The research that falls under this label by now is broad, and I think it is very similar to the research in ubiquitous computing. There is also a European conference on ambient intelligence [5].

Many people that are involved in ubicomp/pervasive/percom are also active in one or more traditional research communities. In particular, many are additionally involved in user interface research (e.g. the CHI community), mobile computing and mobile systems, or networking and distributed systems.

A very early topic related to the whole field is context-awareness, as introduced by Schilit [6], who was working with Weiser. In my PhD dissertation I looked more into the relationship between ubicomp and context-awareness – it has the title Ubiquitous Computing – Computing in Context [7].

In parallel, subtopics in the above field have emerged that look at specific aspects, e.g. the internet of things [8] (not necessarily a human in the loop), wearable computing (computing in clothing), smart environments (computing in buildings and furniture), tangible and embedded interaction [9] (looking at the interaction side), smart objects, and probably many more.

There is also an interesting trend that many of these topics, once they have matured a bit, move back into the traditional communities.

[1] Mark Weiser. The Computer for the Twenty-First Century. Scientific American 265, 3 (September 1991), 94-104
[2] http://www.ubicomp.org/
[3] http://pervasive2008.org/
[4] http://www.percom.org/
[5] http://www.ami-07.org/
[6] Schilit, B., Adams, N., and Want, R. 1994. Context-aware computing applications. IEEE Workshop on Mobile Computing Systems and Applications (WMCSA ’94), Santa Cruz, CA, US, 89-101.
[7] Schmidt, A. 2003. Ubiquitous Computing – Computing in Context. PhD dissertation, Lancaster University.
[8] http://www.internetofthings-2008.org
[9] http://www.tei-conf.org/

Bluetooth marketing in the wild


Arriving in Zurich I was quite surprised by the masses of people in the train station. We had picked the weekend of the Street Parade for our visit 😉 It really makes you think about your own age.


In the railway station they had digital giveaways – you just had to switch your Bluetooth on.

What is the Digital Equivalent of a Park in a City?

The visit to the eCulture Factory showed me again that bringing new media into real public space creates new and very valuable insights, even though it is difficult and costly. Such installations can give a glimpse of what future public space will be. When thinking of the design space for media in public spaces, one can imagine creating completely different and new experiences. Contextuality and awareness seem to be key design criteria.

Transforming public space using digital technology offers a lot of opportunities. However, it seems that currently a lot of people think about this mainly with regard to new forms of advertising (obviously us included). But after seeing the installations in Bremen I think there is a great chance to improve the quality of life in a place with digital technologies. We should probably think more along non-short-term business lines in this domain.

Thinking of quality of life… who wants to live in a city without a park or at least some green patches? No one – really. Perhaps it is time to invent the digital equivalent of a park for the public spaces of the future. I think I have to do some reading to understand the traditional motivation behind parks.


bi-t Student demo lab results at Fraunhofer IAIS

This morning we presented selected demos of the lab on location and context awareness to people at Fraunhofer IAIS. Apart from the fact that our main infrastructure component (the Ubisense indoor system) did not work, the demos went well. It was very strange – the infrastructure had worked for the last 6 weeks (including several reboots), yet this morning, after rebooting the server, it did not find the sensors anymore for several hours.

The majority of demos were based on the second assignment, which was to create a novel application that makes use of an indoor location system. The applications implemented by the students included a heat-map (showing where a room is mainly used), co-location-dependent displays (enabling minimal setup and administration effort), a museum information system (time- and location-dependent display of different levels of information), and a security system (allowing functionality only inside a perimeter dynamically defined by tags). Overall it was very interesting to see what the students created in 4 weeks of hard work.
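The heat-map idea is simple enough to sketch: bin the (x, y) position samples delivered by the indoor location system into a grid and count samples per cell. This is my own minimal illustration, not the students' code; room and cell sizes are assumed values.

```python
# Minimal heat-map sketch: bin (x, y) position samples from an indoor
# location system into a grid showing where a room is mainly used.
def heatmap(samples, room_w=8.0, room_h=6.0, cell=2.0):
    """Count position samples per grid cell; returns {(col, row): count}."""
    cols = int(room_w / cell)
    rows = int(room_h / cell)
    grid = {}
    for x, y in samples:
        c = min(int(x / cell), cols - 1)  # clamp samples on the far wall
        r = min(int(y / cell), rows - 1)
        grid[(c, r)] = grid.get((c, r), 0) + 1
    return grid

positions = [(1.0, 1.0), (1.5, 0.5), (7.9, 5.9)]  # metres, made-up samples
print(heatmap(positions))
# → {(0, 0): 2, (3, 2): 1}
```

Rendering is then just a matter of mapping the counts to colours; the interesting part in practice is dealing with the noise of the positioning system.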

We also briefly showed the location post-its, which were based on GPS and done for the first group assignment, the CardioViz prototype (from the lab in the winter term), and the web annotation tool that is now nearly ready.

Even though there were some difficulties in running some of the demos, I am still convinced that in a research environment we need to show live demos and not just ppt-slide-ware 😉 We probably have to demo more often to get more professional at handling non-working components.

More pictures are online at http://foto.ubisys.org/iais_presentation/

Workshop dinner, illuminated faucet, smart sink

I first saw a paper about a context-aware sink at CHI 2005 (Bonanni, L., Lee, C. H., and Selker, T. Smart Sinks: Real World Opportunities for Context-Aware Interaction. Short paper in Proceedings of Computer Human Interaction (CHI) 2005, Portland, OR).

Yesterday I saw an illuminated faucet in the wild – one which, in terms of design, looked really great (in the restaurant they even had flyers advertising the product). But after using it I was really disappointed. It uses the concept of colour-illuminating the water based on temperature (red for hot, blue for cold).

The main issue I see with the user experience is that the visualization is not based on the real temperature measured by a sensor but on the setting of the tap. Hence, when you switch to hot, the visualization is immediately red – even though the water is initially cold :-(
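The sensor-based variant would behave quite differently: the colour follows the measured water temperature, so the light only turns red once the water is actually hot. A trivial sketch of that mapping (the thresholds are my own assumptions, not from any product):

```python
# Sketch: illumination colour driven by the *measured* water temperature,
# not by the tap setting. Threshold values are assumed for illustration.
def water_colour(temp_celsius):
    """Map a measured temperature to an illumination colour."""
    if temp_celsius < 25.0:
        return "blue"     # actually cold
    elif temp_celsius < 38.0:
        return "purple"   # warming up
    else:
        return "red"      # actually hot

# Tap set to hot, but the pipe water is still cold: light stays blue first.
print(water_colour(15.0))  # → blue
print(water_colour(45.0))  # → red
```

The difference is exactly the gap between a context-aware device (sensing the world) and one that merely echoes its own control state.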

Conclusion: a nice research idea some time ago; a business person saved a few cents on the sensor and wiring and created a product with great aesthetics but a poor user experience. Hence I left the leaflet with the ordering address there – I don't want to have it.

Large scale sensor network connected to public displays

The airport Köln-Bonn (CGN) has all its parking spaces monitored with a simple sensor (detecting whether there is a car or not). Displays at the entrance show the number of open spaces, and active signage in the parking garage leads drivers to the free spaces. Additionally, the state is visualized above each space – probably more a maintenance function to check whether the sensor works.
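The aggregation behind such displays can be sketched in a few lines. This is a hedged illustration of the general idea, not CGN's actual system; the space IDs and the "level = ID prefix" convention are my own assumptions.

```python
# Sketch: each space reports a boolean occupancy reading; entrance displays
# show free counts aggregated per level. Sample data is made up.
sensor_readings = {           # space id -> occupied?
    "P1-001": True, "P1-002": False, "P1-003": False,
    "P2-001": True, "P2-002": True, "P2-003": False,
}

def free_per_level(readings):
    """Aggregate free spaces per parking level (level = prefix before '-')."""
    counts = {}
    for space, occupied in readings.items():
        level = space.split("-")[0]
        counts.setdefault(level, 0)
        if not occupied:
            counts[level] += 1
    return counts

print(free_per_level(sensor_readings))
# → {'P1': 2, 'P2': 1}
```

The nice property of this setup is that the same per-space sensor feeds three consumers at once: the entrance displays, the in-garage signage, and the per-space maintenance light.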

(Looking at the pictures, I have probably parked on women-only parking spots…)