Back in Korea, Adverts, Driving and Entertainment

On the way into town we got a really good price for the taxi (just make a mental note never to negotiate anything with Florian and Alireza at the same time 😉 ). It seems taxi driving is sort of boring – our driver, too, watched television while driving (like the taxi driver some weeks ago in Amsterdam). I think we should seriously think more about entertainment for micro-breaks, because I still believe there is a good reason why watching TV while driving is not allowed.

Seoul is an amazing place. There are many digital signs and electronic adverts. Walking back to the hotel I saw a large digital display on a rooftop (I would guess about 10 meters by 6 meters). When it works it is probably nice. But at the moment it is malfunctioning, and the experience of walking down the road is worsened because one inevitably looks at it. I wonder if in 10 years we will be used to broken large-screen displays…

Keynote at MobileHCI2008: BJ Fogg – mobile miracle

BJ Fogg gave the opening keynote at MobileHCI 2008 in Amsterdam. The talk explained the concept of captology (computers as persuasive technologies) very well, and the newer projects are very inspiring. He put the following questions at the center: How can machines change people’s minds and hearts? How can you automate persuasion? His current focus is on behavior change.

He reported on a class he is teaching at Stanford on designing Facebook applications. The metric for success (and the one students are marked on) is the uptake of the created application over the duration of the course. He reported that the course attracted 16 million users in total and about 1 million on a daily basis – that is quite impressive. This is also an example of the approach he advocates: “rather try than think”. The rationale is to try out a lot of things (in the real market with real users, alpha/beta culture) rather than optimize a single idea. The background is that nowadays implementation and distribution are really easy and the market decides if it is hot or not… His advice is to create a minimal, simple application and then push it forward. All the big players (e.g. Google, Flickr) have done it this way…

With regard to distribution methods for persuasion he referred over and over to social networks (and in particular Facebook). His argument is that by these means one is able to reach many people in a trusted way. He compared this to the introduction of radio but highlighted the additional qualities. Overall he feels that Web 2.0 is only a warm-up for all the applications to come on mobile devices in the future.

At the center of the talk was the prediction that mobile devices will, within 15 years, be the main technology for persuasion. He argued that mobile phones are the greatest invention of humankind – more important than writing and transportation systems (e.g. planes, cars). He explained why mobile phones are so interesting based on three metaphors: heart, wristwatch, magic wand.

Heart – we love our mobile phones. He argued that if users do not have their phone with them they miss it and that this is true love. Users form a very close relationship with their phone and spend more time with the phone than with anything/anyone else. He used the image of “mobile marriage”…

Wristwatch – the phone is always by our side. It is part of the overall experience in the real world, providing three functions: Concierge (reactive, can be asked for advice, a relationship based on trust), Coach (proactive, the coach comes to me and tells me, pushing advice), and Court Jester (entertains us, we are amused by it, it creates fun with content that persuades).

Magic wand – phones have amazing and magical capabilities. A phone provides humans with a lot of capabilities (remote communication, coordination, information access) that make many things possible.

Given this very special relationship, the phone may become a supplement to our decision making (or, more generally, our brain). The phone will advise us what to do (e.g. navigation systems tell us where to go) and we love it. We may get this in other areas, too – being told what movie to see, what food to eat, when to exercise, … I am not fully convinced 😉

He gave a very interesting suggestion on how to design good mobile applications. Basically, to create a mobile application the steps are: (1) identify the essence of the application, (2) strip everything from the application that is not essential to provide this, and (3) you have a potentially compelling mobile application. I have heard this before; nevertheless it seems that features still sell, but this could change with the next generation.

He provided some background on the basics of persuasion. To achieve a certain target behavior you need three things – and all at the same time: 1. sufficient motivation (they need to want to do it), 2. the ability to do it (you either have to train them or make it very easy – making it easier is better), and 3. a trigger. After the session someone pointed out that this is similar to what you have in crime (means, motive, opportunity 😉 ).
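Just to make the model concrete for myself, here is a minimal sketch of the “motivation + ability + trigger” idea (my own illustration, not code from the talk; the threshold and scores are made-up assumptions):

    # A behavior only becomes likely if motivation, ability and a trigger
    # come together at the same moment.
    def behavior_likely(motivation: float, ability: float, trigger_present: bool,
                        threshold: float = 0.5) -> bool:
        """motivation and ability are rough scores in [0, 1]."""
        return trigger_present and motivation >= threshold and ability >= threshold

    # Example: a reminder (trigger) fires, the task is easy (high ability),
    # but the user does not care (low motivation) -> no behavior.
    print(behavior_likely(motivation=0.2, ability=0.9, trigger_present=True))  # False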

For creating persuasive technologies there are 3 central pairs describing motivation:

  • Instant pleasure and gratification vs. instant pain
  • Anticipation of good or hope vs. anticipation of the bad or fear (it is noted that hope is the most important motivator)
  • Social acceptance vs. social rejection

When designing systems it is essential to go for simplicity. He named the following five factors that influence simplicity: (1) money, (2) physical effort, (3) brain cycles, (4) social deviation, and (5) non-routine. Antonio pointed out that this links to the work of Gerd Gigerenzer at the MPI on intuitive intelligence [1].

[1] Gigerenzer, G. 2007. Gut feelings: The intelligence of the unconscious. New York: Viking Press.

Workshop on User Experience at Nokia

Together with Jonna Hakkila’s group (currently run by Jani Mantyjarvi) we had a two-day workshop at Nokia in Oulu discussing the next big thing* 😉
* the motto on the Nokia Research Center web page

It seems that many people share the observation that emotions and culture play a more and more important role in the design of services and applications – even outside the research labs. One evening we looked for the Finnish experience… (photo by Paul)

Overall the workshop showed again how many ideas can be created in a very short time – hopefully we can follow up on some of them and create some new means for communication. We plan to meet again towards the end of the year in Essen.

PS: Kiss the phone – some take it literally: http://tech.uk.msn.com/news/article.aspx?cp-documentid=7770403

PPS: we talked about unanticipated use (some call it misuse) of technology, e.g. using the camera on the phone to take a picture of the inside of your fridge instead of writing a shopping list. Alternative uses are not restricted to mobile phones – see for yourself what your dishwasher may be good for… http://www.salon.com/nov96/salmon961118.html

HCI Doctoral Consortium at VTT Oulu

Jonna Hakkila (Nokia), Jani Mantyjarvi (Nokia & VTT), and I discussed last year how we can improve the doctoral studies of our students and we decided to organize a small workshop to discuss PhD topics.

As Jonna is currently on maternity leave and officially not working we ran the workshop at VTT in Oulu.

The topics varied widely from basic user experience to user-interface-related security. The participants have done and published very interesting work. I have selected the following two papers as reading suggestions: [1] by Elina Vartiainen and [2] by Anne Kaikkonen.

We hope we gave some useful advice – I cannot resist repeating the most important things to remember:

  • a PhD thesis is not required to solve all problems in a domain
  • doing a PhD is yet another exam – not more and not less
  • finding/inventing/understanding something that makes a real difference to even a small part of the world is a great achievement (and not common in most PhD research)
  • do not start by thinking hard – start by doing your research

A good discussion on doing a PhD in computer science by Jakob Bardram can be found at [3].

[1] Roto, V., Popescu, A., Koivisto, A., and Vartiainen, E. 2006. Minimap: a web page visualization method for mobile phones. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). CHI ’06. ACM, New York, NY, 35-44. DOI= http://doi.acm.org/10.1145/1124772.1124779

[2] Lehikoinen, J. T. and Kaikkonen, A. 2006. PePe field study: constructing meanings for locations in the context of mobile presence. In Proceedings of the 8th Conference on Human-Computer interaction with Mobile Devices and Services (Helsinki, Finland, September 12 – 15, 2006). MobileHCI ’06, vol. 159. ACM, New York, NY, 53-60. DOI= http://doi.acm.org/10.1145/1152215.1152228

[3] http://www.itu.dk/people/bardram/pmwiki/pmwiki.php?n=Main.ArtPhD

Trip to North Korea

[see the whole set of photos from the tour to North Korea]

From Gwangju we took the bus shortly after midnight to go for a trip to North Korea. The students did a great job in organizing ISUVR and the trip. It was great to have again some time to talk to Yoosoo Oh, who was a visiting researcher in Munich in our group.

When entering North Korea there are many rules, including that you are not allowed to take cameras with tele-lenses over 160mm (so I could only take the 50mm lens) and you must not bring mobile phones or MP3 players with you. Currently cameras, phones and MP3 players are visible to the human eye and easy to detect in an x-ray. But it does not take much imagination to foresee, in a few years, extremely small devices that are close to impossible to spot. I wonder how this will change such security precautions and whether it will still be possible in 10 years to isolate a country from access to information. I doubt it…

The sightseeing was magnificent – see the photos of the tour for yourself. We went on the Kaesong tour (see http://www.ikaesong.com/ – in Korean only). It is hard to tell how much of the real North Korea we really saw. And the photos only reflect a positive selection of motifs (leaving out soldiers, people in town, ordinary buildings, etc., as it is explicitly forbidden to take photos of those). I was really surprised that when leaving the country they check ALL the pictures you took (in my case this took a little longer, as there were 350 photos).

The towns and villages are completely different from anything I have seen so far. No cars (besides police/emergency services/army/tourist buses) – but many people in the street walking or cycling. There were some buses in a yard but I have not seen public transport in operation. It seemed the convoy of 14 tourist buses was an attraction for the local people…

I have learned that the first metal movable type is from Korea – about 200 years before Gutenberg. Such a metal type is exhibited in North Korea, and in the display there is a magnifying glass in front of the letter – pretty hard to take a picture of…

ISUVR 2008, program day2

Norbert Streitz – Trade-off for creating smartness

Norbert gave an interesting overview of research in the domain of ubicomp based on his personal experience – from Xerox PARC to the disappearing computer. He motivated the transition from Information Design to Experience Design. Throughout the work we see a trade-off between providing “smart support” to the user and “privacy” (or control over privacy). One of the questions is whether we will re-invent privacy or whether it will become a commodity…
As one of the concrete examples Norbert introduced the Hello.Wall done in the context of Ambient Agoras [1]. This again brought up the discussion of public vs. private with regard to the patterns that are displayed. (photos of some slides from Norbert’s talk)

[1] Prante, T., Stenzel, R., Röcker, C., Streitz, N., and Magerkurth, C. 2004. Ambient agoras: InfoRiver, SIAM, Hello.Wall. In CHI ’04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ’04. ACM, New York, NY, 763-764. DOI= http://doi.acm.org/10.1145/985921.985924 (Video Hello.Wall)
Albrecht Schmidt – Magic Beyond the Screen
I gave a talk on “Human Interaction in Ubicomp – Magic beyond the screen”, highlighting work on user interfaces beyond the screen that we have done over the last years. It is motivated by the facts that classical limitations in computer science (e.g. frame rate, processing, storage) are becoming less and less important in many application areas and that human-computer interaction is in many areas becoming the critical part of the system.
In my talk I suggested using “user illusion” as a design tool for user interfaces beyond the desktop. This involves two steps: 1) describe precisely the user illusion the application will create, and 2) investigate what parameters influence the quality of the created user illusion for the application. (photos of some slides from Albrecht’s talk, Slides in PDF)
Jonathan Gratch – Agents with Emotions

His talk focused on the domain of virtual reality, in particular learning/training applications. One central thing I learned is that the timing of non-verbal cues (e.g. nodding) is crucial for producing engagement when speaking with an agent. This may also be interesting for other forms of computer-created feedback.
He gave a specific example of how assigning blame works. It was really interesting to see that there are solid theories in this domain that can be concretely used to design novel interfaces. He argues that appraisal theory can explain people’s emotional states and that this could improve context-awareness.

He showed an example of emotional dynamics and it is amazing how fast emotions happen. One way of explaining this is to look at different dynamics: dynamics in the world, dynamics in the perceived world relationship, and dynamics through action. (photos of some slides from Jonathan’s talk)
Daijin Kim – Vision based human robot interaction
Motivated by the vision that after the personal computer we will see the “Personal Robot”, Daijin investigates natural ways to interact with robots. For vision-based interaction with robots he named a set of difficulties, in particular: people are moving, robots are moving, and illumination and distances are variable. The proposed approach is to generate a pose-, expression-, and illumination-specific active appearance model.
He argues that face detection is a basic requirement for vision-based human-robot interaction. The examples he showed in the demo movie were very robust with regard to movement, rotation, and expression, and it works for very variable distances. The talk contained further examples of fast face recognition and recognition of simple head gestures. Related to our research, it seems that such algorithms could be really interesting for creating context-aware outdoor advertisements. (photos of some slides from Daijin’s talk)

Steven Feiner – AR for prototyping UIs

Steven showed some work on mobile projector and mobile device interaction, where they used augmented reality for prototyping different interaction methods. He introduced spotlight (position-based interaction), orientation-based interaction and widget-based interaction for an arm-mounted projector. Using the Synaptics touchpad and projection may also be an option for our car-UI-related research. For interaction with a wrist device (e.g. a watch) he introduced string-based interaction, which is a simple but exciting idea. You pull a string out of a device, and the distance as well as the direction are the resulting input parameters [2].
In a further example Steven showed a project that supports field work on the identification of plants: capturing an image of the real leaf, comparing it with a database, and matching it against a subset with similar features. Their prototype was done on a tablet and he showed ideas on how to improve this with AR; it is very clear that this may also be an interesting application (for the general user) on the mobile phone.

New interfaces and in particular gestures are hard to explore – if you have no idea what is supported by the system. In his example on visual hints for tangible gestures using AR [3] Steven showed interesting options in this domain. One approach follows a “preview style” visualization – they call it ghosting. (photos of some slides from Steven’s talk)

[2] Blasko, G., Narayanaswami, C., and Feiner, S. 2006. Prototyping retractable string-based interaction techniques for dual-display mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). R. Grinter, T. Rodden, P. Aoki, E. Cutrell, R. Jeffries, and G. Olson, Eds. CHI ’06. ACM, New York, NY, 369-372. DOI= http://doi.acm.org/10.1145/1124772.1124827
[3] White, S., Lister, L., and Feiner, S. Visual Hints for Tangible Gestures in Augmented Reality. Proc. ISMAR 2007, IEEE and ACM Int. Symp. on Mixed and Augmented Reality, Nara, Japan, November 13-16, 2007. (YouTube video)

If you are curious about the best papers, please see the photos from the closing 🙂

Finally some random things to remember:

  • Richard W. DeVaul did some work on subliminal user interfaces – working towards the vision of zero-attention UIs [4]
  • Jacqueline Nadel (development psychologist) did studies on emotions between parents and infants using video conferencing
  • V2 – Toward a Universal Remote Console Standard http://myurc.org/whitepaper.php
  • iCat and Gaze [5]

[4] Richard W. DeVaul. The Memory Glasses: Wearable Computing for Just-in-Time Memory Support. PhD Thesis. MIT 2004. http://devaul.net/~rich/DeVaulDissertation.pdf

[5] Poel, M., Breemen, A.v., Nijholt, A., Heylen, D.K., & Meulemans, M. (2007). Gaze behavior, believability, likability and the iCat. Proceedings Sixth Workshop on Social Intelligence Design: CTIT Workshop Proceedings Series (pp. 109–124). http://www.vf.utwente.nl/~anijholt/artikelen/sid2007-1.pdf

ISUVR 2008, program day1

The first day of the symposium was exciting and we saw a wide range of contributions, from context-awareness to machine vision. In the following I have a few random notes on some of the talks…

Thad Starner, new idea on BCI
Thad Starner gave a short history of his experience with wearable computing. He argued that common mobile keyboards (e.g. mini-QWERTY, multi-tap, T9) are fundamentally not suited for real mobile tasks. He showed the studies of typing with the Twiddler – the data is impressive. He is arguing for chording keyboards, and generally he suggests that “typing while walking is easier than reading while walking”. I buy the statement, but I still think that the cognitive load created by the Twiddler keeps it from being generally suitable. He also showed a very practical idea of how errors on mini-keyboards can be reduced using text prediction [1] – that relates to the last exercise we did in the UIE class. (photos of some slides from Thad’s talk)

He suggested a very interesting approach to “speech recognition” using EEG. The basic idea is that people use sign language (either really moving their hands or just imagining moving their hands) and that the signals of the motor cortex are measured using a brain interface. This is so far the most convincing idea for a human-computer brain interface that I have seen… I am really curious to see the results of Thad’s study! He also suggested an interesting idea for sensors – using a similar approach as in hair replacement technology (I have no idea about this so far, but I should probably read up on it).

[1] Clawson, J., Lyons, K., Rudnick, A., Iannucci, R. A., and Starner, T. 2008. Automatic whiteout++: correcting mini-QWERTY typing errors using keypress timing. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 573-582. DOI= http://doi.acm.org/10.1145/1357054.1357147

Anind Dey – intelligible context
Anind provided an introduction to context-awareness. He characterized context-aware applications as situationally appropriate applications that adapt to context and eventually increase the value for the user. Throughout the talk he made a number of convincing cases that context has to be intelligible to the users, otherwise problems arise when the systems guess wrong (and they will get it wrong sometimes).

He showed an interesting example of how data collected from a community of drivers (in this case cab drivers) is useful for predicting the destination and the route. These examples are very interesting and show great potential for learning and context prediction from community activity. I think sharing information beyond location may have many new applications.
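To make the idea concrete for myself, here is a hypothetical toy sketch of the counting approach behind such destination prediction – the road segments, destinations and the simple prefix matching are made up for illustration and are not taken from the talk:

    # Predict the most likely destination from the route driven so far,
    # by counting destinations of matching trips logged by a community of drivers.
    from collections import Counter
    from typing import List, Tuple

    # each logged trip: (sequence of road segments, destination) -- invented data
    community_trips: List[Tuple[List[str], str]] = [
        (["A", "B", "C", "D"], "airport"),
        (["A", "B", "E"], "harbour"),
        (["A", "B", "C", "F"], "airport"),
    ]

    def predict_destination(route_so_far: List[str]) -> str:
        """Count destinations of all trips whose prefix matches the route so far."""
        counts = Counter(
            dest for segments, dest in community_trips
            if segments[: len(route_so_far)] == route_so_far
        )
        return counts.most_common(1)[0][0] if counts else "unknown"

    print(predict_destination(["A", "B", "C"]))  # -> "airport"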
In one study they use a windscreen-projected display (probably a HUD – I have to follow up on this). We should find out more about it, as we are looking into such displays ourselves for one of the ongoing master projects. (photos of some slides from Anind’s talk)

Vincent Lepetit – object recognition is the key for tracking
Currently most work in computer vision uses physical sensors or visual markers. The vision, however, is really clear – just do the tracking based on natural features. In his talk he gave an overview of how close we are to this vision. He showed examples of markerless visual tracking based on natural features. One is a book – which really looks like a book with normal content and no markers – that has an animated overlay.
His take-away message was “object recognition is the key for tracking” and it is still difficult. (photos of some slides from Vincent’s talk)

Jun Park – bridge the tangibility gap
In his talk he discussed the tangibility gap in design – in different stages of the design and the design evaluation it is important to feel the product. He argues that rapid prototyping using 3D printing is not well suited, especially as it is comparatively slow and it is very difficult to render material properties. His alternative approach is augmented foam: a visually non-realistic but tangible foam mock-up combined with augmented reality techniques. Basically the CAD model is rendered on top of the foam.

The second part of the talk was concerned with e-commerce. The basic idea is that users can overlay a product into their own environment, to experience the size and how well it matches the place. (photos of some slides from Jun’s talk)

Paper Session 1 & 2

For the paper sessions see the program and some photos from the slides.
photos of some slides from paper session 1
photos of some slides from paper session 2

GIST, Gwangju, Korea

Yesterday I arrived in Gwangju for the ISUVR-2008. It is my first time in Korea and it is an amazing place. Together with some of the other invited speakers and PhD students we went for a Korean style dinner (photos from the dinner). The campus (photos from the campus) is large and very new.

This morning we had the opportunity to see several demos from Woontack’s students in the U-VR lab. There is a lot of work on haptics and mobile augmented reality going on. See the pictures of the open lab demo for yourself…

In the afternoon we had some time for culture and sightseeing – the countryside parks are very different from those in Europe. Here are some of the photos of the trip around Gwangju, and see http://www.damyang.go.kr/

In 2005 Yoosoo Oh, a PhD student with Woontack Woo at GIST, was a visiting student in our lab in Munich. We worked together on issues related to context awareness and published a paper together discussing the whole design cycle and in particular the evaluation (based on a heuristic approach) of context-aware systems [1].

[1] Yoosoo Oh, Albrecht Schmidt, Woontack Woo: Designing, Developing, and Evaluating Context-Aware Systems. MUE 2007: 1158-1163

Photos – ISUVR2008 – GIST – Korea

Invited Lecture at CDTM, how fast do you walk?

Today I was at CDTM in Munich (http://www.cdtm.de/) to give a lecture introducing Pervasive Computing. It was a great pleasure to be invited again after last year’s visit. We discussed no less than how new computing technologies are going to change our lives and how we as developers are going to shape parts of the future. As everyone is aware there are significant challenges ahead – one is personal travel, and I invited students to join our summer factory (basically setting up a company / team to create a new mobility platform). If you are interested, too, drop me a mail.

Over lunch I met with Heiko to discuss the progress of his thesis and to fish for new topics, as they often come up when writing 😉 To motivate some parts of his work he looked at behavioral research that describes how people use their eyes in communication. In [1] interesting aspects of human behavior are described and explained. I liked the page (251) with the graphs on walking speed as a function of the size of the city (the bigger the city the faster people walk – it includes an interesting discussion of what this effect is based on) and on the eye contact made depending on gender and size of town. This can provide insight for some projects we are working on. Many of the results are not surprising – but it is often difficult to pinpoint the reference (at least for a computer science person), so this book may be helpful.

[1] Irenäus Eibl-Eibesfeldt. Die Biologie des menschlichen Verhaltens: Grundriss der Humanethologie. Blank, 5th edition, December 2004.

Is it easier to design for touch screens if you have poor UI designers?

Flying back from Sydney with Qantas and now flying to Seattle with Lufthansa, I had two long-distance flights on which I had the opportunity to study (n=1, subject=me, plus over-shoulder observation while walking up and down the aisle 😉 ) the user interfaces of the in-flight entertainment systems.

The two systems have very different hardware and software designs. The Qantas infotainment system uses a regular screen, and interaction is done via a wired movable remote control stored in the armrest. The Lufthansa system uses a touch screen (it also has some hard buttons for volume in the armrest). Overall the Qantas system offered more content (more movies, more TV shows), including real games.

The Qantas system seemed very well engineered and the remote control UI was well suited for playing games. Nevertheless the basic operations (selecting movies etc.) seemed more difficult using the remote control compared to the touch screen interface. In contrast, the Lufthansa system seems to have much room for improvement (button size, button arrangement, reaction times of the system), but it appeared very easy to use.

So here are my hypotheses:

Hypothesis 1: if you design (public) information or edutainment systems (excluding games), using a touch screen is a better choice than using an off-screen input device.

Hypothesis 2: with a UI design team of a given ability (even a bad UI design team) you will create a significantly better information or edutainment system (excluding games) if you use a touch screen rather than an off-screen input device.

From the automotive domain we have some indications that good off-screen input devices are really hard to design so that they work well (e.g. built-in car navigation systems). Probably I should find a student to prove it (with n much larger than 1 and subjects other than me).

PS: the Lufthansa in-flight entertainment runs on Windows CE 5.0 (the person in front of me mainly had an empty desktop with the Win CE logo showing) and it boots over the network (which takes over 6 minutes).

Fight for attention – changing cover display of a magazine

Attention is precious and there is a clear fight for it. This is very easy to observe on advertising boards and in news shops. Coming back from Berlin I went into a newsagent in Augsburg to get a newspaper – and while not really looking at the magazines I still discovered, from the corner of my eye, an issue of FHM with a changing cover page. Technically it is very simple: a lenticular lens presents an image depending on the viewing angle – alternating between three pictures, one of which is a full-page advert (for details on how it works see lenticular printing in Wikipedia). A similar approach has already been used in various poster advertising campaigns – showing different pictures as people walk by (http://youtube.com/watch?v=0dqigww4gM8, http://youtube.com/watch?v=iShPBmtajH8). One could also create a context-aware advert, showing different images to small and tall people 😉

In outdoor advertising we see the change to active displays happening at the moment. I am really curious when the first truly active cover pages on magazines will emerge – thinking of ideas in context-awareness, the possibilities seem endless. However, it is really a question whether electronic paper will become cheap enough before we move to completely electronic reading. Another issue (even with the current version of the magazine) is recycling – which becomes much more difficult when further materials are mixed with the paper.

Poor man’s location awareness

Over the last days I have experienced that very basic location information in the display can already provide a benefit to the user. Being in Sydney for the first time, I realized that the network information on my GSM phone is very reliable for telling me when to get off the bus – obviously it is not fine-grained location information, but so far it has always been within walking distance. At some locations (such as Bondi Beach) visual pattern matching works very well, too 😉 And when to get off the bus seems to be a concern for many people (just extrapolating from the small sample I had over the last days…).

In the pervasive computing class I currently teach we recently covered different aspects of location-based systems – by the way, good starting points on the topic are [1] and [2]. We discussed issues related to visual pattern matching – and when looking at the skyline of Sydney one very quickly becomes aware of the potential of this approach (especially with all the tagged pictures on Flickr), but at the same time the complexity of matching from arbitrary locations becomes apparent.

Location awareness offers many interesting questions and challenging problems – it looks like there are ideas for project and thesis topics, e.g. how semantic location information (even of lower quality) can be beneficial to users, or fingerprinting based on radio/TV broadcast information.
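As a note to self, here is a tiny hypothetical sketch of the fingerprinting idea – record received signal strengths of a few broadcast stations at known places, then locate yourself by finding the closest stored fingerprint. The station IDs, places and numbers are invented for illustration; a real system would need a proper survey and better matching:

    # Nearest-neighbor matching of signal-strength fingerprints (toy example).
    import math
    from typing import Dict

    fingerprints: Dict[str, Dict[str, float]] = {
        "Bondi Beach":   {"station1": -60.0, "station2": -85.0, "station3": -90.0},
        "Circular Quay": {"station1": -80.0, "station2": -65.0, "station3": -70.0},
    }

    def locate(measurement: Dict[str, float]) -> str:
        """Return the place whose fingerprint is closest to the current measurement."""
        def distance(fp: Dict[str, float]) -> float:
            keys = fp.keys() & measurement.keys()
            return math.sqrt(sum((fp[k] - measurement[k]) ** 2 for k in keys))
        return min(fingerprints, key=lambda place: distance(fingerprints[place]))

    print(locate({"station1": -62.0, "station2": -83.0, "station3": -88.0}))  # Bondi Beach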

[1] J. Hightower and G. Borriello. Location systems for ubiquitous computing. IEEE Computer, 34(8):57–66, Aug. 2001. http://www.intel-research.net/seattle/pubs/062120021154_45.pdf

[2] Jeffrey Hightower and Gaetano Borriello. Location Sensing Techniques. UW-CSE-01-07-01.

Mensch und Computer program committee meeting in Lübeck

Yesterday night I flew to Hamburg and traveled on to Lübeck – a quite nice town in the north of Germany – for the program committee meeting of Mensch und Computer 2008. This morning I got up a little earlier to walk around the city, as it was my first visit. The building I saw when walking into town, however, was oddly familiar; after some moments I recalled that it is the Holstentor, which was pictured on the 50 DM note (DM = German Mark – the money used in Germany until 2001, when we exchanged it for the Euro ;-).

In the meeting we discussed a large number of submissions made to Mensch und Computer 2008. It seems there are quite a few interesting papers in the program, which makes the conference worthwhile. We will also run the second edition of our workshop on Automotive User Interfaces and Interactive Applications. The automotive workshop we ran in 2007 in Weimar was, with about 30 participants, very successful.

CHI Conference in Florence

On Sunday afternoon I flew to Florence and we met up in the evening with former colleagues – CHI always feels like a school reunion 😉 and it is great to get first-hand reports on what everyone is currently working on. On the plane I met Peter Thomas (editor of the Ubiquitous Computing Journal) and we talked about the option of a special issue on automotive…

We have rented a house in the Tuscan mountains together with Antonio’s group and collaborators from BMW Research and T-Labs. Even though we have to commute into Florence every day, it is just great that we have our “own” house – and it is much cheaper (but we have to do our own dishes).

The conference is massive – 2300 people. There is a lot of interesting work and hence it is not feasible to cover it in a few sentences. Nevertheless there are some random pointers:

In the keynote a reference to an old reading machine by Athanasius Kircher was mentioned.

Mouse Mischief – educational software – 30 mice connected to 1 PC – cool!

Reality based interaction – conceptual paper – arguing that things should behave as in the real world – interesting concept bringing together many new UI ideas

Inflatable mouse – cool technology from Korea – interesting use cases – we could integrate this in some of our projects (not inflating the mouse but inflating other things)

Multiple Maps – Synthesizing many maps – could be interesting for new navigation functions

Rub the Stane – interactive surfaces – detection of scratching noises only using a microphone

Usability evaluation considered harmful – the yearly discussion on how to make CHI more interesting continues

It seems there is currently some work going on looking at technologies in religious practice. Over lunch we developed interesting ideas towards remote access to multimedia information (e.g. services of one’s local church) and sharing awareness. This domain is intriguing because churches often form tight communities and content is regularly produced and available. Perhaps we should follow up on this with a project…

Diary study on mobile information needs – good base literature on what information people need/use when they are mobile

K-Sketch – cool sketching technique.

Crowdsourcing user studies – reminded me of my visit at http://humangrid.eu

Lean and Zoom – simple idea – you come close it gets bigger – nicely done

Application Workshop of KDUbiq in Porto

After having frost and snow yesterday morning in Germany, being in Porto (Portugal) is quite a treat. The KDubiq application workshop runs in parallel to the summer school, and yesterday evening it was interesting to meet up with some of the people teaching there.

The more I learn about data mining and machine learning, the more I see even greater potential in many ubicomp application domains. In my talk “Ubicomp Applications and Beyond – Research Challenges and Visions” I looked back at selected applications and systems that we have developed over the last 10 years (have a look at the slides – I, too, was surprised at the variety of projects we did over the last years ;-). So far we have often used only basic machine learning methods in our implementations – in many cases, creating a version 2 of these systems, where machine learning research is brought together with ubicomp research and new technology platforms, could make a real difference.

Alessandro Donati from ESA gave a talk, “Technology for challenging future space missions”, which introduced several challenges. He explained their approach to introducing technology into mission control. The basic idea is that the technology providers create a new application or tool together with the users. He strongly argued for a user-centred design and development process. It is interesting to see that the concept of user-centred development processes is becoming more widespread and goes beyond classical user interfaces into complex system development.

Reset/reboot is ubiquitous – or my worst train ride so far

What have we learned to do when our computer or phone does not work anymore? Easy – just reboot it. A colleague recently told me his rental car broke down (it basically did not work anymore), but after resetting it, it worked fine again. When he told me I found this pretty strange – OK, the radio or opening the car boot – but essential functions related to driving?

Today I was travelling on an ICE high-speed train to Amsterdam for the CHI Notes committee meeting, and shortly after we left Germany the train lost speed and just rolled to a stop. Then came an interesting announcement: “Sorry, it seems we are not getting power anymore – but we think it is not a big problem. We will reset the train and then we will be on our way again.” The reboot did not work 🙁 so they told us we needed another engine. Perhaps there was more to reboot (e.g. the nationwide train power grid?)…

Extrapolating into the future, I can imagine a lot of things we will need to reboot, e.g. your shoes, your furniture, your house, your augmented senses, and your implants – or should we take more care in developing things?

At some point they decided we could not wait on the train, and we had to get off outside the station (using a small ladder) while it was pouring with rain. They then left us waiting for 2 hours (in the rain) – basically until we found ourselves another means of transport (overall delay about 5 hours). This made me realize that a Nokia N95 with GPS is probably really useful while travelling – if I had had it with me, I could have called a taxi to where I was 😉

More about train rides… Some more traditional technologies, however, work very well – this week I was already stuck once on a train where a passenger pulled the emergency brake and got off the train – somewhere in the middle of nowhere…

In Search of Excellence

At the Fraunhofer retreat in Westerburg we had very interesting discussions on research and research strategies in computer science. The span of excellent research in computer science is enormous, ranging from theoretical work (e.g. math-style proofs), to engineering-type work (e.g. systems), to experimental and empirical work (e.g. studies). This makes it really challenging to find a common notion of “excellent research”. It reminds me of an interesting book which I started to read (recommended to me at the retreat): In Search of Excellence: Lessons from America’s Best-Run Companies by Robert H. Waterman et al. – so far it is really interesting. However, everything in management seems really straightforward on paper – but in my experience, in the real world it always comes down to people.

Guest course at the University of Linz, MSc Pervasive Computing

I am teaching a guest course at the University of Linz in the Pervasive Computing master program. The topic is Unconventional User Interaction – User Interfaces in a Pervasive Computing World (http://www.ubicomp.net/uui). Today we started with an introduction to motivate how pervasive computing changes human computer interaction. I am already looking forward to the projects!

At dinner I learned why you can never have enough forks in a good restaurant. In case you lose the pen for your mobile phone, a fork will do… The topic of the lecture is everywhere!

Object with included sensing

I often wonder why one would want to include sensing in other objects. It seems, however, that there is a tradition, and it has its roots before the digital age 🙂

The pencil case has a thermometer included. The function is that pupils can figure out when they get the rest of the day off due to high temperatures (Hitzefrei). I am not convinced that it was a great seller…

Museum Audio Guides – is there a way to make this a good experience?

We visited the archeology and Stone Age museum in Bad Buchenau http://www.federseemuseum.de/. For our visit we rented their audio guide system – they had one version for kids and one for adults. The audio guides were done very well and the information was well presented.

Nevertheless such devices break the joint experience of visiting a museum! We had three devices – and we stood next to each other listening, but not talking to each other. Even though it may convey more information than the written signs, it makes for a poorer experience than reading and discussing. I wonder how one would design a good museum guide… There are plenty of projects, but so far I have not seen a great solution.

Wall-Sized Printed Adverts with Integrated Screen

At Zurich Airport Orange and Nokia are running a large printed advert. At first glance it looks just like a printed large-scale poster. The TV screen in one poster and the projected writing on top of another poster are seamlessly integrated. The media design of the overall installation is appealing.

The active screen (it could be a 50-inch plasma TV) represents the screen of the mobile phone and shows the navigation application. In contrast to most other installations where screens and printed posters are combined, this one appears right and catches people’s attention.

There is work from Scott Klemmer’s group at Stanford that looks at the relationship between printed displays and projection/displays for various applications. The Gigaprints project was shown as a video at Ubicomp 2006.

Visit to the Wearable Computing Lab at ETH Zurich

I was at ETH Zurich for the PhD defence of Nagendra Bhargava Bharatula. His thesis is on context-aware wearable nodes and in particular on the trade-offs in design and the design space of these devices.

The tour of Prof. Tröster’s lab was very impressive. It is a very active group and probably one of the largest groups worldwide doing research in wearable computing. It seems that wearable computing is getting more real; many scenarios and demonstrators are much more realistic and useful than several years ago.

In the Backmanager project Corinne Mattmann works on a shirt that measures body posture. Using stretch sensors made of elastic threads, which are fixed to the fabric with silicone, they can measure several different body postures. The material is really interesting (probably done by http://www.empa.ch/) and I think such technologies will open up many new opportunities. (further reading: Design Concept of Clothing Recognizing Back Postures; C. Mattmann, G. Tröster; Proc. 3rd IEEE-EMBS International Summer School and Symposium on Medical Devices and Biosensors (ISSS-MDBS 2006), Boston, September 4-6, 2006)

The SEAT project (Smart tEchnologies for stress free Air Travel) looks into the integration of sensing into an airplane seat set-up. Having seats in a real set-up allows easy testing of ideas and realistic testing in early phases of the project. This setup made me think again about an automotive setup in my next lab.

Visiting the pervasive computing labs @ Johannes Kepler University in Linz


It is always great to visit the pervasive computing labs in Linz – there is always new and cool research to see. Looking at my Google News alert, it seems that the term “pervasive” is dominated by Alois 🙂

Alois Ferscha showed me their interaction cube. It is a really interesting piece of research, and the background and the argument based on the kinematics of the hand show deep insight. There are some slides on the Telekom Austria Cube that are worth looking at. It is interesting that he has successfully gone the full cycle from concept to product (the image is taken from the slide show).

We talked about location systems and what options are available on the market. In Linz they have one room with high-accuracy tracking based on an array of InterSense systems. Our experience in Bonn with the Ubisense system has been mixed so far. Perhaps there are different technologies to come (or we have to develop them).

PhD defence of Mario Pichler in Linz

This morning Mario Pichler defended his PhD dissertation at the Johannes Kepler University in Linz. One central theme he investigated in his research was how innovation happens between technology push and application/market pull. It is scientifically a very interesting argument when looking at applied research. However, considering successful products, especially products from Asia and in particular Korea, it appears that focusing on technology innovation can be a strong and successful strategy.

In the ubicomp community it seems that technology-driven projects are viewed very critically and that there is a perceived need to justify ubicomp research with applications. The argument for this is simple – if we let technology drive development we end up with things nobody needs. But I believe in this argument less and less – many of the things we use daily (phone, SMS, internet, cars) are there because technology created the need; we did not really need them beforehand. Obviously there is a need for communication, entertainment and mobility, but this is abstract, and the concrete technologies used cannot easily be deduced directly from it.

In Austria they have a general exam as part of the PhD viva. I learned something about the history of the term dead reckoning (see http://en.wikipedia.org/wiki/Dead_reckoning for a discussion of the etymology of the term).

acatech workshop: object in context

It was interesting to see that smart objects / smart object services, context, NFC, and RFID have become very mainstream. It seems that nearly everyone buys into these ideas now.

Dr. Mohsen Darianian (from Nokia Research, in the same building as Paul Holleis is at the moment) showed an NFC advert video which reminded me of the results of an exercise we did on concept videos in an HCI class at the University of Munich 🙂

Overall it seems that acceptance and business models are of great interest and that a lot of technical insight is required to create them. Issues related to user interfaces, interaction, and experience become central factors for the success of products and services.

One discussion was on the motivation for people to contribute (e.g. creating user-generated content, writing open source code, answering questions in forums, blogging). Understanding this seems crucial for predicting whether or not an application is going to fly.

Besides contributing for a certain currency (e.g. fame, status, money, access to information), it seems that altruism may be an interesting factor for motivating potential users. Even if it applies only to a low percentage of our species, the absolute number on a worldwide scale could still be enough to drive a certain application/service. There is interesting research on altruism in the animal world (see the researcher’s page http://email.eva.mpg.de/~warneken/ ) – maybe we should look more into this and re-think some basic assumptions about business models?

Our break-out group was in the rooms of the Institute of Electronic Business e.V. (http://www.ieb.net/). It is a very pleasant environment and their link to the art school reflects very positively on the atmosphere and projects. The hand-drawn semacodes were really impressive.

Workshop dinner, illuminated faucet, smart sink

I first saw a paper about a context-aware sink at CHI 2005 (Bonanni, L., Lee, C.H., and Selker, T. “Smart Sinks: Real World Opportunities for Context-Aware Interaction.” Short paper in proceedings of Computer Human Interaction (CHI) 2005, Portland, OR).

Yesterday I saw an illuminated faucet in the wild – one which looked really great in terms of design (in the restaurant they even had flyers advertising the product). But after using it I was really disappointed. It uses the concept of color illumination of the water based on temperature (red = hot, blue = cold).

The main issue I see with the user experience is that the visualization is not based on the real temperature measured by a sensor, but on the setting of the tap. Hence, when you switch to hot, the visualization is immediately red – even though the water is initially cold :-(
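If I were to build it, the color would come from the sensed water temperature rather than the tap setting. A tiny illustrative sketch of that mapping (my own, certainly not the product’s code; the threshold values are made up):

    # Derive the illumination color from the measured water temperature,
    # so the light only turns red once the water actually is hot.
    def color_from_temperature(temp_celsius: float) -> str:
        if temp_celsius >= 40.0:
            return "red"      # actually hot
        if temp_celsius <= 25.0:
            return "blue"     # actually cold
        return "purple"       # in between, while the hot water is still arriving

    # Tap is set to "hot", but the water in the pipe is still cold:
    print(color_from_temperature(18.0))  # -> "blue", matching what the user feels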

Conclusion: a nice research idea some time ago; a business person saved a few cents on the sensor and wiring and created a product with great aesthetics and a poor user experience. Hence I left the leaflet with the ordering address there – I don’t want to have it.

Public Displays – Making Life More Predictable

On my way home from Toronto it was surprising how many public displays I saw that provided me with “information about the future”, e.g. telling me when I will run out of time to cross the road, when the next train is due, or when my luggage will arrive. These kinds of predictions or contexts are simple to gather and easy to present, and best of all: the human is in control and can act on the information. Overall it is reassuring, even if the context information is wrong (that is another story, about my luggage ;-).

Pervasive 2007 in Toronto

The international conference on pervasive computing in Toronto had an exciting program.

The keynote was by Adam Greenfield on “Everyware: Some Social and Ethical Implications of Ubiquitous Computing” – matching a number of issues we discussed the day before at the doctoral colloquium. The talk was enjoyable, even though I think some of the statements made, in particular with regard to opting out and informing the users (e.g. logos), are over-simplified. Furthermore, the fact that our society and its values are changing was hardly reflected, e.g. privacy is not a constant.

The best paper (by Rene Mayrhofer and Hans Gellersen), “Shake well before use: Authentication based on accelerometer data”, was my favourite, too. A further very interesting paper was “Inference Attacks on Location Tracks” by John Krumm. Two papers from ETH Zürich were also quite interesting: “Operating Appliances with Mobile Phones – Strengths and Limits of a Universal Interaction Device” by Christof Roduner et al. showed surprising results for the use of phones as remote controls (in short – more usable than one thinks). And “Objects Calling Home: Locating Objects Using Mobile Phones” by Christian Frank et al. showed that phones have great utility as sensors (in this case to find lost objects).

In the video proceedings we presented the smart transport container and a novel supply chain scenario (cutting out all intermediaries and enabling producer-to-customer transactions).

The tutorial day was excellent – I think the set of tutorials presented can give a good frame for preparing a course or lecture on pervasive computing.

Pervasive 2008 will be in Australia!