Trip to North Korea

[see the whole set of photos from the tour to North Korea]

From Gwangju we took the bus shortly after midnight to go on a trip to North Korea. The students did a great job organizing ISUVR and the trip. It was great to have some time again to talk to Yoosoo Oh, who was a visiting researcher in our group in Munich.

When entering North Korea there are many rules, including that you are not allowed to take cameras with tele-lenses over 160mm (so I could only take the 50mm lens) and you must not bring mobile phones and MP3 players with you. Currently cameras, phones and MP3 players are visible to the human eye and easy to detect in an x-ray. But it does not take much imagination to see, in a few years, extremely small devices that are close to impossible to spot. I wonder how this will change such security precautions and whether it will still be possible in 10 years to isolate a country from access to information. I doubt it…

The sightseeing was magnificent – see the photos of the tour for yourself. We went on the Kaesong tour (see http://www.ikaesong.com/ – in Korean only). It is hard to tell how much of the real North Korea we really saw. And the photos only reflect a positive selection of motifs (leaving out soldiers, people in town, ordinary buildings, etc., as it is explicitly forbidden to take photos of those). I was really surprised that when leaving the country they check ALL the pictures you took (in my case it took a little longer, as I had 350 photos).

The towns and villages are completely different from anything I have seen so far. No cars (besides police/emergency services/army/tourist buses) – but many people in the street walking or cycling. There were some buses in a yard but I have not seen public transport in operation. It seemed the convoy of 14 tourist buses was an attraction for the local people…

I learned that the first metal movable type is from Korea – about 200 years before Gutenberg. Such a metal type is exhibited in North Korea, and in the display there is a magnifying glass in front of the letter – pretty hard to take a picture of…

ISUVR 2008, program day 2

Norbert Streitz – Trade-off for creating smartness

Norbert gave an interesting overview of research in the domain of ubicomp based on his personal experience – from Xerox PARC to the disappearing computer. He motivated the transition from Information Design to Experience Design. Throughout the work we see a trade-off between providing “smart support” to the user and “privacy” (or control over privacy). One of the open questions is whether we will re-invent privacy or whether it will become a commodity…
As one of the concrete examples Norbert introduced the Hello.Wall done in the context of Ambient Agoras [1]. This again brought up the discussion of public vs. private with regard to the patterns that are displayed. (photos of some slides from Norbert’s talk)

[1] Prante, T., Stenzel, R., Röcker, C., Streitz, N., and Magerkurth, C. 2004. Ambient agoras: InfoRiver, SIAM, Hello.Wall. In CHI ’04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ’04. ACM, New York, NY, 763-764. DOI= http://doi.acm.org/10.1145/985921.985924 (Video Hello.Wall)
Albrecht Schmidt – Magic Beyond the Screen
I gave a talk on “Human Interaction in Ubicomp – Magic beyond the screen”, highlighting work on user interfaces beyond the screen that we did over the last years. It is motivated by the facts that classical limitations in computer science (e.g. frame rate, processing, storage) are becoming less and less important in many application areas and that human-computer interaction is becoming the critical part of the system in many areas.
In my talk I suggested using “user illusion” as a design tool for user interfaces beyond the desktop. This involves two steps: 1) describe precisely the user illusion the application will create, and 2) investigate which parameters influence the quality of the created user illusion for the application. (photos of some slides from Albrecht’s talk, Slides in PDF)
Jonathan Gratch – Agents with Emotions

His talk focused on the domain of virtual reality with a focus on learning/training applications. One central thing I learned is that the timing of non-verbal cues (e.g. nodding) is crucial to produce engagement when speaking with an agent. This may also be interesting for other forms of computer-created feedback.
He gave a specific example of how assigning blame works. It was really interesting to see that there are solid theories in this domain that can be concretely used to design novel interfaces. He argues that appraisal theory can explain people’s emotional states, and this could improve context-awareness.

He showed an example of emotional dynamics, and it is amazing how fast emotions happen. One way of explaining this is to look at different dynamics: dynamics in the world, dynamics in the perceived world relationship, and dynamics through action. (photos of some slides from Jonathan’s talk)
Daijin Kim – Vision based human robot interaction
Motivated by the vision that after the personal computer we will see the “Personal Robot”, Daijin investigates natural ways to interact with robots. For vision-based interaction with robots he named a set of difficulties, in particular: people are moving, robots are moving, and illumination and distances are variable. The proposed approach is to generate a pose-, expression-, and illumination-specific active appearance model.
He argues that face detection is a basic requirement for vision-based human robot interaction. The examples he showed in the demo movie were very robust with regard to movement, rotation, and expression, and worked over widely varying distances. The talk contained further examples of fast face recognition and recognition of simple head gestures. Related to our research, it seems that such algorithms could be really interesting for creating context-aware outdoor advertisement. (photos of some slides from Daijin’s talk)
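
Daijin’s specific models are of course not public, but the face-detection building block he describes can be illustrated with standard tools. A minimal sketch using OpenCV’s stock Haar cascade – a far simpler detector than the active appearance models in the talk; camera index and cascade file are the library defaults:

```python
# Minimal webcam face-detection loop with OpenCV's stock Haar cascade --
# a simple stand-in for the pose/illumination-specific models in the talk.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # scaleFactor/minNeighbors trade detection rate against false positives
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```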

Steven Feiner – AR for prototyping UIs

Steven showed some work on mobile projector and mobile device interaction, where they used augmented reality for prototyping different interaction methods. He introduced Spot-light (position-based interaction), orientation-based interaction, and widget-based interaction for an arm-mounted projector. Using the Synaptics touchpad and projection may also be an option for our car-UI related research. For interaction with a wrist device (e.g. a watch) he introduced string-based interaction, which is a simple but exciting idea: you pull a string out of a device, and the distance as well as the direction are the resulting input parameters [2].
In a further example Steven showed a project that supports field work on the identification of plants: capturing an image of the real leaf, comparing it with a database, and matching it against a subset that shares its features. Their prototype was done on a tablet, and he showed ideas on how to improve this with AR; it is very clear that this may also be an interesting application (for the general user) on the mobile phone.

New interfaces and in particular gestures are hard to explore – if you have no idea what is supported by the system. In his example on visual hints for tangible gestures using AR [3], Steven showed interesting options in this domain. One approach follows a “preview style” visualization – they call it ghosting. (photos of some slides from Steven’s talk)

[2] Blasko, G., Narayanaswami, C., and Feiner, S. 2006. Prototyping retractable string-based interaction techniques for dual-display mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). R. Grinter, T. Rodden, P. Aoki, E. Cutrell, R. Jeffries, and G. Olson, Eds. CHI ’06. ACM, New York, NY, 369-372. DOI= http://doi.acm.org/10.1145/1124772.1124827
[3] White, S., Lister, L., and Feiner, S. Visual Hints for Tangible Gestures in Augmented Reality. Proc. ISMAR 2007, IEEE and ACM Int. Symp. on Mixed and Augmented Reality, Nara, Japan, November 13-16, 2007. (youtube video)

If you are curious about the best papers, please see the photos from the closing 🙂

Finally some random things to remember:

  • Richard W. DeVaul did some work on subliminal user interfaces – working towards the vision of zero-attention UIs [4]
  • Jacqueline Nadel (development psychologist) did studies on emotions between parents and infants using video conferencing
  • V2 – Toward a Universal Remote Console Standard http://myurc.org/whitepaper.php
  • iCat and Gaze [5]

[4] Richard W. DeVaul. The Memory Glasses: Wearable Computing for Just-in-Time Memory Support. PhD Thesis. MIT 2004. http://devaul.net/~rich/DeVaulDissertation.pdf

[5] Poel, M., Breemen, A.v., Nijholt, A., Heylen, D.K., & Meulemans, M. (2007). Gaze behavior, believability, likability and the iCat. Proceedings Sixth Workshop on Social Intelligence Design: CTIT Workshop Proceedings Series (pp. 109–124). http://www.vf.utwente.nl/~anijholt/artikelen/sid2007-1.pdf

Korean Dinner – too many dishes to count

In the evening we had a great Korean dinner. I enjoyed it very much – and I imagine we saw everything people eat in Korea – at some point I lost count of the number of different dishes. The things I tasted were very delicious but completely different from what I typically eat.

Dongpyo Hong convinced me to try a traditional dish (pork, fish and kimchi) and it was very different in taste. I was not adventurous enough to try a dish that still moved (even though the movement was marginal – can you spot the difference in the picture?) – but probably I missed something, as Dongpyo Hong enjoyed it.

I took some photos at the conference dinner.

ISUVR 2008, program day 1

The first day of the symposium was exciting and we saw a wide range of contributions, from context-awareness to machine vision. In the following are a few random notes on some of the talks…

Thad Starner, new idea on BCI
Thad Starner gave a short history of his experience with wearable computing. He argued that common mobile keyboards (e.g. mini-QWERTY, multi-tap, T9) are fundamentally not suited to real mobile tasks. He showed the studies of typing with the Twiddler – the data is impressive. He argues for chording keyboards, and generally he suggests that “typing while walking is easier than reading while walking”. I buy the statement, but I still think that the cognitive load created by the Twiddler makes it not generally suitable. He also showed a very practical idea of how errors on mini-keyboards can be reduced using text prediction [1] – that relates to the last exercise we did in the UIE class. (photos of some slides from Thad’s talk)
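
The Automatic Whiteout++ classifier in [1] learns from keypress-timing features; the toy sketch below only illustrates the underlying intuition – an accidental press of a neighboring key tends to follow the intended key implausibly fast. The adjacency map and threshold are invented for illustration:

```python
# Toy illustration of timing-based error correction on mini-QWERTY keyboards
# (NOT the Automatic Whiteout++ algorithm [1]): an accidental press of an
# adjacent key typically follows the intended key after a very short interval.
NEIGHBORS = {"e": "wr", "r": "et", "t": "ry"}  # tiny excerpt of key adjacency
ROLL_ON_THRESHOLD_MS = 40  # assumed: below this, treat as accidental insertion

def drop_likely_insertions(events):
    """events: list of (char, timestamp_ms) key presses; returns cleaned text."""
    kept = []
    for i, (ch, t) in enumerate(events):
        if i > 0:
            prev_ch, prev_t = events[i - 1]
            too_fast = (t - prev_t) < ROLL_ON_THRESHOLD_MS
            adjacent = ch in NEIGHBORS.get(prev_ch, "")
            if too_fast and adjacent:
                continue  # probably brushed the neighboring key by accident
        kept.append(ch)
    return "".join(kept)

print(drop_likely_insertions([("t", 0), ("h", 120), ("e", 250), ("r", 270)]))
# -> "the" (the "r" arriving 20 ms after "e" is dropped as a likely slip)
```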

He suggested a very interesting approach to “speech recognition” using EEG. The basic idea is that people use sign language (either really moving their hands or just imagining moving their hands) and that the signals of the motor cortex are measured using a brain interface. This is so far the most convincing idea for a human-computer brain interface that I have seen… I am really curious to see the results of Thad’s study! He also suggested an interesting idea for sensors – using a similar approach as in hair replacement technology (I have no idea about this so far, but I probably should read up on it).

[1] Clawson, J., Lyons, K., Rudnick, A., Iannucci, R. A., and Starner, T. 2008. Automatic whiteout++: correcting mini-QWERTY typing errors using keypress timing. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 573-582. DOI= http://doi.acm.org/10.1145/1357054.1357147

Anind Dey – intelligible context
Anind provided an introduction to context-awareness. He characterized context-aware applications as situationally appropriate applications that adapt to context and eventually increase the value to the user. Throughout the talk he made a number of convincing cases that context has to be intelligible to the users, otherwise problems arise when the systems guess wrong (and they will get it wrong sometimes).

He showed an interesting example of how data collected from a community of drivers (in this case cab drivers) is useful to predict the destination and the route. These examples are very interesting and show a great potential for learning and context prediction from community activity. I think sharing information beyond location may have many new applications.
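
This is not Anind’s actual system, but the core idea of community-based destination prediction can be sketched as a simple first-order model over observed trips:

```python
# Hedged sketch of community-based destination prediction: count, over many
# drivers' trips, where journeys starting at a given place tend to end.
from collections import Counter, defaultdict

transitions = defaultdict(Counter)

def record_trip(origin, destination):
    transitions[origin][destination] += 1

def predict_destination(origin):
    """Most common destination for trips from this origin, with probability."""
    ends = transitions[origin]
    if not ends:
        return None, 0.0
    dest, n = ends.most_common(1)[0]
    return dest, n / sum(ends.values())

# community data, e.g. collected from a fleet of cabs
record_trip("airport", "downtown")
record_trip("airport", "downtown")
record_trip("airport", "convention_center")
print(predict_destination("airport"))  # -> ('downtown', 0.666...)
```
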
In one study they used a windscreen-projected display (probably a HUD – I have to follow up on this). We should find out more about it, as we are looking into such displays ourselves for one of the ongoing master projects. (photos of some slides from Anind’s talk)

Vincent Lepetit – object recognition is the key for tracking
Currently most work in computer vision uses physical sensors or visual markers. The vision, however, is really clear – just do the tracking based on natural features. In his talk he gave an overview of how close we are to this vision. He showed examples of markerless visual tracking based on natural features. One is a book – which really looks like a book with normal content and no markers – that has an animated overlay.
His take-away message was “object recognition is the key for tracking” – and it is still difficult. (photos of some slides from Vincent’s talk)
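
As a rough illustration of what natural-feature tracking starts from – not Vincent’s method – here is a minimal OpenCV sketch that matches keypoints between a reference image (e.g. the book cover) and a camera frame; a real tracker would go on to estimate a homography or pose from these matches. The image file names are placeholders:

```python
# First step of markerless tracking: match natural-feature keypoints between
# a reference image and a camera frame (sketch with OpenCV's ORB features).
import cv2

ref = cv2.imread("book_cover.jpg", cv2.IMREAD_GRAYSCALE)    # assumed file
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)  # assumed file

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(frame, None)

# Hamming distance suits ORB's binary descriptors; crossCheck prunes weak matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate correspondences; best distance "
      f"{matches[0].distance if matches else 'n/a'}")
```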

Jun Park – bridge the tangibility gap
In his talk he discussed the tangibility gap in design – in different stages of the design and the design evaluation it is important to feel the product. He argues that rapid prototyping using 3D printing is not well suited, especially as it is comparably slow and it is very difficult to render material properties. His alternative approach is augmented foam: a visually non-realistic but tangible foam mock-up combined with augmented reality techniques. Basically the CAD model is rendered on top of the foam.

The second part of the talk was concerned with e-commerce. The basic idea is that users can overlay a product onto their own environment, to experience its size and how well it matches the place. (photos of some slides from Jun’s talk)

Paper Session 1 & 2

For the paper sessions see the program and some photos from the slides.
photos of some slides from paper session 1
photos of some slides from paper session 2

GIST, Gwangju, Korea

Yesterday I arrived in Gwangju for ISUVR 2008. It is my first time in Korea and it is an amazing place. Together with some of the other invited speakers and PhD students we went for a Korean-style dinner (photos from the dinner). The campus (photos from the campus) is large and very new.

This morning we had the opportunity to see several demos from Woontack’s students in the U-VR lab. There is a lot of work on haptics and mobile augmented reality going on. See the pictures of the open lab demo for yourself…

In the afternoon we had some time for culture and sightseeing – the countryside parks are very different from those in Europe. Here are some of the photos of the trip around Gwangju; see also http://www.damyang.go.kr/

In 2005 Yoosoo Oh, a PhD student with Woontack Woo at GIST, was a visiting student in our lab in Munich. We worked together on issues related to context-awareness and published a paper together discussing the whole design cycle and in particular the evaluation (based on a heuristic approach) of context-aware systems [1].

[1] Yoosoo Oh, Albrecht Schmidt, Woontack Woo: Designing, Developing, and Evaluating Context-Aware Systems. MUE 2007: 1158-1163

Photos – ISUVR2008 – GIST – Korea

Embedded Information – Airport Seoul

When I arrived at the airport in Seoul I saw an interesting instance of embedded information. In Munich we wrote a workshop paper [1] about the concept of embedded information; the key criteria are:

  • Embedding information where and when it is useful
  • Embedding information in a most unobtrusive way
  • Providing information in a way that there is no interaction required

Looking at an active computer display (OK, it was broken) that circled the luggage belt (it is designed to list the names of people who should contact the information desk) and a fixed display on a suitcase, I was reminded of this paper. With this set-up people become aware of the information – without really making an effort. With active displays becoming more ubiquitous I expect more innovation in this domain. We currently work on some ideas related to situated and embedded displays for advertising – if we find funding we will push further… the ideas are there.
[1] Albrecht Schmidt, Matthias Kranz, Paul Holleis. Embedded Information. UbiComp 2004, Workshop ‘Ubiquitous Display Environments’, September 2004

Visitors to our Lab

Christopher Lueg (professor at the School of Computing & Information Systems at the University of Tasmania) and Trevor Pering (a senior researcher at Intel Research in Seattle) visited our lab this week. The timing is not perfect, but at least I am not the only interesting person in the lab 😉

Together with Roy Want and others, Trevor published an article in IEEE Pervasive Computing some time ago that is still worthwhile to read: “Disappearing Hardware” [1]. It clearly shows the trend that in the near future it will be feasible to include processing and wireless communication in any manufactured product, and it outlines the resulting challenges. One of those challenges, which we look into in our lab, is how to interact with such systems… Also, in a 2002 paper Christopher raised some very fundamental questions about how far we will get with intelligent devices [2].

[1] Want, R., Borriello, G., Pering, T., and Farkas, K. I. 2002. Disappearing Hardware. IEEE Pervasive Computing 1, 1 (Jan. 2002), 36-47. DOI= http://dx.doi.org/10.1109/MPRV.2002.993143

[2] Lueg, C. 2002. On the Gap between Vision and Feasibility. In Proceedings of the First international Conference on Pervasive Computing (August 26 – 28, 2002). Lecture Notes In Computer Science, vol. 2414. Springer-Verlag, London, 45-57.

How to prove that Ubicomp solutions are valid?

Over the last years there have been many workshops and sessions in the ubicomp community that address the evaluation of systems. At Pervasive 2005 in Munich I co-organized a workshop on application-led research with George Coulouris and others. For me one of the central outcomes was that we – as ubicomp researchers – need to team up with experts in the application domain when evaluating our technologies and solutions, and that we stay involved in this part of the research. Just handing it over for evaluation into the other domain will not bring us the insights we need to move the field forward. There is a workshop report, which appeared in IEEE Pervasive Computing, that discusses the topic in more detail [1].

On Friday I met a very interesting expert in the domain of gerontology. Elisabeth Steinhagen-Thiessen is chief consultant and director of the protestant geriatric centre of Berlin and professor of internal medicine/gerontology at the Charité in Berlin. We talked about opportunities for activity recognition in this domain and discussed potential set-ups for studies.

[1] Richard Sharp, Kasim Rehman. What Makes Good Application-led Research? IEEE Pervasive Computing Magazine. Volume 4, Number 3. July-September 2005.

Innovative in-car systems, Taking photos while driving

Wolfgang just sent me another picture (taken by a colleague of his) with more information in the head-up display. It shows a speed of 180 km/h and I wonder who took the picture. Usually only the driver can see such a display 😉

For assistance, information and entertainment systems in cars (and I assume we could consider taking photos an entertainment task) there are guidelines [1, 2, 3] – an overview presentation in German can be found in [4]. Students in the Pervasive Computing class have to look at them and design a new information/assistance system that is context-aware – perhaps photography in the car could be a theme… I am already curious about the results of the exercise.

[1] The European Statement of Principles (ESoP) on Human Machine Interface in Automotive Systems
[2] AAM Guidelines
[3] JAMA Japanese Guidelines
[4] Andreas Weimper, Harman International Industries, Neue EU Regelungen für Safety und Driver Distraction

(thanks to Wolfgang Spießl for sending the references to me)

Integration of Location into Photos, Tangible Interaction

Recently I came across a device that tracks the GPS position and additionally has a card reader (http://photofinder.atpinc.com/). If you plug in a card with photos, it will integrate location data into the JPEGs, using time as the common reference.
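
In essence, what such an appliance has to do is simple: for every photo, pick the GPS fix whose timestamp is closest. A minimal sketch of that matching step (a real device would additionally write the result into the JPEG’s EXIF GPS fields, which is omitted here):

```python
# Match each photo to the GPS fix with the nearest timestamp (sketch).
from bisect import bisect_left

def nearest_fix(track, photo_time):
    """track: list of (timestamp, lat, lon) sorted by timestamp."""
    times = [t for t, _, _ in track]
    i = bisect_left(times, photo_time)
    candidates = track[max(i - 1, 0):i + 1]  # neighbors around insertion point
    return min(candidates, key=lambda fix: abs(fix[0] - photo_time))

track = [(1000, 48.137, 11.575), (1060, 48.139, 11.578), (1120, 48.142, 11.580)]
print(nearest_fix(track, photo_time=1070))  # -> (1060, 48.139, 11.578)
```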

It is a further interesting example of software moving away from the generic computer/PC (where programs that combine a GPS track with photos are available, e.g. GPS photo linker) into an appliance, and hence the usage complexity can be massively reduced (in principle – I have not tried this specific device so far) and the usability increased. See the simple analysis:

Tangible Interaction using the appliance:

  • buying the device
  • plug-in a card
  • wait till it is ready

vs.

GUI Interaction:

  • starting a PC
  • buy/download the application
  • install the application
  • finding an application
  • locating the images in a folder
  • locating the GPS track in a folder
  • wait till it is ready

… could become one of my future examples where tangible UIs work 😉

Wolfgang Spießl introduces context-aware car systems

Wolfgang visited us for 3 days and we talked a lot about context-awareness in the automotive domain. Given the sensors included in cars and some recent ideas on context fusion, it seems feasible that in the near future context-aware assistance and information systems will get new functionality. Since I finished my PhD dissertation [1] there has been a move in two directions: context prediction and communities as a source of context. One example of a community-based approach is http://www.iyouit.eu, which evolved out of ContextWatcher / IST-Mobilife.

In his lecture he showed many examples of how pervasive computing happens in the car already now. After the talk we had the chance to see and discuss user interface elements in current cars – in particular the head-up display. Wolfgang gave a demonstration of the CAN bus signals related to interaction with the car that are available to create context-aware applications. The car head-up display (which appears to be just in front of the car) created discussions on interesting use cases for these types of displays – beyond navigation and essential driving information.
In the lecture, questions came up about how feasible/easy it is to do your own developments using the UI elements in the car – basically, how can I run my own applications in the car? This is not yet really supported 😉 However, in a previous post [2] I argued that this is probably to come… and I still see this trend… It may be an interesting thought how one can provide third parties access to UI components in the car without giving away control…
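
As a hypothetical illustration of this kind of prototyping: on Linux one could listen to CAN frames with the python-can library. The arbitration ID and decoding below are invented – real signal layouts are vehicle-specific and usually confidential:

```python
# Hedged sketch: listening to CAN bus frames for a context-aware prototype,
# using python-can over Linux SocketCAN. ID and scaling are made up.
import can

SPEED_MSG_ID = 0x123  # hypothetical ID of a "vehicle speed" frame

bus = can.interface.Bus(channel="can0", bustype="socketcan")
for msg in bus:  # Bus objects are iterable and yield received frames
    if msg.arbitration_id == SPEED_MSG_ID:
        # assumed encoding: first two bytes, big-endian, 0.01 km/h per bit
        speed_kmh = int.from_bytes(msg.data[0:2], "big") * 0.01
        print(f"speed: {speed_kmh:.1f} km/h")
```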

Invited Lecture at CDTM, how fast do you walk?

Today I was at CDTM in Munich (http://www.cdtm.de/) to give a lecture introducing Pervasive Computing. It was a great pleasure to be invited again after last year’s visit. We discussed no less than how new computing technologies are going to change our lives and how we as developers are going to shape parts of the future. As everyone is aware there are significant challenges ahead – one is personal travel – and I invited students to join our summer factory (basically setting up a company/team to create a new mobility platform). If you are interested, too, drop me a mail.

Over lunch I met with Heiko to discuss the progress of his thesis and to fish for new topics, as they often come up when writing 😉 To motivate some parts of his work he looked at behavioral research that describes how people use their eyes in communication. In [1] interesting aspects of human behavior are described and explained. I liked the page (251) with the graphs on walking speed as a function of city size (the bigger the city, the faster people walk – it includes an interesting discussion of what this effect is based on) and on eye contacts made depending on gender and size of town. This can provide insight for some projects we are working on. Many of the results are not surprising – but it is often difficult to pinpoint the reference (at least for a computer science person), so this book may be helpful.

[1] Irenäus Eibl-Eibesfeldt. Die Biologie des menschlichen Verhaltens: Grundriss der Humanethologie. 5th edition. December 2004.

Hans Visited our Group, Issues on sustainable energy / travel

Hans Gellersen, who was my supervisor while I was in Lancaster, visited our lab in Essen. We discussed options for future collaborations, ranging from student exchange to joint proposals. Besides other topics we discussed sustainable energy, as this is more and more becoming a theme of great importance, and Pervasive Computing offers many building blocks towards potential solutions. Hans pointed me to an interesting project going on at IBM Hursley: “The House That Twitters Its Energy Use”.

At the Ubicomp PC meeting we recently discussed the value of face-to-face meetings in the context of scientific work, and it seems there are two future directions for reducing resource consumption: (1) moving from physical travel to purely virtual meetings or (2) making travel feasible based on renewable energies. Personally I think we will see a mix – but I am sure real physical meetings are essential for certain tasks in the medium term. I am convinced that in the future we will still travel, and this will become viable as travel based on renewable energies becomes feasible. Inspiring example projects are SolarImpulse (its goal is to create a solar-powered airplane) and Helios (solar-powered atmospheric satellites). There are alternative future scenarios and an interesting discussion by John Urry (e.g. a recent article [1], a book – now on my personal reading list [2]). These analyses (from a sociology perspective) are informative to read and can help to create interesting technology interventions. However, I reject the dark scenarios, as I am too much of an optimist, trusting in people’s good will, common sense, technology research and engineering – especially if the funding is available ;-).

[1] John Urry. Climate change, travel and complex futures. The British Journal of Sociology, Volume 59, Issue 2, Page 261-279, Jun 2008

[2] John Urry. Mobilities. October 2007.

New ways for reducing CO2 in Europe? Impact of pedestrian navigation systems

Arriving this morning in Brussels I was surprised by the length of the queue for taxis. Before seeing the number of people I had considered taking a taxi to the meeting place, as I had some luggage – but doing a quick count of the taxi frequency and the number of people in the line, I decided to walk to make it in time. Then I remembered that some months ago I had a similar experience in Florence, when arriving at the airport for CHI. There I calculated the expected waiting time and chose the bus. Reflecting briefly on this, it seems that this may be a new scheme to promote eco-friendly travel in cities… or why else would there not be enough taxis in a free market?

Reflecting a little longer, I would expect that with upcoming pedestrian navigation systems we may see a switch to more people walking in the city. My hypothesis (based on minimal observation) is that people often take a taxi or public transport because they have no idea where to walk to and how long it would take on foot. If a pedestrian navigation system can reliably offer a time-of-arrival estimate (which is probably more precise for walking than for driving, as there are no traffic jams) and the direction, the motivation to walk may increase. We should probably put pedestrian navigation systems on our project topic list, as there is still open research on this topic…
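
The core of such a time-of-arrival estimate is easy to sketch: great-circle distance, an assumed detour factor, and an average walking speed. All parameter values below are illustrative guesses; a real system would route along the street network:

```python
# Crude walking-ETA estimate: haversine distance at an assumed walking speed.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def walking_eta_minutes(lat1, lon1, lat2, lon2, speed_kmh=4.5):
    detour_factor = 1.3  # assumed: streets are not straight lines
    return haversine_km(lat1, lon1, lat2, lon2) * detour_factor / speed_kmh * 60

# two points roughly 2 km apart (illustrative coordinates)
print(f"{walking_eta_minutes(50.901, 4.484, 50.885, 4.470):.0f} min")
```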

Birthday candles going electronic

What is a birthday cake without a candle? Sometimes it is hard to find a candle, but with a creative team there is always a solution – less than 3 minutes away 😉 As always with new technologies – after deployment, ideas for Version 2 (which will include much more functionality) emerge… And there was another business idea – interactive wedding cakes – perhaps we explore this later this year 😉

Teaching in primary school, digital photography, civilization

I had a day off and acted as a “teaching assistant” on a school trip with the kids my wife is teaching. The trip went to a museum village (Wackershofen), which tries to preserve and communicate how people lived about 100 years ago.

One side observation was that in digital photography the limiting factor is no longer the memory space but the batteries in the camera. This has changed over the last 2 years – then, children still had to select which pictures to delete – now that is no issue anymore. This shows that some of the trends in pervasive computing (in this case unlimited memory) are already here…

In a project we manually converted flax into thread and, theoretically, into linen fabric. Some years ago I was involved in a similar project – with a focus on the multimedia documentation – also with a primary school. We learned that it took a person one winter to make one piece of garment. Putting this into perspective, we see an interesting trend of devaluation of physical objects (clothes are one example, but it also applies to high-tech goods such as MP3 players) due to advances in engineering. This devaluation of physical goods led to a higher standard of living and consequently to a higher life expectancy. I wonder how further advances – especially in digital engineering – will affect the quality of life…

Moving again – finally in our new rooms

After several months of building work we could finally move into our new lab space. It is still largely empty but provides great opportunities for the research we have planned.

In order to conserve resources we decided to re-use furniture that was already used by another group within the university (which is not there anymore). This group apparently had a different approach to storing information (physical – real paper), and Florian and Ali had to get rid of several GB before they got their shelves 😉

Talk by Florian Michahelles, RFID showcase at Kaufhof Essen

Florian Michahelles, associate director of the Auto-ID Labs in Zürich, visited our group and gave a presentation in my course on Pervasive Computing. He introduced the vision of using RFID in businesses, gave a brief technology overview and discussed the potential impact – in a very interactive session.

Florian and I worked together in the Smart-Its project, and during his PhD studies he and Stavros were well known as the experts on Ikea PAX [1], [2]. In 2006 and 2007 we ran workshops on RFID technologies and published the results and a discussion of emerging trends in RFID together [3], [4].

At Kaufhof in Essen you can see a showcase of using RFID tags in garment retail. The installation includes augmented shelves, an augmented mirror, and contextual information displays in the changing rooms. The showcase is related to the European Bridge project. It was fun playing with the system – it seems to be well engineered for a prototype.

PS: Florian told me that Vlad Coroama finished his PhD. In a different context we talked earlier about his paper discussing the use of sensors to assess costs for insurance [5] – he did it with cars, but there are other domains where this makes sense, too.

[1] S. Antifakos, F. Michahelles, and B. Schiele. Proactive Instructions for Furniture Assembly. In UbiComp, Gothenburg, Sweden, 2002.
http://www.viktoria.se/fal/exhibitions/smart-its-s2003/furniture.pdf

[2] Florian Michahelles, Stavros Antifakos, Jani Boutellier, Albrecht Schmidt, and Bernt Schiele. Instructions immersed into the real world: How your furniture can teach you. Poster at the Fifth International Conference on Ubiquitous Computing, Seattle, USA, October 2003. http://www.mis.informatik.tu-darmstadt.de/Publications/ubipost03.pdf

[3] Florian Michahelles, Frédéric Thiesse, Albrecht Schmidt, John R. Williams: Pervasive RFID and Near Field Communication Technology. IEEE Pervasive Computing 6(3): 94-96 (2007) http://www.alexandria.unisg.ch/EXPORT/PDF/publication/38445.pdf

[4] Schmidt, A., Spiekermann, S., Gershman, A., and Michahelles, F. 2006. Real-World Challenges of Pervasive Computing. IEEE Pervasive Computing 5, 3 (Jul. 2006), 91-93 http://www.hcilab.org/events/pta2006/IEEE-PvM-b3091.pdf

[5] Vlad Coroama: The Smart Tachograph – Individual Accounting of Traffic Costs and Its Implications. Pervasive 2006: 135-152. http://www.vs.inf.ethz.ch/res/papers/coroama_pervasive2006.pdf

Context-aware adverts, Google patent search

This evening I went to Münster to meet with Antonio Krüger and Lucia Terrenghi (who is now with Vodafone), who was visiting there. Advertisement is a hot topic, and it was interesting that we shared an observation: “If the advert/information is the least boring thing to look at, people will read it ;-)”. Each of us had their favorite anecdotal evidence: my favorites are people reading the same map every day at their U-station and the advertising flyers in the Munich S-train. For context-aware advertising the major challenge is to find the time/location where people are bored and happy to see an advert 😉

We currently have an ongoing master thesis that looks into this topic – context-aware advertising with cars. There are several interesting examples suggesting that this concept could work: e.g. taxis that show location-based ads (you can hire the area where your ad is shown, see [1], [2]). We think it gets really interesting if there are many cars that form an in-town canvas you can paint on. On the way back we checked out the screen adverts (included in the public phones) Jörg Müller works on – even with a navigation feature.

Looking for some more on the topic I realized that Google Patent search works quite well by now: http://www.google.de/patents

Visual aid for navigation – using human image processing

While browsing the Equator website I again came across an interesting publication in the domain of pedestrian navigation [1] – I had seen it two years ago at MobileHCI. The basic idea is to use a collection of geo-tagged photos to provide visual cues about the direction people should go, e.g. “walk towards this building”. This is an interesting application linking two concepts we discussed in the part on location in my lecture on pervasive computing. It follows the approach of augmenting the user such that the user does what they do well (e.g. matching visual images) and the computer does what it does well (e.g. acquiring the GPS location, finding pictures related to a location in a DB).
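
The “computer half” of this division of labor – finding pictures related to a location in a DB – reduces at city scale to a simple nearest-neighbor query. A sketch with a flat-earth distance approximation (the photo DB and coordinates are made up):

```python
# Find geo-tagged photos near the user's GPS fix (sketch). At city scale a
# flat-earth (equirectangular) approximation is accurate enough.
from math import cos, radians, sqrt

def approx_distance_m(lat1, lon1, lat2, lon2):
    dx = radians(lon2 - lon1) * cos(radians((lat1 + lat2) / 2)) * 6_371_000
    dy = radians(lat2 - lat1) * 6_371_000
    return sqrt(dx * dx + dy * dy)

def photos_near(db, lat, lon, radius_m=50):
    """db: list of (photo_id, lat, lon); returns photo IDs sorted by distance."""
    hits = [(approx_distance_m(lat, lon, p_lat, p_lon), pid)
            for pid, p_lat, p_lon in db
            if approx_distance_m(lat, lon, p_lat, p_lon) <= radius_m]
    return [pid for _, pid in sorted(hits)]

db = [("tower.jpg", 51.5007, -0.1246), ("bridge.jpg", 51.5055, -0.0754)]
print(photos_near(db, 51.5008, -0.1247, radius_m=100))  # -> ['tower.jpg']
```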

[1] Beeharee, A. K. and Steed, A. 2006. A natural wayfinding exploiting photos in pedestrian navigation systems. In Proceedings of the 8th Conference on Human-Computer interaction with Mobile Devices and Services (Helsinki, Finland, September 12 – 15, 2006). MobileHCI ’06, vol. 159. ACM, New York, NY, 81-88. DOI= http://doi.acm.org/10.1145/1152215.1152233

Visit at Microsoft in Redmond

AJ Brush and John Krumm organized a visit to Microsoft for the people who are in Redmond for the Ubicomp PC meeting. In the morning we got a tour of the home lab – Microsoft’s vision of future home environments. It was quite interesting, but we had to sign an NDA.
After lunch we went over to Microsoft Research (which is in a new building). We got to see some cool demos. Andy Wilson showed us some new stuff moving the SURFACE forward (physics rocks!). I learned more about depth-sensing cameras, and Andy showed a fun application [1] – there is a video about it, too. Patrick Baudisch talked us through the ideas of LucidTouch [2] and more generally about future interaction with small mobile devices. The idea of using the finger behind the screen, and the means to increase the precision, has many interesting aspects. I found the set of people that work at MSR as impressive as the demos – it seems to be a really exciting work environment.

The atrium of the new building is amazing for playing Frisbee and shooting rubber-band missiles. And waiting for the pizza with those toys around proved yet again that researchers are often like kids 😉

[1] Wilson, A. Depth-Sensing Video Cameras for 3D Tangible Tabletop Interaction. Tabletop 2007: The 2nd IEEE International Workshop on Horizontal Interactive Human-Computer Systems, 2007.

[2] Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., Shen, C. LucidTouch: A See-Through Mobile Device. In Proceedings of UIST 2007, Newport, Rhode Island, October 7-10, 2007, pp. 269–278 http://www.patrickbaudisch.com/projects/lucidtouch/

Is it easier to design for touch screens if you have poor UI designers?

Flying back from Sydney with Qantas and now flying to Seattle with Lufthansa, I had two long-distance flights on which I had the opportunity to study (n=1, subject=me, plus over-shoulder observation while walking up and down the aisle 😉) the user interface of the in-flight entertainment.

The two systems have very different hardware and software designs. The Qantas infotainment system has a regular screen, and interaction is done via a wired, movable remote control stored in the armrest. The Lufthansa system uses a touch screen (it also has some hard buttons for volume in the armrest). Overall the Qantas system offered more content (more movies, more TV shows), including real games.

The Qantas system seemed very well engineered, and the remote-control UI was well suited for playing games. Nevertheless, basic operations (selecting movies etc.) seemed more difficult using the remote control compared to the touch-screen interface. In contrast, the Lufthansa system seems to have much room for improvement (button size, button arrangement, reaction times of the system), but it appeared very easy to use.

So here are my hypotheses:

Hypothesis 1: if you design (public) information or edutainment systems (excluding games), using a touch screen is a better choice than using an off-screen input device.

Hypothesis 2: with a UI design team of a given ability (even a bad UI design team) you will create significantly better information and edutainment systems (excluding games) if you use a touch screen than with an off-screen input device.

From the automotive domain we have some indications that good off-screen input devices are really hard to design so that they work well (e.g. built-in car navigation systems). Probably I should find a student to prove it (with n much larger than 1 and subjects other than me).

PS: the Lufthansa in-flight entertainment runs on Windows CE 5.0 (the person in front of me mainly had the empty desktop with the Win CE logo showing) and it boots over the network (which takes over 6 minutes).

CfP: Automotive User Interfaces and Interactive Applications – AUIIA 08

After last year’s successful workshop on automotive user interfaces we are planning to run another one this year. We – Susanne Boll (Uni Oldenburg), Wolfgang Spießl (BMW), Matthias Kranz (DLR) and Albrecht Schmidt – are really looking forward to many interesting submissions and a cool workshop program. The theme is gaining some momentum at the moment, which was very visible at the Special Interest Group meeting at CHI 2008.

More information on the workshop and a call for papers is available at: http://automotive.ubisys.org/

Ali joined our group

Last month Alireza Sahami finished his master thesis on multi-tactile interaction at BIT Bonn and joined our group in Essen. Ali worked for me as a student research assistant at Fraunhofer IAIS. During his studies in Bonn we published an interesting workshop paper on mobile health [1] and gave a related demo at Ubicomp [2].

[1] Alt, F., Sahami Shirazi, A., Schmidt, A. Monitoring Heartbeat per Day to Motivate Increasing Physical Activity. Ubiwell workshop at Ubicomp 2007.

[2] Sahami Shirazi, A.; Cheng, D.; Kroell, O.; Kern, D.; Schmidt, A.: CardioViz: Contextual Capture and Visualization for Long-term ECG Data. In: Adjunct Proceedings of Ubicomp 2007.

Tagging Kids, Add-on to make digital cameras wireless

Reading the new products section in the IEEE Pervasive Computing magazine (Vol. 7, No. 2, April-June 2008) I came across a child monitoring system: Kiddo Kidkeeper. In the Smart-Its project Henrik Jernström developed a similar system in 2001 in his master thesis at PLAY, which was published as a demo at Ubicomp [1]. I remember very vividly the discussion about the validity of this application (basically people – including me – asking “Who would want such technology?”). However, it seems society and values are constantly changing – there is an interesting ongoing discussion related to that: Free Range Kids (this is the pro side 😉). The article in the IEEE magazine hinted that the fact that you can take off the device is a problem – I see a clear message ahead – implant the device – and this time I am more careful with arguing that we don’t need it (even though I am sure we do not need it, I expect that in 5 to 10 years we will have it).

There were two further interesting links in the article: an SD card that includes WiFi and hence enables uploading photos to the internet from any camera with an SD slot (http://www.eye.fi/products/) – the idea is really simple but very powerful! And finally, the UK has an educational laptop, too (http://www.elonexone.co.uk/). It seems the hardware is there (if not this year then next) – and where is the software? I think we should put some more effort into this domain in Germany…

Not to forget, the issue of the magazine contains our TEI conference report [2].

[1] Henrik Jernström. SiSSy Smart-its child Surveillance System. Poster at Ubicomp 2002, Adjunct Proceedings of Ubicomp 2002. http://citeseer.ist.psu.edu/572976.html

[2] http://doi.ieeecomputersociety.org/10.1109/MPRV.2008.27

Fight for attention – changing cover display of a magazine

Attention is precious and there is a clear fight for it. This is very easy to observe on advertising boards and in news shops. Coming back from Berlin I went into the news agent in Augsburg to get a newspaper – and while not really looking at magazines I still discovered, from the corner of my eye, an issue of FHM with a changing cover page. Technically it is very simple: a lenticular lens that presents an image depending on the viewing angle – alternating between 3 pictures – one of which is a full-page advert (for details on how it works see lenticular printing in Wikipedia). A similar approach has already been used in various poster advertising campaigns – showing different pictures as people walk by (http://youtube.com/watch?v=0dqigww4gM8, http://youtube.com/watch?v=iShPBmtajH8). One could also create a context-aware advert, showing different images for small and tall people 😉

In outdoor advertising we see the change to active displays happening at the moment. I am really curious when the first truly active cover pages on magazines will emerge – thinking of ideas in context-awareness, the possibilities seem endless. However, it is really a question whether electronic paper will be cheap enough before we move to completely electronic reading. Another issue (even with this current version of the magazine) is recycling – which becomes much more difficult when mixing further materials with paper.

Ageing, Technology, Products, Services

Today and yesterday I am visiting a conference that is concerned with ageing – looking at the topic from different perspectives (computer science, psychology, medicine, economics) – run at the MPI in Berlin. The working group is associated with the German Academy of Sciences Leopoldina, and I was invited by Prof. Ulman Lindenberger, who is a director at the Max Planck Institute and works in lifespan psychology. The working group is called “ageing in Germany” (in German).

Antonio Krüger and I represented the technology perspective with examples from the domain of ubiquitous computing. My talk “Ubiquitous computing in adulthood and old age” is a literature review in pictures of selected ubicomp systems, intended as an introduction to the domain for non-CS people. The discussions were really inspiring. In one talk Prof. Jim-Chern Chiou from National Chiao Tung University in Taiwan (the brain research lab) presented interesting dry electrodes that can be used for EEG – but also for other applications where one needs electrodes.

Antonio reported an interesting experiment on the navigation/walking performance of people. The basic message is: if you are old and you can hold on to something while walking, you gain cognitive resources – if you are young this effect does not appear – which has quite interesting implications [1]. Antonio has worked on more in this domain, see [2].
Over lunch we discussed some ideas related to persuasive technologies; Ulman Lindenberger pointed me to some relevant authors (Bargh, Gollwitzer), and I found an interesting manual on subliminal priming on the web.
[1] Martin Lövdén, Michael Schellenbach, Barbara Grossmann-Hutter, Antonio Krüger, Ulman Lindenberger: Environmental topography and postural control demands shape aging-associated decrements in spatial navigation performance. Psychology and Aging, 20, 683-694, 2005. http://www.ncbi.nlm.nih.gov/pubmed/16420142
[2] Aslan, I., Schwalm, M., Baus, J., Krüger, A., and Schwartz, T. 2006. Acquisition of spatial knowledge in location aware mobile pedestrian navigation systems. In Proceedings of the 8th Conference on Human-Computer interaction with Mobile Devices and Services (Helsinki, Finland, September 12 – 15, 2006). MobileHCI ’06, vol. 159. ACM, New York, NY, 105-108. DOI= http://doi.acm.org/10.1145/1152215.1152237

Impressions from Pervasive 2008

Using electrodes to detect eye movement and to detect reading [1] – this relates to Heiko’s work but uses a different sensing technique. If the system can really be implemented in goggles, this would be a great technology for eye gestures as suggested in [2].
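
The recognition pipeline in [1] is far more sophisticated, but the basic signal intuition can be sketched: saccades show up as sharp steps in the horizontal EOG, so thresholding the sample-to-sample difference gives a crude saccade counter. The threshold and signal below are synthetic:

```python
# Toy saccade detector on a horizontal EOG trace (NOT the algorithm in [1]):
# count sharp amplitude steps; a high rate of small left-to-right saccades
# is characteristic of reading.
SACCADE_THRESHOLD_UV = 30.0  # assumed step amplitude, in microvolts

def count_saccades(eog_samples):
    return sum(
        1 for a, b in zip(eog_samples, eog_samples[1:])
        if abs(b - a) > SACCADE_THRESHOLD_UV
    )

signal = [0, 2, 1, 45, 44, 46, 90, 91, 89, 40, 41]  # synthetic EOG samples
print(count_saccades(signal))  # -> 3 (two rightward jumps, one return sweep)
```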

Utilizing infrastructure that is already in place for activity sensing – the example is a heating/air-conditioning/ventilation system [3]. I wondered, and put forward the question, how well this would work in active mode – where you actively create an airflow (using the already installed system) to detect the state of an environment.

Further interesting ideas:

  • Communicate while you sleep? Air pillow communication… Vivien loves the idea [4].
  • A camera with additional sensors [5] – really interesting! We had a student project in Munich that looked at something similar [6]
  • A cool vision video of the future is S-ROOM – everything gets a digital counterpart. It communicates the idea of ubicomp in a great and fun way [7] – not sure if the video is online – it is on the conference DVD.

[1] Robust Recognition of Reading Activity in Transit Using Wearable Electrooculography. Andreas Bulling, Jamie A. Ward, Hans-W. Gellersen and Gerhard Tröster. Proc. of the 6th International Conference on Pervasive Computing (Pervasive 2008), pp. 19-37, Sydney, Australia, May 2008. http://dx.doi.org/10.1007/978-3-540-79576-6_2

[2] Heiko Drewes, Albrecht Schmidt. Interacting with the Computer using Gaze Gestures. Proceedings of INTERACT 2007. http://murx.medien.ifi.lmu.de/~albrecht/pdf/interact2007-gazegestures.pdf

[3] Shwetak N. Patel, Matthew S. Reynolds, Gregory D. Abowd: Detecting Human Movement by Differential Air Pressure Sensing in HVAC System Ductwork: An Exploration in Infrastructure Mediated Sensing. Proc. of the 6th International Conference on Pervasive Computing (Pervasive 2008), pp. 1-18, Sydney, Australia, May 2008. http://shwetak.com/papers/air_ims_pervasive2008.pdf

[4] Satoshi Iwaki et al. Air-pillow telephone: A pillow-shaped haptic device using a pneumatic actuator (Poster). Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/LBR/lbr11.pdf

[5] Katsuya Hashizume, Kazunori Takashio, Hideyuki Tokuda. exPhoto: a Novel Digital Photo Media for Conveying Experiences and Emotions. Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/Demo/d4.pdf

[6] P. Holleis, M. Kranz, M. Gall, A. Schmidt. Adding Context Information to Digital Photos. IWSAWC 2005. http://www.hcilab.org/documents/AddingContextInformationtoDigitalPhotos-HolleisKranzGallSchmidt-IWSAWC2005.pdf

[7] S-ROOM: Real-time content creation about the physical world using sensor network. Takeshi Okadome, Yasue Kishino, Takuya Maekawa, Kouji Kamei, Yutaka Yanagisawa, and Yasushi Sakurai. Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/Video/v2.pdf

Tutorial: From Sensor to Context and Activity at Pervasive 2008

Pervasive 2007 introduced a new form of tutorials – having a number of experts talk for one hour each about their special topic. I attended last year as a participant and liked it a lot. This year Pervasive 2008 repeated this approach, and I contributed a tutorial on how to get context and activity from sensors (tutorial slides in PDF).

Abstract. Intelligent environments, sensor networks and smart objects are inherently connected to building systems that sense phenomena in the real world and make the perceived information available to applications. In the first part of the tutorial an overview of sensors and sensor systems commonly used in pervasive computing applications is given. In addition to the sensor properties, means for connecting sensors to systems (e.g. ADC, PWM, I2C, serial line) are explained. In the second part it is discussed how to create meaningful information in the application domain. Some basic features, calculated in the time and frequency domain, are introduced to provide basic means for processing and abstraction of raw sensor data. This part is complemented by a brief overview of mechanisms and methods for relating (abstracted) sensor information to context, activity and situations. Additionally, general problems that are associated with sensing context and activity are addressed in the tutorial.
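
To make the second part concrete, here is the kind of basic feature extraction the abstract refers to – simple time-domain statistics plus a coarse frequency-domain summary of one window of raw sensor data (e.g. one accelerometer axis):

```python
# Basic time- and frequency-domain features over one window of sensor samples.
import numpy as np

def window_features(samples, sample_rate_hz):
    x = np.asarray(samples, dtype=float)
    spectrum = np.abs(np.fft.rfft(x - x.mean()))  # drop DC before the FFT
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
    return {
        "mean": x.mean(),                     # time domain
        "std": x.std(),
        "range": x.max() - x.min(),
        "zero_crossings": int(np.sum(np.diff(np.sign(x - x.mean())) != 0)),
        "dominant_freq_hz": freqs[spectrum.argmax()],  # frequency domain
        "spectral_energy": float(np.sum(spectrum ** 2) / len(x)),
    }

# 2 Hz sine sampled at 50 Hz -- dominant_freq_hz should come out near 2.0
t = np.arange(0, 2, 1 / 50)
print(window_features(np.sin(2 * np.pi * 2 * t), 50))
```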

Gregor showed the potential of multi-tag interaction in a demo

Gregor, a colleague from LMU Munich, presented work that was done in the context of the PERCI project, which started while I was in Munich. The demo showed several applications (e.g. buying tickets) that exploit the potential of interaction with multiple NFC tags. The basic idea is to have several NFC tags included in a printed poster, with which the user can interact using a phone. By touching the tags in a certain order, the user makes a selection. For more details see the paper accompanying the demo [1].
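
This is not the Collect & Drop implementation itself, but the interaction logic of ordered multi-tag selection can be sketched as a small state machine mapping tag UIDs to parameters and actions. All IDs and mappings below are hypothetical:

```python
# Sketch of ordered multi-tag selection: each NFC tag on the poster carries a
# semantic ID, and the order of touches builds up the request.
POSTER_TAGS = {            # hypothetical tag-UID -> meaning mapping
    "04:a1": ("movie", "Dark Knight"),
    "04:b2": ("seats", "2"),
    "04:c3": ("action", "buy_ticket"),
}

def handle_touch_sequence(uids):
    collected = {}
    for uid in uids:
        kind, value = POSTER_TAGS[uid]
        if kind == "action":
            print(f"{value} -> {collected}")  # trigger with collected params
            collected = {}
        else:
            collected[kind] = value           # collect a parameter

handle_touch_sequence(["04:a1", "04:b2", "04:c3"])
# buy_ticket -> {'movie': 'Dark Knight', 'seats': '2'}
```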

[1] Gregor Broll, Markus Haarländer, Massimo Paolucci, Matthias Wagner, Enrico Rukzio, Albrecht Schmidt. Collect & Drop: A Technique for Physical Mobile Interaction. Demo at Pervasive 2008. Sydney. http://www.pervasive2008.org/Papers/Demo/d1.pdf