Workshop on Automobile User Interfaces

For the second time, we ran a workshop on automotive user interfaces and interactive applications in the car at the German HCI conference: http://automotive.ubisys.org/

In the first session we discussed the use of tactile output and haptics in automotive user interfaces. There appears to be significant interest in this area at the moment. In particular, using haptics as an additional modality creates many opportunities for new interfaces. We had a short discussion about two directions in haptic output: naturalistic haptic output (e.g. lane assist that feels like driving over the edge of the road) vs. generic haptic output (e.g. giving a vibration cue when to turn).

I think the first direction could make an interesting project – how does it naturally feel to drive too fast, to turn the wrong way, to be too close to the car in front, etc.?
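To make the distinction between the two directions concrete, here is a minimal toy sketch (all names and thresholds are made up; print statements stand in for a real actuator API):

```python
# Naturalistic output: rumble strength grows with the lane deviation,
# mimicking the feel of drifting onto the edge of the road.
def naturalistic_rumble(lane_offset_m: float) -> float:
    return min(1.0, max(0.0, (abs(lane_offset_m) - 0.2) * 2.0))

# Generic output: a fixed, abstract vibration pulse shortly before a turn.
def generic_cue(distance_to_turn_m: float) -> bool:
    return distance_to_turn_m < 50.0

for offset in (0.1, 0.4, 0.8):
    print(f"offset {offset:.1f} m -> rumble {naturalistic_rumble(offset):.2f}")
print("vibrate now?", generic_cue(30.0))
```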

In a further session we discussed frameworks and concepts for in-car user interfaces. The discussion on the use of context in the interface was very diverse. Some people argued it should only be used in non-critical/optional parts of the UI (e.g. entertainment), as one can never be 100% sure that the recognized context is correct. Others argued that context may provide a central advantage, especially in safety-critical systems, as it offers the opportunity to react faster.

In the end it always comes down to the question: to what extent do we want to keep the human in the loop? But looking at Wolfgang's overview slide, it is impressive how much functionality already depends on context…

In the third session we discussed tools and methods for developing and evaluating user interfaces in the car context. Dagmar presented our first version of CARS (a simple driving simulator for the evaluation of UIs) and discussed findings from initial studies [1]. The simulator is based on the jMonkeyEngine and is available as open source on our website [2].

There were several interesting ideas on what topics are really hot in automotive UIs, ranging from interfaces for information gathering in Car-2-Car / Car-2-Environment communication to micro-entertainment while driving.

[1] Dagmar Kern, Marco Müller, Stefan Schneegaß, Lukasz Wolejko-Wolejszo, Albrecht Schmidt. CARS – Configurable Automotive Research Simulator. Automotive User Interfaces and Interactive Applications – AUIIA 08. Workshop at Mensch und Computer 2008, Lübeck, 2008.

[2] https://www.pcuie.uni-due.de/projectwiki/index.php/CARS

PS: In a taxi in Amsterdam the driver had a DVD running while driving – and I am sure this is not a form of entertainment that works well (it is neither fun to watch, nor is it safe or legal).

Implanted Persuasion Technologies

While listening to BJ Fogg, and especially to the part on motivation pairs (in particular instant pleasure/gratification vs. instant pain), I wondered how long it will take until we talk about and see implantable persuasion technologies. Take the example of obesity – here one could really imagine ways of creating an implant that provides motivation for a certain eating behavior… but would this be ethical?

Thermo-imaging camera at the border – useful for Context-Awareness?

When we re-entered South Korea, I saw a guard looking at all arriving people with an infrared camera. It was very hot outside, so people's heads showed up very red. My assumption is that this is used to spot people who have a fever – however, I could not verify this.

Looking at the images created while people moved around, I realized that this may be an interesting technology to use for many tasks in activity recognition, home health care, and wellness. For several tasks in context-awareness it seems straightforward to get this information from an infrared camera. In the computer vision domain there seem to have been several papers addressing this problem over recent years.

We could think of an interesting project topic related to infrared activity recognition or interaction, to be integrated into our new lab… There are probably some fairly cheap thermo-sensing cameras around to use in research – for home-brew use you can find hints on the internet, e.g. how to turn a digital camera into an IR cam – pretty similar to what we did with the webcams for our multi-touch table.
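As a back-of-the-envelope sketch of why this seems straightforward: assuming the camera delivers a 2D array of temperatures in °C (the frame source and the 30 °C threshold are assumptions, not a specific product API), presence detection can be as simple as thresholding:

```python
import numpy as np

def person_present(frame_celsius: np.ndarray, threshold: float = 30.0) -> bool:
    """True if a sufficiently large warm region is in the frame."""
    warm = frame_celsius > threshold
    # Require at least 1% warm pixels to ignore small hot spots (lamps, cups).
    return warm.mean() > 0.01

# Synthetic example: a 24 °C room with a warm 'person'-sized patch.
frame = np.full((120, 160), 24.0)
frame[40:80, 60:90] = 34.0
print(person_present(frame))  # True
```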

The photo is from http://en.wikipedia.org/wiki/Thermography

Embedded Information – Airport Seoul

When I arrived at the airport in Seoul, I saw an interesting instance of embedded information. In Munich we wrote a workshop paper [1] about the concept of embedded information; the key criteria are:

  • Embedding information where and when it is useful
  • Embedding information in a most unobtrusive way
  • Providing information in a way that there is no interaction required

Looking at an active computer display (OK, it was broken) that circled the luggage belt (it is designed to list the names of people who should contact the information desk) and a fixed display on a suitcase, I was reminded of this paper. With this set-up, people become aware of the information without really making an effort. With active displays becoming more ubiquitous, I expect more innovation in this domain. We are currently working on some ideas related to situated and embedded displays for advertising – if we find funding we will push further… the ideas are there.

[1] Albrecht Schmidt, Matthias Kranz, Paul Holleis. Embedded Information. UbiComp 2004, Workshop ‘Ubiquitous Display Environments’, September 2004

How to prove that Ubicomp solutions are valid?

Over the last few years there have been many workshops and sessions in the ubicomp community that address the evaluation of systems. At Pervasive 2005 in Munich I co-organized a workshop on application-led research with George Coulouris and others. For me, one of the central outcomes was that we – as ubicomp researchers – need to team up with experts in the application domain when evaluating our technologies and solutions, and that we must stay involved in this part of the research. Just handing it over to the other domain for evaluation will not bring us the insights we need to move the field forward. There is a workshop report, which appeared in IEEE Pervasive Computing magazine, that discusses the topic in more detail [1].

On Friday I met a very interesting expert in the domain of gerontology. Elisabeth Steinhagen-Thiessen is chief consultant and director of the Protestant Geriatric Centre of Berlin and professor of internal medicine/gerontology at the Charité in Berlin. We talked about opportunities for activity recognition in this domain and discussed potential set-ups for studies.

[1] Richard Sharp, Kasim Rehman. What Makes Good Application-led Research? IEEE Pervasive Computing Magazine. Volume 4, Number 3, July–September 2005.

New ways for reducing CO2 in Europe? Impact of pedestrian navigation systems

Arriving this morning in Brussels, I was surprised by the length of the queue for taxis. Before seeing the number of people, I had considered taking a taxi to the meeting place, as I had some luggage – but after a quick count of the taxi frequency and the number of people in line, I decided to walk in order to make it in time. Then I remembered that some months ago I had a similar experience in Florence, when arriving at the airport for CHI. There I calculated the expected waiting time and chose the bus. Reflecting briefly on this, it seems this may be a new scheme to promote eco-friendly travel in cities… or why else would there not be enough taxis in a free market?

Reflecting a little longer, I would expect that with upcoming pedestrian navigation systems we may see a shift towards more people walking in the city. My hypothesis (based on minimal observation) is that people often take a taxi or public transport because they have no idea where to walk and how long it would take. If a pedestrian navigation system can reliably offer an estimated time of arrival (which is probably more precise for walking than for driving, as there are no traffic jams) along with directions, the motivation to walk may increase. We should probably put pedestrian navigation systems on our project topic list, as there is still open research on this topic…
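The arithmetic behind both decisions is simple enough to sketch; the numbers below are illustrative, not actual measurements:

```python
def taxi_wait_min(people_ahead: int, taxis_per_min: float) -> float:
    """Expected wait in the queue, assuming one party per taxi."""
    return people_ahead / taxis_per_min

def walking_eta_min(distance_km: float, speed_kmh: float = 4.5) -> float:
    """Walking ETA; ~4.5 km/h is a common average walking speed."""
    return distance_km / speed_kmh * 60

wait = taxi_wait_min(people_ahead=40, taxis_per_min=1.0)  # ~40 min in line
walk = walking_eta_min(distance_km=2.0)                   # ~27 min on foot
print("walk" if walk < wait else "taxi")
```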

Workshop on Smart Homes at Pervasive 2008

Today we had our Pervasive at Home workshop as part of Pervasive 2008 in Sydney. We had 7 talks and a number of discussions on various topics related to smart homes. Issues ranged from long-term experience with smart home deployments (Lasse Kaila et al.) and the development cycle (Aaron Quigley et al.) to end-user development (Joëlle Coutaz). For the full workshop proceedings see [1].

One observable trend is that researchers are moving beyond the living lab. In the discussion it became apparent that living labs can kick-start research efforts in this area and function as a focal point for researchers with different interests (e.g. technology-centred and user-centred). However, it was largely agreed that this can only be a first step and that deployments in actual home settings are becoming essential to make an impact.

One central problem in smart home research is developing future devices and services while prototyping is based on current technologies and we extrapolate from currently observed user behavior. We had some discussion on how this can be done most effectively, and on what value observational techniques add to technology research and vice versa.

We discussed potential options for future smart home deployments, and I suggested creating a hotel where people can experience future living and at the same time agree to give away their data for research purposes. Knowing what theme hotels are around, this idea is not as strange as it sounds 😉 Perhaps we have to talk to some companies and propose this idea…

More of the workshop discussion is captured at: http://pervasivehome.pbwiki.com/

There are two interesting references that came up in the discussions that I would like to share. The first is the smart home at Duke University (http://www.smarthome.duke.edu/), a dorm that is a live-in laboratory – and it seems it is more expensive than the regular dorm. The second is an ambient interactive device that Joëlle Coutaz discussed in the context of her presentation on a new approach to end-user programming and end-user development. The Nabaztag (http://www.nabaztag.com/) is a networked user interface that includes input and output (e.g. text-to-speech, movable ears and LEDs) and can be programmed. I would be curious how well it really works to get people more connected – which relates to some of our ideas on easy communication channels.

[1] A.J. Brush, Shwetak Patel, Brian Meyers, Albrecht Schmidt (editors). Proceedings of the 1st Workshop on “Pervasive Computing at Home” held at the 6th international Conference on Pervasive Computing, Sydney, May 19 2008. http://murx.medien.ifi.lmu.de/~albrecht/pdf/pervasive-at-home-ws-proceedings-2008.pdf

Poor man’s location awareness

Over the last days I have experienced that very basic location information on the display can already provide a benefit to the user. Being in Sydney for the first time, I realized that the network information on my GSM phone is very reliable for telling me when to get off the bus – obviously it is not fine-grained location information, but so far it has always been within walking distance. At some locations (such as Bondi Beach) visual pattern matching works very well, too 😉 And when to get off the bus seems to be a concern for many people (just extrapolating from the small sample I had over the last days…).
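A minimal sketch of this poor man's approach, assuming we can read the serving cell ID from the phone (the IDs, the lookup table and the stop names below are hypothetical; on a real device the ID would come from the platform's telephony API):

```python
STOP_CELLS = {
    "505-02-1234": "Bondi Beach",    # hypothetical cell ID -> bus stop
    "505-02-5678": "Circular Quay",
}

def time_to_get_off(serving_cell_id: str, destination: str) -> bool:
    """True if the current serving cell is associated with the destination."""
    return STOP_CELLS.get(serving_cell_id) == destination

print(time_to_get_off("505-02-1234", "Bondi Beach"))  # True -> get off here
```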

In the pervasive computing class I currently teach, we recently covered different aspects of location-based systems – by the way, good starting points on the topic are [1] and [2]. We discussed issues related to visual pattern matching – and when looking at the skyline of Sydney, one very quickly becomes aware of the potential of this approach (especially with all the tagged pictures on Flickr), but at the same time the complexity of matching from arbitrary locations becomes apparent.

Location awareness offers many interesting questions and challenging problems – it looks like there are ideas for project and thesis topics, e.g. how semantic location information (even of lower quality) can benefit users, or fingerprinting based on radio/TV broadcast information.

[1] J. Hightower and G. Borriello. Location systems for ubiquitous computing. IEEE Computer, 34(8):57–66, Aug. 2001. http://www.intel-research.net/seattle/pubs/062120021154_45.pdf

[2] Jeffrey Hightower and Gaetano Borriello. Location Sensing Techniques. UW-CSE-01-07-01.

A service for true random numbers

After the exam board meeting at Trinity College Dublin (I am external examiner for the Ubicomp program), I went back with Mads Haahr (the course director) to his office. Besides the screen on which he works, he has an extra one where the log entries of his web server are constantly displayed. It is an interesting awareness device 😉 Some years ago we did a project where we used the IP addresses of incoming HTTP requests to guess who the visitors are and to show their web pages on an awareness display [1], [2]. Looking up web visitors works very well in an academic context and with requests from larger companies, where one can expect that information is available on the web. Perhaps we should revisit the work and look at how we can push this further given the new possibilities on the web.
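The core of the visitor-awareness idea is essentially a reverse DNS lookup on the request's IP address; here is a minimal sketch (the example address is purely illustrative):

```python
import socket

def guess_visitor(ip: str) -> str:
    """Reverse DNS lookup; the hostname often reveals the visitor's institution."""
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
        return hostname
    except OSError:
        return "unknown"

# A university or company hostname hints at who is browsing our pages.
print(guess_visitor("192.0.2.1"))  # illustrative address only
```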

The web server Mads has in his office is pretty cool – it provides true random numbers based on atmospheric noise picked up with 3 real radios (I saw them)! Have a look at the service yourself: www.random.org. It provides an HTTP interface for using those numbers in your own applications. I would not have thought of a web service that provides random numbers – but thinking about it a little more, it makes a lot of sense…
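For illustration, fetching a few true random integers from the service's plain-text HTTP interface could look like this (check the site for the current parameters and usage limits):

```python
from urllib.request import urlopen

# The 'integers' endpoint returns one number per line in plain text.
url = ("https://www.random.org/integers/"
       "?num=5&min=1&max=100&col=1&base=10&format=plain&rnd=new")
with urlopen(url) as response:
    numbers = [int(token) for token in response.read().decode().split()]
print(numbers)  # e.g. [42, 7, 93, 18, 64]
```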

[1] Schmidt, A. and Gellersen, H. 2001. Visitor awareness in the web. In Proceedings of the 10th international Conference on World Wide Web (Hong Kong, Hong Kong, May 01 – 05, 2001). WWW ’01. ACM, New York, NY, 745-753. DOI= http://doi.acm.org/10.1145/371920.372194

[2] Gellersen, H. and Schmidt, A. 2002. Look who’s visiting: supporting visitor awareness in the web. Int. J. Hum.-Comput. Stud. 56, 1 (Jan. 2002), 25-46. DOI= http://dx.doi.org/10.1006/ijhc.2001.0514

Wolfgang Spießl presented our CHI-Note

People take mobile devices into their cars, and the amount of information people have on those devices is huge – just consider the number of songs on an MP3 player, the address database in a navigation system, and eventually the mobile web. In our work we looked at ways to design and implement search interfaces that are usable while driving [1]. For the paper we compared a categorized search and a free search. There was another paper in the session, by Leshed et al., looking at the practice of GPS use, which was really interesting and can inform future navigation and context-aware information systems [2]. One interesting finding is that you lose AND at the same time create opportunities for applications and practices. In the questions, she hinted at some interesting observations on driving in familiar vs. unfamiliar environments using GPS units. Based on these ideas there may be an interesting student project to do…

The interest in Wolfgang's talk, and in automotive user interfaces in general, was unexpectedly high. As you can see in the picture, quite a few people were taking pictures and videos during the presentation.

[1] Graf, S., Spiessl, W., Schmidt, A., Winter, A., and Rigoll, G. 2008. In-car interaction using search-based user interfaces. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 1685-1688. DOI= http://doi.acm.org/10.1145/1357054.1357317

[2] Leshed, G., Velden, T., Rieger, O., Kot, B., and Sengers, P. 2008. In-car gps navigation: engagement with and disengagement from the environment. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 1675-1684. DOI= http://doi.acm.org/10.1145/1357054.1357316

Session on Tactile UIs

Researchers from Tampere University presented a study where a rotating element is used to create tactile output, and they assessed the emotional perception of the stimuli (http://mobilehaptics.cs.uta.fi [1]). One application scenario is to use haptic feedback to create applications that allow us to “be in touch”. From Stephen Brewster's group, a project was presented that looks into how the performance of a touchscreen keyboard can be enhanced by tactile feedback [2]. In one condition they used two actuators. Both papers are interesting and provide insights for two of our current projects on multi-tactile output.

[1] Salminen, K., Surakka, V., Lylykangas, J., Raisamo, J., Saarinen, R., Raisamo, R., Rantala, J., and Evreinov, G. 2008. Emotional and behavioral responses to haptic stimulation. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 1555-1562. DOI= http://doi.acm.org/10.1145/1357054.1357298

[2] Hoggan, E., Brewster, S. A., and Johnston, J. 2008. Investigating the effectiveness of tactile feedback for mobile touchscreens. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 1573-1582. DOI= http://doi.acm.org/10.1145/1357054.1357300

CHI Conference in Florence

On Sunday afternoon I flew to Florence, and in the evening we met up with former colleagues – CHI always feels like a school reunion 😉 and it is great to get first-hand reports on what everyone is currently working on. On the plane I met Peter Thomas (editor of the Personal and Ubiquitous Computing journal) and we talked about the option of a special issue on automotive…

We have rented a house in the Tuscan mountains together with Antonio's group and collaborators from BMW Research and T-Labs. Even though we have to commute into Florence every day, it is just great that we have our “own” house – and it is much cheaper (but we have to do our own dishes).

The conference is massive – 2300 people. There is a lot of interesting work, and hence it is not feasible to cover it in a few sentences. Nevertheless, here are some random pointers:

In the keynote a reference to an old reading machine by Athanasius Kircher was mentioned.

Mouse Mischief – educational software – 30 mice connected to 1 PC – cool!

Reality-based interaction – conceptual paper – arguing that things should behave as in the real world – interesting concept bringing together many new UI ideas

Inflatable mouse – cool technology from Korea – interesting use cases – we could integrate this in some of our projects (not inflating the mouse but inflating other things)

Multiple Maps – Synthesizing many maps – could be interesting for new navigation functions

Rub the Stane – interactive surfaces – detection of scratching noises using only a microphone

Usability evaluation considered harmful – the annual discussion on how to make CHI more interesting continues

It seems there is currently some work going on looking at technologies in religious practice. Over lunch we developed interesting ideas towards remote access to multimedia information (e.g. services of one's local church) and sharing awareness. This domain is intriguing because churches often form tight communities and content is regularly produced and available. Perhaps we should follow up on this with a project…

Diary study on mobile information needs – good base literature on what information people need/use when they are mobile

K-Sketch – cool sketching technique.

Crowdsourcing user studies – reminded me of my visit to http://humangrid.eu

Lean and Zoom – simple idea – you come closer, it gets bigger – nicely done

Work on our new lab space has started – ideas for intelligent building materials

This week, work on our new lab space started 🙂 With all the drilling and hammering, leaving for CHI in Florence seemed like perfect timing. Our rooms are located in a listed historical building, and hence planning is always a little more complicated, but we are compensated by working in a really nice building.

As I was involved in planning the space for the lab, we had the opportunity to integrate an area dedicated to large interactive surfaces where we can explore different options for interaction.

Seeing the process of planning and carrying out indoor building work, ideas related to smart building materials inevitably spring to mind. Much work goes into communication between the different people involved in the process and into establishing and communicating the current status (structure, power routing, ventilation shafts, insulation, etc.) of the building. If every brick, fixture, panel, screw and cable used could provide information about its position and status, we could create valuable applications – obviously always based on the assumption that computing and communication get cheaper… I think it could be an interesting student project to systematically assess which building materials would benefit most from sensing (or self-awareness) and processing and what applications this would enable; and, in a second step, to create and validate a prototype.
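Purely as a thought experiment, a self-describing component might periodically report a small status message like the following (all fields and values are hypothetical):

```python
import json

# What a sensing plaster board might broadcast: identity, position in the
# building, and a few sensor readings that planning and maintenance tools
# could aggregate.
component = {
    "id": "board-0481",
    "type": "plaster_board",
    "position": {"floor": 2, "wall": "north", "x_m": 3.2, "y_m": 1.1},
    "status": {"humidity_pct": 64.0, "temperature_c": 18.5},
}
print(json.dumps(component, indent=2))
```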

Humangrid – are humans easier to program than systems?

In the afternoon I visited humangrid, a startup company in Dortmund. Their basic idea is to create a platform for crowdsourcing – basically outsourcing small tasks that are easy for humans to perform to a large number of clickworkers. One example of such a scenario is tagging and classifying media. It is interesting that they aim to create a platform that offers real contracts and provides guarantees – which in my eyes makes it more ambitious than Amazon's Mechanical Turk.

One interesting argument is that programming humans (as intelligent processors) to do a certain task that involves intelligence is easier and cheaper than creating software that does it fully automatically. Obviously, with software there is nearly zero cost for performing the tasks once the software is completed; however, if the development costs are extremely high, paying the human processor a small amount for each task may still be cheaper. The idea is a bit like creating a prototype using Wizard of Oz – and never replacing the wizard in the final version.

In our discussion we developed some ideas on how pervasive computing and mobile technologies could link to the overall concept of the human grid and crowdsourcing, creating opportunities for new services that are currently not possible. One of our students will start a master's thesis on this idea next month – I am already curious whether we can get the idea working.

Have Not Changed Profession – Hospitals are complex

This morning we had the great opportunity to observe and discuss workflows and work practices in the operating area of the Elisabeth Hospital in Essen. It was amazing how much time we got from the (really busy) personnel, and it provided us with many new insights.

The complexity of scheduling patients, operations, equipment and consumables in a very dynamic environment poses great challenges, and it was interesting to see how well it works with current technologies. However, looking at the systems in use and considering upcoming pervasive computing technologies, a great potential for easing tasks and processes is apparent. Keeping track of things and people, as well as documentation of actions, are central areas that could benefit.

From a user interface perspective it is very clear that paper and phone communication play an important role, even in such a high-tech environment. We should look a bit more into Anoto pen technology – perhaps this could be an enabler for some of the ideas we discussed. Several ideas related to implicit interaction and context awareness (already partly discussed in the context of a project in Munich [1]) re-surfaced. Similarly, questions related to data access and search tools seem to play an interesting role. With all the need for documentation, it is relevant to re-think in what ways data is stored and when it is analysed (at storage time or at retrieval time).

One general message from such a visit is to appreciate people's insight into these processes, which clearly indicates that a user-centered design process is the only suitable way to move innovation forward in such environments and thereby create ownership and acceptance.

[1] A. Schmidt, F. Alt, D. Wilhelm, J. Niggemann, H. Feussner. Experimenting with ubiquitous computing technologies in productive environments. e & i Elektrotechnik und Informationstechnik, Springer Verlag. Volume 123, Number 4 / April, 2006. pages 135-139

DIY automotive UI design – or how hard is it to design for older people

The picture does not show a research prototype – it shows the actual interior of a 5-series BMW (a fairly recent model). The driver (an elderly lady) adapted the UI to suit her needs. This modification includes labeling the controls that are important, writing instructions for more complicated controls close to them (hereby implementing one of the key ideas of embedded information [1]), and covering controls that are “useless” to the user.

At first I assumed this was a prank* – but it seems to be genuine, and that makes it really interesting and carries important lessons with regard to designing for drivers aged 80 and older. Having different skins (not just for GUIs, but in a physical approach), as well as UI components that can be composed (e.g. based on user needs) in the embedded and tangible domain, seems challenging but may open new opportunities for customized UIs. Perhaps investigating ideas for personalizing physical user interfaces – and in particular car UIs – may be an interesting project.

[1] Albrecht Schmidt, Matthias Kranz, Paul Holleis. Embedded Information. UbiComp 2004, Workshop ‘Ubiquitous Display Environments’, September 2004 http://www.hcilab.org/documents/EmbeddedInformationWorkshopUbiComp2004.pdf

* will try to get more evidence that it is real 🙂

Application Workshop of KDUbiq in Porto

After frost and snow yesterday morning in Germany, being in Porto (Portugal) is quite a treat. The KDubiq application workshop runs in parallel to the summer school, and yesterday evening it was interesting to meet some of the people teaching there.

The more I learn about data mining and machine learning, the more I see even greater potential in many ubicomp application domains. In my talk “Ubicomp Applications and Beyond – Research Challenges and Visions”, I looked back at selected applications and systems that we have developed over the last 10 years (have a look at the slides – I, too, was surprised by the variety of projects we did over the years ;-). So far we have often used only basic machine learning methods in our implementations – in many cases, creating a version 2 of these systems, where machine learning research is brought together with ubicomp research and new technology platforms, could make a real difference.

Alessandro Donati from ESA gave a talk, “Technology for challenging future space missions”, which introduced several challenges. He explained their approach to introducing technology into mission control. The basic idea is that the technology providers create a new application or tool together with the users. He strongly argued for a user-centred design and development process. It is interesting to see that the concept of user-centred development processes is becoming more widespread and is moving beyond classical user interfaces into complex system development.

User-generated tutorials – implicit interaction as basis for learning

After inspiring discussions during the workshop and in the evening, I reconsidered some ideas for tutorials generated automatically from user interaction. The basic idea is to capture application usage (e.g. using UsaProxy and doing screen capture) continuously – hard disks are nowadays big enough 😉 Using query mechanisms and data mining, a user can ask about a topic and will then get samples of use (related to this situation). It raises some privacy questions, but I think this approach could offer a new way of creating e-learning content… maybe a project topic?
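A minimal sketch of the capture-and-query loop, under the assumption that interaction events can be logged as simple records (the event fields and the keyword matching are placeholders for real instrumentation and mining):

```python
import json
import time

LOG = "usage_log.jsonl"

def capture(event: dict) -> None:
    """Append one interaction event with a timestamp (continuous logging)."""
    event["t"] = time.time()
    with open(LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def query(topic: str) -> list:
    """Return logged episodes whose action or target mentions the topic."""
    with open(LOG) as f:
        events = [json.loads(line) for line in f]
    return [e for e in events if topic in e.get("action", "") + e.get("target", "")]

capture({"action": "menu_select", "target": "insert_table"})
print(query("table"))  # sample of use, to be shown as a tutorial snippet
```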

Visiting the inHaus in Duisburg

This morning we visited the inHaus innovation center in Duisburg (run by Fraunhofer, located on the university campus). The inHaus is a prototype of a smart environment and a pretty unique research, development and experimentation facility in Germany. We got a tour of the house, and Torsten Stevens from Fraunhofer IMS showed us some current developments and several demos. Some of the demos reminded me of work we started in Lancaster but never pushed beyond a research prototype, e.g. the load sensing experiments [1], [2].

The inHaus impressively demonstrates the technical feasibility of home automation and the potential of intelligent living spaces. Beyond that, however, I strongly believe that intelligent environments have to move towards the user – embracing more the way people live their lives and providing support for user needs. Together with colleagues from Microsoft Research and Georgia Tech, we are organizing the workshop Pervasive Computing at Home, which is held as part of Pervasive 2008 in Sydney and focuses on this topic.

Currently, the market for smart homes is still small. But looking at technological advances, it is not hard to imagine that some technologies and services will soon move from “luxury gadget” to “common tool”. Perhaps wellness, ambient assisted living and home health care are initial areas. In this field we will jointly supervise a thesis project of one of our students over the next months.

Currently, most products for smart homes are high-quality, premium, high-priced, and designed for a long lifetime (typically 10 to 20 years). Looking at what happened in other markets (e.g. navigation systems, now sold at €150 retail including a GPS unit, maps, touch screen and video player), it seems to me there is definitely an interesting space for non-premium products in the domain of intelligent environments.

[1] Schmidt, A., Strohbach, M., Laerhoven, K. v., Friday, A., and Gellersen, H. 2002. Context Acquisition Based on Load Sensing. In Proceedings of the 4th international Conference on Ubiquitous Computing (Göteborg, Sweden, September 29 – October 01, 2002). G. Borriello and L. E. Holmquist, Eds. Lecture Notes In Computer Science, vol. 2498. Springer-Verlag, London, 333-350.

[2] Albrecht Schmidt, Martin Strohbach, Kristof Van Laerhoven, Hans-Werner Gellersen: Ubiquitous Interaction – Using Surfaces in Everyday Environments as Pointing Devices. User Interfaces for All 2002. Springer LNCS.

OLPC – cute and interesting – but what type of computer is it?

After the conference I finally had some time to try out my new XO laptop (OLPC). It is fairly small, has a rubber keyboard and a very good screen, and it can be used in laptop and e-book mode. A colleague described it as somewhere between a mobile phone and a notebook computer – at first I did not get it, but after using it I fully understand.

There is good documentation available – the getting-started manual at laptop.org provides a very good entry point. Getting it up and running was really easy (finding the key for my WiFi access point at home was the most difficult part 😉

There are two interesting wikis with material online, at olpcaustria.org and laptop.org. I am looking forward to trying the development environments supplied with the standard distribution (Pippy and Etoys).

I would expect that when Vivien gets up in the morning and sees it, I will be second in line for exploring the XO further. It is really designed in a way that makes it attractive for children. To say more about the usability (in particular of the software) I need to explore it more…

To me it is not understandable why it is so difficult to get them in Europe. I think the “buy one, donate one” approach was very good (but again, this was only available in the US)…

Thoughts on Keys

Many keys (to rooms and buildings) are still tangible objects, where the tangible properties and affordances imply certain ways of usage. Who has not gotten a hotel key that you hand in at reception because it is too big to be carried in a pocket? Moving to digital keys, we lose craft and unique affordances, as they are just plastic cards or RFID tags in a specific form. With the move towards biometric authentication, it seems that the key becomes intangible (so we lose options in the design space) but embedded into us (which opens up new possibilities).

The major drawback of physical, tangible keys is that if you don't have one with you when you are standing in front of the door, it cannot help you – even if you know where the key is and can communicate with the person who has it.
… but thinking back a few days to the visions in Hiroshi Ishii's keynote, it seems that this is a very short-term problem. Having atoms that can be controlled (tangible bits), we could just get the data for the key from remote and reproduce it locally. With current technology this already seems feasible – in principle: one person uses a 3D scanner (e.g. embedded in a mobile device that has a camera and communication) and the other person has a 3D printer/laser cutter. Still, the question remains whether moving to digital keys is not much easier.

However, if you do not have the key – even though there is a solution “in principle” – it does not really help 😉

Will cars become a more open platform?

Today I met with Matthias Kranz in Munich. Besides discussing his thesis, I got to see his new car (a Prius) – quite impressive and interesting interfaces. Later I met with Wolfgang Spießl, who recently started his PhD in cooperation with BMW – again seeing an interesting and impressive (test) car.

It is really curious to see how much interest there is in the hobbyist communities in car interfaces and protocols. The June 2007 issue of Elektor (http://www.elektor.de/) had an article on an OBD-2 analyser, a recent issue of EAM (http://www.eam-magazin.de/) had a similar article, and there are many community sites on the web, e.g. http://www.canhack.de/
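To give an impression of what these hobbyist articles describe: with an ELM327-compatible adapter and the pyserial package (both assumptions – port name and baud rate depend on the hardware), reading the engine RPM via the standard OBD-II PID 0x0C looks roughly like this:

```python
import serial  # pip install pyserial

# '/dev/ttyUSB0' and 38400 baud are placeholders typical for ELM327 clones.
with serial.Serial("/dev/ttyUSB0", 38400, timeout=2) as port:
    port.write(b"010C\r")  # mode 01, PID 0x0C: engine RPM (standard OBD-II)
    raw = port.read(64).decode("ascii", errors="ignore")
    # Keep only hex tokens; a typical reply contains '41 0C 1A F8',
    # and the standard formula is RPM = (A * 256 + B) / 4.
    hexdigits = set("0123456789ABCDEF")
    tokens = [t for t in raw.split() if t and set(t) <= hexdigits]
    a, b = int(tokens[-2], 16), int(tokens[-1], 16)
    print("RPM:", (a * 256 + b) / 4)
```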

Perhaps we could do a project on this topic in one of our pervasive computing related classes? There are so many technical opportunities, and the challenge is to find the convincing applications!

Sensing as a common tool – when will it be integrated into building materials?

This morning a heating and water technician checked on the wet spots on the wall of my new flat in Essen. Using a hygrometer, he looked for the dampest area and then broke a hole into the wall. After opening the wall, it was very easy to see that the outside wall is wet and that the heating is OK.

The hole in the wall does not really look good 🙁

This makes me wonder when building materials with integrated sensing will move from the lab to the real world. Pipe insulation, plaster boards, and bricks with integrated sensors would be quite easy to create, and there are ideas for doing it in a cheap and easy way. In the context of Pin&Play (later Voodoo I/O) we explored some ideas but never completed the prototypes for real use. Perhaps this could be an interesting project…

Interactive window displays – we have better ideas

It seems that a lot of people in the research community are convinced of interactive public spaces and interactive window displays. Over the last months I have seen great visions and ideas – and reflected on our own multi-touch ideas for interactive shop windows.

The installations I have seen in the real world, however, are at best boring (and often not functioning at all). It seems that even a student-project lab demo is more appealing and works at least as reliably.

Especially combining sensing (e.g. simple activity recognition, context) with low-threshold interactive content seems to have great potential. If anyone is interested in really cool stuff for a shop window (attention grabbing, eye catchers, interactive content, etc.) – talk to me. We are happy to discuss a project proposal 😉

Video conferences – easier but not better?

The Pervasive 2008 TPC meeting on Saturday was held distributed over 3 continents, linked via video conference. In Germany we had a really good time slot (12:00 to 20:00) – Australia and California had a really late/early day.

The meeting worked well over video, and considering the saved travel time, this seems an acceptable alternative to a fully physical meeting. It was interesting to see that video conferencing quality has not really improved much over the last years. We ran the TPC meeting for Ubicomp 2003 between the UK and the USA with a video conference system as well. And the first projects I worked on as a student researcher at the University of Ulm (in 1996) were on video conferencing, too.

It seems that over the last 10 years it has become much easier to set up a conference, and interoperability is less of an issue, but the quality is still poor (even with professional systems). I wonder if we should look into the topic again with a master's thesis – all the topics like high-quality AV, context-awareness, sharing, informal exchange, side channels, etc. appear still not to be there yet… or is the setting we used (Google Docs for sharing, EDAS as document repository, Skype for side-channel communication, and a professional video conference system) the natural way this develops?

What do you decide in the car?

While waiting in the lounge of the railway station in Stuttgart, I picked up a paper called “Auto-Bild” (the selection of magazines there is really poor 😉 and found an interesting news item in it.

KIA has done a survey (with over 2000 people) in the UK on decision making in the car. It appears that people use the time in the car to discuss major issues in their lives and that they make significant decisions during long journeys. I have not found the original survey from KIA, but there are several pages that discuss the results, e.g. gizmag.

Some findings in short – people talked about/made decisions on: going on holiday (63%), buying a car (50%), moving (40%), getting a pet (26%), getting married (23%). The main reason the car seems to be an effective environment for communication on a long journey is the fact that people are close together for a long time and no one can walk away (41%). The fact that you have a reason not to look the other person in the eyes, as you have to watch the road, was also valued.

Thinking about it, it may also have to do with the function of space. A car puts people close together – in some cases at intimate distances (up to 50 cm), but definitely at personal distances (50 cm–125 cm). There is a comprehensive overview by Nicolas Nova, Socio-cognitive functions of space in collaborative settings: a literature review about Space, Cognition and Collaboration (the original reference, to my knowledge, is Hall, E.T. (1966). The Hidden Dimension: Man’s Use of Space in Public and Private. Garden City, N.Y.: Doubleday).
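For reference, here are the distance bands quoted above as a tiny classifier (the boundaries follow the figures in the text, which go back to Hall's proxemics):

```python
def proxemic_zone(distance_cm: float) -> str:
    """Classify interpersonal distance using the bands quoted above."""
    if distance_cm <= 50:
        return "intimate"
    if distance_cm <= 125:
        return "personal"
    return "social or public"

print(proxemic_zone(45))   # 'intimate' – e.g. front seats of a small car
print(proxemic_zone(100))  # 'personal'
```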

This survey made me think more about the design space “car”. Recently two of my students – Anneke Winter and Wolfgang Spießl – finished their master's projects at BMW, looking into search technologies and user interfaces in the car. It seems there are a lot of ideas that can be pushed forward to realize Ubicomp in the car.

Navigation by calories – New insights useful for next generation navigation systems?

In a German science news ticker I saw an inspiring post reporting an experiment on orientation in relation to food. It describes an experiment where men and women were asked to visit a set of market stalls to taste food, and afterwards they were asked where the stalls were.

The – to me surprising – result was that women performed better than men (which, to my knowledge, is not often the case in typical orientation experiments) and that, independent of gender, the number of calories contained in the tasted food influenced performance. Basically, the more calories in the tasted food, the better people could remember where it was. I have had no chance yet to read the original paper (Joshua New, Max M. Krasnow, Danielle Truxaw and Steven J.C. Gaulin. Spatial adaptations for plant foraging: women excel and calories count. August 2007, Royal Society Publishing, http://www.journals.royalsoc.ac.uk) and my assessment is only based on the post in the news ticker.

This makes me think about future navigation systems and in particular landmark-based navigation. What landmarks are appropriate to use (e.g. places where you get rich food), and how gender-dependent is this (e.g. the route for men is explained via car dealers and computer shops, whereas for women via references to shoe shops – is this politically correct?).

Apropos landmark-based navigation: there is an interesting short paper from last year's UIST conference that looks into this issue in the context of personalized routes:
Patel, K., Chen, M. Y., Smith, I., and Landay, J. A. 2006. Personalizing routes. In Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology (Montreux, Switzerland, October 15 – 18, 2006). UIST ’06. ACM Press, New York, NY, 187-190. DOI= http://doi.acm.org/10.1145/1166253.1166282

Perhaps these ideas could be useful for a future navigation system…

Mirror with memory and a different perspective

This morning I corrected the proofs of the Pervasive and Mobile Computing journal paper I wrote together with Lucia Terrenghi at PerCom (Methods and Guidelines for the Design and Development of Domestic Ubiquitous Computing Applications, Proceedings of the Fifth Annual IEEE Conference on Pervasive Computing and Communications (PerCom), New York, NY, USA, March 2007).

This brought a topic back to my attention that we focused on for some time in Munich but never really completed: mirrors with enhanced functionality that can display information, capture what you were wearing on a certain date, or give you a new perspective (e.g. back, top) – such new perspectives can be really revealing, see the top of my head in the picture.

More details on the design concept can be found in section 5.2.2 of the paper. I think it is worthwhile to look into it again in a bachelor's or master's project. Even though Philips HomeLab has done some work there in their Intelligent Personal Care Environment project, I think there is much potential left.