Paper and demo in Salzburg at Auto-UI 2011

At the automotive user interface conference in Salzburg we presented some of our research. Salzburg is a really nice place, and Manfred and his team did a great job organizing the conference!

Based on the Bachelor's thesis of Stefan Schneegaß and some follow-up work, we published a full paper [1] that describes a KLM model for the car and a prototyping tool that makes use of the model. The model addresses the specific needs of in-car interaction: it models rotary controllers and accounts for the limited attention available while driving. The prototyping tool provides a means to quickly estimate interaction times. It supports visual prototyping using images of the UI and tangible prototyping using Nic Villar's VoodooIO. Looking forward to having Stefan on our team full-time :-)
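The core idea of a Keystroke-Level Model is simple: an interaction is broken down into a sequence of elementary operators, each with an empirically derived time, and the predicted task time is their sum. A minimal sketch of this estimation step, with purely illustrative operator times (not the values calibrated in the paper), could look like this:

```python
# Sketch of a KLM-style interaction time estimate.
# Operator times are illustrative placeholders, NOT the
# calibrated in-car values from the paper [1].
OPERATOR_TIMES = {
    "K": 0.28,  # press a button
    "R": 1.00,  # turn a rotary controller by one step (assumed operator)
    "M": 1.35,  # mental preparation
    "P": 1.10,  # reach for / point at a control
}

def estimate_time(sequence):
    """Sum operator times for a space-separated sequence like 'M P R R K'."""
    return sum(OPERATOR_TIMES[op] for op in sequence.split())

# e.g. think, reach for the rotary controller, turn two steps, press:
print(round(estimate_time("M P R R K"), 2))  # 4.73
```

A prototyping tool built on top of such a model only needs to map UI elements (or tangible controls) to operator sequences to produce time estimates automatically.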

We additionally had a demo of a recently completed thesis by Michael Kienast. Here we looked at how speech and gestures can be combined to control functions in the car, such as mirror adjustments or windscreen wipers. This multimodal approach combines the strengths of gestural interaction and speech interaction [2].
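One common way to realize such a combination is to let speech select the function (discrete, good for naming things) and let the gesture supply the continuous value (good for adjustment). A hypothetical fusion step along these lines, with assumed function names, might look like:

```python
# Hypothetical sketch of fusing a spoken function selection with a
# continuous gesture value; function names are illustrative assumptions,
# not the actual command set from the thesis.
FUNCTIONS = {
    "mirror": "adjust_mirror",
    "wiper": "set_wiper_speed",
}

def fuse(speech_command, gesture_value):
    """Combine a recognized spoken keyword with a normalized gesture value."""
    if speech_command not in FUNCTIONS:
        raise ValueError(f"unknown function: {speech_command}")
    # The speech channel names WHAT to control, the gesture HOW MUCH.
    return (FUNCTIONS[speech_command], gesture_value)

print(fuse("wiper", 0.6))  # ('set_wiper_speed', 0.6)
```

The appeal of this division of labor is that each modality does what it is best at, which is exactly the complementarity the demo explored.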

The evening event of the conference was at Festung Hohensalzburg – with a magnificent view over the town!

[1] Stefan Schneegaß, Bastian Pfleging, Dagmar Kern, Albrecht Schmidt. Support for modeling interaction with in-vehicle interfaces. Proceedings of the 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications 2011 (http://auto-ui.org), Salzburg, 30.11.–2.12.2011.

[2] Bastian Pfleging, Michael Kienast, Albrecht Schmidt. DEMO: A Multimodal Interaction Style Combining Speech and Touch Interaction in Automotive Environments. Adjunct Proceedings of the 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications 2011 (http://auto-ui.org), Salzburg, 30.11.–2.12.2011.

Lab Tour on arrival in Tokyo

After a long day and night of travelling I arrived in Tokyo. At the hotel we were met by our Japanese colleague Yoshito Tobe, who guided us to a lab tour in the afternoon. We went by train to the Morikawa Lab on the Komaba Research Campus of the University of Tokyo.

At the lab, students from different groups showed us their work and discussed their ideas with us. To mention only a few things: I got to try out a tutoring system for Japanese calligraphy, we saw prototypes for phone-based urban sensing, and we saw an implementation of a system that communicates between two devices using accelerometers and vibration motors [1].

After the tour we went up a tower building to watch the sunset over Tokyo, and we even had a view of Mount Fuji. As the observation floor is a touristy place, there are all sorts of interesting things – and I operated a nice machine to get a puri-CUBE.

Some more photos are available at http://tinyurl.com/LabTok11 (facebook account required) or publicly on flickr.

[1] vib-connect: A Device Selecting Interface Using Vibration by Hiroshi Nakahara et al. Demo at IOT 2010.

Ubicomp 2010

Today the 12th International Conference on Ubiquitous Computing (Ubicomp 2010) started in Copenhagen. The conference is very competitive and shows a wide range of work in the space of computing beyond the desktop. This year, 39 of 202 submitted papers and notes were accepted into the main program. In this part of the program there is a focus on work from North America (which seems to go together with conferences becoming ACM conferences).

The opening keynote was by Morten Kyng on “Making dreams come true – or how to avoid a living nightmare”. In his talk he outlined his view of palpable computing, which basically describes user-centered development of pervasive systems.

This year's Ubicomp has a large number of demos, and it was fun to engage with them and with the people presenting them. Christian Winkler from our group had an invited demo on “Sense-sation: An Extensible Platform for Integration of Phones into the Web”, showing a combined web and mobile phone platform that eases the development of applications that run across several phones. For example, it is very easy to create an application with a map interface where you mark an area on the map and request that each device currently in this area takes a photo and sends it back (given that the devices run the platform and that you have the right to use the camera on these phones). There will be a full paper on this in a few weeks, published at the Internet of Things Conference in Japan, and you can already check out the web page: http://www.test.sense-sation.de/
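The area-query example above boils down to a spatial filter over registered devices, combined with a per-device permission check, before any capture request goes out. This is an illustrative sketch of that filtering step (not the actual Sense-sation API; all names are assumptions):

```python
# Illustrative sketch of an area-based device query, NOT the real
# Sense-sation API. Device names and fields are assumptions.
from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    lat: float
    lon: float
    camera_allowed: bool  # has the user granted camera access?

def devices_in_area(devices, lat_min, lat_max, lon_min, lon_max):
    """Return devices inside the bounding box that permit camera use."""
    return [d for d in devices
            if lat_min <= d.lat <= lat_max
            and lon_min <= d.lon <= lon_max
            and d.camera_allowed]

# Two devices in Copenhagen (one without camera permission), one elsewhere:
fleet = [Device("a", 55.68, 12.59, True),
         Device("b", 55.68, 12.60, False),
         Device("c", 48.78, 9.18, True)]

targets = devices_in_area(fleet, 55.0, 56.0, 12.0, 13.0)
print([d.device_id for d in targets])  # ['a']
```

In a real deployment the platform would then push a capture request to each matching phone and collect the returned photos asynchronously.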

As Ubicomp is not held at a hotel (which I like), there is also no conference hotel with a default bar. Hence the organizers named a Ubicomp 2010 bar: Nyhavn 17. I think this is a good idea!

Ephemeral User Interfaces – nothing lasts forever, and some things even shorter

Robustness and durability are typical qualities we aim for when building interactive prototypes and systems. Tanja and Axel explored what user experience we can create when we deliberately design something to be ephemeral (not lasting; there is a good German word for it: “vergänglich”). Ephemeral User Interfaces are user interface elements and technologies that are designed to be engaging but fragile [1]. In the prototype we showed at TEI 2010 in Cambridge, the user can interact with soap bubbles to control a computer. Axel has some additional photos on his web page.


There is a short video of the installation on the technology review blog: http://www.technologyreview.com/blog/editors/24729

[1] Sylvester, A., Döring, T., and Schmidt, A. 2010. Liquids, smoke, and soap bubbles: reflections on materials for ephemeral user interfaces. In Proceedings of the Fourth International Conference on Tangible, Embedded, and Embodied Interaction (Cambridge, Massachusetts, USA, January 24–27, 2010). TEI '10. ACM, New York, NY, 269–270. DOI= http://doi.acm.org/10.1145/1709886.1709941