Call for Papers: IEEE Pervasive Computing Special Issue on Automotive Pervasive Computing

Next year, IEEE Pervasive Computing magazine will publish a special issue on Automotive Pervasive Computing. I am honored to edit this issue together with Joe Paradiso and Brian Noble :-) The submission deadline for full papers is October 1st, and works in progress are due one month later – see the call for papers for details.

Cars have become an interesting and challenging microcosm for pervasive computing research, and we invite articles relating to pervasive computing in the automotive context. Examples of relevant topics include:

Sensing and context in automotive environments

  • Pervasive sensor systems in the car
  • Use of sensors and context for automotive applications
  • Contextual vehicular applications
  • Collaborative sensing with multiple cars

Automotive user interfaces

  • Concepts for in-car user interfaces based on pervasive computing technology
  • Multi-modal interaction in the car
  • Detecting user intentions, emotions, and distraction
  • User interfaces for assistive functionality and autonomous driving
  • Applications of car-to-car communication

Pervasive computing applications in the car

  • Contextual information and navigation systems
  • Technologies to improve media consumption while driving
  • Communication appliances for drivers and passengers
  • In-car pervasive gaming for passengers and drivers

Experience with pervasive computing in the car

  • Experiences with pervasive computing technologies in cars
  • Case studies of automotive pervasive computing
  • Ethnographic work on the use of technologies in cars

For details, see the CfP at: http://computer.org/pervasive/cfp3

Scientific papers as audio content?

We have started to experiment with reading articles aloud and providing them as MP3 files or podcasts (see the Facebook page or the blog post). So far I have found a number of places where I like listening to them – from the gym to the car. Perhaps we should make it mandatory that the camera-ready version of a paper consists of the PDF, the source, and an audio file (e.g. the paper read by the author, or a description of the work) – I guess authors reading their papers aloud could improve some papers, as the authors would finally read what they write ;-)

Coming across this sign in a bookshop in Essen made me smile – especially as we are looking into ways that may make reading feasible while driving [1].

[1] Kern, D., Marshall, P., and Schmidt, A. 2010. Gazemarks: gaze-based visual placeholders to ease attention switching. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ’10. ACM, New York, NY, 2093-2102. DOI= http://doi.acm.org/10.1145/1753326.1753646

PS: To answer some of the questions I got recently about the audio files of research papers: Yes, I think it is nice to have real humans reading. Yes, I know that there is brilliant text-to-speech software (but as long as they do the Simpsons with actors' voices, we are not there yet).

Handheld Laser Projector


Enrico got a handheld laser projector for his project. The nice thing about this technology is that the projected image is always in focus – it does not matter whether you move the projector or project onto an uneven surface. The AAXA L1 Laser Pico Projector (even though there are plenty of things that could be improved, noise for a start) provides some inspiration as to what will become possible once these devices are common in mobile phones. In Lancaster, Enrico already explored some of the usage scenarios and interaction techniques in 2007/2008 [1], and I am really curious about further ones. There was a workshop at Pervasive 2010 looking at current research on personal projectors [2].

With this piece of technology we can also move forward on the idea of a device where the interactive shell and the functional core of a product are separated. We published the concept as work in progress at CHI [3], and perhaps it is now time to look into a realistic implementation using such a laser projector.

[1] Hang, A., Rukzio, E., and Greaves, A. 2008. Projector phone: a study of using mobile phones with integrated projector for interaction with maps. In Proceedings of the 10th international Conference on Human Computer interaction with Mobile Devices and Services (Amsterdam, The Netherlands, September 02 – 05, 2008). MobileHCI ’08. ACM, New York, NY, 207-216. DOI= http://doi.acm.org/10.1145/1409240.1409263

[2] UBIPROJECTION 2010 http://eis.comp.lancs.ac.uk/workshops/ubiproject2010/

[3] Doering, T., Pfleging, B., Kray, C., and Schmidt, A. 2010. Design by physical composition for complex tangible user interfaces. In Proceedings of the 28th of the international Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI EA ’10. ACM, New York, NY, 3541-3546. DOI= http://doi.acm.org/10.1145/1753846.1754015

Teaching Ubicomp? What is it we should teach?

Why should we teach Ubicomp? What are the core issues when teaching Ubicomp? How do we cope with the rapid changes in technologies if we provide practical exercises in our Pervasive Computing classes? What skills will students take away from the course?

As Ubicomp is still a young and dynamic subject, it is inevitable that we have to ask these questions. To share our experiences in teaching, we met at ETH Zürich. Friedemann Mattern, Marc Langheinrich, Michael Rohs, Kay Römer, and many of our PhD students (and me ;-) spent two days in Zürich collecting materials and discussing the above questions. The hardest one is obviously the "what is Ubicomp" question…

For me, the key thing is that we teach about distributed computing systems that are aware of and linked to the real world, and that are used by humans. The systems aspect is central, and I think the specific technologies, tools, and methods we teach are exchangeable. The second point I want to make in my pervasive computing class is to get students excited about and aware of the potential of computing in the future, and of how we are at the heart of a major change that goes beyond technology.

We are currently compiling a wiki with teaching materials, which we hope will become public in the future (at least in parts). If you teach a Ubicomp-related course, or if you know of one, please feel free to add a comment with a link to the webpage, and we will try to include it in the collection.

PS: examples of the UI challenge of ubicomp are everywhere.