Aaron Quigley will become director of HITLab Australia

Aaron announced that he is going to be the founding director of the Human Interface Technology Laboratory Australia and Professor at the University of Tasmania. After the HITLabs in Washington and New Zealand, this is the third one. It is quite a challenge – but he is the right person for it!

What can one say? Congratulations and a quote from Mark Twain: Twenty years from now you will be more disappointed by the things you didn’t do than by the ones you did do. So throw off the bowlines. Sail away from the safe harbor. Catch the trade winds in your sails. Explore. Dream. Discover.

PS: Found myself checking two things: (1) where Tasmania is and (2) when I have my next sabbatical …

Linking the activities in the physical world to actions in the digital/virtual

Currently we have an assignment in our Pervasive Computing class that asks students to design and develop a system where actions are associated with artifacts. Technically, students should develop a web-based solution using RFID. Apropos RFID: if you are looking for a good introduction to RFID, read Roy Want’s article in IEEE Pervasive Computing [1].

We use the hardware from http://nabaztag.com/ (Ztamp:s and Mir:ror), as the focus is on the concept and application and not on the underlying technology. To ease development, Florian and Ali have built a small system that offers web callbacks: students can register a URL that is called whenever a tag is read.
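The callback mechanism can be sketched roughly as follows. This is a hypothetical illustration of the idea, not the actual API of Florian and Ali's system; all names (`CallbackRegistry`, `register`, `tag_read`) and the query-parameter convention are my assumptions.

```python
import urllib.parse

class CallbackRegistry:
    """Illustrative tag-to-URL callback registry: a student registers a URL
    for a tag ID; when the reader scans that tag, the URL is called with the
    tag ID appended as a query parameter."""

    def __init__(self, fetch=None):
        # `fetch` performs the HTTP GET; injected so it can be stubbed out.
        self.callbacks = {}          # tag_id -> list of registered URLs
        self.fetch = fetch or self._default_fetch

    @staticmethod
    def _default_fetch(url):
        import urllib.request
        return urllib.request.urlopen(url).read()

    def register(self, tag_id, url):
        self.callbacks.setdefault(tag_id, []).append(url)

    def tag_read(self, tag_id):
        # Called by the reader bridge whenever an RFID tag is scanned.
        called = []
        for url in self.callbacks.get(tag_id, []):
            full = url + "?" + urllib.parse.urlencode({"tag": tag_id})
            self.fetch(full)
            called.append(full)
        return called
```

A student's web application then only needs to handle a plain HTTP request at the registered URL, which keeps the assignment focused on the application idea rather than on RFID plumbing.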

Linking by tagging objects has been well explored, e.g. [2] and [3], and I think it is about time that these technologies made an impact in the consumer market – the technology is getting cheap enough now (and perhaps one of our students has a great idea).

Some years back (in the last millennium) a company tried to push the linking of paper adverts to digital content with the CueCat (http://en.wikipedia.org/wiki/CueCat) – I was impressed and inspired at the time, but in my view it had two major weaknesses: (1) it was technically too early and (2) it encoded serial numbers instead of URLs. The RadioShack catalog and the issues of Wired Magazine that included codes showed the potential – but it was too cumbersome, as it was restricted to the PC …

We did some work on the topic around that time, too – an RFID reader integrated into a glove – which resulted in a poster at ISWC [4] and a patent [5].

[1] Want, R. 2006. An Introduction to RFID Technology. IEEE Pervasive Computing 5, 1 (Jan. 2006), 25. DOI= http://dx.doi.org/10.1109/MPRV.2006.2

[2] Harrison, B. L., Fishkin, K. P., Gujar, A., Portnov, D., and Want, R. 1999. Bridging physical and virtual worlds with tagged documents, objects and locations. In CHI ’99 Extended Abstracts on Human Factors in Computing Systems (Pittsburgh, Pennsylvania, May 15 – 20, 1999). CHI ’99. ACM, New York, NY, 29-30. DOI= http://doi.acm.org/10.1145/632716.632738

[3] Ljungstrand, P. and Holmquist, L. E. 1999. WebStickers: using physical objects as WWW bookmarks. In CHI ’99 Extended Abstracts on Human Factors in Computing Systems (Pittsburgh, Pennsylvania, May 15 – 20, 1999). CHI ’99. ACM, New York, NY, 332-333. DOI= http://doi.acm.org/10.1145/632716.632916

[4] Schmidt, A., Gellersen, H., and Merz, C. 2000. Enabling Implicit Human Computer Interaction: A Wearable RFID-Tag Reader. In Proceedings of the 4th IEEE international Symposium on Wearable Computers (October 18 – 21, 2000). ISWC. IEEE Computer Society, Washington, DC, 193. (Poster as large PNG)

[5] US Patent 6614351 – Computerized system for automatically monitoring processing of objects. September 2, 2003. http://www.patentstorm.us/patents/6614351/description.html

Human Computer Confluence – Information Day in Brussels

By the end of the month FET Open will launch the call for the proactive initiative on Human Computer Confluence. The term is new, and hopefully it will really lead to new ideas. Today there was already an information day on the upcoming proactive initiatives. I arrived the evening before – it is always a treat to take a walk in the city.

The presentations were not really surprising, and the short intros by the participants also remained very generic. Having seen the call that is now finalized, and having been at the consultation meetings, it seems to me that the focus is rather broad for a proactive initiative … but with many people wanting a piece of the cake this seems inevitable.

I presented a short idea on “breaking space and time boundaries” – related to a previous post on predicting the future. The main idea is that with massive sensing (by a large number of people) and with uniform access to this information – independent of time and space – we will be able to create a different view of our reality. We are thinking of putting a consortium together for an IP. Interested? Then give me a call.

Happy Birthday – Prof. Thomas Christaller 60

It was a great honor to be invited to Prof. Thomas Christaller’s 60th birthday. During my time at Fraunhofer IAIS I had the pleasure of working with him and learning from him! He has many interests and skills! See his web pages at Fraunhofer IAIS and at Lebenskunst.

The symposium at Schloß Birlinghoven featured an impressive list of people, and I learned more about the history of German computer science. It is impressive to see that many of the people who shaped AI in Germany worked together at some point in one project (HAM-RPM, HAM-ANS, see [1]). This highlighted to me again the importance of educating people in research, not just getting research done – as nicely described by Patterson in “Your students are your legacy” [2] – an article worth reading for anyone advising students.

The afternoon and evening were much too short to catch up with everyone. It was great to meet Christian Bauckhage, who took over my office in Bonn, in person. He is now professor at B-IT and at Fraunhofer IAIS, and I hope we have a chance to work together in the future. At WWW 2009 he published a paper on a new approach to social network analysis [3], applied to Slashdot. This approach, which discriminates between negative and positive connections, could also be interesting for social networks that are grounded in the real world … it seems there is already an idea for a joint project.

After telling Karl-Heinz Sylla that I am currently teaching a software engineering class, he recommended the following book: Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin [4]. The book looks good, and one interesting argument is that programming well in the small (clean code) is a prerequisite for large systems – or, put the other way round, you break big software systems through bad programming in the small. Perhaps there is some time over the summer to read it.

PS: Thomas chose an interesting option for birthday presents: bicycles for Africa – a quite remarkable project. I will see if I find the URL and post it in a comment…

[1] Wolfgang Hoeppner, Thomas Christaller, Heinz Marburger, Katharina Morik, Bernhard Nebel, Mike O’Leary, Wolfgang Wahlster: Beyond Domain-Independence: Experience With the Development of a German Language Access System to Highly Diverse Background Systems. IJCAI 1983: 588-594

[2] Patterson, D. A. 2009. Viewpoint: Your students are your legacy. Commun. ACM 52, 3 (Mar. 2009), 30-33. DOI= http://doi.acm.org/10.1145/1467247.1467259

[3] Kunegis, J., Lommatzsch, A., and Bauckhage, C. 2009. The slashdot zoo: mining a social network with negative edges. In Proceedings of the 18th international Conference on World Wide Web (Madrid, Spain, April 20 – 24, 2009). WWW ’09. ACM, New York, NY, 741-750. DOI= http://doi.acm.org/10.1145/1526709.1526809

[4] Robert C. Martin. Clean Code: A Handbook of Agile Software Craftsmanship. Prentice Hall International. 2008 (Amazon-Link)

Steve Hinske defends his PhD Thesis at ETH Zurich

“Sounds like a fun project” was my first reaction when I read, some time back, the first paper on Steve‘s work on augmented toys and augmented games. Reading through his thesis and seeing more of his papers, it seems there was a lot of hard work, too.

Thinking more about it, I wondered how toys are really going to change in the future and to what extent this is going to happen. Technically a lot is feasible, as Steve demonstrates well in his thesis (photo from www.vs.inf.ethz.ch); if you do not have time to read the thesis, I recommend looking at two of his papers: [1] and [2]. They give a good overview of the systems he created. In the discussion we could see that there can be very interesting business models involving third-party developers for such toys.

… but nevertheless the playing experience is something very special, and I would bet that augmented toys will come but ordinary non-augmented dolls will stay.

PS: The cafeteria at ETH provided another example for my collection “if you need a sign/label – you have got the UI design wrong” – a great example of how applying the Gestalt laws would have been so easy, while arrows look so bad 😉

[1] Hinske, S. and Langheinrich, M. 2009. W41K: digitally augmenting traditional game environments. In Proceedings of the 3rd international Conference on Tangible and Embedded interaction (Cambridge, United Kingdom, February 16 – 18, 2009). TEI ’09. ACM, New York, NY, 99-106. DOI= http://doi.acm.org/10.1145/1517664.1517691

[2] Hinske, S., Langheinrich, M., and Lampe, M. 2008. Towards guidelines for designing augmented toy environments. In Proceedings of the 7th ACM Conference on Designing interactive Systems (Cape Town, South Africa, February 25 – 27, 2008). DIS ’08. ACM, New York, NY, 78-87. DOI= http://doi.acm.org/10.1145/1394445.1394454

Automotive UIs – conference update, cool UI

The Automotive User Interfaces conference has received nearly 40 (to be exact, 37) high-quality submissions – we are really thrilled about the contributions – and now the review process is on! We will have more details on the program in a few weeks.

Not a submission to the conference – but nevertheless cool: the MINI center globe UI – a 3D display concept for cars:

eBook, tangible programming, iPhones bring back wired telephony

Having used the Sony PRS-505 for a few weeks now (mainly to read dissertations and project reports), I have quickly gotten used to carrying less weight. The user interface requires some learning – as the screen is pretty slow, pressing a button does not give immediate feedback, and that feels strange – more than expected. I wonder if there are studies on interaction with electronic paper? Another issue: it seems to depend on the crew whether or not it is OK to read from an eBook during the entire flight (including take-off and landing) …

While reading a thesis I was reminded of an interesting paper on tangible programming [1] from a special issue of Personal and Ubiquitous Computing we did in 2004. The paper situates the topic historically and gives an interesting introduction.

In recent meetings as well as in airports around the world one can observe a trend: wired telephony! Whereas people with traditional mobile phones walk up and down while talking on the phone, iPhone users often sit wired up to the nearest power plug and phone … it seems Apple has re-invented wired telephony 😉 and other brands will soon follow (make sure to reserve a seat with a power connection).

[1] McNerney, T. S. 2004. From turtles to Tangible Programming Bricks: explorations in physical language design. Personal Ubiquitous Comput. 8, 5 (Sep. 2004), 326-337. DOI= http://dx.doi.org/10.1007/s00779-004-0295-6

Interesting articles in Wired Magazine

At Miami airport I picked up the current issue of Wired magazine – and Air Berlin gave me plenty of time to read it – I was nearly through when we finally departed after 2 hours without air conditioning in the plane 🙁

I am not really complaining, as there is a set of inspiring articles about the digital economy:

Hope you find a more comfortable place to read them 😉

Statistical Data on phone usage and ICT

Ever wanted to cite the number of “mobile cellular subscriptions per 100 inhabitants” in Albania, Algeria, Argentina, Armenia, Australia, …, United States, Uruguay, Uzbekistan, Venezuela, Viet Nam, Yemen, Zambia or Zimbabwe? Or the spending on mobile telephony or the computer penetration in these countries? Then the website I just came across may be interesting for you too: http://measuring-ict.unctad.org/

Here are the direct links to documents containing data:

Some of the figures seem really high to me – but I have not looked into the details. They have also published a handbook on how to measure ICT access and use:
MANUAL for Measuring ICT Access and Use by Households and Individuals

Interesting tool to find flights …

Looking for a way to get from Essen (leaving not before 18:00) to Newcastle (arriving before 10:00 the next day), and back from Newcastle (leaving not before 17:00) to Zürich (arriving before 10:00 the next day), Chris pointed me to a website that is very helpful for such tasks (at least for the flying part of it): http://www.skyscanner.de

I wonder how hard it is to build a similar tool that takes further modes of transport (e.g. train and rental car) into account…
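The core of such a tool is a constraint filter over candidate connections, whatever the mode. A minimal sketch (the connection data, field names, and times are all made up for illustration):

```python
from datetime import datetime

def feasible(connections, not_before, arrive_before):
    """Keep connections that depart no earlier than `not_before`
    and arrive no later than `arrive_before`, regardless of mode."""
    return [c for c in connections
            if c["depart"] >= not_before and c["arrive"] <= arrive_before]

# Hypothetical candidates for one leg of the trip; a real tool would pull
# these from flight, rail, and rental-car data sources.
connections = [
    {"mode": "flight", "depart": datetime(2009, 6, 1, 19, 30),
     "arrive": datetime(2009, 6, 1, 21, 50)},
    {"mode": "train",  "depart": datetime(2009, 6, 1, 16, 0),   # leaves too early
     "arrive": datetime(2009, 6, 1, 23, 0)},
]

ok = feasible(connections,
              not_before=datetime(2009, 6, 1, 18, 0),
              arrive_before=datetime(2009, 6, 2, 10, 0))
```

The hard part is of course not this filter but assembling comparable multi-modal connection data in the first place.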

Morten Fjeld visiting

On his way from Eindhoven to Zurich, Morten Fjeld visited our group. It was great to catch up and talk about a number of exciting research projects and ideas. Some years ago one of my students from Munich did his final project with Morten, working on haptic communication ideas, see [1]. Last year at TEI Morten had a paper on a related project – also using actuated sliders, see [2].

In his presentation Morten gave an overview of his research, and we found a joint interest in capacitive sensing. Raphael Wimmer did his final project in Munich on capacitive sensing for embedded interaction, which was published at PerCom 2007, see [3]. Raphael has continued the work; for more details and the open-source hardware and software see http://capsense.org. Morten has a cool paper (combining a keyboard and capacitive sensing) at Interact 2009 – so check the program when it is out.

We talked about interaction and optical tracking, which reminded me that we wanted to see how useful the Touchless SDK (http://www.codeplex.com/touchless) could be for final projects and exercises. Matthias Kranz had used it successfully with students in Linz in his unconventional user interfaces class.

[1] Jenaro, J., Shahrokni, A., Schrittenloher, M., and Fjeld, M. 2007. One-Dimensional Force Feedback Slider: Digital platform. In Proc. Workshop at the IEEE Virtual Reality 2007 Conference: Mixed Reality User Interfaces: Specification, Authoring, Adaptation (MRUI07), 47-51

[2] Gabriel, R., Sandsjö, J., Shahrokni, A., and Fjeld, M. 2008. BounceSlider: actuated sliders for music performance and composition. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 – 20, 2008). TEI ’08. ACM, New York, NY, 127-130. DOI= http://doi.acm.org/10.1145/1347390.1347418

[3] Wimmer, R., Kranz, M., Boring, S., and Schmidt, A. 2007. A Capacitive Sensing Toolkit for Pervasive Activity Detection and Recognition. In Proceedings of the Fifth IEEE international Conference on Pervasive Computing and Communications (March 19 – 23, 2007). PERCOM. IEEE Computer Society, Washington, DC, 171-180. DOI= http://dx.doi.org/10.1109/PERCOM.2007.1

SEGA World – relaxing after the conference :-)

On the way back from the PC dinner we needed an update on another aspect of Japanese technology, so we went into SEGA World in Nara.

Many of the games are very similar to games found around the world – shooters, sports games and racing games. Each time one uses games in such a setting, one is reminded of the power of physical controls and the concept of tangible interaction …


The photo maker, however, was very different from what I had seen before. Technically it is interesting and well engineered: you take photos in a well-lit area, it removes the background, and then you can choose backgrounds, borders, frames, etc. Marc’s Japanese helped us get our pictures out of the machine – with more time and more Japanese reading skills we could have manipulated our pictures some more. It was interesting that the machine offered two options for output: paper and transfer to your mobile phone.

PS: remember not to play basketball against James and not to race against Antonio 😉

Rubber-like stretchable display

Jörg just sent me a link to a rubber-like stretchable display that is published in Nature Materials [1]. There is a previous press release with some photos [2]. This is a significant step towards new interactive devices, such as the one suggested in the GUMMI project [3].

[1] Stretchable active-matrix organic light-emitting diode display using printable elastic conductors, Tsuyoshi Sekitani et al., Nature Materials, doi: 10.1038/nmat2459
http://www.nature.com/nmat/journal/vaop/ncurrent/abs/nmat2459.html

[2] http://www.ntech.t.u-tokyo.ac.jp/Archive/Archive_press_release/press_stretchable/documents/press_release_en.pdf

[3] Schwesig, C., Poupyrev, I., and Mori, E. 2004. Gummi: a bendable computer. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ’04. ACM, New York, NY, 263-270. DOI= http://doi.acm.org/10.1145/985692.985726

Some Interesting Papers and random Photos from Pervasive 2009

Pervasive 2009 had a really exciting program and provided a good overview of current research in pervasive and ubiquitous computing. Have a look at the proceedings of the Pervasive 2009 conference. The Noh theater in Nara was a very special and enjoyable venue, and the conference was organized perfectly – as one would expect when travelling to Japan.

The idea of having short and long papers together in the main track worked very well in my view. The number of demos and posters was much higher than in previous years – and that was great and very inspiring. Have a look at the photos for some of the posters and demos.
The program consisted of 20 full papers (18 pages) and 7 notes (8 pages), selected in a peer-review process out of 147 submissions (113 full papers, 34 notes), which is an acceptance rate of 18%.

John Krumm presented his paper “Realistic Driving Trips for Location Privacy” – again with a good idea that made the presentation interesting beyond its content (review snippets in the footer of the slides – including a fake review). The paper explores the difficulties that arise when creating fake GPS tracks. He argued that probabilities need to be taken into account (e.g. you are usually on a road). I liked the approach, and the paper is worth reading. I think it could be interesting to explore an approach that does not create the tracks but just shares them between users (e.g. other people can use parts of my track as a fake track, and in return I get some tracks that I can use as fake tracks). http://dx.doi.org/10.1007/978-3-642-01516-8_4
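The track-sharing idea above could be sketched as follows. To be clear, this is my speculation, not Krumm's method; the function name and data shapes are invented for illustration. Each user's decoys are real tracks recorded by other users, so every decoy is physically plausible by construction.

```python
import random

def assign_decoys(user_tracks, decoys_per_user=2, rng=None):
    """user_tracks: dict mapping user -> list of their real track IDs.
    Returns dict mapping user -> list of decoy tracks, each borrowed
    from some other user's real tracks."""
    rng = rng or random.Random(0)   # seeded for reproducibility in this sketch
    decoys = {}
    for user in user_tracks:
        # Pool of every track that does NOT belong to this user.
        pool = [t for u, ts in user_tracks.items() if u != user for t in ts]
        decoys[user] = rng.sample(pool, min(decoys_per_user, len(pool)))
    return decoys

tracks = {
    "alice": ["a-home-work", "a-gym"],
    "bob":   ["b-home-shop"],
    "carol": ["c-commute"],
}
decoys = assign_decoys(tracks)
```

An open issue such a scheme would have to address is that sharing a track also shares the original owner's sensitive locations with the recipient.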

If your phone knows where you are, you can use this information to control your heating system. This was the basic idea of the research presented by Stephen Intille. They explored using the GPS location of the users to automate the heating / air conditioning control in a house. It seems there is quite some potential for saving energy with the technology typically used in the US (one temperature control for the whole house). In Europe, where heating systems typically offer finer control (e.g. at room level), the potential is probably larger.
http://dx.doi.org/10.1007/978-3-642-01516-8_8
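My reading of the basic idea can be sketched in a few lines (this is a hedged illustration, not their system: the home coordinates, thresholds, and temperatures are all assumed values):

```python
import math

HOME = (51.45, 7.01)            # assumed home coordinates (lat, lon)
COMFORT_C, SETBACK_C = 21.0, 16.0
AWAY_KM = 5.0                   # assumed "everyone is away" radius

def distance_km(a, b):
    # Equirectangular approximation; adequate at city scale.
    dlat = math.radians(b[0] - a[0])
    dlon = math.radians(b[1] - a[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
    return 6371.0 * math.hypot(dlat, dlon)

def setpoint(occupant_positions):
    """Comfort temperature if any occupant's GPS fix is near home,
    otherwise drop to the energy-saving setback temperature."""
    if any(distance_km(HOME, p) <= AWAY_KM for p in occupant_positions):
        return COMFORT_C
    return SETBACK_C
```

A real system would additionally need to pre-heat based on predicted arrival times rather than reacting only to the current positions.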

James Scott presented a paper showing how force gestures can be used to interact with a device. In contrast to previous research (e.g. GUMMI), the approach works with a rigid device and could be used with current screen technologies.
http://dx.doi.org/10.1007/978-3-642-01516-8_10

What do you need to figure out who is holding and using the remote control? This question is addressed in the paper “Inferring Identity Using Accelerometers in Television Remote Controls”, presented by Jeff Hightower. They looked at how much information button-press sequences and accelerometer data give you about which person is using the device.
http://dx.doi.org/10.1007/978-3-642-01516-8_11

“Geo-Fencing: Confining Wi-Fi Coverage to Physical Boundaries” is an example of creating technological solutions that fit a user’s conceptual model of the world. People have experience with the physical world and have mechanisms to negotiate and use space; hence, linking technologies that typically have other characteristics (e.g. wireless radio coverage) to this known concept is really interesting.
http://dx.doi.org/10.1007/978-3-642-01516-8_19

Situvis, a tool for visualizing sensor data, was presented by Adrian Clear from Aaron’s group in Dublin. The software, papers and a video are available at http://situvis.com/. The basic idea is to provide a parallel-coordinates visualization of the different sensor readings, along with interaction mechanisms for exploring the data.
http://dx.doi.org/10.1007/978-3-642-01516-8_22

Nathan Eagle presented the paper “Methodologies for continuous cellular tower data analysis”. He talked about the opportunities that arise when we have massive amounts of information from users – e.g. tracks from 200 million mobile phone users. It really is interesting that, based on such methods, we may get completely new insights into human behavior and social processes.
http://dx.doi.org/10.1007/978-3-642-01516-8_23

If you have seen another interesting paper at the conference (and there surely are some) that I have missed, feel free to link to it in the comments to this post.

Tutorials at Pervasive, HCI Library

I gave a tutorial on Mobile Human Computer Interaction at Pervasive 2009. The tutorial tried to give an overview of the challenges of mobile HCI and was partly based on last year’s tutorial day at MobileHCI 2008 in Amsterdam. For the slides from last year have a look at: http://albrecht-schmidt.blogspot.com/2008/09/mobilehci-2008-tutorial.html


Listening to Marc Langheinrich‘s tutorial on privacy, I remembered that I still have the photos of his HCI library – and so as not to forget them, I am uploading them. Marc highlighted the risk of data analysis with the AOL Stalker example (some comments about the AOL Stalker). His tutorial is always good to hear and raises many inspiring issues – even though I do not agree with all the conclusions 😉


For me, seeing the books my colleagues use on a certain topic still works better than the Amazon recommendations I get 😉 perhaps people (or we?) should work harder on social-network-based product recommendation systems …

Our Publications at Pervasive – Public Displays, Car Adverts, and Tactile Output for Navigation

Our group was involved in three papers published at Pervasive 2009 in Nara.

The first contribution is a study on public displays, presented by Jörg Müller from Münster. The paper explores the display blindness that can be observed in the real world (similar to banner blindness) and concludes that the extent to which people look at displays correlates strongly with users’ expectations of the content of a display in a certain location [1].

The second contribution, a short paper, is a survey on car advertising conducted in the context of the master’s thesis of Christoph Evers. The central question concerns the design space of dynamic advertising on cars and how users perceive such a technology [2].

Dagmar presented a paper on vibro-tactile output integrated into the steering wheel for car navigation systems. The studies explored how multi-modal presentation of information impacts driving performance and which modalities users prefer. The general conclusion is that combining visual information with vibro-tactile output is the best option and that people prefer multi-modal output over a single modality [3].

[1] Jörg Müller, Dennis Wilmsmann, Juliane Exeler, Markus Buzeck, Albrecht Schmidt, Tim Jay, Antonio Krüger. Display Blindness: The Effect of Expectations on Attention towards Digital Signage. 7th International Conference on Pervasive Computing 2009. Nara, Japan. Springer LNCS 5538, pp 1-8.
http://www.springerlink.com/content/gk307213786207g2

[2] Florian Alt, Christoph Evers, Albrecht Schmidt. User’s view on Context-Aware Car Advertisement. 7th International Conference on Pervasive Computing 2009. Nara, Japan. Springer LNCS 5538, pp 9-16.
http://www.springerlink.com/content/81q8818683315523

[3] Dagmar Kern, Paul Marshall, Eva Hornecker, Yvonne Rogers, Albrecht Schmidt. Enhancing Navigation Information with tactile Output Embedded into the Steering Wheel. 7th International Conference on Pervasive Computing 2009. Nara, Japan. Springer LNCS 5538, pp 42-58.
http://www.springerlink.com/content/x13j7547p8303113

Keynote at Pervasive 2009 – Toshio Iwai

Toshio Iwai gave the keynote at Pervasive 2009 on expanding media art. He introduced us to the basics of moving images and film. The examples were fun, and I think I will copy some for my introductory class on user interfaces to explain the visual system (afterimages with a black-and-white negative image; the concept of combining images on the two sides of a spinning disk; the idea of moving images in a flip book).

In his introduction he also went back to what he learned as a child, and I found this very interesting and encouraging – we should expose small children to technology more than we usually do (especially in Germany, I think, we do not give children much chance to explore technologies while they are in kindergarten and primary school). I hope to go with Vivien to the space center in Florida in a few weeks 🙂

Following up on the basic visual effects, he showed some really funny live video effects. He introduced a delay to some parts (lines) of the picture when displaying it, which led to ghostly movements: everything that is not moving appears in its real shape, and everything that is in motion is deformed.

In the final part of his talk he argued that the theremin is the only electronic instrument newly invented in the 20th century. For him, an instrument has to have a unique interaction, a unique shape, and a unique sound. Additionally, it is essential that the interaction can be perceived by the audience (you can see how one plays a violin, but not how one makes digital music on a laptop computer). Based on this, he showed a new musical instrument he developed that is inspired by the music box: the TENORI-ON [1]. It has a surface with 16×16 switches (each including an LED) and 16×16 LEDs on the back. It has a unique interaction, its shape and sound are unique, and it supports visibility of interaction, as the sound is combined with light patterns. The basic idea is that the horizontal direction is the timeline and the vertical direction the pitch (similar to a music box).
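The grid-sequencer principle behind it can be sketched in a few lines. This is a minimal illustration of the time-versus-pitch idea only; the base note and pitch mapping are my assumptions, and the real instrument's layers and modes are far richer.

```python
GRID_SIZE = 16
BASE_MIDI_NOTE = 48  # C3, an assumed starting pitch for the bottom row

def notes_for_column(grid, col):
    """MIDI notes for the lit cells in one column; row 0 is the top of the
    grid, so higher rows map to higher pitches."""
    return [BASE_MIDI_NOTE + (GRID_SIZE - 1 - row)
            for row in range(GRID_SIZE) if grid[row][col]]

def play(grid, steps=GRID_SIZE):
    """Sweep across the grid column by column (the timeline), collecting the
    chord to trigger at each time step."""
    return [notes_for_column(grid, step % GRID_SIZE) for step in range(steps)]

# Example: light two cells in the first time step and one in the second.
grid = [[False] * GRID_SIZE for _ in range(GRID_SIZE)]
grid[15][0] = True   # bottom row  -> base pitch (MIDI 48)
grid[14][0] = True   # one row up  -> MIDI 49
grid[0][1] = True    # top row     -> MIDI 63
sequence = play(grid)
```

On the real device the sweep loops continuously and each triggered cell lights up, which is exactly what makes the interaction visible to the audience.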

[1] Yu Nishibori, Toshio Iwai. TENORI-ON. Proceedings of the 2006 International Conference on New Interfaces for Musical Expression (NIME06), Paris, France. http://www.nime.org/2006/proc/nime2006_172.pdf

Workshop on Pervasive Computing in Advertising

We got a good set of submissions for our workshop and had about 20 participants who joined us in Nara to discuss how pervasive computing will shape advertising in the future. The papers and a selection of the talks are online on the workshop website: http://pervasiveadvertising.org

One question central to our discussion was: what is advertising, and how is it different from information? It quickly became clear that there is a lot of information that influences behavior, and shopping decisions in particular; some of it is considered advertising, but much is not. Hence it seems really interesting to imagine a world where advertising is replaced by information. One could imagine that replacing advertising with information (as already happens in some domains, such as hotel recommendations) would change the whole approach to creating products or providing services.

In the workshop we presented our work on contextual mobile displays. The idea is that in the future, mobile displays that replace currently printed items (like bumper stickers, bags with printed logos, and t-shirts with prints) could become active and act as contextual displays. Have a look at the paper for more details [1].

[1] Florian Alt, Albrecht Schmidt, Christoph Evers. Mobile Contextual display system. Pervasive Advertising Workshop at Pervasive 2009. (contact Florian Alt for a copy of the paper)

Japan – sightseeing (and less phone usage than expected)

To get cheaper flights we flew from Europe to Japan on Thursday/Friday (I have never really understood the pricing model of flights). So we had two days off before the actual conference, and many colleagues (who had also taken cheap flights) were there as well. We did some sightseeing in Nara and Kyoto – which was great.

In Kyoto we got personal guides – students from a university in Kyoto who offered to show us around and used the opportunity to practice their English. It was great for us, as we got many insights we would have missed on our own, and it was great to talk to some locals. Hopefully they enjoyed their time with us, too. In the evening we learned once more that the Japanese are very sociable; we met the Nara air rescue team in a restaurant – and this was proof :-).

One thing that surprised me greatly was that very few people in Nara and Kyoto used their phones in public. On the train, nearly nobody spoke on the phone, watched mobile TV, or browsed the web. This is obviously very different from Tokyo. Overall, Nara and Kyoto are very enjoyable and calming places. I hope at some point to have the time to spend longer in Japan (… when is my next sabbatical? 😉

For more photos see: http://foto.ubisys.org/pervasive2009/

PS: and some people find a disco in the street …

Open Lab Day in Essen

Today we had an open lab day – our first one in Essen. We invited colleagues, admin staff, students, friends, and family to have a look at how we spend our days 😉 and at the interesting systems we create with our students and in our research projects. We had several applications running on our multi-touch table, showed two prototypes from the automotive domain (text input while driving and vibration feedback in the steering wheel), demonstrated a new form of interaction with a public display, and let people try an eye-tracking application.

Andreas Riener visits our lab

Andreas Riener from the University of Linz came to visit us for three days. In his research he works on multimodal and implicit interaction in the car. We talked about several ideas for new multimodal user interfaces. Andreas had a pressure mat with him, and we could try out what sensor readings we get in different setups. It seems that providing redundancy in the controls in particular could create interesting opportunities – hopefully we will find the means to explore this further.

Meeting on public display networks

Sunday night I travelled to Lugano for a meeting on public display networks. I figured out that going there by night train was the best option – leaving Karlsruhe at midnight and arriving at 6am. As I planned to sleep the whole way, my assumption was that the felt travel time would be zero. But I made my plan without reckoning with the rail company … the train was 2 hours late, and I walked up and down the platform in Karlsruhe for 2 hours – and interestingly, the problem would have been much less annoying if the public displays had provided the relevant information. The most annoying thing was that passengers had no information on if or when the train would come, and no one could tell (no one was at the station, nor was anyone taking calls on the hotline).
The public display – really nice state-of-the-art hardware – showed nothing for an hour; then it showed that the train was one hour late (when it was already more than an hour past the scheduled time), and finally the train arrived 2 hours late (with the display still showing a 1-hour delay). How hard can it be to provide this information? It seems that with current approaches it is too hard …

On my way back I observed a further example of the shortcomings of content on public displays. In the bus office they had a really nice 40–50 inch screen showing the teletext departure page. The problem was that it showed the page for the evening, as the staff has to switch the pages manually. Here, too, the information is clearly available, but the current delivery systems are not well integrated.

In summary, it is a real pity how poorly public display infrastructures are used. There seem to be many advances on the hardware side, but little on the content delivery, software, and system side.

Offline Tangible User Interface

When shopping for a sofa I used an interesting tangible user interface – magnetic stickers. For each of the sofa systems, customers can create their own configuration using these magnetic stickers on a background (everything at a scale of 1:50).

Once the customer is happy with the configuration, the shop assistant makes a photocopy (I said I did not need a black-and-white copy – I made my own color copy with my phone), calculates the price, and writes up an order. The interaction with the pieces is very good, and it also works great as a shared interface – much nicer than comparable screen-based systems. I could imagine that with a bit of effort one could create a phone application that scans the customer's design, calculates the price, and provides a rendered image of the configuration – in the chosen color (in our case green ;-). Could be an interesting student project…
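The pricing part of such a student project is straightforward once the stickers are recognized. A minimal sketch (module names and prices are made up for illustration; the sticker recognition itself is assumed to have happened already):

```python
# Hypothetical "scan and price" step of the imagined phone app:
# assume image recognition has already turned the magnetic stickers
# into a list of module names; prices are illustrative only.

PRICES = {"corner": 499, "seat": 299, "armrest": 89, "footstool": 149}

def quote(modules):
    """Sum up the price of a recognized sofa configuration."""
    unknown = [m for m in modules if m not in PRICES]
    if unknown:
        raise ValueError(f"unrecognized modules: {unknown}")
    return sum(PRICES[m] for m in modules)

# e.g. the configuration read off the scanned sticker layout
config = ["corner", "seat", "seat", "armrest"]
print(quote(config))  # 499 + 299 + 299 + 89 = 1186
```

The hard part, of course, is the recognition and rendering, not the arithmetic – which is exactly why it would make a nice project.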

App store of a car manufacturer? Or the future of the car as an application platform.

When preparing my talk for the BMW research colloquium I realized once more how much potential there is in the automotive domain (if you look at it from a CS perspective). My talk was on the interaction of the driver with the car and the environment, and I assessed the potential of the car as a platform for interactive applications (slides in PDF). Thinking of the car as a mobile terminal that offers transportation is quite exciting…

I showed some of our recent projects in the automotive domain:

  • enhancing communication in the car: basically, studying the effect of a video link between driver and passenger on driving performance and on communication
  • handwritten text input: where would you put the input and the output? Input on the steering wheel and visual feedback in the dashboard is a good guess – see [1] for more details.
  • making it easier to interrupt tasks while driving: we have some ideas for minimizing the cost that interruptions by secondary tasks impose on the driver, and explored them with a navigation task.
  • multimodal interaction, and tactile output in particular: we looked at how to present navigation information using a set of vibro-tactile actuators. We will publish more details on this at Pervasive 2009 in a few weeks.

Towards the end of my talk I invited the audience to speculate with me on future scenarios. The starting point was: imagine you permanently store all the information that goes over the bus systems in the car and transmit it wirelessly to a backend store. Then imagine 10% of the users are willing to share this information publicly. That really opens up a whole new world of applications. Thinking this a bit further, one question is what the application store of a car manufacturer will look like in the future. What could you buy online (e.g. fuel efficiency? more engine power? a new layout for your dashboard? …)? Seems like an interesting thesis topic.
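To make the sharing part of the speculation concrete, here is a deliberately tiny sketch (entirely made up, in the spirit of the scenario above): the backend keeps each car's bus log, and only the logs of owners who opted in become part of the public data pool.

```python
# Purely speculative sketch of the scenario: log what goes over the
# in-car bus, upload it to a backend, and expose only the data of the
# (say 10% of) owners who opted in. All field names are invented.

def collect_public_records(cars):
    """Return bus-log records only from cars whose owners opted in."""
    public = []
    for car in cars:
        if car["opt_in"]:
            public.extend(car["bus_log"])
    return public

fleet = [
    {"opt_in": True,  "bus_log": [("speed", 87), ("fuel_rate", 5.1)]},
    {"opt_in": False, "bus_log": [("speed", 120)]},
]
print(collect_public_records(fleet))  # only the opted-in car's records
```

Even this trivial filter hints at the real research questions: consent granularity, anonymization, and what applications become possible on top of the shared pool.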

[1] Kern, D., Schmidt, A., Arnsmann, J., Appelmann, T., Pararasasegaran, N., and Piepiera, B. 2009. Writing to your car: handwritten text input while driving. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI EA ’09. ACM, New York, NY, 4705-4710. DOI= http://doi.acm.org/10.1145/1520340.1520724

Visit to Newcastle University, digital jewelry

I went to see Chris Kray at Culture Lab at Newcastle University. Over the next months we will be working on a joint project on a new approach to creating and building interactive appliances. I am looking forward to spending some more time in Newcastle.

Chris showed me around their lab and I was truly impressed. Besides many interesting prototypes in various domains, I have not seen such a number of different ideas and implementations of tabletop systems and user interfaces anywhere else. For a picture of me in the lab trying out a special vehicle, see Chris’ blog.

Jayne Wallace showed me some of her digital jewelry. A few years back she wrote a very interesting article with the title “all this useless beauty” [1] that provides an interesting perspective on design and suggests beauty as a material in digital design. The approach she takes is to design deliberately for a single individual, so that the design fits their personality and their context. She created a communication device that connects two people in a very simple and yet powerful way [2]. A further example is a piece of jewelry that makes the environment change to convey some personal information – technically it is similar to the work we have started on encoding interests in the Bluetooth friendly names of phones [3], but her artefacts are much prettier and more emotionally exciting.

[1] Wallace, J. and Press, M. 2004. All this useless beauty. The Design Journal 7, 2. (PDF)

[2] Jayne Wallace. Journeys. Intergeneration Project.

[3] Kern, D., Harding, M., Storz, O., Davis, N., and Schmidt, A. 2008. Shaping how advertisers see me: user views on implicit and explicit profile capture. In CHI ’08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 3363-3368. DOI= http://doi.acm.org/10.1145/1358628.1358858

Ubicomp Spring School in Nottingham – prototyping user interfaces

On Tuesday and Wednesday afternoon I ran practical workshops on creating novel user interfaces, complementing the tutorial on Wednesday morning. The aim of the practicals was to motivate people to question more fundamentally the user interface decisions we make in our research projects.

On a very simple level, an input user interface can be seen as a sensor, a transfer function or mapping, and an action in the system that is controlled. To illustrate this I showed two simple JavaScript programs that let you play with the mapping of mouse movement to the movement of a button on the screen, and to moving through a set of images. If you twist the mapping functions, really simple tasks (like moving one button on top of another) can get complicated. Similarly, if you change the way the sensor is used (e.g. instead of moving the mouse over a surface, having several people move a surface over the mouse), such simple tasks may become really difficult, too.
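The sensor/mapping/action view above can be sketched in a few lines (a minimal Python sketch, not the original JavaScript demos): the same stream of sensor deltas, fed through a "twisted" transfer function, moves the cursor somewhere quite different.

```python
# An input UI as: sensor reading -> transfer function -> action.
# Swapping the transfer function "twists" the mapping and makes
# simple pointing tasks surprisingly hard.

def identity(dx, dy):
    """The usual mouse mapping: cursor follows the device movement."""
    return dx, dy

def twisted(dx, dy):
    """A deliberately twisted mapping: axes swapped, one inverted."""
    return dy, -dx

def move_cursor(pos, deltas, transfer):
    """Feed a stream of sensor deltas through a transfer function."""
    x, y = pos
    for dx, dy in deltas:
        mx, my = transfer(dx, dy)
        x, y = x + mx, y + my
    return x, y

deltas = [(1, 0), (1, 0), (0, 1)]             # right, right, down
print(move_cursor((0, 0), deltas, identity))  # (2, 1)
print(move_cursor((0, 0), deltas, twisted))   # (1, -2): feels "wrong"
```

The point of the exercise is that the transfer function is a design decision like any other – once you see it as a replaceable component, questioning the default becomes natural.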

With this initial experience, an optical mouse, a lot of materials (e.g. fabrics, cardboard boxes, picture frames, toys, etc.), some tools, and 2 hours of time, the groups started to create their novel interactive experiences. The results included a string puppet interface, a frog interface, an interface to the (computer) recycling, a scarf, and a close-contact dancing interface (the music only plays if bodies touch and move).

The final demos of the workshop were shown before dinner. Seeing the whole set of new interface ideas, one wonders why so little of this is happening beyond the labs in the real world, and why people are happy to live with current efficient but rather boring user interfaces – especially in the home context…

Ubicomp Spring School in Nottingham – Tutorial

The ubicomp spring school in Nottingham had an interesting set of lectures and practical sessions, including a talk by Turing Award winner Robin Milner on a theoretical approach to ubicomp. When I arrived on Tuesday I had the chance to see Chris Baber‘s tutorial on wearable computing. He provided really good examples of wearable computing and its distinct qualities (also in relation to the wearable use of mobile phones). One example that captures a lot about wearable computing is an adaptive bra – one example of a class of interesting future garments. The basic idea is that these garments detect the wearer's activity and change their properties accordingly. A different example in this class is a shirt/jacket/pullover/trousers that can change its insulation properties (e.g. by storing and releasing air) according to the external temperature and the user's body temperature.

My tutorial was on user interface engineering, and I discussed what is different about creating ubicomp UIs compared to traditional user interfaces. I showed some trends (including technologies as well as a new view on privacy) that open up the design space for new user interfaces. Furthermore, we discussed the idea of creating magical experiences in the world and the dilemma of user creativity versus user needs.

There were about 100 people at the spring school from around the UK – it is really exciting how much research in ubicomp (somehow in the tradition of Equator) is going on in the UK.

Mobile Boarding Pass, the whole process matters

Yesterday night I did an online check-in for my flight from Düsseldorf to Manchester. Out of convenience and curiosity I chose the mobile boarding pass. It is amazingly easy and worked very well in principle. Only, not everyone can work without paper yet: at some point in the process (after border control) I got a handwritten “boarding pass”, because this person needed to stamp it 😉 – and we would probably have gotten into an argument had he tried to stamp my phone. There is some further room for improvement. Besides the 2D barcode, the boarding pass shows all the important information for the traveler – but you have to scroll to the bottom of the page to find the boarding number (which seems quite important for everyone except the traveler – it was even on my handwritten boarding pass).

Teaching, Technical Training Day at the EPO

Together with Rene Mayrhofer and Alexander De Luca I organized a technical training day at the European Patent Office in Munich. In the lectures we attempted to give a broad overview of recent advances in this domain – and in preparing such a day one realizes how much there is to it… We covered the following topics:
  • Merging the physical and digital (e.g. sentient computing and dual reality [1])
  • Interlinking the real world and the virtual world (e.g. Internet of things)
  • Interacting with your body (e.g. implants for interaction, brain computer interaction, eye gaze interaction)
  • Interaction beyond the desktop, in particular sensor based UIs, touch interaction, haptics, and Interactive surfaces
  • Device authentication with focus on spontaneity and ubicomp environments
  • User authentication, with a focus on authentication in public
  • Location-Awareness and Location Privacy
Overall we covered probably more than 100 references – here are just a few nice ones to read: computing tiles as basic building blocks for smart environments [2], a bendable computer interface [3], a touch screen you can also touch on the back side [4], and ideas on phones as a basis for people-centric sensing [5].
[1] Lifton, J., Feldmeier, M., Ono, Y., Lewis, C., and Paradiso, J. A. 2007. A platform for ubiquitous sensor deployment in occupational and domestic environments. In Proceedings of the 6th International Conference on Information Processing in Sensor Networks (Cambridge, Massachusetts, USA, April 25 – 27, 2007). IPSN ’07. ACM, New York, NY, 119-127. DOI= http://doi.acm.org/10.1145/1236360.1236377
[2] Naohiko Kohtake, et al. u-Texture: Self-organizable Universal Panels for Creating Smart Surroundings. The 7th Int. Conference on Ubiquitous Computing (UbiComp2005), pp.19-38, Tokyo, September, 2005. http://www.ht.sfc.keio.ac.jp/u-texture/paper.html
[3] Schwesig, C., Poupyrev, I., and Mori, E. 2004. Gummi: a bendable computer. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ’04. ACM, New York, NY, 263-270. DOI= http://doi.acm.org/10.1145/985692.985726 
[4] Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., and Shen, C. 2007. Lucid touch: a see-through mobile device. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (Newport, Rhode Island, USA, October 07 – 10, 2007). UIST ’07. ACM, New York, NY, 269-278. DOI= http://doi.acm.org/10.1145/1294211.1294259
[5] Campbell, A. T., Eisenman, S. B., Lane, N. D., Miluzzo, E., Peterson, R. A., Lu, H., Zheng, X., Musolesi, M., Fodor, K., and Ahn, G. 2008. The Rise of People-Centric Sensing. IEEE Internet Computing 12, 4 (Jul. 2008), 12-21. DOI= http://dx.doi.org/10.1109/MIC.2008.90  

Final Presentation: Advertising 2.0

Last term we ran an interdisciplinary project with our MSc students from computer science and business studies to explore new ways of outdoor advertising. The course was jointly organized by the chairs Specification of Software Systems, Pervasive Computing and User Interface Engineering, and Marketing and Trade. We were particularly interested in what can be done with mobile phones and public displays. It is always surprising how much a group of 10 motivated students can create in 3 months. The group we had this term was extraordinary – over the last weeks they regularly stayed in the lab longer in the evenings than I did 😉

The overall task was very open, and the students created a concept and then implemented it – as a complete system including a backend server, an end-user client on the mobile phone, and an administration interface for advertisers. After the presentation and demos, we seriously started thinking about where we could deploy it and who the potential partners would be. The system offers means for implicit and explicit interaction, creates interest profiles, and allows targeting adverts at groups with specific interests. Overall, such technologies can make advertising more effective for companies (more precisely targeted adverts) and more pleasant for consumers (who get adverts matching their personal areas of interest).
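The core of the targeting idea can be sketched in a few lines. This is my own hedged sketch of the concept, not the students' implementation (names, scoring, and threshold are all invented): match each viewer's interest profile against an advert's declared tags and show the best match only if it is relevant enough.

```python
# Sketch of interest-based advert targeting: an advert is shown to a
# viewer only if enough of its tags overlap the viewer's profile.

def match_score(profile, advert_tags):
    """Fraction of the advert's tags present in the user profile."""
    if not advert_tags:
        return 0.0
    return len(profile & advert_tags) / len(advert_tags)

def select_advert(profile, adverts, threshold=0.5):
    """Pick the best-matching advert, or None if nothing is relevant."""
    best = max(adverts, key=lambda a: match_score(profile, a["tags"]))
    if match_score(profile, best["tags"]) >= threshold:
        return best["name"]
    return None

adverts = [
    {"name": "running shoes",   "tags": {"sport", "outdoor"}},
    {"name": "concert tickets", "tags": {"music"}},
]
profile = {"music", "travel"}  # e.g. built from implicit interactions
print(select_advert(profile, adverts))  # concert tickets
```

In the full system the profile would of course be fed by the implicit and explicit interactions mentioned above, and the threshold is the knob that trades reach against relevance.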

There are more photos of the presentation on the server.

PS: one small finding on the side – Bluetooth in its current form is a pain for interaction with public displays… but luckily there are other options.