App store of a car manufacturer? Or the future of cars as an application platform.

When preparing my talk for the BMW research colloquium I realized once more how much potential there is in the automotive domain (if you look at it from a CS perspective). My talk was on the interaction of the driver with the car and the environment, and I was assessing the potential of the car as a platform for interactive applications (slides in PDF). Thinking of the car as a mobile terminal that offers transportation is quite exciting…

I showed some of our recent projects in the automotive domain:

  • enhancing communication in the car: basically studying the effect of a video link between driver and passenger on driving performance and on the communication
  • handwritten text input: where would you put the input and the output? Input on the steering wheel and visual feedback in the dashboard is a good guess – see [1] for more details.
  • How can you make it easier to interrupt tasks while driving? We have some ideas for minimizing the cost that interruptions of secondary tasks impose on the driver and explored them with a navigation task.
  • Multimodal interaction and in particular tactile output are interesting – we looked at how to present navigation information using a set of vibro-tactile actuators. We will publish more details on this at Pervasive 2009 in a few weeks.

Towards the end of my talk I invited the audience to speculate with me on future scenarios. The starting point was: imagine you permanently store all the information that goes over the bus systems in the car and transmit it wirelessly over the network to a backend store. Then imagine 10% of the users are willing to share this information publicly. That really opens up a whole new world of applications. Thinking this a bit further, one question is what the application store of a car manufacturer will look like in the future. What can you buy online (e.g. fuel efficiency? More power in the engine? A new layout for your dashboard? …)? Seems like an interesting thesis topic.
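To make the scenario a little more concrete, here is a minimal sketch of what the logging side could look like – the frame layout, class names, and the backend endpoint are all made up for illustration and not taken from any real vehicle bus API:

```typescript
// Hypothetical sketch: buffer bus frames in the car and upload them in
// batches to a backend store, with an opt-in flag for public sharing
// (the 10% scenario from the talk).
interface BusFrame {
  timestamp: number; // ms since epoch
  busId: number;     // identifier of the bus message (e.g. wheel speed, rpm)
  data: Uint8Array;  // raw payload, typically a few bytes
}

class BusLogger {
  private buffer: BusFrame[] = [];

  record(frame: BusFrame): void {
    this.buffer.push(frame);
  }

  // Send all buffered frames to the (assumed) backend endpoint.
  async flush(endpoint: string, sharePublicly: boolean): Promise<void> {
    if (this.buffer.length === 0) return;
    const batch = this.buffer.splice(0, this.buffer.length);
    await fetch(endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        share: sharePublicly,
        frames: batch.map(f => ({ t: f.timestamp, id: f.busId, data: Array.from(f.data) })),
      }),
    });
  }
}
```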

[1] Kern, D., Schmidt, A., Arnsmann, J., Appelmann, T., Pararasasegaran, N., and Piepiera, B. 2009. Writing to your car: handwritten text input while driving. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI EA ’09. ACM, New York, NY, 4705-4710. DOI= http://doi.acm.org/10.1145/1520340.1520724

Visit to Newcastle University, digital jewelry

I went to see Chris Kray at Culture Lab at Newcastle University. Over the next months we will be working on a joint project on a new approach to creating and building interactive appliances. I am looking forward to spending some more time in Newcastle.

Chris showed me around their lab and I was truly impressed. Besides many interesting prototypes in various domains, I have not seen this number of different ideas and implementations of table-top systems and user interfaces in any other place. For a picture of me in the lab trying out a special vehicle see Chris’ blog.

Jayne Wallace showed me some of her digital jewelry. A few years back she wrote a very interesting article with the title “All this useless beauty” [1] that provides an interesting perspective on design and suggests beauty as a material in digital design. The approach she takes is to design deliberately for a single individual, so that the design fits their personality and their context. She created a communication device to connect two people in a very simple and yet powerful way [2]. A further example is a piece of jewelry that makes the environment change to provide some personal information – technically it is similar to the work we have started on encoding interests in the Bluetooth friendly names of phones [3], but her artefacts are much prettier and emotionally exciting.

[1] Wallace, J. and Press, M. 2004. All this useless beauty. The Design Journal, Volume 7, Issue 2. (PDF)

[2] Jayne Wallace. Journeys. Intergeneration Project.

[3] Kern, D., Harding, M., Storz, O., Davis, N., and Schmidt, A. 2008. Shaping how advertisers see me: user views on implicit and explicit profile capture. In CHI ’08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 3363-3368. DOI= http://doi.acm.org/10.1145/1358628.1358858

Ubicomp Spring School in Nottingham – prototyping user interfaces

On Tuesday and Wednesday afternoon I ran practical workshops on creating novel user interfaces complementing the tutorial on Wednesday morning. The aim of the practical was to motivate people to more fundamentally question user interface decisions that we make in our research projects.

On a very simple level an input user interface can be seen as a sensor, a transfer function or mapping, and an action in the system that is controlled. To illustrate this I showed two simple JavaScript programs that let you play with the mapping of mouse movement to the movement of a button on the screen and to moving through a set of images. If you twist the mapping function, really simple tasks (like moving one button on top of the other) may get complicated. Similarly, if you change the way you use the sensor (e.g. instead of moving the mouse on a surface, having several people move a surface over the mouse), such simple tasks may become really difficult, too.
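For readers who were not at the practical, here is a minimal sketch in the spirit of those JavaScript demos (written as TypeScript; the element id and its absolute positioning are assumptions) of how a twisted transfer function slots between sensor and action:

```typescript
// sensor -> transfer function -> action pipeline.
// A "twisted" mapping (here: axes swapped, one inverted) makes even the
// simple task of moving a button on top of another surprisingly hard.
type Mapping = (dx: number, dy: number) => { dx: number; dy: number };

const identity: Mapping = (dx, dy) => ({ dx, dy });
const twisted: Mapping = (dx, dy) => ({ dx: -dy, dy: dx }); // 90° rotation

let mapping: Mapping = twisted;
// Assumes an absolutely positioned element with id "button" on the page.
const button = document.getElementById("button") as HTMLElement;
let x = 100, y = 100;

document.addEventListener("mousemove", (e: MouseEvent) => {
  // sensor: relative mouse movement; action: move the on-screen button
  const m = mapping(e.movementX, e.movementY);
  x += m.dx;
  y += m.dy;
  button.style.left = `${x}px`;
  button.style.top = `${y}px`;
});
```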

With this initial experience, an optical mouse, a lot of materials (e.g. fabrics, cardboard boxes, picture frames, toys, etc.), some tools, and 2 hours of time, the groups started to create their novel interactive experiences. The results included a string puppet interface, a frog interface, an interface to the (computer) recycling, a scarf, and a close-contact dancing interface (the music only plays if bodies touch and move).

The final demos of the workshop were shown before dinner. Seeing the whole set of the new interface ideas one wonders why there is so little of this happening beyond the labs in the real world and why people are happy to live with current efficient but rather boring user interfaces – especially in the home context…

Ubicomp Spring School in Nottingham – Tutorial

The ubicomp spring school in Nottingham had an interesting set of lectures and practical sessions, including a talk by Turing Award winner Robin Milner on a theoretical approach to ubicomp. When I arrived on Tuesday I had the chance to see Chris Baber‘s tutorial on wearable computing. He provided really good examples of wearable computing and its distinct qualities (also in relation to wearable use of mobile phones). One example that captures a lot about wearable computing is an adaptive bra. The bra is one example of a class of interesting future garments: the basic idea is that these garments detect the wearer's activity and change their properties accordingly. A different example in this class is a shirt/jacket/pullover/trousers that can change its insulation properties (e.g. by storing and releasing air) according to the external temperature and the user's body temperature.

My tutorial was on user interface engineering and I discussed what is different in creating ubicomp UIs compared to traditional user interfaces. I showed some trends (including technologies as well as a new view on privacy) that open the design space for new user interfaces. Furthermore we discussed the idea of creating magical experiences in the world and the dilemma of user creativity versus user needs.

There were about 100 people at the spring school from around the UK – it is really exciting how much research in ubicomp (and somehow in the tradition of Equator) is going on in the UK.

Mobile Boarding Pass, the whole process matters

Yesterday night I did an online check-in for my flight from Düsseldorf to Manchester. For convenience and out of curiosity I chose the mobile boarding pass. It is amazingly easy and it worked in principle very well – only not everyone can work without paper yet. At some point in the process (after border control) I got a handwritten “boarding pass” because this person needed to stamp it 😉 and we would probably have gotten into an argument if he had tried to stamp my phone. There is some further room for improvement: the boarding pass shows, besides the 2D barcode, all the important information for the traveler – but you have to scroll to the bottom of the page to get the boarding number (which seems quite important for everyone other than the traveler – it was even on my handwritten boarding pass).

Teaching, Technical Training Day at the EPO

Together with Rene Mayrhofer and Alexander De Luca I organized a technical training day at the European Patent Office in Munich. In the lectures we attempted to give a broad overview of recent advances in this domain – and preparing such a day one realizes how much there is to it… We covered the following topics:
  • Merging the physical and digital (e.g. sentient computing and dual reality [1])
  • Interlinking the real world and the virtual world (e.g. Internet of things)
  • Interacting with your body (e.g. implants for interaction, brain computer interaction, eye gaze interaction)
  • Interaction beyond the desktop, in particular sensor-based UIs, touch interaction, haptics, and interactive surfaces
  • Device authentication with a focus on spontaneity and ubicomp environments
  • User authentication with a focus on authentication in public
  • Location-Awareness and Location Privacy
Overall we covered probably more than 100 references – here are just a few nice ones to read: computing tiles as basic building blocks for smart environments [2], a bendable computer interface [3], a touch screen you can also touch on the back side [4], and ideas on phones as a basis for people-centric sensing [5].
[1] Lifton, J., Feldmeier, M., Ono, Y., Lewis, C., and Paradiso, J. A. 2007. A platform for ubiquitous sensor deployment in occupational and domestic environments. In Proceedings of the 6th International Conference on Information Processing in Sensor Networks (Cambridge, Massachusetts, USA, April 25 – 27, 2007). IPSN ’07. ACM, New York, NY, 119-127. DOI= http://doi.acm.org/10.1145/1236360.1236377
[2] Naohiko Kohtake, et al. u-Texture: Self-organizable Universal Panels for Creating Smart Surroundings. The 7th Int. Conference on Ubiquitous Computing (UbiComp2005), pp.19-38, Tokyo, September, 2005. http://www.ht.sfc.keio.ac.jp/u-texture/paper.html
[3] Schwesig, C., Poupyrev, I., and Mori, E. 2004. Gummi: a bendable computer. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ’04. ACM, New York, NY, 263-270. DOI= http://doi.acm.org/10.1145/985692.985726 
[4] Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., and Shen, C. 2007. Lucid touch: a see-through mobile device. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (Newport, Rhode Island, USA, October 07 – 10, 2007). UIST ’07. ACM, New York, NY, 269-278. DOI= http://doi.acm.org/10.1145/1294211.1294259
[5] Campbell, A. T., Eisenman, S. B., Lane, N. D., Miluzzo, E., Peterson, R. A., Lu, H., Zheng, X., Musolesi, M., Fodor, K., and Ahn, G. 2008. The Rise of People-Centric Sensing. IEEE Internet Computing 12, 4 (Jul. 2008), 12-21. DOI= http://dx.doi.org/10.1109/MIC.2008.90  

Final Presentation: Advertising 2.0

Last term we ran an interdisciplinary project with our MSc students from computer science and business studies to explore new ways in outdoor advertising. The course was jointly organized by the chairs Specification of Software Systems, Pervasive Computing and User Interface Engineering, and Marketing and Trade. We were in particular interested in what you can do with mobile phones and public displays. It is always surprising how much a group of 10 motivated students can create in 3 months. The group we had this term was extraordinary – over the last weeks they regularly stayed longer in the lab in the evenings than me 😉

The overall task was very open and the students created a concept and then implemented it – as a complete system including backend server, end-user client on the mobile phone, and administration interface for advertisers. After the presentation and demos we really started thinking about where we could deploy it and who the potential partners would be. The system offers means for implicit and explicit interaction, creates interest profiles, and allows targeting adverts to groups with specific interests. Overall such technologies can make advertising more effective for companies (more precisely targeted adverts) and more pleasant for consumers (getting adverts that match personal areas of interest).

There are more photos of the presentation on the server.

PS: one small finding on the side – Bluetooth in its current form is a pain for interaction with public displays… but luckily there are other options.

Impact of colors – hints for ambient design?

There is a study that looked at how performance in solving certain cognitive/creative tasks is influenced by the background color [1]. In short: to make people alert and to increase performance on detail-oriented tasks use red; to get people into creative mode use blue. Lucky for us, our corporate desktop background is mainly blue! Perhaps this could be interesting for ambient colors, e.g. in the automotive context…

[1] Mehta, Ravi and Rui (Juliet) Zhu. 2009. “Blue or Red? Exploring the Effect of Color on Cognitive Task Performances”. Science, 27 February 2009, Vol. 323, no. 5918, pp. 1226-1229. DOI: 10.1126/science.1169144

Modular device – for prototyping only?


Over the last years there have been many ideas on how to make devices more modular. The central idea is components that allow end-users to create their own device – with exactly the functionality they want. So far such components are only used in prototyping and have not really had success in the marketplace. The main reason seems to be that an integrated device that has everything included and does everything is smaller and cheaper… But perhaps as electronics get smaller and core functions get more mature it may happen.

Yanko Design has proposed a set of concepts along these lines – and some of them are appealing 🙂
http://www.yankodesign.com/2007/12/12/chocolate-portable-hdd/
http://www.yankodesign.com/2007/11/26/blocky-mp3-player-oh-and-modular-too/
http://www.yankodesign.com/2007/08/31/it-was-a-rock-lobster/

Buglabs (http://www.buglabs.net) sells a functional system that allows you to build your own mobile device.

Being creative and designing your own system has been of interest in the computing and HCI communities for many years. At last year's CHI there was a paper by Buechley et al. [1] that looked at how the LilyPad Arduino can make creating “computers” an interesting experience – especially for girls.

[1] Buechley, L., Eisenberg, M., Catchen, J., and Crockett, A. 2008. The LilyPad Arduino: using computational textiles to investigate engagement, aesthetics, and diversity in computer science education. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 423-432. DOI= http://doi.acm.org/10.1145/1357054.1357123

The next big thing – let’s look into the future

At Nokia Research Center in Tampere I gave a talk with the title “Computing Beyond Ubicomp – Mobile Communication changed the world – what else do we need?“. My main argument is that the next big thing is a device that allows us to predict the future – on a system as well as on a personal level. This is obviously very tricky as we have free will and hence the future is not completely predictable – but extrapolating from the technologies we see now, it seems not far-fetched to create a device that enables predictions of the future in various contexts.

My argument goes as follows – these points are technologically feasible in the near future:

  1. each car, bus, train, truck, …, object is tracked in real-time
  2. each person is tracked (location, activity, …, food intake, eye-gaze) in real-time
  3. environmental conditions are continuously sensed – globally and locally
  4. we have a complete (3D) model of our world (e.g. buildings, street surfaces, …)

Having this information we can use data mining, learning, statistics, and models (e.g. a physics engine) to predict the future. If you wonder whether I forgot to think about privacy – I did not (but it takes longer to explain; in short: the set of people who have a benefit or who do not care is large enough).
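As a toy illustration of the prediction step – a sketch, not the actual approach from the talk – even a first-order Markov model over visited places yields a crude "next place" prediction from tracked movement data:

```typescript
// Count observed transitions between places and predict the most likely
// next place given the current one. The place names are illustrative.
class NextPlacePredictor {
  private counts = new Map<string, Map<string, number>>();

  observeTransition(from: string, to: string): void {
    const row = this.counts.get(from) ?? new Map<string, number>();
    row.set(to, (row.get(to) ?? 0) + 1);
    this.counts.set(from, row);
  }

  // Most frequent successor of the current place, or undefined if unseen.
  predict(current: string): string | undefined {
    const row = this.counts.get(current);
    if (!row) return undefined;
    let best: string | undefined;
    let bestCount = 0;
    for (const [place, count] of row) {
      if (count > bestCount) { best = place; bestCount = count; }
    }
    return best;
  }
}

const p = new NextPlacePredictor();
p.observeTransition("home", "office");
p.observeTransition("home", "office");
p.observeTransition("home", "gym");
console.log(p.predict("home")); // "office"
```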

Considering this it becomes very clear that in the medium term there is great potential in having control over the access terminal to the virtual world, e.g. a phone… Just think how rich your profile in facebook/xing/linkedin can be if it takes all the information you implicitly generate on the phone into account.

Visit to Nokia Research Center Tampere, SMS, Physiological sensors

This trip was my first time in Tampere (nice to see a new place sometimes). After arriving yesterday night I got a quick cultural refresher course. I even met a person who was giving a presentation to the president of Kazakhstan today (and someone made a copy using a phone – hope he got back OK to Helsinki after the great time in the bar).

In the morning I met a number of people in Jonna Hakkila’s group at the Nokia Research Center. The team has a great mix of backgrounds and it was really interesting to discuss the projects, ranging from new UI concepts to new hardware platforms – just half a day is much too short… When Ari was recently visiting us in Essen, he and Ali started to implement a small piece of software that (hopefully) improves the experience when receiving an SMS (to Ali/Ari – the TODOs for the beta release we identified are: sound design, screen design with statistics and the exit button in the menu, recognizing Ok and oK, autostart on reboot, volume level controllable and respecting silent mode). In case you have not helped us with our research yet please fill in the questionnaire: http://www.pcuie.uni-due.de/uieub/index.php?sid=74887#

I gave a talk (see the separate post on the next big thing) and had the chance to meet Jari Kangas. We discovered some common interests in using physiological sensing in the user interface context. I think the next steps in integrating physiological sensors into devices are smaller than expected. My expectation is that we will rather detect simple events like “surprise” than complex emotions (at least in the very near future). We will see where it goes – perhaps we should put some more students on the topic…

Bastian Pfleging joined the team (some weeks ago :-)

Bastian Pfleging joined us some weeks ago – his first day at work was at TEI’09 in Cambridge. When he came back he was so well integrated in the team that I forgot to write a blog entry. In fact he was already at a workshop with us some weeks ago – remember the photo?

Bastian studied computer science at TU Dortmund and did his final project on computer vision based interaction in smart environments in the group of Gernot A. Fink.

Do you use Emoticons in your SMS? What are the first words in the SMS you receive?

We are curious about current practice in SMS use – and I hope for a good reason. Together with Jonna Hakkila and her group at Nokia Research we have discussed ideas on how to make SMS a bit more emotional. Hopefully we will soon have a public beta of a small program out.
Till then it would be helpful to understand better how people use SMS and how they encode emotion in a very rudimentary way by 🙂 and 🙁 and alike. If you are curious too, and if you have 10 minutes, it would be great if you completed our survey: http://www.pcuie.uni-due.de/uieub/index.php?sid=74887#
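To give an idea of how rudimentary such encoding is, here is a toy sketch of emoticon-based mood detection – the emoticon lists are illustrative and not taken from our program:

```typescript
// Classify an SMS as positive, negative or neutral based on emoticons.
type Mood = "positive" | "negative" | "neutral";

const POSITIVE = [":)", ":-)", ":D", ";)"];
const NEGATIVE = [":(", ":-(", ":'("];

function detectMood(sms: string): Mood {
  const pos = POSITIVE.some(e => sms.includes(e));
  const neg = NEGATIVE.some(e => sms.includes(e));
  if (pos && !neg) return "positive";
  if (neg && !pos) return "negative";
  return "neutral"; // none found, or mixed signals
}

console.log(detectMood("see you at 8 :)")); // "positive"
```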
Emotions in SMS and mobile communication have been a topic many people have looked into; one of the early papers (in fact a design sketch rather than a full paper) was by Fagerberg et al. [1] in our 2004 special issue on tangible UIs – an extended and more conceptual discussion of their work can be found in [2]; for more on their project see: http://www.sics.se/~petra/eMoto/
[1] Petra Fagerberg, Anna Ståhl, and Kristina Höök (2004) eMoto – Emotionally Engaging Interaction, Design Sketch in Journal of Personal and Ubiquitous Computing, Special Issue on Tangible Interfaces in Perspective, Springer.
[2] Ståhl, A., Sundström, P., and Höök, K. 2005. A foundation for emotional expressivity. In Proceedings of the 2005 Conference on Designing For User Experience (San Francisco, California, November 03 – 05, 2005). Designing For User Experiences, vol. 135. AIGA: American Institute of Graphic Arts, New York, NY, 33.

Zorah Mari Bauer visits, Shape the future but don’t ignore it

Zorah Mari Bauer, who describes herself as “… a theorist, pioneer and activist of innovative media”, visited our lab. She works at the crossroads of art, design, media and technology and looks into communities, web, TV, and mobile location-based applications. We had an interesting discussion on upcoming trends in media and technology, how the inevitable shapes our future, and how a society has to innovate to be successful. It seems that people who understand the technologies are more positive about the future than those who do not 🙂 It was very inspiring to discuss future trends with her – hope to continue the discussion in the future!

Reading the newspaper was a stark contrast to this interesting and forward-looking exchange of ideas. On the way back I found an article in the German newspaper TAZ (www.taz.de) on how evil all the electronic publishing is – it is sometimes really frustrating how little some journalists – even at TAZ – research (or if they research, how little they understand). One essential observation in business as well as in society is that if something does not have a value its existence is in danger. Moving towards a digital world, the added value of traditional publishers is to me less and less clear – and the only way out is to be innovative… It is very clear that we can shape our future (and it is an exciting time for that) – but equally clear that if you ignore the future it still moves on. If you are a publisher and curious about ways to innovate, talk to us – we have some ideas! E.g. there is great value in facilitating relationships (between people, things, places, information), and people strive for external recognition.

New Conference on Automotive User Interfaces

If industries are not doing well one way forward is to promote innovation!

Over the past years it has become apparent that many PhD students in computer science, and especially in human computer interaction, work on topics related to user interfaces in the car. We think it is a good idea to foster a community in this area and hence we are running the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2009) in Essen, Germany. The conference is in the week after Mobile HCI and takes place Mon/Tue 21 – 22 September 2009.
Submission deadline: 02 June 2009

Doctoral Seminar in Bommerholz, CS Career and new Ideas

On Monday and Tuesday I organized, together with Gernot A. Fink, a PhD away day for students in computer science of the universities of Bochum, Dortmund and Duisburg-Essen. With about 30 PhD students and some professors we went to Bommerholz, where the University of Dortmund has a small retreat.

The program included talks about career possibilities after the PhD including talks by:
  • Dr. Heiner Stüttgen, Vice President, NEC Laboratories Europe: “Industrial research – what is a PhD good for?”
  • Dr. Olaf Zwintzscher, CEO, W3L GmbH: “Adventure Spin-off – starting a company after graduation”
  • Dr. Wiltrud Christine Radau, Deutscher Hochschulverband: “career opportunities in universities”
Overall it became very clear that computer science is still the subject to study! The career opportunities are interesting, exciting and very good. Nevertheless there is always a downside to things – whatever way you choose you have to work hard 🙂
We had a further talk, “Gutenberg over? The metamorphosis of scientific publishing”, by Herrmann Engesser from Springer-Verlag. He showed in an interesting way how much change has happened in publishing in the last 40 years. The example of the Encyclopedia Britannica and the Brockhaus Encyclopedia demonstrates impressively that it is impossible to ignore changes in technology and stay successful in business. Looking at many newspapers one can only wonder when they will realize it.

Over coffee we discussed the added value that is provided by a publisher and by digital libraries like Springer Link, the ACM DL or the IEEE library. And here too there are many more open questions than answers. One clear direction is to look more into scientific communities. One idea that I find quite interesting is to search for publications that are from my scientific community, e.g. “give me all papers that have haptic in the title and that are published by people I am linked to in facebook, xing, and linkedin, or by their contacts”. Sounds like an interesting project 🙂
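A minimal sketch of what such a community-filtered search could look like – the data structures and the way contact sets are obtained are assumptions for illustration:

```typescript
// Filter a publication list by title keyword and by authorship within
// my social graph (direct contacts plus one hop).
interface Paper { title: string; authors: string[]; }

function communityPapers(
  papers: Paper[],
  keyword: string,
  contacts: Set<string>,          // my direct contacts across networks
  contactsOfContacts: Set<string> // one hop further
): Paper[] {
  const known = new Set([...contacts, ...contactsOfContacts]);
  return papers.filter(p =>
    p.title.toLowerCase().includes(keyword.toLowerCase()) &&
    p.authors.some(a => known.has(a))
  );
}
```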

Besides the invited talks we had three poster sessions. In each session 9 students presented their work. We started with 90-second presentations and then had discussions over the posters. As we had topics from all areas of computer science I first expected that this might be pretty boring – but it was surprisingly interesting. I learned a lot about bio-informatics, learning algorithms, data mining, robotics and security over the last two days. Things I would never have read – but getting them explained in the context of a concrete PhD project was fun.
Our evening program was centered on movies. We first showed a number of snippets from movies (including James Bond, Harry Potter, Star Trek, and Minority Report) where cool technology features. Then the students had 45 minutes to create new ideas for believable technology gadgets for two films, one playing in 2011 and the other in 2060. The ideas were fun, ranging from manipulated insects, to smart dust, to the exploitation of social networks. If you are Steven Spielberg or someone else who plans a movie feel free to call me – we have a lot of ideas 😉

Poster on mobile advertising displays at HotMobile 2009

We put together a poster discussing some of our recent work on mobile displays for HotMobile. While presenting the poster I got a number of interesting ideas and concerns. One idea is to widen the notion of advertising and fuse it with traditional classified ads by private people (e.g. advertising a flat or telling the world that you lost your cat). The big question is really how to measure audience exposure and eventually conversion. There are several ideas on how to do this – but it looks more like another master project on the topic than an overnight hack 😉

The abstract for the poster:
In recent years many conventional public displays were replaced by electronic displays, enabling novel forms of advertising and information dissemination. This mainly includes stationary displays, e.g. in billboards and street furniture, and currently the first mobile displays on cars appear. Yet, current approaches are mostly static since they consider neither mobility and the context they are used in, nor the context of the viewer. In our work we explore how mobile public displays, which rapidly change their own context, can gather and process information about their context. Data about location, time, weather, and people in the vicinity can be used to react accordingly by displaying related content such as information or advertisements.
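As an illustration of the approach described in the abstract – a sketch, not our actual system – content selection could score adverts against the display's current context; the context attributes mirror those named in the text, while the types and example rule are invented:

```typescript
interface DisplayContext {
  location: string;            // e.g. district the vehicle is currently in
  hourOfDay: number;           // 0-23
  weather: "sunny" | "rain" | "snow";
  nearbyDeviceCount: number;   // rough proxy for people in the vicinity
}

interface Advert {
  content: string;
  matches: (ctx: DisplayContext) => number; // score, higher is better
}

// Pick the advert that best matches the current context.
function selectAdvert(adverts: Advert[], ctx: DisplayContext): Advert | undefined {
  return adverts.reduce<Advert | undefined>(
    (best, a) => (!best || a.matches(ctx) > best.matches(ctx) ? a : best),
    undefined
  );
}

const umbrellaAd: Advert = {
  content: "Umbrellas, next left",
  matches: ctx => (ctx.weather === "rain" ? 2 : 0) + ctx.nearbyDeviceCount / 100,
};
```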

When spending some time in Mountain View I was surprised how few electronic screens I saw compared to Germany or Asia. But nevertheless they have their own ways of creating attention… see the video below 🙂
Some time back in Munich we looked at how interaction modalities can affect the attention of bystanders, see [1] for a short overview of the work.

[1] Paul Holleis, Enrico Rukzio, Friderike Otto, Albrecht Schmidt. Privacy and Curiosity in Mobile Interactions with Public Displays. Poster at CHI 2007 workshop on Mobile Spatial Interaction. San Jose, California, USA. 28 April 2007.

HotMobile09: history repeats – shopping assistance on mobile devices

Comparing prices and finding the cheapest item has been a favorite application example over the last 10 years. I first saw the idea of scanning product codes and comparing prices with other shops (online or in the neighborhood) demonstrated in 1999 at the HUC conference. The Pocket BargainFinder [1] was a mobile device with a barcode reader attached with which you could scan books and get an online price comparison. Since then I have seen a number of examples that take this idea forward, e.g. a paper here at HotMobile [2] or the Amazon Mobile App.

The idea of making a bargain is certainly very attractive; however I think many of these applications do not take enough into account how price building works in the real world. If the consumer gets more power in comparing prices, it can go two ways: (1) shops will get more uniform in pricing, or (2) shops will again make it harder to compare. Version (2) is more interesting 😉 and can range from not allowing the use of mobile devices in the shop (which we see in some areas at the moment) to more sophisticated pricing options (e.g. prices get lowered when you buy combinations of products or when you are repeatedly in the same shop; see the sketch below). I am really curious how this develops – I would guess such systems will penetrate the market over the next 3 years…
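A toy sketch of such a comparison-resistant pricing rule – the discount thresholds and values are invented for illustration:

```typescript
// The effective price depends on the basket and on loyalty, so a simple
// per-item scan-and-compare no longer tells the whole story.
function effectivePrice(
  basePrice: number,
  basketSize: number,      // number of items bought together
  visitsThisMonth: number  // how often this customer came back
): number {
  const bundleDiscount = basketSize >= 3 ? 0.9 : 1.0;        // 10% off for combinations
  const loyaltyDiscount = visitsThisMonth >= 4 ? 0.95 : 1.0; // 5% off for regulars
  return basePrice * bundleDiscount * loyaltyDiscount;
}

console.log(effectivePrice(10, 3, 5)); // 8.55 instead of the listed 10
```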

[1] Adam B. Brody and Edward J. Gottsman. Pocket BargainFinder: A Handheld Device for Augmented Commerce. First International Symposium on Handheld and Ubiquitous Computing (HUC ’99), 27-29 September 1999, Karlsruhe, Germany
http://www.springerlink.com/content/jxtd2ybejypr2kfr/

[2] Linda Deng, Landon Cox. LiveCompare: Grocery Bargain Hunting Through Participatory Sensing. HotMobile 2009.

Bob Iannucci from Nokia presents Keynote at HotMobile 2009

Bob Iannucci from Nokia presented his keynote “ubiquitous structured data: the cloud as a semantic platform” at HotMobile 2009 in Santa Cruz. He started out with the statement that “Mobility is at the beginning” and argued why mobile systems will become more and more important.

He presented several principles for mobile devices/systems:

  • simplicity and fitness for purpose are more important than features
  • use concepts must remain constant – as few concepts as possible
  • presentations (what we see) and input modalities will evolve
  • standards will push the markets

Hearing this, especially the first point, from someone at Nokia seemed very interesting. His observations are in general well founded – especially the argument for simple usage models and sensible conceptual models when targeting the whole population of the earth as users.

In the keynote he offered an alternative conceptual model: Humans are Relational. Model everything as relations between people, things and places. He moved on to the question what are the central shortcomings in current mobile systems/mobile phones and he suggested it comes down to (1) no common data structure and (2) no common interaction concept.

With regard to interaction concepts he argued that a noun-verb style of interaction is natural and easy for people to understand (I have heard this before; for a discussion see [1, p. 59]). The basic idea in this model is to choose a noun (e.g. a person, place, thing) and then decide what to do with it (verb). From his point of view this interaction concept fits the mobile device world well. He argued that a social graph (basically relationships as in facebook etc.) would be well suited for a noun-verb style of interaction. The nodes in the graph (e.g. people, photos, locations, etc.) are nouns and transformations (actions) between the nodes are the verbs. He suggested that if we represent all the information people now have in their phones as a graph and have an open standard (and infrastructure) to share it, we could create a universal platform for mobile computing (and potentially a huge graph with all the information in the world 😉
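A minimal sketch of the noun-verb idea as he described it – the types and example verbs are my own illustration, not Nokia's design:

```typescript
// Nouns are nodes of the social graph; verbs are the actions applicable
// to a noun. Interaction: pick the noun first, then offer only matching verbs.
type NounKind = "person" | "place" | "thing";

interface Noun {
  kind: NounKind;
  name: string;
}

interface Verb {
  label: string;                // e.g. "call", "navigate to", "share"
  appliesTo: NounKind[];
  perform: (target: Noun) => void;
}

const verbs: Verb[] = [
  { label: "call", appliesTo: ["person"], perform: n => console.log(`calling ${n.name}`) },
  { label: "navigate to", appliesTo: ["place", "person"], perform: n => console.log(`routing to ${n.name}`) },
];

function verbsFor(noun: Noun): Verb[] {
  return verbs.filter(v => v.appliesTo.includes(noun.kind));
}

const alice: Noun = { kind: "person", name: "Alice" };
console.log(verbsFor(alice).map(v => v.label)); // ["call", "navigate to"]
```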

I liked his brief comment on privacy: “many privacy problems can be reduced to economic problems”. Basically people give their information away if there is value. And personally I think in most cases people give it away even for minimal value… So far we have no marketplace where people can sell their information. He mentioned the example of personal travel data which can provide the basis for traffic information (if aggregated). I think this is an interesting direction – how much value would my motion pattern have?

Somehow related to what you can do on a mobile phone, he shared with us the notion of the “3-Watt limit”. This seems fundamental: you cannot have more than 3 Watt dissipated in a device that fits in your hand (typical phone size) as otherwise it would get too hot. So the limitation on processing power is not the battery, but the heat generated.

[1] Jef Raskin. The Humane Interface. Addison-Wesley. 2000.

Demo day at TEI in Cambridge

What is a simple and cheap way to get from Saarbrücken to Linz? It’s not really obvious, but going via Stansted/Cambridge makes sense – especially when there is the conference on Tangible and Embedded Interaction (www.tei-conf.org) and Ryanair offers 10€ flights (not sure about sustainability though). Sustainability, from a different perspective, was also at the center of the Monday keynote by Tom Igoe, which I missed.

Nicolas and Shahram did a great job and the choice to do a full day of demos worked out great. The large set of interactive demos presented captures and communicates a lot of the spirit of the community. To get an overview of the demos one has to read through the proceedings (I will post a link as soon as they are online in the ACM DL) as there are too many to discuss here.
Nevertheless here is my random pick:
One big topic is tangible interaction on surfaces. Several examples showed how interactive surfaces can be combined with physical artifacts to make interaction more graspable. Jan Borchers' group showed a table with passive controls that are recognized when placed on the table and provide tangible means for interaction (e.g. keyboard keys, knobs, etc.). An interesting effect is that the labeling of the controls can be done dynamically.
Microsoft Research showed an impressive novel table-top display that allows two images to be projected – one on the interactive surface and one on the objects above [1]. It was presented at last year's UIST but I have now tried it out for the first time – and it is a stunning effect. Have a look at the paper (and before you read the details make a guess how it is implemented – at the demo most people guessed wrong 😉
Embedding sensing into artifacts to create a digital representation has always been a topic in tangible interaction – even back to the early work of Hiroshi Ishii on Triangles [2]. One interesting example in this year's demos was a set of cardboard pieces that are held together by hinges. Each hinge is technically realized as a potentiometer and by measuring the positions the structure can be determined. It is really interesting to think this further.
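Thinking it a little further in code: a sketch of how hinge angles could be recovered from potentiometer readings – the 10-bit ADC resolution and the linear 0..270° mapping are assumed values, not those of the demo:

```typescript
// Convert a raw ADC reading into a hinge angle (assumed linear mapping).
function adcToAngle(raw: number): number {
  return (raw / 1023) * 270;
}

// Classify each hinge so the overall fold structure can be described.
function describeStructure(adcReadings: number[]): string[] {
  return adcReadings.map(r => {
    const angle = adcToAngle(r);
    if (angle < 45) return "folded shut";
    if (angle < 135) return "bent";
    if (angle < 225) return "flat";
    return "folded back";
  });
}

console.log(describeStructure([10, 700, 1020])); // ["folded shut", "flat", "folded back"]
```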
Conferences like TEI inevitably make you think about the feasibility of programmable matter – and there is ongoing work on this in the robotics community. The idea is to create micro-robots that can form arbitrary shapes – for a starting point see the work at CMU on Claytronics.
[1] Izadi, S., Hodges, S., Taylor, S., Rosenfeld, D., Villar, N., Butler, A., and Westhues, J. 2008. Going beyond the display: a surface technology with an electronically switchable diffuser. In Proceedings of the 21st Annual ACM Symposium on User interface Software and Technology (Monterey, CA, USA, October 19 – 22, 2008). UIST ’08. ACM, New York, NY, 269-278. DOI= http://doi.acm.org/10.1145/1449715.1449760
[2] Gorbet, M. G., Orth, M., and Ishii, H. 1998. Triangles: tangible interface for manipulation and exploration of digital information topography. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 – 23, 1998). C. Karat, A. Lund, J. Coutaz, and J. Karat, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 49-56. DOI= http://doi.acm.org/10.1145/274644.274652

Voice interaction – Perhaps it works …

Today we visited Christian Müller at DFKI in Saarbrücken. He organized a workshop on Automotive User Interfaces at IUI last week. My talk was on new directions for user interfaces, in particular arguing for a broad view on multimodality. We showed some of our recent projects on car user interfaces. Dagmar gave a short overview of CARS, our simulator for evaluating driving performance and driver distraction, and we discussed options for potential extensions as well as shortcomings of the Lane Change Task.
Being a long-time skeptic about voice interfaces I was surprised to see a convincing demo of a multimodal user interface combining voice and a tactile controller in the car. I think this could really be an interesting option for future interfaces.
Classical voice-only interfaces usually lack basic properties of modern interactive systems, e.g. as stated in Shneiderman’s Golden Rules or in Norman’s action cycle. In particular the following points are most often not well realized in voice-only systems:
  • State of the system is always visible
  • Interactions with the system provide immediate and appropriate feedback
  • Actions are easily reversible
  • Opportunities for interaction are always visible 
By combining a physical controller with voice, and having at the same time the objects of interaction visible to the user (as part of the physical system that is controlled, e.g. window, seat), these problems are addressed in a very interesting way. I am looking forward to seeing more along these lines – perhaps we should also no longer ignore speech interaction in our projects 😉

Design Ideas and Demos at FH Potsdam

During the workshop last week in Potsdam we got to see demos from students of the Design of Physical and Virtual Interfaces class taught by Reto Wettach and JennyLC Chowdhury. The students had to design a working prototype of an interactive system. As base technology most of them used the Arduino board with some custom-made extensions. For a set of pictures see my photo gallery and the photos on flickr. It would take pages to describe all of the projects so I picked a few…

The project “Navel” (by Juan Avellanosa, Florian Schulz and Michael Härtel) is a belt with tactile output, similar to [1], [2] and [3]. The first idea along these lines that I tried out was GentleGuide [4] at Mobile HCI 2003 – it seemed quite compelling. The student project proposed one novel application idea: to use it in sport. That is quite interesting and could complement ideas proposed in [5].
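The basic mapping behind such belts is simple; here is a sketch assuming eight evenly spaced motors (the motor count and the orientation convention are assumptions, not details of any of the cited systems):

```typescript
// Map a target bearing to one of N vibration motors spaced evenly
// around the belt - the core principle of tactile wayfinding.
const MOTOR_COUNT = 8; // assumed number of actuators on the belt

// heading and targetBearing in degrees, 0 = north, clockwise;
// motor 0 is assumed to sit at the front of the body.
function motorForDirection(heading: number, targetBearing: number): number {
  const relative = ((targetBearing - heading) % 360 + 360) % 360;
  const sector = 360 / MOTOR_COUNT;
  return Math.round(relative / sector) % MOTOR_COUNT;
}

console.log(motorForDirection(90, 90)); // 0 -> vibrate at the front
console.log(motorForDirection(0, 180)); // 4 -> vibrate at the back
```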

Vivien’s favorite was the vibrating doormat, a system where a foot mat is constructed of three vibrating tiles that can be controlled and that present different vibration patterns. It was built by Lionel Michel and he has several ideas about what research questions this could address. I found especially interesting the question of whether and how one can induce feelings and emotions with such a system. In the same application context (doormat) another prototype looked at emotions, too: if you stroke or pat this mat it comes out of its hiding place (Roll-o-mat by Bastian Schulz).

There were several projects on giving everyday objects more personality (e.g. a Talking Trashbin by Gerd-Hinnerk Winck) and making them emotionally reactive (e.g. lights that react to proximity). Firefly (by Marc Tiedemann) is one example of how reactiveness and motion that is hard to predict can lead to an interesting user experience. The movement appears really similar to a real firefly.

Embedding information has been an important topic in our research over the last years [6] – the demos provided several interesting examples: a cable that visualizes energy consumption and a keyboard to leave messages. I learned of a further example of an idea/patent application where information is included in the object – in this case in a tea bag. This is an extreme case, but looking into the future (and assuming that we get sustainable and bio-degradable electronics) it indicates an interesting direction and pushes the idea of Information at Your Fingertips (Bill Gates' keynote in 1994) much further than originally intended.

For more photos see my photo gallery and the photos on flickr.

[1] Tsukada, K. and Yasumura, M. 2004. ActiveBelt: Belt-type Wearable Tactile Display for Directional Navigation. In Proceedings of UbiComp 2004, Springer LNCS 3205, pp. 384-399.

[2] Alois Ferscha et al. Vibro-Tactile Space-Awareness. Video paper, adjunct proceedings of UbiComp 2008.

[3] Heuten, W., Henze, N., Boll, S., and Pielot, M. 2008. Tactile wayfinder: a non-visual support system for wayfinding. In Proceedings of the 5th Nordic Conference on Human-Computer interaction: Building Bridges (Lund, Sweden, October 20 – 22, 2008). NordiCHI ’08, vol. 358. ACM, New York, NY, 172-181. DOI= http://doi.acm.org/10.1145/1463160.1463179

[4] Bosman, S., Groenendaal, B., Findlater, J.W., Visser, T., de Graaf, M., and Markopoulos, P. GentleGuide: An exploration of haptic output for indoor pedestrian guidance. Mobile HCI 2003.

[5] Mitchell Page, Andrew Vande Moere: Evaluating a Wearable Display Jersey for Augmenting Team Sports Awareness. Pervasive 2007. 91-108

[6] Albrecht Schmidt, Matthias Kranz, Paul Holleis. Embedded Information. UbiComp 2004, Workshop ‘Ubiquitous Display Environments’, September 2004

Towards interaction that is begreifbar

Since last year we have had a working group in Germany on graspable/tangible interaction in mixed realities.
In German the key term we use is “begreifbar” or “begreifen”, which means acquiring a deep understanding of something, while the word's basic meaning is to touch. Basically, to understand by touching – but in a more fundamental sense than grasping or getting a grip. Hence the list of translations for “begreifen” given in the dictionary is quite long.
Perhaps we should push more for the word in the international community – towards interaction that is begreifbar (English has too few foreign terms anyway 😉

This meeting was organized by Reto Wettach in Potsdam and the objective was to have two days to invent things together. The mix of people mainly included computer scientists and designers. It is always amazing how many ideas come up if you put 25 people in a room for a day 🙂 This week we followed up on some of the ideas related to new means for communication – there are definitely interesting student projects on this topic.

In the evening we had a half pecha-kucha (each person 10 slides of 20 seconds – 3:20 in total; the original is 20 slides), see http://www.pecha-kucha.org/. It is a great way of quickly getting to know the work, research, ideas, and background of other people. It could be a format we could use more in teaching and perhaps for ad-hoc sessions at a new conference we plan (e.g. http://auto-ui.org)… I prepared my slides on the train in the morning – and it is more challenging than expected to get a set of meaningful pictures together for 10 slides.

Overall the workshop showed that there is a significant interest and expertise in Germany moving from software ergonomics to modern human computer interaction.
There is a new person on our team (starting next week) – perhaps you can spot him on the pics.
For a set of pictures see my photo gallery and the photos on flickr.

Two basic references for interaction beyond the desktop

Following the workshop I got a few questions on what the important papers are that one should read to start on the topic. There are many (e.g. search in Google Scholar for tangible interaction, physical interaction, etc. and you will see) and there are conferences dedicated to it (e.g. the Tangible and Embedded Interaction conference, TEI – next week in Cambridge).

But if I have to pick two, here is my choice:

[1] Ishii, H. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 – 20, 2008). TEI ’08. ACM, New York, NY, xv-xxv. DOI= http://doi.acm.org/10.1145/1347390.1347392

[2] Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., and Zigelbaum, J. 2008. Reality-based interaction: a framework for post-WIMP interfaces. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 201-210. DOI= http://doi.acm.org/10.1145/1357054.1357089

What happens if Design meets Pervasive Computing?

This morning I met with Claudius Lazzeroni, a colleague from Folkwang Hochschule (they were part of our University till two years ago).
 
They have different study programs in design- and art-related subjects. He showed me some projects (http://www.shapingthings.net/ – in German but with lots of pictures that give you the idea). Many of the ideas and prototypes relate to our work and I hope we get some joint projects going. I think it could be really exciting to have projects with design and computer science students – looking forward to this!
When I was in the UK we collaborated in the Equator project with designers – mainly Bill Gaver and his group – and the results were really exciting [1]. We built a table that reacted to load changes on the surface and allowed you to fly virtually over the UK. The paper is worth reading – if you are in a hurry have a look at the movie about it on YouTube: http://www.youtube.com/watch?v=uRKOypmDDBM
There was a further project with a table – a key table – and for this one there is a more funny (and less serious?) video on YouTube: http://www.youtube.com/watch?v=y6e_R5q-Uf4
[1] Gaver, W. W., Bowers, J., Boucher, A., Gellerson, H., Pennington, S., Schmidt, A., Steed, A., Villars, N., and Walker, B. 2004. The drift table: designing for ludic engagement. In CHI ’04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ’04. ACM, New York, NY, 885-900. DOI= http://doi.acm.org/10.1145/985921.985947

Interesting interaction devices

Looking for interesting and novel interaction devices that would be challenging for students to classify (e.g. in the table suggested by Card et al. 1991 [1]) I came across some pretty unusual devices. Probably not really useful for an exam but perhaps next year for discussion in class…

Ever wanted to rearrange the keys on your keyboard? The ErgoDex DX1 is a set of 25 keys that can be arranged on a surface to create a specific input device. It would be cool if the device could also sense which key is where – that would make re-arranging part of the interaction process. In some sense it is similar to Nic Villar’s VoodooIO [2].
Wearable computing is not dead – here is some proof 😉 JennyLC Chowdhury presents intimate controllers – basically touch sensitive underwear (a bra and briefs). Have a look at the web page or the video on youtube.
What are the keyboards of the future? Is each key a display? Or is the whole keyboard a screen? I think there is too much focus on the visual and too little on the haptic – perhaps it could be interesting to have keys that change shape and whose tactile properties can be programmed…
[1] Card, S. K., Mackinlay, J. D., and Robertson, G. G. 1991. A morphological analysis of the design space of input devices. ACM Trans. Inf. Syst. 9, 2 (Apr. 1991), 99-122. DOI= http://doi.acm.org/10.1145/123078.128726 
[2] Villar, N., Gilleade, K. M., Ramduny-Ellis, D., and Gellersen, H. 2007. The VoodooIO gaming kit: a real-time adaptable gaming controller. Comput. Entertain. 5, 3 (Jul. 2007), 7. DOI= http://doi.acm.org/10.1145/1316511.1316518

Ranking Conferences and Journals – A Down-Under perspective

Like many of us I am skeptical of rankings (as long as I was not involved in making them 😉 Nevertheless sometimes they are interesting and helpful in assessing where to publish or what better not to read…

This morning we discussed where to publish some interesting work related to web technology (a follow-up of UsaProx) and for that discussion such a list would have been helpful.
A colleague from Munich sent me the link to an Australian conference ranking, and obviously they have ranked journals, too. They use A+, A, B, L, and C as tiers.
… and as we always knew you cannot be wrong when publishing in Pervasive, Percom, Ubicomp, and CHI 🙂

Technology Review with a Focus on User Interfaces

The February 2009 edition of Technology Review (German version) has its focus on new user interfaces and is titled “Streicheln erwünscht” (which translates to stroking/caressing/fondling welcome). It has a set of articles talking about new ways of interacting: multimodality, including tangible user interfaces, and tactile communication. In the article “Feel me, touch me” by Gordon Bolduan on page 74 a photo of Dagmar's prototype of a tactile steering wheel is depicted. The full paper on the study will be published at Pervasive in May 2009 (so you have to be patient to get the details – or come and visit our lab 😉

In the blog entry of Technology Review introducing the current issue there is a nice anecdote about a literature search on haptic/tactile remote communication (while I was still in Munich) – the final version of the seminar paper (now not X-rated anymore) is “Neue Formen der entfernten Kommunikation” (“New forms of remote communication”) by Martin Schrittenloher. He continued the topic in his MSc project and worked with Morten Fjeld on sliders that give remote feedback, see [1].

Another topic closely related to new forms of communication is exertion interfaces (we looked at the 2002/2003 work of Florian ‘Floyd’ Mueller in the UIE lecture yesterday – even with the Nintendo Wii around, the work is highly inspiring and impressive, see [2]). The communication example given in Breakout for Two shows the potential of including the whole body in communication tasks. The video is really worth watching 🙂
[1] Jenaro, J., Shahrokni, A., Schrittenloher, M., and Fjeld, M. 2007. One-Dimensional Force Feedback Slider: Digital platform. In Proc. Workshop at the IEEE Virtual Reality 2007 Conference: Mixed Reality User Interfaces: Specification, Authoring, Adaptation (MRUI07), 47-51.
[2] Mueller, F., Agamanolis, S., and Picard, R. 2003. Exertion interfaces: sports over a distance for social bonding and fun. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Ft. Lauderdale, Florida, USA, April 05 – 10, 2003). CHI ’03. ACM, New York, NY, 561-568. DOI= http://doi.acm.org/10.1145/642611.642709

Why can I not rotate my windows on my Vista Desktop?

In the User Interface Engineering lecture we discussed input devices today, especially for interacting with 3D environments. In 3D environments having 6 degrees of freedom (3 directions of translation and 3 of rotation) appears very natural. Looking back at 2D user interfaces with this in mind one has to ask why we are happy (and now for more than 25 years) with translation (in 2D) only, and more specifically why it is not possible to rotate my application windows in Vista (or perhaps it is and I just don't know it). At first this question seems like a joke but if you think more about it there could be interesting implications (perhaps with a little more thinking than this sketch 😉

Obviously people have implemented desktops with more than 2D and here is the link to the video on Project Looking Glass discussed in the lecture (if you are bored with the Sun sales story just skip to 2:20): http://de.youtube.com/watch?v=JXv8VlpoK_g
It seems you can have it on Ubuntu, too: http://de.youtube.com/watch?v=EjQ4Nza34ak