Impact of colors – hints for ambient design?

There is a study that looked at how performance in solving certain cognitive/creative tasks is influenced by the background color [1]. In short: to make people alert and to increase performance on detail-oriented tasks, use red; to get people into creative mode, use blue. Lucky for us, our corporate desktop background is mainly blue! Perhaps this could be interesting for ambient colors, e.g. in the automotive context…

[1] Mehta, Ravi and Rui (Juliet) Zhu (2009). “Blue or Red? Exploring the Effect of Color on Cognitive Task Performances”. Science, 27 February 2009, Vol. 323, No. 5918, pp. 1226–1229. DOI: 10.1126/science.1169144

Modular device – for prototyping only?


Over the last years there have been many ideas about how to make devices more modular. The central idea has been components that allow end-users to create their own device – with exactly the functionality they want. So far such components are only used in prototyping and have not really had success in the market place. The main reason seems to be that you can get a device that has everything included and does everything – smaller and cheaper… But perhaps as electronics get smaller and core functions mature it may happen.

Yanko Design has proposed a set of concepts along this line – and some of them are appealing 🙂
http://www.yankodesign.com/2007/12/12/chocolate-portable-hdd/
http://www.yankodesign.com/2007/11/26/blocky-mp3-player-oh-and-modular-too/
http://www.yankodesign.com/2007/08/31/it-was-a-rock-lobster/

Buglabs (http://www.buglabs.net) sells a functional system that allows you to build your own mobile device.

Being creative and designing your own system has been of interest in the computing and HCI community for many years. At last year's CHI there was a paper by Buechley et al. [1] that looked at how the LilyPad Arduino can make creating “computers” an interesting experience – especially for girls.

[1] Buechley, L., Eisenberg, M., Catchen, J., and Crockett, A. 2008. The LilyPad Arduino: using computational textiles to investigate engagement, aesthetics, and diversity in computer science education. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 423-432. DOI= http://doi.acm.org/10.1145/1357054.1357123

The next big thing – let’s look into the future

At Nokia Research Center in Tampere I gave a talk with the title “Computing Beyond Ubicomp – Mobile Communication changed the world – what else do we need?“. My main argument is that the next big thing is a device that allows us to predict the future – on a system level as well as on a personal level. This is obviously very tricky, as we have free will and hence the future is not completely predictable – but extrapolating from the technologies we see now, it seems not far-fetched to create a device that enables predictions of the future in various contexts.

My argument rests on the following points, all of which seem technologically feasible in the near future:

  1. each car, bus, train, truck, …, object is tracked in real-time
  2. each person is tracked (location, activity, …, food intake, eye-gaze) in real-time
  3. environmental conditions are continuously sensed – globally and locally
  4. we have a complete (3D) model of our world (e.g. buildings, street surfaces, …)

Having this information we can use data mining, machine learning, statistics, and models (e.g. a physics engine) to predict the future. If you wonder whether I forgot to think about privacy – I did not (but it takes longer to explain – in short: the set of people who see a benefit or who do not care is large enough).
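As a toy illustration of the prediction claim (a deliberately minimal sketch; the place names and the whole API are made up), even a first-order Markov model over a tracked location trace can guess a person's most likely next stop:

```python
from collections import Counter, defaultdict

class NextPlacePredictor:
    """First-order Markov model over a person's visited places (toy sketch)."""

    def __init__(self):
        # place -> Counter of places observed to follow it
        self.transitions = defaultdict(Counter)

    def observe(self, trajectory):
        # Count how often each place follows another in the trace.
        for here, there in zip(trajectory, trajectory[1:]):
            self.transitions[here][there] += 1

    def predict(self, current_place):
        # Most frequently observed successor, or None if the place is unseen.
        followers = self.transitions.get(current_place)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

# A toy daily routine, repeated over several "days".
predictor = NextPlacePredictor()
predictor.observe(["home", "office", "cafe", "office", "home"] * 20)
print(predictor.predict("cafe"))  # -> office
```

Real systems would of course fuse many sensor streams and far richer models (physics engines, learned behavior models); the point is only that even trivial statistics over tracked data yield usable predictions.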

Considering this it becomes very clear that in the medium term there is great potential in having control over the access terminal to the virtual world, e.g. a phone… Just think how rich your profile on Facebook/XING/LinkedIn could be if it took into account all the information you implicitly generate on the phone.

Visit to Nokia Research Center Tampere, SMS, Physiological sensors

This trip was my first time in Tampere (nice to see a new place once in a while). After arriving yesterday night I got a quick cultural refresher course. I even met a person who was giving a presentation to the president of Kazakhstan today (and someone made a copy using a phone – hope he got back OK to Helsinki after the great time in the bar).

In the morning I met a number of people in Jonna Hakkila’s group at the Nokia Research Center. The team has a great mix of backgrounds and it was really interesting to discuss the projects, ranging from new UI concepts to new hardware platforms – just half a day is much too short… When Ari was recently visiting us in Essen, he and Ali started to implement a small piece of software that (hopefully) improves the experience when receiving an SMS (to Ali/Ari – the TODOs for the beta release we identified are: sound design, screen design with statistics and the exit button in the menu, recognizing Ok and oK, autostart on reboot, volume level controllable and respecting silent mode). In case you have not helped us with our research yet, please fill in the questionnaire: http://www.pcuie.uni-due.de/uieub/index.php?sid=74887#

I gave a talk (see the separate post on the next big thing) and had the chance to meet Jari Kangas. We discovered some common interest in using physiological sensing in the user interface context. I think the next steps in integrating physiological sensors into devices are smaller than expected. My expectation is that we will rather detect simple events like “surprise” than complex emotions (at least in the very near future). We will see where it goes – perhaps we should put some more students on the topic…

Bastian Pfleging joined the team (some weeks ago :-)

Bastian Pfleging joined us some weeks ago – his first day at work was at TEI’09 in Cambridge. When he came back he was so well integrated into the team that I forgot to write a blog entry. In fact he was already at a workshop with us some weeks ago – remember the photo?

Bastian studied computer science at TU Dortmund and his final project was on computer vision based interaction in smart environments in the Group of Gernot A. Fink.

Do you use Emoticons in your SMS? What are the first words in the SMS you receive?

We are curious about current practice in SMS use – and I hope for a good reason. Together with Jonna Hakkila and her group at Nokia Research we have discussed ideas on how to make SMS a bit more emotional. Hopefully we will soon have a public beta of a small program out.
Till then it would be helpful to understand better how people use SMS and how they encode emotion in a very rudimentary way with 🙂 and 🙁 and the like. If you are curious too and have 10 minutes, it would be great if you completed our survey: http://www.pcuie.uni-due.de/uieub/index.php?sid=74887#
Emotions in SMS and mobile communication are a topic many people have been looking into; one of the early papers (in fact a design sketch, not really a full paper) was by Fagerberg et al. [1] in our 2004 special issue on tangible UIs – an extended and more conceptual discussion of their work can be found in [2]; for more on their project see: http://www.sics.se/~petra/eMoto/
[1] Petra Fagerberg, Anna Ståhl, and Kristina Höök (2004). eMoto – Emotionally Engaging Interaction. Design sketch in Journal of Personal and Ubiquitous Computing, Special Issue on Tangible Interfaces in Perspective, Springer.
[2] Ståhl, A., Sundström, P., and Höök, K. 2005. A foundation for emotional expressivity. In Proceedings of the 2005 Conference on Designing for User Experience (San Francisco, California, November 03 – 05, 2005). DUX ’05, vol. 135. AIGA: American Institute of Graphic Arts, New York, NY, 33.

Zorah Mari Bauer visits, Shape the future but don’t ignore it

Zorah Mari Bauer, who describes herself as “… a theorist, pioneer and activist of innovative media”, visited our lab. She works at the crossroads of art, design, media and technology and looks into communities, web, TV, and mobile location-based applications. We had an interesting discussion on upcoming trends in media and technology, how they inevitably shape our future, and how a society has to innovate to be successful. It seems that people who understand the technologies are more positive about the future than those who do not 🙂 It was very inspiring to discuss future trends with her – hope to continue the discussion in the future!

Reading the newspaper was a stark contrast to this interesting and forward-looking exchange of ideas. On the way back I found an article in the German newspaper TAZ (www.taz.de) on how evil all this electronic publishing is – it is sometimes really frustrating how little some journalists – even at TAZ – research (or, if they research, how little they understand). One essential observation in business as well as in society is that if something does not have a value, its existence is in danger. Moving towards a digital world, the added value of traditional publishers is less and less clear to me – and the only way out is to be innovative… It is very clear that we can shape our future (and it is an exciting time for that) – but it is equally clear that if you ignore the future it still moves on. If you are a publisher and curious about ways to innovate, talk to us – we have some ideas! E.g. there is great value in facilitating relationships (between people, things, places, information), and people strive for external recognition.

New Conference on Automotive User Interfaces

If industries are not doing well one way forward is to promote innovation!

Over the last several years it has become apparent that many PhD students in computer science, and especially in human computer interaction, work on topics related to user interfaces in the car. We think it is a good idea to foster a community in this area and hence we run the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2009) in Essen, Germany. The conference is in the week after Mobile HCI and takes place Mon/Tue 21 – 22 September 2009.
Submission deadline: 02 June 2009

Doctoral Seminar in Bommerholz, CS Career and new Ideas

Monday and Tuesday I organized, together with Gernot A. Fink, a PhD away day for computer science students of the universities of Bochum, Dortmund and Duisburg-Essen. With about 30 PhD students and some professors we went to Bommerholz, where the University of Dortmund has a small retreat.

The program included talks about career possibilities after the PhD including talks by:
  • Dr. Heiner Stüttgen, Vice President, NEC Laboratories Europe: “Industrial research – what is a PhD good for?”
  • Dr. Olaf Zwintzscher, CEO, W3L GmbH: “Adventure Spin-off – starting a company after graduation”
  • Dr. Wiltrud Christine Radau, Deutscher Hochschulverband: “career opportunities in universities”
Overall it became very clear that computer science is still the subject to study! The career opportunities are interesting, exciting and very good. Nevertheless there is always a downside – whatever way you choose, you have to work hard 🙂
We had a further talk, “Gutenberg over? The metamorphosis of scientific publishing”, by Hermann Engesser from Springer-Verlag. He showed in an interesting way how much change has happened in publishing over the last 40 years. The example of the Encyclopedia Britannica and the Brockhaus Encyclopedia demonstrates impressively that it is impossible to ignore changes in technology and stay successful in business. Looking at many newspapers one can only wonder when they will realize it.

Over coffee we discussed the added value that is provided by a publisher and by digital libraries like SpringerLink, the ACM DL or the IEEE library. And here, too, there are many more open questions than answers. One clear direction is to look more into scientific communities. One idea that I find quite interesting is to search for publications from my scientific community, e.g. “give me all papers that have haptic in the title and that are published by people I am linked to on Facebook, XING, and LinkedIn, or by their contacts”. Sounds like an interesting project 🙂
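The community-filtered search could look roughly like this – a sketch under heavy assumptions: the social graph is a plain adjacency dict standing in for real Facebook/XING/LinkedIn data, papers are simple records, and all names are hypothetical:

```python
def friends_within(graph, person, depth=2):
    """Contacts reachable within `depth` hops of `person` in a social graph."""
    frontier, seen = {person}, {person}
    for _ in range(depth):
        frontier = {c for p in frontier for c in graph.get(p, ())} - seen
        seen |= frontier
    return seen - {person}

def community_search(papers, graph, me, keyword):
    """Papers with the keyword in the title, written by people in my network."""
    network = friends_within(graph, me)
    return [p["title"] for p in papers
            if keyword in p["title"].lower() and p["author"] in network]

graph = {"me": ["alice"], "alice": ["bob"], "bob": ["carol"]}
papers = [
    {"title": "Haptic feedback on mobile devices", "author": "bob"},
    {"title": "Haptic interfaces", "author": "carol"},  # 3 hops away: filtered out
]
print(community_search(papers, graph, "me", "haptic"))
```

The interesting design question is the `depth` parameter: one hop gives you only direct colleagues, two hops already approximates "my scientific community and their contacts" as in the query above.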

Besides the invited talks we had three poster sessions. In each session 9 students presented their work. We started with 90-second presentations and then had discussions over the posters. As we had topics from all areas of computer science I first expected this to be pretty boring – but it was surprisingly interesting. I learned a lot about bio-informatics, learning algorithms, data mining, robotics and security over the last two days. Things I would never have read about – but getting them explained in the context of a concrete PhD project was fun.
Our evening program was centered on movies. We first showed a number of snippets from movies (including James Bond, Harry Potter, Star Trek, and Minority Report) in which cool technology features. Then the students had 45 minutes to create new ideas for believable technology gadgets for two films, one playing in 2011 and the other in 2060. The ideas were fun, ranging from manipulated insects, to smart dust, to the exploitation of social networks. If you are Steven Spielberg or someone else who plans a movie, feel free to call me – we have a lot of ideas 😉

Poster on mobile advertising displays at HotMobile 2009

We put together a poster discussing some of our recent work on mobile displays for HotMobile. While presenting the poster I got a number of interesting ideas and concerns. One idea is to widen the notion of advertising and fuse it with traditional classified ads by private people (e.g. advertising a flat or telling the world that you lost your cat). The big question really is how to measure audience exposure and eventually conversion. There are several ideas for how to do this – but it looks more like another master's project than an overnight hack 😉

The abstract for the poster:
In recent years many conventional public displays have been replaced by electronic displays, enabling novel forms of advertising and information dissemination. This mainly concerns stationary displays, e.g. in billboards and street furniture, and currently the first mobile displays on cars are appearing. Yet, current approaches are mostly static, since they consider neither mobility and the context they are used in nor the context of the viewer. In our work we explore how mobile public displays, which rapidly change their own context, can gather and process information about their context. Data about location, time, weather, and people in the vicinity can be used to react accordingly, displaying related content such as information or advertisements.
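A rough sketch of the context-dependent content selection described in the abstract (the context attributes and the scoring heuristic are illustrative assumptions, not our actual system):

```python
from dataclasses import dataclass

@dataclass
class Context:
    location: str
    hour: int        # 0-23
    weather: str     # e.g. "rain", "sun"
    passers_by: int  # estimated people in the vicinity

def score(ad, ctx):
    """Very naive relevance score: count matching context attributes."""
    s = 0
    if ctx.location in ad.get("locations", ()):
        s += 2
    if ctx.weather == ad.get("weather"):
        s += 1
    if ad.get("from_hour", 0) <= ctx.hour < ad.get("to_hour", 24):
        s += 1
    return s

def select_content(ads, ctx):
    # Show nothing when nobody is around; otherwise pick the best-scoring ad.
    if ctx.passers_by == 0:
        return None
    return max(ads, key=lambda ad: score(ad, ctx))["name"]

ads = [
    {"name": "umbrella shop", "weather": "rain"},
    {"name": "ice cream", "weather": "sun", "from_hour": 10, "to_hour": 20},
]
ctx = Context(location="city center", hour=14, weather="rain", passers_by=12)
print(select_content(ads, ctx))  # -> umbrella shop
```

Measuring audience exposure would feed back into `passers_by` and, eventually, into conversion statistics per ad.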

When spending some time in Mountain View I was surprised how few electronic screens I saw compared to Germany or Asia. But nevertheless they have their own ways of creating attention… see the video below 🙂
Some time back in Munich we looked at how interaction modalities can affect the attention of bystanders; see [1] for a short overview of the work.

[1] Paul Holleis, Enrico Rukzio, Friderike Otto, Albrecht Schmidt. Privacy and Curiosity in Mobile Interactions with Public Displays. Poster at CHI 2007 workshop on Mobile Spatial Interaction. San Jose, California, USA. 28 April 2007.

HotMobile09: history repeats – shopping assistance on mobile devices

Comparing prices and finding the cheapest item has been a favorite application example over the last 10 years. I first saw the idea of scanning product codes and comparing prices with other shops (online or in the neighborhood) demonstrated in 1999 at the HUC conference. The Pocket BargainFinder [1] was a mobile device with a barcode reader attached, with which you could scan books and get an online price comparison. Since then I have seen a number of examples that take this idea forward, e.g. a paper here at HotMobile [2] or the Amazon Mobile App.

The idea of making a bargain is certainly very attractive; however, I think many of these applications do not take into account how price formation works in the real world. If the consumer gets more power in comparing prices, things can go two ways: (1) shops will get more uniform in pricing, or (2) shops will again make it harder to compare. Version (2) is more interesting 😉 and can range from not allowing the use of mobile devices in the shop (which we see in some areas at the moment) to more sophisticated pricing options (e.g. prices get lowered when you buy combinations of products or when you return repeatedly to the same shop). I am really curious how this develops – I would guess such systems will penetrate the market over the next 3 years…
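The core of such a bargain finder is tiny once a product code has been scanned; here is a hedged sketch in which the data sources are hypothetical dictionaries standing in for real shop databases or web APIs:

```python
def best_offer(barcode, price_sources):
    """Query several (hypothetical) price sources; return the cheapest offer."""
    offers = []
    for source in price_sources:
        price = source["prices"].get(barcode)  # None if the shop lacks the item
        if price is not None:
            offers.append((price, source["name"]))
    # Tuples compare by price first, so min() picks the cheapest offer.
    return min(offers) if offers else None

sources = [
    {"name": "local shop", "prices": {"978-3-16": 19.90}},
    {"name": "online store", "prices": {"978-3-16": 14.50}},
]
print(best_offer("978-3-16", sources))  # -> (14.5, 'online store')
```

Note how the more sophisticated pricing schemes mentioned above (bundles, loyalty discounts) would break this simple model: the price would become a function of the buyer and the basket, not of the barcode alone.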

[1] Adam B. Brody and Edward J. Gottsman. Pocket BargainFinder: A Handheld Device for Augmented Commerce. First International Symposium on Handheld and Ubiquitous Computing (HUC ’99), 27-29 September 1999, Karlsruhe, Germany
http://www.springerlink.com/content/jxtd2ybejypr2kfr/

[2] Linda Deng, Landon Cox. LiveCompare: Grocery Bargain Hunting Through Participatory Sensing. HotMobile 2009.

Bob Iannucci from Nokia presents Keynote at HotMobile 2009

Bob Iannucci from Nokia presented his keynote “Ubiquitous structured data: the cloud as a semantic platform” at HotMobile 2009 in Santa Cruz. He started out with the statement that “mobility is at the beginning” and argued why mobile systems will become more and more important.

He presented several principles for mobile devices/systems

  • Simplicity and fitness for purpose are more important than features
  • usage concepts must remain constant – as few concepts as possible
  • presentations (what we see) and input modalities will evolve
  • standards will push the markets

Hearing this, especially the first point, from someone from Nokia seemed very interesting. His observations are in general well founded – especially the argument for simple usage models and sensible conceptual models when targeting the whole population of the earth as users.

In the keynote he offered an alternative conceptual model: Humans are Relational. Model everything as relations between people, things and places. He moved on to the question of what the central shortcomings in current mobile systems/mobile phones are, and suggested it comes down to (1) no common data structure and (2) no common interaction concept.

With regard to interaction concepts he argued that a noun-verb style of interaction is natural and easy for people to understand (I have heard this before; for a discussion see [1, p. 59]). The basic idea in this model is to choose a noun (e.g. a person, place, thing) and then decide what to do with it (verb). From his point of view this interaction concept fits the mobile device world well. He argued that a social graph (basically relationships as in Facebook etc.) would be well suited for a noun-verb style interaction. The nodes in the graph (e.g. people, photos, locations, etc.) are nouns and transformations (actions) between the nodes are the verbs. He suggested that if we represent all the information that people now have in the phone as a graph, and we have an open standard (and infrastructure) to share it, we could create a universal platform for mobile computing (and potentially a huge graph with all the information in the world 😉).
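A minimal sketch of what such a noun-verb model over a graph might look like in code (everything here, names and API alike, is my own illustration, not Nokia's design):

```python
# Nouns are nodes; verbs are actions registered per noun type.
class SocialGraph:
    def __init__(self):
        self.nodes = {}   # node id -> {"type": ..., plus attributes}
        self.edges = []   # (from_id, relation, to_id) triples
        self.verbs = {}   # noun type -> {verb name: callable}

    def add_node(self, node_id, node_type, **attrs):
        self.nodes[node_id] = {"type": node_type, **attrs}

    def relate(self, a, relation, b):
        self.edges.append((a, relation, b))

    def register_verb(self, node_type, name, fn):
        self.verbs.setdefault(node_type, {})[name] = fn

    def act(self, node_id, verb, *args):
        """Noun-verb interaction: pick the noun, then apply the verb."""
        node = self.nodes[node_id]
        return self.verbs[node["type"]][verb](self, node_id, *args)

g = SocialGraph()
g.add_node("anna", "person", city="Tampere")
g.add_node("photo1", "photo", caption="keynote")
g.register_verb("person", "friends",
                lambda g, n: [b for a, r, b in g.edges if a == n and r == "knows"])
g.register_verb("photo", "share",
                lambda g, n, to: g.relate(n, "shared_with", to) or "shared")

g.relate("anna", "knows", "bob")
print(g.act("anna", "friends"))          # pick noun "anna", apply verb "friends"
print(g.act("photo1", "share", "anna"))  # pick noun "photo1", apply verb "share"
```

The appeal of the model is visible even in this toy: new nouns and verbs can be added without touching the interaction concept itself.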

I liked his brief comment on privacy: “many privacy problems can be reduced to economic problems”. Basically people give their information away if there is value in it. And personally I think in most cases people give it away even for a minimal value… So far we have no market place where people can sell their information. He mentioned the example of personal travel data, which can provide the basis for traffic information (if aggregated). I think this is an interesting direction – how much value would my movement patterns have?

Somehow related to what you can do on a mobile phone, he shared with us the notion of the “3-watt limit”. This seems fundamental: you cannot dissipate more than 3 watts in a device that fits in your hand (typical phone size), as otherwise it would get too hot. So the limitation on processing power is not the battery, but the heat generated.

[1] Jef Raskin. The Humane Interface. Addison-Wesley. 2000.

Andreas Riener defends his PhD in Linz

After a stop-over in Stansted/Cambridge at the TEI conference I was in Linz, Austria, today as external examiner for the PhD defense of Andreas Riener. He did his PhD with Alois Ferscha and worked on implicit interaction in the car. The set and size of experiments he did is impressive and he has two central results: (1) using tactile output in the car can really improve car-to-driver communication and reduce reaction time, and (2) by sensing the force pattern a body creates on the seat, driving-related activities can be detected and to some extent driver identification can be performed. For more details it makes sense to have a look into the thesis 😉 If you mail Andreas he will probably send you the PDF…
One of the basic assumptions of the work was to use implicit interaction (on input and output) to lower the cognitive load while driving – which is definitely a valid approach. Recently, however, we have also discussed the issues that arise when the cognitive load of drivers is too low (e.g. due to assistive systems in the car such as ACC and lane keeping assistance). There is an interesting phenomenon, the Yerkes-Dodson law (see [1]), that provides the foundation for this. Basically, as the car provides more sophisticated functionality and requires less attention from the driver, risk increases because the basic activation of the driver is lower. Here I think looking into multimodality to activate the driver more quickly in situations where they are required to take over responsibility could be interesting – perhaps we find a student interested in this topic.
[1] http://en.wikipedia.org/wiki/Yerkes-Dodson_law (there is a link to the 1908 publication by Yerkes & Dodson)

Demo day at TEI in Cambridge

What is a simple and cheap way to get from Saarbrücken to Linz? It’s not really obvious, but going via Stansted/Cambridge makes sense – especially when there is the conference on Tangible and Embedded Interaction (www.tei-conf.org) and Ryanair offers 10€ flights (not sure about sustainability though). Sustainability, from a different perspective, was also at the center of the Monday keynote by Tom Igoe, which I missed.

Nicolas and Sharam did a great job and the choice to do a full day of demos worked out great. The large set of interactive demos presented captures and communicates a lot of the spirit of the community. To get an overview of the demos one has to read through the proceedings (I will post a link as soon as they are online in the ACM DL) as there are too many to discuss here.
Nevertheless here is my random pick:
One big topic is tangible interaction on surfaces. Several examples showed how interactive surfaces can be combined with physical artifacts to make interaction more graspable. Jan Borchers’ group showed a table with passive controls that are recognized when placed on the table and provide tangible means for interaction (e.g. keyboard keys, knobs, etc.). An interesting effect is that the labeling of the controls can be done dynamically.
Microsoft Research showed an impressive novel table top display that allows two images to be projected – one on the interactive surface and one on the objects above it [1]. It was presented at last year’s UIST but I tried it out now for the first time – and it is a stunning effect. Have a look at the paper (and before you read the details, make a guess how it is implemented – at the demo most people guessed wrong 😉).
Embedding sensing into artifacts to create a digital representation has always been a topic in tangible interaction – even back to the early work of Hiroshi Ishii on Triangles [2]. One interesting example in this year’s demos was a set of cardboard pieces that are held together by hinges. Each hinge is technically realized as a potentiometer, and by measuring the position of each hinge the structure can be determined. It is really interesting to think this further.
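To think it a bit further in code: a rough sketch of how a chain of such hinge potentiometers could be turned into a 2D shape. The ADC range, the 270° pot travel and the "flat at mid-scale" calibration are all assumptions for illustration, not details of the demo:

```python
import math

def adc_to_angle(raw, adc_max=1023):
    """Map a potentiometer reading (0..adc_max) to a hinge angle in degrees.
    Assumes the pot covers 0-270 degrees linearly; calibrate per hinge."""
    return raw / adc_max * 270.0

def reconstruct(segment_length, readings):
    """Chain of equal-length segments joined by hinges: turn each hinge
    angle into a 2D vertex position by accumulating headings."""
    heading, x, y = 0.0, 0.0, 0.0
    points = [(x, y)]
    for raw in readings:
        heading += math.radians(adc_to_angle(raw) - 135.0)  # 135 deg = flat
        x += segment_length * math.cos(heading)
        y += segment_length * math.sin(heading)
        points.append((round(x, 3), round(y, 3)))
    return points

# Three flat hinges (mid-scale readings) give a straight strip of vertices.
print(reconstruct(1.0, [511.5, 511.5, 511.5]))
```

For a 3D structure the same idea applies with one more rotation axis per hinge, which is where it gets interesting for the cardboard pieces.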
Conferences like TEI inevitably make you think about the feasibility of programmable matter – and there is ongoing work on this in the robotics community. The idea is to create micro-robots that can form arbitrary shapes – for a starting point see the work at CMU on Claytronics.
[1] Izadi, S., Hodges, S., Taylor, S., Rosenfeld, D., Villar, N., Butler, A., and Westhues, J. 2008. Going beyond the display: a surface technology with an electronically switchable diffuser. In Proceedings of the 21st Annual ACM Symposium on User interface Software and Technology (Monterey, CA, USA, October 19 – 22, 2008). UIST ’08. ACM, New York, NY, 269-278. DOI= http://doi.acm.org/10.1145/1449715.1449760
[2] Gorbet, M. G., Orth, M., and Ishii, H. 1998. Triangles: tangible interface for manipulation and exploration of digital information topography. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 – 23, 1998). C. Karat, A. Lund, J. Coutaz, and J. Karat, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 49-56. DOI= http://doi.acm.org/10.1145/274644.274652

Voice interaction – Perhaps it works …

Today we visited Christian Müller at DFKI in Saarbrücken. He organized a workshop on automotive user interfaces at IUI last week. My talk was on new directions for user interfaces, in particular arguing for a broad view of multimodality. We showed some of our recent projects on car user interfaces. Dagmar gave a short overview of CARS, our simulator for evaluating driving performance and driver distraction, and we discussed options for potential extensions and shortcomings of the Lane Change Task.
Being a long-time skeptic about voice interfaces, I was surprised to see a convincing demo of a multimodal user interface combining voice and a tactile controller in the car. I think this could really be an interesting option for future interfaces.
Classical voice-only interfaces usually lack basic properties of modern interactive systems, e.g. as stated in Shneiderman’s Golden Rules or in Norman’s action cycle. In particular, the following points are most often not well realized in voice-only systems:
  • State of the system is always visible
  • Interactions with the system provide immediate and appropriate feedback
  • Actions are easily reversible
  • Opportunities for interaction are always visible
By combining a physical controller with voice, and having at the same time the objects of interaction visible to the user (as part of the physical system that is controlled, e.g. window, seat), these problems are addressed in a very interesting way. I am looking forward to seeing more along these lines – perhaps we should no longer ignore speech interaction in our projects 😉

Design Ideas and Demos at FH Potsdam

During the workshop last week in Potsdam we got to see demos from students of the Design of Physical and Virtual Interfaces class taught by Reto Wettach and JennyLC Chowdhury. The students had to design a working prototype of an interactive system. As base technology most of them used the Arduino board with some custom-made extensions. For a set of pictures see my photo gallery and the photos on flickr. It would take pages to describe all of the projects so I picked a few…

The project “Navel” (by Juan Avellanosa, Florian Schulz and Michael Härtel) is a belt with tactile output, similar to [1], [2] and [3]. The first idea along these lines that I tried out was GentleGuide [4] at Mobile HCI 2003 – it seemed quite compelling. The student project proposed one novel application idea: to use it in sport. That is quite interesting and could complement the ideas proposed in [5].

Vivien’s favorite was the vibrating doormat, a system where a foot mat is constructed of three vibrating tiles that can be controlled individually so that different vibration patterns can be presented. It was built by Lionel Michel and he has several ideas about the research questions this could address. I found especially interesting the question of whether and how one can induce feelings and emotions with such a system. In the same application context (doormat) another prototype looked at emotions, too: if you stroke or pat this mat it comes out of its hiding place (Roll-o-mat by Bastian Schulz).

There were several projects on giving everyday objects more personality (e.g. a talking trashbin by Gerd-Hinnerk Winck) and making them emotionally reactive (e.g. lights that react to proximity). Firefly (by Marc Tiedemann) is one example of how reactiveness and motion that is hard to predict can lead to an interesting user experience. The movement appears really similar to a real firefly.

Embedding information has been an important topic in our research over the last years [6] – the demos provided several interesting examples: a cable that visualizes energy consumption and a keyboard to leave messages. I learned of a further example of an idea/patent application where information is included in the object – in this case in a tea bag. This is an extreme case, but looking into the future (and assuming that we get sustainable and bio-degradable electronics) I think it indicates an interesting direction, pushing the idea of “information at your fingertips” (Bill Gates’ keynote in 1994) much further than originally intended.

For more photos see my photo gallery and the photos on flickr.

[1] Tsukada, K. and Yasumura, M.: ActiveBelt: Belt-type Wearable Tactile Display for Directional Navigation. Proceedings of UbiComp 2004, Springer LNCS 3205, pp. 384-399 (2004).

[2] Alois Ferscha et al. Vibro-Tactile Space-Awareness. Video paper, adjunct proceedings of Ubicomp 2008.

[3] Heuten, W., Henze, N., Boll, S., and Pielot, M. 2008. Tactile wayfinder: a non-visual support system for wayfinding. In Proceedings of the 5th Nordic Conference on Human-Computer interaction: Building Bridges (Lund, Sweden, October 20 – 22, 2008). NordiCHI ’08, vol. 358. ACM, New York, NY, 172-181. DOI= http://doi.acm.org/10.1145/1463160.1463179

[4] S. Bosman, B. Groenendaal, J. W. Findlater, T. Visser, M. de Graaf and P. Markopoulos. GentleGuide: An exploration of haptic output for indoors pedestrian guidance. Mobile HCI 2003.

[5] Mitchell Page, Andrew Vande Moere: Evaluating a Wearable Display Jersey for Augmenting Team Sports Awareness. Pervasive 2007. 91-108

[6] Albrecht Schmidt, Matthias Kranz, Paul Holleis. Embedded Information. UbiComp 2004, Workshop ‘Ubiquitous Display Environments’, September 2004

Towards interaction that is begreifbar

Since last year we have had a working group in Germany on graspable/tangible interaction in mixed realities.
In German the key term we use is “begreifbar” or “begreifen”, which means to acquire a deep understanding of something; the word's basic meaning is to touch. Basically, to understand by touching – but in a more fundamental sense than grasping or getting a grip. Hence the list of translations for “begreifen” given in the dictionary is quite long.
Perhaps we should push more for the word in the international community – towards interaction that is begreifbar (English has too few foreign terms anyway 😉)

This meeting was organized by Reto Wettach in Potsdam and the objective was to have two days to invent things together. The mix of participants mainly included people from computer science and design. It is always amazing how many ideas come up if you put 25 people in a room for a day 🙂 This week we followed up on some of the ideas related to new means for communication – there are definitely interesting student projects on this topic.

In the evening we had a half pecha-kucha (each person 10 slides of 20 seconds – 3:20 in total; the original is 20 slides), http://www.pecha-kucha.org/. It is a great way of quickly getting to know the work, research, ideas, and background of other people. It could be a format we use more in teaching, and perhaps for ad-hoc sessions at a new conference we plan (e.g. http://auto-ui.org)… I prepared my slides on the train in the morning – and it is more challenging than expected to get a set of meaningful pictures together for 10 slides.

Overall the workshop showed that there is a significant interest and expertise in Germany moving from software ergonomics to modern human computer interaction.
There is a new person on our team (starting next week) – perhaps you can spot him on the pics.
For a set of pictures see my photo gallery and the photos on flickr.

Two basic references for interaction beyond the desktop

Following the workshop I got a few questions about which important papers one should read to start on the topic. There are many (e.g. search Google Scholar for tangible interaction, physical interaction, etc. and you will see), and there are conferences dedicated to it (e.g. Tangible and Embedded Interaction, TEI – next week in Cambridge).

But if I have to pick two, here is my choice:

[1] Ishii, H. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 – 20, 2008). TEI ’08. ACM, New York, NY, xv-xxv. DOI= http://doi.acm.org/10.1145/1347390.1347392

[2] Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., and Zigelbaum, J. 2008. Reality-based interaction: a framework for post-WIMP interfaces. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 201-210. DOI= http://doi.acm.org/10.1145/1357054.1357089

What happens if Design meets Pervasive Computing?

This morning I met with Claudius Lazzeroni, a colleague from Folkwang Hochschule (they were part of our University till two years ago).
They have different study programs in design and art related subjects. He showed me some projects (http://www.shapingthings.net/ – in German but lots of pictures that give you the idea). Many of the ideas and prototypes related to our work and I hope we get some joint projects going. I think it could be really exciting to have projects with design and computer science students – looking forward to this!
When I was in the UK, we collaborated in the Equator project with designers – mainly Bill Gaver and his group – and the results were really exciting [1]. We built a table that reacted to load changes on its surface and allowed you to virtually fly over the UK. The paper is worth reading – if you are in a hurry, have a look at the movie about it on YouTube: http://www.youtube.com/watch?v=uRKOypmDDBM
There was a further project with a table – a key table – and for this one there is a funnier (and less serious?) video on YouTube: http://www.youtube.com/watch?v=y6e_R5q-Uf4
[1] Gaver, W. W., Bowers, J., Boucher, A., Gellerson, H., Pennington, S., Schmidt, A., Steed, A., Villars, N., and Walker, B. 2004. The drift table: designing for ludic engagement. In CHI ’04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ’04. ACM, New York, NY, 885-900. DOI= http://doi.acm.org/10.1145/985921.985947

Interesting interaction devices

Looking at interesting and novel interaction devices that would be challenging for students to classify (e.g. in the table suggested by Card et al. 1991 [1]), I came across some pretty unusual devices. Probably not really useful for an exam, but perhaps for discussion in class next year…

Ever wanted to rearrange the keys on your keyboard? The ErgoDex DX1 is a set of 25 keys that can be arranged on a surface to create a specific input device. It would be cool if the device could also sense which key is where – that would make re-arranging part of the interaction process. In some sense it is similar to Nic Villar’s Voodoo I/O [2].
Wearable computing is not dead – here is some proof šŸ˜‰ JennyLC Chowdhury presents intimate controllers – basically touch-sensitive underwear (a bra and briefs). Have a look at the web page or the video on YouTube.
What are the keyboards of the future? Is each key a display? Or is the whole keyboard a screen? I think there is too much focus on the visual and too little on the haptic – perhaps it could be interesting to have keys that change shape and whose tactile properties can be programmed…
[1] Card, S. K., Mackinlay, J. D., and Robertson, G. G. 1991. A morphological analysis of the design space of input devices. ACM Trans. Inf. Syst. 9, 2 (Apr. 1991), 99-122. DOI= http://doi.acm.org/10.1145/123078.128726
[2] VILLAR, N., GILLEADE, K. M., RAMDUNYELLIS, D., and GELLERSEN, H. 2007. The VoodooIO gaming kit: a real-time adaptable gaming controller. Comput. Entertain. 5, 3 (Jul. 2007), 7. DOI= http://doi.acm.org/10.1145/1316511.1316518

Ranking Conferences and Journals – A Down-Under perspective

Like many of us, I am skeptical of rankings (as long as I was not involved in making them šŸ˜‰). Nevertheless, they are sometimes interesting and helpful in assessing where to publish – or what better not to read…

This morning we discussed where to publish some interesting work related to web technology (a follow-up of UsaProx), and for that discussion such a list may have been helpful.
A colleague from Munich sent me the link to an Australian conference ranking, and they have also ranked journals. They use A+, A, B, L, and C as tiers.
… and as we always knew you cannot be wrong when publishing in Pervasive, Percom, Ubicomp, and CHI šŸ™‚

Technology Review with a Focus on User Interfaces

The February 2009 edition of Technology Review (German version) focuses on new user interfaces and is titled “Streicheln erwünscht” (translates to stroking/caressing/fondling welcome). It has a set of articles about new ways of interacting multimodally, including tangible user interfaces and tactile communication. The article “Feel me, touch me” by Gordon Bolduan on page 74 shows a photo of Dagmar’s prototype of a tactile steering wheel. The full paper on the study will be published at Pervasive in May 2009 (so you have to be patient to get the details – or come and visit our lab šŸ˜‰)

In the blog entry of Technology Review introducing the current issue, there is a nice anecdote about a literature search on haptic/tactile remote communication (while I was still in Munich) – the final version of the seminar paper (now not X-rated anymore) is “Neue Formen der entfernten Kommunikation” (“New Forms of Remote Communication”) by Martin Schrittenloher. He continued on the topic in his MSc project and worked with Morten Fjeld on sliders that give remote feedback, see [1].

Another topic closely related to new forms of communication is exertion interfaces (we looked at the 2002/2003 work of Florian ‘Floyd’ Mueller in the UIE lecture yesterday – even with the Nintendo Wii around, the work is highly inspiring and impressive, see [2]). The communication example given in Breakout for Two shows the potential of including the whole body in communication tasks. Watching the video is highly recommended šŸ™‚
[1] Jenaro, J., Shahrokni, A., Schrittenloher, M., and Fjeld, M. 2007. One-Dimensional Force Feedback Slider: Digital platform. In Proc. Workshop at the IEEE Virtual Reality 2007 Conference: Mixed Reality User Interfaces: Specification, Authoring, Adaptation (MRUI07), 47-51
[2] Mueller, F., Agamanolis, S., and Picard, R. 2003. Exertion interfaces: sports over a distance for social bonding and fun. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Ft. Lauderdale, Florida, USA, April 05 – 10, 2003). CHI ’03. ACM, New York, NY, 561-568. DOI= http://doi.acm.org/10.1145/642611.642709

Why can I not rotate my windows on my Vista Desktop?

In the User Interface Engineering lecture we discussed input devices today, especially for interacting with 3D environments. In 3D environments, having 6 degrees of freedom (3 directions of translation and 3 axes of rotation) appears very natural. Looking back at 2D user interfaces with this in mind, one has to ask why we have been happy (now for more than 25 years) with translation (in 2D) only – and more specifically, why it is not possible to rotate my application windows in Vista (or perhaps it is and I just don’t know it). At first this question seems like a joke, but if you think more about it there could be interesting implications (perhaps with a little more thinking than this sketch šŸ˜‰)
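For the curious: geometrically, the missing third degree of freedom is just a 2D rotation of the window's corner points around a pivot. Here is a minimal sketch of my own (not from any window manager's actual API) of what a "rotate window" operation would compute:

```python
import math

def rotate_window(corners, angle_deg, pivot):
    """Rotate a window's corner points around a pivot by angle_deg degrees.

    A window on a 2D desktop has three rigid-body degrees of freedom:
    x/y translation plus one rotation. Classic window managers expose
    only the first two; this adds the third.
    """
    theta = math.radians(angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    px, py = pivot
    rotated = []
    for x, y in corners:
        dx, dy = x - px, y - py
        # standard 2D rotation matrix applied relative to the pivot
        rotated.append((px + dx * cos_t - dy * sin_t,
                        py + dx * sin_t + dy * cos_t))
    return rotated

# A 200x100 window at the origin, rotated 90 degrees about its lower-left corner.
corners = [(0, 0), (200, 0), (200, 100), (0, 100)]
print([(round(x), round(y)) for x, y in rotate_window(corners, 90, (0, 0))])
# → [(0, 0), (0, 200), (-100, 200), (-100, 0)]
```

A compositing window manager would apply exactly this transform (plus hit-testing in the rotated frame) – the geometry is trivial, which makes it all the more curious that the feature is missing.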

Obviously people have implemented desktops with more than 2D, and here is the link to the video on Project Looking Glass discussed in the lecture (if you are bored by the Sun sales story, just skip to 2:20): http://de.youtube.com/watch?v=JXv8VlpoK_g
It seems you can have it on Ubuntu, too: http://de.youtube.com/watch?v=EjQ4Nza34ak

Rating your professor, teacher, doctor, or fellow students?

This morning, coming back from Munich* on the train, I got a phone call from a journalist from Radio Essen (http://www.102.2radioessen.de/). As their studio is very close to the railway station in Essen, I went there spontaneously before going back to the university.

We talked a little about web services for students to rate their profs (e.g. meinProf.de). The number of ratings most professors have received so far is extremely small (in comparison to the number of students we teach), and hence you get interesting effects that are far from representative or, in many cases, even meaningful. Last term I registered my course, and we proactively sent a mail to all students who completed the course asking them to rate the lectures. This seems to be a good way to generate a positive selection šŸ™‚
There are many of these services out there – rating teachers, doctors, shops, etc. Thinking a little more about the whole concept of rating others, one could imagine many interesting services – all of them creating a clear benefit (for someone) and massively reduced privacy for others.
To make it more specific, I offer you one idea: rate your fellow students’ professional capabilities and academic performance. Students typically have very good insight into the real qualities of their peers (e.g. technical skills, social compatibility, creativity, mental resilience, ability to cope with workload, diligence, honesty, etc.). Having this information combined with the official degree (and the transcript the university offers), a potential employer would get a really interesting picture… We discussed this with students last term, and the reactions were quite diverse – as one can imagine.
Obviously such a service would create a lot of criticism (which lowers the cost of marketing), and one would have to think carefully about in which countries it would be legal to run it. An interesting question is also what verification one would employ to ensure that the ratings are real – or perhaps we would not need to care? Interested in the topic? Perhaps we should get 5 people together, implement it in a week, and get rich šŸ˜‰
The direction such rating systems are taking is very clear – and it seems they will come to many areas of our lives. Perhaps there is some real research in it… how will these technologies change the way we live together?

* Travelling from Munich (leaving at 22:30) and arriving in Essen (or Darmstadt) in the morning works fairly well if you stay in a hotel in Stuttgart šŸ˜‰ – it is surprisingly a real alternative to a night train or an early morning flight…

No 3 ;-) Paul defends his PhD – Congratulations!

Paul Holleis defended his PhD thesis on “Integrating Usability Models into Pervasive Application Development” in Munich today – my No. 3. He worked together with Matthias on the DFG project “Embedded Interaction”. Paul is now with Docomo Eurolabs in Munich.

The set of publications Paul produced is impressive – you probably don’t have time to read all of them šŸ˜‰ but at least take a look at the following: an extension of KLM for mobile phones [1], an integrated development environment that includes usability models [2], and an explorative study in wearable computing [3].

In Germany we have a tradition of making a hat for the candidate. Paul’s hat has items on it that insiders can interpret, including a world map with bikes, miniature TEI’07 proceedings, Birkenstock shoes, a key with a label “Amsterdam”, a display, a phone, a yoyo, a control unit for vibration motors, flags of four towns, and some context-aware plug-and-play hardware (and obviously batteries).
[1] Holleis, P., Otto, F., Hussmann, H., and Schmidt, A. 2007. Keystroke-level model for advanced mobile phone interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (San Jose, California, USA, April 28 – May 03, 2007). CHI ’07. ACM, New York, NY, 1505-1514. DOI= http://doi.acm.org/10.1145/1240624.1240851
[2] P. Holleis, A. Schmidt: MakeIt: Integrate User Interaction Times in the Design Process of Mobile Applications. 6th International Conference on Pervasive Computing (Pervasive’08), Sydney, Australia, May 2008
[3] Holleis, P., Schmidt, A., Paasovaara, S., Puikkonen, A., and Häkkilä, J. 2008. Evaluating capacitive touch input on clothes. In Proceedings of the 10th International Conference on Human Computer Interaction with Mobile Devices and Services (Amsterdam, The Netherlands, September 02 – 05, 2008). MobileHCI ’08. ACM, New York, NY, 81-90. DOI= http://doi.acm.org/10.1145/1409240.1409250

Mechanical Computing, Beauty of Calculating Machines

Instead of covering the history of calculating machines in the DSD lecture, we took the train to the Arithmeum in Bonn to see the artefacts live and to play with some of them.
We started with early means of counting and record keeping. The tokens and early writings did not use numbers as abstract concepts, but rather as representatives of concrete objects – this is very inspiring, especially from a tangible interaction point of view. The knotted cords, as used in South America, show impressively how tools for calculation have to fit the context people live in. Interestingly, all these artefacts highlight how the ability to calculate and store information is related to the ability to trade – quite a good motivation for the setup we have in Essen: business studies and computer science within one faculty.
I was again impressed by the ingenuity of the early inventors of calculating tools and machines. There is an interesting distinction between calculating tools and machines – the former require the user to take care of the carry, the latter do it themselves. We tried out replicas of Napier’s calculating tool and Schickard’s calculating machine.
The beauty and mechanical precision of those early machines is impressive. These prototypes (most of them took years and massive funds to complete) can teach us something for research today. Their inventors had visions and the will to get them implemented, even without a clear application or business model in mind. They were excited by creating systems that could do things machines could not do before. From the professions of the inventors (e.g. Philipp Matthäus Hahn was a clergyman) it becomes apparent that at the time some considered religion and calculation as closely related – which to a modern understanding is very alien.

Seeing the Hollerith machine that was used for the US census more than 100 years ago can teach you a lot about data processing. Punch cards, electrical reading, and electrical counters (using mainly relays) were the basis for this technology. Looking at the labels on the counters showed that the US has a long tradition of collecting data that, after some time, is no longer seen as politically correct šŸ˜‰
Having learned binary arithmetic during the DSD course, it was nice to see a machine that did binary addition using small steel balls and gravity. Each place (1, 2, 4, 8, …) has space for one ball. If a second ball arrives at a place, one ball moves up to the next place (the carry) and one is discarded. This is implemented with very simple mechanics, and the working prototype (recently built) is based on designs by Schickard (which, if I am correct, he never built himself).
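The ball mechanism is essentially a binary increment: each dropped ball either settles in an empty place or triggers a cascade of carries. A small Python sketch of my own (not from the exhibit) of that carry logic:

```python
def drop_ball(places):
    """Drop one ball into the ones place of a mechanical binary adder.

    `places` is a list of 0/1 flags; index 0 holds the ones, index 1 the
    twos, index 2 the fours, and so on. If a place already holds a ball,
    that ball is discarded and one ball moves up to the next place --
    exactly the mechanical carry described above.
    """
    i = 0
    while i < len(places) and places[i] == 1:
        places[i] = 0          # one ball is discarded...
        i += 1                 # ...and one carries up to the next place
    if i == len(places):
        places.append(1)       # carry out of the top place grows the register
    else:
        places[i] = 1          # the ball settles in the first empty place
    return places

# Dropping five balls one by one counts to five in binary (ones place first).
register = []
for _ in range(5):
    drop_ball(register)
print(register)   # [1, 0, 1]  -> 1*1 + 0*2 + 1*4 = 5
```

The charm of the original is that this whole loop is realized by nothing but gravity and a lever per place – the carry cascade happens because the displaced ball physically rolls up (or rather, is flipped up) into the next position.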
Moving on with binary systems and finally to silicon, we got to see the Busicom 141 – a desk calculator that uses the Intel 4004. It is impressive that this was not even 40 years ago – starting with 2300 transistors and a 180 kHz clock.
You can find the full set of photos at: http://foto.ubisys.org/dsd0809/

CfP Workshop on Pervasive Advertising

We are organizing a workshop on Pervasive Advertising at this year’s Pervasive computing conference in Nara, Japan – http://pervasiveadvertising.org.
We expect that there is a lot of interesting research going on in this area, and it is clearly a controversial topic. Being an optimist, I see the new options that arise. In particular, a future with less annoying advertisements is one hope šŸ™‚
But many people are focusing on the risks – an interesting posting with some criticism of our workshop objective can be found at the Near Future Laboratory. I do not share their views šŸ™‚
To me, the idea that if you do not research it, it does not happen does not seem a very viable option. I still think that with research we can shape the future!
I am already looking forward to the submissions and to the workshop. Do you have a contribution? The deadline is Feb 11, 2009.

Hans is visiting, generating new ideas for projects

Hans was in Stuttgart for a meeting, and he stayed another day to discuss project ideas with me – won’t tell them here ;-).
Nevertheless, there are always small new things to discover, too. Hans showed Vivien the Ocarina application for the iPhone – it is quite amazing how little it takes to create an interesting application that is very different from a traditional musical instrument. Especially the interweaving of playing yourself with the worldwide community is extremely well done.