Invited Talk by Nicolas Villar

Nicolas visited us in Essen to give the opening talk of our German meeting on tangible interaction. In his talk he first showed some examples from the hardware and sensors group at Microsoft Research in Cambridge, most notably the SenseCam (which, we learned, is licensed and will soon be commercially available).

In the main part of the talk Nicolas presented a modular embedded architecture that allows developers to create custom-made digital systems with fairly little effort. By integrating physical development (3D printing), functional blocks and software development, the approach aims at empowering developers to create entirely new devices. His examples were impressive, e.g. creating a fully functional game console in a few hours.

Assuming that electronics become really small and cheap and that displays can be directly printed, I can see that this approach makes a lot of sense – the question is just how long it will take before we would rather use a (nowadays powerful) ARM processor instead of a logic circuit with 10 gates. I would imagine that from an economic perspective it will take less than 20 years before this makes sense.

We talked about energy harvesting and here is a link to a potentially interesting component: the LTC3108.

Enrico Rukzio visits us in Essen, projections everywhere

On Wednesday and Thursday Enrico visited our group in Essen. He gave part of my lecture on user interface engineering, talking about mobile interaction with the real world. He included interesting examples, such as QR-code/NFC/RFID use in Asia, the SixthSense project (a camera-projector system worn around the neck) and handheld mobile projections. Enrico also explained some of the multi-tag work he does at Lancaster University [1].

In the lecture we talked briefly about future devices and interfaces. I mentioned one example: projection in the large – at building scale. The 3D visualizations overlaid on buildings seem impressive – at least when looking at the videos. NuFormer has created several interesting projections – but I have never seen one in the real world, so far…

Looking at the SixthSense project and at the building projections, we wondered how important it may become in the future to make research results in ubicomp/HCI understandable and accessible to a wide audience. Will this replace papers in the future?

[1] Seewoonauth, K., Rukzio, E., Hardy, R., and Holleis, P. 2009. Touch & connect and touch & select: interacting with a computer by touching it with a mobile phone. In Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services (Bonn, Germany, September 15 – 18, 2009). MobileHCI ’09. ACM, New York, NY, 1-9.

Reto Wettach visits our lab… and we are looking for someone with expertise in pain

Reto Wettach was in Essen, so we took the opportunity to get together to flesh out some ideas for a proposal – it is related to pain – in a positive sense. There is interesting and scary previous work, see [1] & [2]. For the proposal we are still looking for someone not from the UK and not from Germany – someone who has expertise and interest in medical devices (sensors and actuators), and someone who has experience with pain and the perception of pain (e.g. from the medical domain). Please let me know if you know someone who may fit the profile…

Before really getting to this we had a good discussion on the usefulness of the concept of tangible interaction – obviously we see the advantages clearly, but nevertheless it seems in many ways hard to prove. The argument for tangible UIs as manipulators and controls is very clear and can be shown, but looking at tangible objects as carriers for data it becomes more difficult. Looking at physical money, the tangible features are clear and one can argue for the benefit of tangible qualities (e.g. I like Reto’s statement “the current crisis would not have happened if people had had to move money physically”) – but the limitations are there, too, and a modern world with only tangible money would be unimaginable.

Taking the example of money (coins and bills) two requirements for tangible objects that embody information become clear:

  • The semantics of the information carried by the object have to be universally accepted
  • Means for processing (e.g. reading) the tangible objects have to be ubiquitously available

There is an interesting early paper that looks into transporting information in physical form [3]. The idea is simple: data can be assigned to/associated with any object and can be retrieved from this object. The implementation is interesting, too – the Passage mechanism uses the weight of an object as its ID.
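The core of the idea fits in a few lines of code: a measured physical property (here, weight) acts as the key that links an arbitrary object to digital data. The sketch below is my own illustration, not the original Passage implementation – the weights, the tolerance value and the registry structure are all made up.

```python
# Sketch of the Passage idea: the measured weight of an arbitrary
# object serves as the key that links it to digital data.
# All weights and the tolerance are illustrative values.

TOLERANCE_G = 2.0  # scales are noisy; match within +/- 2 grams

def assign(registry, weight_g, payload):
    """Associate a payload with an object, identified by its weight."""
    registry.append((weight_g, payload))

def retrieve(registry, measured_g):
    """Return the payload whose stored weight is closest to the
    measured weight, if it lies within the tolerance."""
    best = min(registry, key=lambda e: abs(e[0] - measured_g), default=None)
    if best is not None and abs(best[0] - measured_g) <= TOLERANCE_G:
        return best[1]
    return None  # unknown object

registry = []
assign(registry, 142.0, "slides-for-meeting.pdf")  # e.g. a key fob
assign(registry, 87.5, "project-notes.txt")        # e.g. a toy figure
print(retrieve(registry, 141.2))  # -> slides-for-meeting.pdf
```

The sketch also makes the obvious limitation visible: two objects of nearly identical weight collide, so weight works as an ID only in small, controlled collections.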

[2] Dermot McGrath. No Pain, No Game. Wired Magazine, 07/2002.
[3] Shin’ichi Konomi, Christian Müller-Tomfelde, Norbert A. Streitz: Passage: Physical Transportation of Digital Information in Cooperative Buildings. Cooperative Buildings. Integrating Information, Organizations and Architecture. CoBuild 1999. Springer LNCS 1670. pp. 45-54.

Visitor from Munich: Gilbert Beyer

Gilbert Beyer from Munich came to visit our lab. In Munich he is working on interesting projects that combine aspects of software engineering and human-computer interaction in the group of Prof. Martin Wirsing. Gilbert participated in the Pervasive Computing in Advertising workshop in Nara, where we met.

We discussed how to study and empirically evaluate larger, off-desktop interactive systems. Even though those systems differ significantly from desktop systems, the book How to Design and Report Experiments by Andy Field and Graham J. Hole is still a good starting point.

Charting new territories is exciting, and it seems that this is currently happening in various areas. Historically it is interesting to look at Card’s paper [1] for a useful design space for input devices – a must read ;-). Tico Ballagas looked into a design space for mobile interaction in his PhD – also very interesting – if you do not have the time to read the thesis, have a look at the book chapter [2]. Over the last year Dagmar worked on a design space for the automotive domain, which has been accepted at the Automotive User Interfaces conference and will be published in September.

[1] Card, S. K., Mackinlay, J. D., and Robertson, G. G. 1991. A morphological analysis of the design space of input devices. ACM Trans. Inf. Syst. 9, 2 (Apr. 1991), 99-122.

[2] Rafael Ballagas, Michael Rohs, Jennifer Sheridan, and Jan Borchers. The Design Space of Ubiquitous Mobile Input. In Joanna Lumsden, editor, Handbook of Research on User Interface Design and Evaluation for Mobile Technologies. IGI Global, Hershey, PA, USA, 2008.

Morten Fjeld visiting

On his way from Eindhoven to Zurich Morten Fjeld was visiting our group. It was great to catch up and talk about a number of exciting research projects and ideas. Some years ago one of my students from Munich did his final project with Morten working on haptic communication ideas, see [1]. Last year at TEI Morten had a paper on a related project – also using actuated sliders, see [2].

In his presentation Morten gave an overview of the research he does, and we found a joint interest in capacitive sensing. Raphael Wimmer did his final project in Munich on capacitive sensing for embedded interaction, which was published at PerCom 2007, see [3]. Raphael has continued the work; the open source hardware and software give more details. Morten has a cool paper (combining a keyboard and capacitive sensing) at Interact 2009 – so check the program when it is out.

We talked about interaction and optical tracking, and that reminded me that we wanted to see how useful the Touchless SDK could be for final projects and exercises. Matthias Kranz had used it successfully with students in Linz in the unconventional user interfaces class.

[1] Jenaro, J., Shahrokni, A., Schrittenloher, M., and Fjeld, M. 2007. One-Dimensional Force Feedback Slider: Digital platform. In Proc. Workshop at the IEEE Virtual Reality 2007 Conference: Mixed Reality User Interfaces: Specification, Authoring, Adaptation (MRUI07), 47-51.

[2] Gabriel, R., Sandsjö, J., Shahrokni, A., and Fjeld, M. 2008. BounceSlider: actuated sliders for music performance and composition. In Proceedings of the 2nd International Conference on Tangible and Embedded Interaction (Bonn, Germany, February 18 – 20, 2008). TEI ’08. ACM, New York, NY, 127-130.

[3] Wimmer, R., Kranz, M., Boring, S., and Schmidt, A. 2007. A Capacitive Sensing Toolkit for Pervasive Activity Detection and Recognition. In Proceedings of the Fifth IEEE International Conference on Pervasive Computing and Communications (March 19 – 23, 2007). PERCOM. IEEE Computer Society, Washington, DC, 171-180.

Andreas Riener visits our lab

Andreas Riener from the University of Linz came to visit us for three days. In his research he works on multimodal and implicit interaction in the car. We talked about several ideas for new multimodal user interfaces. Andreas had a pressure mat with him, and we could try out what sensor readings we get in different setups. It seems that in particular providing redundancy in the controls could create interesting opportunities – hopefully we find the means to explore this further.

Enrico Rukzio visits our Lab

Enrico Rukzio (my first PhD from Munich, now lecturer in Lancaster) visited our lab. He was making a small tour of Germany (Münster, Essen, Oldenburg). In the user interface engineering class Enrico showed some of his current work on mobile interaction, in particular mobile projectors and NFC tags. After the presentation we wondered how long it will take till kids on the train play with mobile projections 😉

We showed Enrico a demo of eye-tracking for active customization of browser adverts. In our setup we use the Tobii X120. For tracking of people in the room we still have not decided on a system – and Enrico told me about the Optitrack system they have. That looked quite interesting… 
As we all do studies in our work, the design of studies is critical, and there is an interesting book to help with this: How to Design and Report Experiments by Andy Field and Graham J. Hole.
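The eye-tracking demo mentioned above boils down to a simple dwell-time test: if the gaze stays inside an advert’s bounding box long enough, the advert counts as “attended” and can be adapted. A minimal sketch of that logic – the sample format, coordinates and threshold are assumptions of mine, not the Tobii API:

```python
# Gaze samples as (timestamp_s, x, y) screen coordinates.
# An advert is "attended" if gaze stays inside its rectangle
# for at least DWELL_S seconds without leaving.

DWELL_S = 0.5

def inside(rect, x, y):
    left, top, right, bottom = rect
    return left <= x <= right and top <= y <= bottom

def dwell_on(rect, samples):
    """Return True if any uninterrupted run of samples inside
    rect spans at least DWELL_S seconds."""
    run_start = None
    for t, x, y in samples:
        if inside(rect, x, y):
            if run_start is None:
                run_start = t
            if t - run_start >= DWELL_S:
                return True
        else:
            run_start = None  # gaze left the advert; reset the run
    return False

ad = (100, 100, 300, 200)  # left, top, right, bottom in pixels
samples = [(i * 0.05, 150, 150) for i in range(20)]  # a 1 s fixation
print(dwell_on(ad, samples))  # -> True
```

Real trackers deliver noisy samples and blinks, so a production version would tolerate short gaps instead of resetting on every outlier.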

Christian Kray visits our Lab

Christian Kray and I were colleagues in Lancaster for a very short time – he just joined the university when I left for Munich. After his post-doc in Lancaster he moved to a position in Newcastle.

His work at the crossroads of mobile interaction and public displays is very exciting. In particular he investigates interesting concepts related to visual codes – some aspects of these ideas are discussed in “Swiss Army Knife meets Camera Phone” [1]. His new prototypes are really cool and I look forward to seeing/reading more about them.

We realized that there are many areas where we have common interests. Perhaps there is a chance in the future to work together on some of the ideas discussed!

[1] Swiss Army Knife meets Camera Phone: Tool Selection and Interaction using Visual Markers. C. Kray and M. Rohs. (2007) In “Workshop on Mobile Interaction with the Real World at Mobile HCI 2007”. Singapore, September 9, 2007.

Nicolas Villar visiting

Nicolas, who was the first BSc student I worked with in Lancaster, is now, after finishing his PhD, with Microsoft Research in Cambridge, UK. He came to Essen on Friday to see the lab, and he brought us a Voodoo I/O box [1] – we are really excited!

He stayed for the weekend and I learned a lot about interesting technologies and ideas. Looking at his iRex e-book reader and Vivien’s new USB microscope (30€ from Aldi 😉) we had to do some research into the screen quality of different devices. It is interesting to see that e-Ink moves closer to newspaper quality and that, in comparison, an iPhone screen is pretty coarse.

Some references to remember:

[1] Spiessl, W., Villar, N., Gellersen, H., and Schmidt, A. 2007. VoodooFlash: authoring across physical and digital form. In Proceedings of the 1st International Conference on Tangible and Embedded Interaction (Baton Rouge, Louisiana, February 15 – 17, 2007). TEI ’07. ACM, New York, NY, 97-100.

Wolfgang Spießl introduces context-aware car systems

Wolfgang visited us for three days and we talked a lot about context-awareness in the automotive domain. Given the sensors included in cars and some recent ideas on context fusion, it seems feasible that in the near future context-aware assistance and information systems will gain new functionality. Since I finished my PhD dissertation [1] there has been a move in two directions: context prediction and communities as a source for context. One example of a community-based approach evolved out of ContextWatcher / IST-MobiLife.
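To make “context fusion” concrete: several sensor readings that are individually ambiguous can be combined with simple rules into a higher-level context. A toy sketch – the sensor names, thresholds and context labels are all invented for illustration:

```python
def fuse_context(speed_kmh, wipers_on, outside_lux, fuel_percent):
    """Derive coarse driving contexts from individual car sensors.
    Rules and thresholds are illustrative only."""
    contexts = set()
    if wipers_on and outside_lux < 5000:
        contexts.add("bad-weather")      # rain AND overcast sky
    if speed_kmh > 100:
        contexts.add("highway")
    elif 0 < speed_kmh <= 30:
        contexts.add("stop-and-go")
    if fuel_percent < 10:
        contexts.add("needs-refuelling")
    return contexts

print(fuse_context(speed_kmh=120, wipers_on=True,
                   outside_lux=800, fuel_percent=8))
# e.g. {'bad-weather', 'highway', 'needs-refuelling'}
```

An assistance system could then act on the fused context (e.g. suggest a fuel stop before the weather gets worse) rather than on any single raw reading.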

In his lecture he showed many examples of how pervasive computing already happens in the car. After the talk we had the chance to see and discuss user interface elements in current cars – in particular the head-up display. Wolfgang gave a demonstration of the CAN bus signals related to interaction with the car that are available to create context-aware applications. The head-up display (whose image appears to float just in front of the car) created discussions on interesting use cases for this type of display – beyond navigation and essential driving information.
In the lecture, questions came up about how feasible/easy it is to do your own development using the UI elements in the car – basically, how can I run my own applications in the car? This is not yet really supported 😉 However, in a previous post [2] I argued that this is probably to come… and I still see this trend… It may be an interesting thought how one can provide third parties with access to UI components in the car without giving away control…
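Working with interaction-related CAN signals, as in Wolfgang’s demonstration, comes down to picking frames by identifier and masking bits out of the payload. The sketch below is purely illustrative – real frame IDs and byte layouts are manufacturer-specific and confidential, so the layout here is entirely made up:

```python
# Hypothetical CAN frame layout (NOT a real manufacturer spec):
#   frame ID 0x2F0, byte 0:
#     bit 0    = left indicator
#     bit 1    = right indicator
#     bits 2-3 = wiper speed (0 = off .. 3 = fast)

INTERACTION_FRAME_ID = 0x2F0

def decode_interaction(frame_id, data):
    """Decode driver-interaction signals from one CAN frame."""
    if frame_id != INTERACTION_FRAME_ID:
        return None  # not a frame we understand
    b0 = data[0]
    return {
        "left_indicator": bool(b0 & 0x01),
        "right_indicator": bool(b0 & 0x02),
        "wiper_speed": (b0 >> 2) & 0x03,
    }

state = decode_interaction(0x2F0, bytes([0b00001001]))
print(state)  # left indicator on, wipers at speed 2
```

A third-party API could expose exactly such decoded, read-only events instead of raw bus access – one possible answer to the control question above.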