Silvia Miksch talking about time oriented visual analytics

It seems we picked a good slot for the lecture this term. On Thursday we had Prof. Silvia Miksch from the Vienna University of Technology visiting our institute. We took this chance for another guest lecture in my advanced HCI class. Silvia presented a talk with the title “A Matter of Time: Interactive Visual Analytics of Time-Oriented Data and Information”. She first introduced the notion of interactive visual analytics and then systematically showed how time-oriented data can be visually presented.

I really liked how Silvia motivated visual analytics and could not resist adapting it with a Christmas theme. The picture shows three representations: (1) numbers, always grouped in threes, (2) a plot of the numbers where the first is the label and the second and third are coordinates, and (3) a line connecting the labels in order. Her example was much nicer, but I missed taking a photo. And it is obvious that you do not put all three on the same slide… Nevertheless I think even this simple Christmas tree example shows the power of visual analytics. This will go into my slide set for presentations in schools 😉
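
The three representations can be sketched in a few lines of code. The triples below are my own made-up stand-ins for the numbers from the talk, just to show the idea of going from raw numbers to labeled points to a connected line:

```python
# Sketch of the three representations: (1) triples of numbers,
# (2) points where the first value is a label and the next two are
# coordinates, (3) a polyline connecting the points in label order.
# The data is invented for illustration, not from the talk.

triples = [
    (3, 0, 0), (1, 2, 4), (4, -1, 0),
    (2, 1, 2), (5, 0, -1),
]

def to_points(triples):
    """Representation 2: label -> (x, y)."""
    return {label: (x, y) for label, x, y in triples}

def to_polyline(triples):
    """Representation 3: points ordered by label, ready to be drawn."""
    points = to_points(triples)
    return [points[label] for label in sorted(points)]

print(to_polyline(triples))
```

The point of the example is exactly the one Silvia made: the first representation is opaque, the second shows structure, and only the third makes the shape obvious at a glance.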

If you are interested in more details on the visualization of time-oriented data, please have a look at the following book: Visualization of Time-Oriented Data, by Wolfgang Aigner, Silvia Miksch, Heidrun Schumann, and Christian Tominski. Springer, 2011. http://www.timeviz.net [2]. After the talk there was an interesting discussion about the relationship and fundamental difference between time and space. I think this is worth further discussion.

Another direction to follow up is tangible (visual) analytics. It would be interesting to assess how much further modalities, e.g. haptics and sound, contribute to understanding when interactively exploring data. Some years back Martin Schrittenloher (one of my students in Munich) visited Morten Fjeld for his project thesis and experimented with force feedback sliders [1] … perhaps we should have this as a project topic again! One approach would be to look specifically at the understanding of data when force feedback is presented on certain dimensions.

References
[1] Jenaro, J., Shahrokni, A., Schrittenloher, M., and Fjeld, M. 2007. One-Dimensional Force Feedback Slider: Digital Platform. In Proc. Workshop at the IEEE Virtual Reality 2007 Conference: Mixed Reality User Interfaces: Specification, Authoring, Adaptation (MRUI07), 47-51.
[2] Wolfgang Aigner, Silvia Miksch, Heidrun Schumann, and Christian Tominski. Visualization of Time-Oriented Data. Springer, 2011. http://www.timeviz.net

Bryan Reimer: Opening keynote at Auto-UI 2011 in Salzburg

Bryan started his keynote talk at the automotive user interface conference (auto-ui.org) in Salzburg by reminding us that controversial discussions about the HMI in the car are nothing new. Quoting a newspaper article from the 1930s on the introduction of the radio in the car and its impact on the driver, he picked an interesting example that can be seen as the root of many issues we now have with infotainment systems in the car.

The central question he raised is: how to create user interfaces that fit human users? He made an important point: humans are not “designed” to drive at high speed in complex environments; perception has evolved for walking and running in natural environments. In addition to the basic limitations of human cognition, there is a great variety in drivers' capabilities, skills, and cognitive abilities (e.g. the influence of age). One implication of the global change in demographics is that the average capabilities of drivers will be reduced – basically because many more older people will be drivers…

Over the last 100 years cars have changed significantly! Looking more closely, Bryan argues that much of the change happened in the last 10 years; there was little change from the 1950s to the 1990s with regard to the car user interface.

It is apparent that secondary tasks are becoming more important to the user. Users will interact more while driving because they can. It is, however, not obvious that they are capable of it.

Even given these developments it is apparent that driving has become safer. Passive safety has been improved massively, and this has made driving much safer. There seems to be a drawback to this as well, as people may take greater risks because they feel safer. The next step is really to avoid accidents in the first place. Bryan argues that the interaction between driver, environment, and vehicle is very important in that. He suggests that we should make more of an effort to create systems that fit the drivers.

The Yerkes-Dodson Law helps to understand how to design systems that keep people's attention in the optimal performance range. He made an important point: there are certain issues that cannot be solved, e.g. if someone is tired there is very little we can do – the driver will need to rest. We should make sure that we take these things into account when designing systems.

Visual distraction is an obvious factor and much discussed in the papers at the conference – but Bryan argued that “eyes on the road” is not equal to “mind on the road”. I think this is really a very important point. Ensuring that people keep their eyes on the road and see things is not enough. The big resulting question is how to keep or get people focused on the street and environment. It seems there is some more research to do…

The variety of interfaces and interaction metaphors built into cars opens up more choices but at the same time creates problems, as people need to learn and understand them. A simple question such as “How do you switch the car off?” may be hard to answer (Bryan had the example of a car with a push-button starter, where you cannot remove the key). I think there are simple lessons that can be learned from industry and production machines… add an emergency stop button and make it mandatory 😉

If you are interested in more of Bryan's work, have a look at his webpage, his page at the MIT AgeLab, or one of his recent publications [1] in the IEEE Pervasive Computing magazine's special issue on automotive computing; see [2] for an introduction to the special issue.

Sorry for the poor quality photos … back row and an iPhone…

[1] Joseph F. Coughlin, Bryan Reimer, and Bruce Mehler. 2011. Monitoring, Managing, and Motivating Driver Safety and Well-Being. IEEE Pervasive Computing 10, 3 (July 2011), 14-21. DOI=10.1109/MPRV.2011.54 http://dx.doi.org/10.1109/MPRV.2011.54

[2] Albrecht Schmidt, Joseph Paradiso, and Brian Noble. 2011. Automotive Pervasive Computing. IEEE Pervasive Computing 10, 3 (July 2011), 12-13. DOI=10.1109/MPRV.2011.45 http://dx.doi.org/10.1109/MPRV.2011.45

CHI 2010 – Opening and Keynote

2343 attendees came to CHI 2010 in Atlanta this year. Participants are from 43 countries, and the colored map suggested that a good number came from Germany. Outside it really feels like spring 🙂

Overall CHI 2010 received 2220 submissions across 13 categories, of which 699 were accepted. In the papers and notes categories there were 1345 submissions, of which 302 were accepted (a 22% acceptance rate).

Genevieve Bell from Intel is a cultural anthropologist, and she presented the CHI opening keynote with the title “Messy Futures: culture, technology and research”. She is a great storyteller and vividly showed the value of ethnographic and anthropological research. One very graphic example was a picture of real consumers – typically not living in a perfect environment, but rather amid clutter and mess …

A further issue she briefly addressed was demographic shifts and urbanization (soon three quarters of all people will live in cities). This led on to an argument for designing for real people and for their real needs (in contrast to the idea of designing for women by “shrinking it and pinking it”).

Genevieve Bell discussed critical domains that drive technology: politics, religion, sex, and sports. She argued that CHI and Ubicomp have not really looked at these topics – or at least have not published on them in CHI 😉 Her examples were quite entertaining and made the keynote fun to listen to – but created little controversy.

NSF/EU workshop in Mannheim

Mohan Kumar and Marco Conti organized an EU/NSF workshop on Future Directions in Pervasive Computing and Social Networking for Emerging Applications. They managed to get an interesting set of people together; the discussions in the breakout sessions were very enjoyable, and I got a number of ideas about what the challenges to come really are.

The position statements are on the web page, and at some point the identified grand challenges will be available there too.

PS: blackboards are still highly effective 😉

Visit to TU Dortmund: Impressive Demos on Vision and Audio

After several tries we finally managed to travel to Dortmund (half an hour on the S-train) to visit Gernot A. Fink's group at the Technical University of Dortmund. Bastian Pfleging did his master's thesis with this group before he joined us. The research focus of the group is on signal processing and computer vision. They also follow an experimental approach – building systems that work (which we saw in the demos). In their lab space they have set up a building (basically a house inside a house – impressive!).

I learned about a new location technology based on passive infrared sensors. The idea is to pick up the heat emitted by people and combine the output from several sensors to localize a person. The technology is very simple, potentially cheap, and privacy preserving. Some time back we thought of a project topic using thermal imaging (not really cheap or privacy preserving) for context-awareness – but so far no student wanted to do it. Perhaps we should try again to find one.
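
Combining the outputs of several simple sensors can be sketched as a weighted centroid: assuming each sensor reports a scalar intensity that grows as a warm body gets closer, averaging the sensor positions weighted by intensity gives a rough estimate. The positions and readings below are made up; the group's actual method may well be more sophisticated:

```python
# Weighted-centroid localization sketch: each passive infrared sensor
# at a known position reports a scalar intensity. Averaging the sensor
# positions, weighted by intensity, yields a rough position estimate.
# Sensor layout and readings are illustrative, not from the demo.

def locate(sensors):
    """sensors: list of ((x, y), intensity) tuples -> estimated (x, y)."""
    total = sum(w for _, w in sensors)
    x = sum(px * w for (px, _), w in sensors) / total
    y = sum(py * w for (_, py), w in sensors) / total
    return (x, y)

# A person standing close to the sensor at (4, 0):
readings = [((0, 0), 0.1), ((4, 0), 0.8), ((2, 3), 0.1)]
x, y = locate(readings)
print(round(x, 2), round(y, 2))
```

Even this naive combination pulls the estimate towards the sensor with the strongest reading, which is exactly why a handful of cheap sensors is enough for coarse room-level localization.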

The other demos were situated in a meeting room that is equipped with several cameras and microphones. It was interesting to see how robustly several of the vision prototypes managed to track people in the room and to detect pointing actions. One basic mechanism they use to detect interesting regions in an image is saliency based on different features – and it works well.

The audio demo used two arrays of 8 microphones each; the arrays are nicely integrated into a ceiling panel. Using these signals they can calculate the energy that originates from a certain spatial region in the room. Looking at the complexity of the hardware and software for sound localization, it does not seem far off that this could become ubiquitous. We talked about the work James Scott did on sound localization (snapping your fingers to flip a light switch) – here is the reference [1].
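
Calculating the energy that originates from a spatial region is classically done with delay-and-sum beamforming: delay each microphone signal by its propagation time to a candidate point and sum them, so signals from that point add up coherently. A minimal sketch with a synthetic pulse and a made-up array geometry (a real system would of course work on live audio frames):

```python
import math

# Delay-and-sum beamforming sketch: steer a microphone array to a
# candidate source position by undoing each microphone's propagation
# delay, then measure the energy of the summed signal. Geometry and
# signal are synthetic.

C = 343.0      # speed of sound, m/s
FS = 44100     # sample rate, Hz

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def simulate(mics, source, n=2000):
    """Each mic records a short pulse, delayed by its distance to the source."""
    recordings = []
    for m in mics:
        delay = int(round(dist(m, source) / C * FS))
        center = 400 + delay  # pulse emitted at sample 400
        recordings.append([math.exp(-((i - center) / 10.0) ** 2) for i in range(n)])
    return recordings

def steered_energy(recordings, mics, candidate):
    """Advance each recording by the delay expected from `candidate`, sum, square."""
    n = len(recordings[0])
    out = [0.0] * n
    for sig, m in zip(recordings, mics):
        delay = int(round(dist(m, candidate) / C * FS))
        for i in range(n - delay):
            out[i] += sig[i + delay]
    return sum(v * v for v in out)

mics = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (1.5, 0.0)]
source = (0.7, 2.0)
rec = simulate(mics, source)
# Energy steered at the true source should dominate any other point:
print(steered_energy(rec, mics, source) > steered_energy(rec, mics, (3.0, 0.5)))
```

Scanning a grid of candidate points with `steered_energy` and taking the maximum gives the source position, which is essentially what the ceiling-panel arrays make possible.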

The room is equipped with sensors, lights, switches, and a UI panel that are linked over a commercial bus system (KNX). Some time ago we had a bachelor project in Essen that looked at EnOcean (another home networking technology). We discussed how well these systems are positioned in comparison to web technologies.

I personally think that in the medium term we will move – at least on the control and user interface level – to web protocols. The moment you use web protocols it is so much easier to create user interfaces (e.g. using a web browser as frontend) and simple to integrate with existing systems (e.g. Facebook). It would be interesting to assess how easy it is to use RESTful services to replicate some of the features of home automation systems. Sounds like an interesting project topic. There is a workshop on the Web of Things at PerCom in Mannheim – I am curious what will come up there.
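
The attraction of the RESTful approach is that the whole control layer collapses into resources and verbs. A minimal sketch of the idea, with an in-process dispatcher standing in for a real HTTP server (device names and routes are invented for illustration):

```python
# REST-style home control sketch: device state is exposed as resources
# and manipulated with HTTP-like verbs. An in-process dispatcher stands
# in for a real HTTP server; any web frontend could drive these routes.
# Devices and paths are made up.

devices = {"lights/kitchen": "off", "blinds/living-room": "up"}

def handle(method, path):
    """Dispatch e.g. GET /lights/kitchen or PUT /lights/kitchen/on."""
    parts = path.strip("/").split("/")
    if method == "GET":
        resource = "/".join(parts)
        return (200, devices[resource]) if resource in devices else (404, None)
    if method == "PUT":
        resource, state = "/".join(parts[:-1]), parts[-1]
        if resource in devices:
            devices[resource] = state
            return (200, state)
        return (404, None)
    return (405, None)

print(handle("PUT", "/lights/kitchen/on"))
print(handle("GET", "/lights/kitchen"))
```

With routes like these, the browser is already a universal control panel, which is exactly the integration advantage over closed bus systems like KNX or EnOcean.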

[1] James Scott, Boris Dragovic: Audio Location: Accurate Low-Cost Location Sensing. Pervasive Computing: Third International Conference, PERVASIVE 2005, Munich, Germany, May 8-13, 2005. Springer LNCS 3468/2005. pp 1-18. http://dx.doi.org/10.1007/11428572_1

Visiting TU-Berlin and T-Labs

We have a number of student projects that look at novel applications and novel application platforms on mobile phones. As Michael Rohs from T-Labs is also teaching a course on mobile HCI we thought it would be a good opportunity to meet and discuss some application ideas.

I gave a talk in Michael's lecture discussing the concept of user interfaces beyond the desktop, context as an enabling technology, and future applications in mobile, wearable, and ubiquitous computing. We had an interesting discussion – and in the end it always comes down to privacy and impact on society. I see this as a very positive development, as it shows that the students are not just techies but that they see the bigger picture – and the impact (be it good or bad) they may have with their developments. I mentioned two books that are interesting to read: The Transparent Society [1] and Total Recall [2].

In the afternoon we discussed two specific projects. One was an application for informal social interaction while watching TV (based on a set of iconic communication elements) that can be used to generate metadata on the program shown. The other is a platform that allows web developers to create distributed mobile applications making use of all the sensors on mobile phones. It is essentially a platform and API that provides access to all functions available on S60 phones over a RESTful API; e.g. you can use an HTTP call to take a photo on someone's phone. We hope to release some of the software soon.

In the coffee area at T-Labs there was a printout with the 10+1 innovation principles – I could not resist taking a photo 😉 Seems innovation is really trivial – just follow the 11 rules and you are there 😉

[1] David Brin. The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom. Basic Books. 1999. ISBN-13: 978-0738201443. Amazon-link. Webpage: http://www.davidbrin.com/transparent.htm

[2] Gordon Bell, Jim Gemmell. Total Recall: How the E-Memory Revolution Will Change Everything. Dutton Adult. 2009. ISBN-13: 978-0525951346. Amazon-link. Webpage: http://totalrecallbook.com/

Papers are all similar – Where are the tools to make writing more effective?

Yesterday we discussed (again, during the evening event of MobileHCI 2009) how hard it would be to support the process of writing a high-quality research paper or essay. Many conferences have a well-defined style that you need to follow, specific things to include, and certain ways of presenting information. This obviously depends on the type of contribution, but within one contribution type a lot of help could probably be provided to create the skeleton of the paper… Sounds like another project idea 😉

Workshop at MobileHCI: Context-Aware Mobile Media and Mobile Social Networks

Together with colleagues from Nokia, VTT, and CMU we organized a workshop on Context-Aware Mobile Media and Mobile Social Networks at MobileHCI 2009.

The topic came up in discussions some time last year. It is very clear that social networks have moved towards mobile scenarios and that utilizing context and contextual media adds a new dimension. The workshop program is very diverse and ranges from studies of usage practices to novel technological solutions for contextual media and applications.

One topic that would be interesting to look at further is using (digital) social networks for health care. Drawing an analogy from history, your immediate social group used to be the set of people that helped you in case of illness or accident. Looking at conditions and illnesses that cause a loss of mobility or memory, it could be interesting to build applications on top of digital social networks to provide help. Seems this could be a project topic.

In one discussion we explored what would happen if we changed our default communication behavior from closed/secret (e.g. email and SMS) to public (e.g. bulletin boards). I took the example of organizing this workshop: our communication was largely over email and not public. Had it been open (e.g. a public forum), we probably would have organized the workshop in the same way, but at the same time we would have provided an example of how one can organize a workshop, and by this perhaps useful information for future workshop chairs. In this case there are few privacy concerns – but imagine all communication being public! We would learn a lot about how the world works…

About 10 years ago we published a paper titled “There is more to context than location” [1]. However, looking at our workshop it seems location is still the dominant context people think of. Many of the presentations and discussions included the term context, but the examples focused on location. Perhaps we only need location? Or perhaps we should look more closely to find the benefit of other contexts?

[1] A. Schmidt, M. Beigl, H.W. Gellersen (1999) There is more to context than location, Computers & Graphics, vol. 23, no. 6, pp. 893-901.

More surface interaction using audio: Scratch input

After my talk at the Minerva School, Roy Weinberg pointed me to a paper by Chris Harrison and Scott Hudson [1] – it also uses audio for creating an interactive surface. The novelty on the technical side is limited, but the approach is nevertheless interesting and appealing because of its simplicity and its potential (e.g. just think beyond a fingernail on a table to any contact movement on surfaces – pushing toy cars, walking, pushing a shopping trolley…). Perhaps, taking a closer look at this approach, a generic location system could be created (e.g. using special shoe soles that make a certain noise).
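
Part of what makes the approach appealing is how simple the core signal processing can be: scratching produces bursts well above the ambient level, so even an amplitude-envelope threshold with a refractory gap already separates events. A toy sketch on a synthetic envelope (the paper's actual pipeline differs, this just shows the burst-counting idea):

```python
# Scratch detection sketch: count bursts where the amplitude envelope
# crosses a threshold, with a refractory gap so that one scratch is not
# counted twice. The envelope values below are synthetic.

def count_scratches(envelope, threshold=0.5, min_gap=3):
    events, last = 0, -min_gap
    for i, level in enumerate(envelope):
        if level >= threshold:
            if i - last >= min_gap:  # far enough from the previous burst
                events += 1
            last = i
    return events

quiet = [0.02] * 5
burst = [0.9, 0.8, 0.6]
signal = quiet + burst + quiet + burst + quiet  # two scratches
print(count_scratches(signal))
```

Distinguishing gestures (taps vs. drags vs. written shapes) then becomes a matter of looking at the timing and spectral shape of those bursts rather than anything more exotic.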

There is a youtube movie: http://www.youtube.com/watch?v=2E8vsQB4pug

Besides his studies, Roy develops software for the Symbian platform and sells a set of interesting applications.

[1] Harrison, C. and Hudson, S. E. 2008. Scratch input: creating large, inexpensive, unpowered and mobile finger input surfaces. In Proceedings of the 21st Annual ACM Symposium on User interface Software and Technology (Monterey, CA, USA, October 19 – 22, 2008). UIST ’08. ACM, New York, NY, 205-208. DOI= http://doi.acm.org/10.1145/1449715.1449747

Taking pictures during sports – ideas for an appliance


Doing sports typically requires another person to take the photos of you. Having the evening off in Haifa, Keith, Antonio, and I went climbing at http://www.shafan-hasela.com/. It was not easy to get there – we used the typical way: first, take a bus to a random place (not intentionally); second, realize that the bus went to a place you did not want to go; third, take a taxi to where you wanted to go.

Being three people it was very easy to take pictures while climbing – and as I climb a class below Antonio and Keith, I had a lot of time to take pictures 😉

Being computer scientists, you always think about cool, challenging, and exciting projects. So we wondered if we could build an autonomous flying object with a camera that follows you (at a defined distance) and takes exciting photos. We have an idea how this could be done – let me know if you would be interested in the project (e.g. bachelor/master) – it could maybe even be done in collaboration with Lancaster.

Social networks connected to the real world

Florian Michahelles mentioned in his blog a talk [1] and paper [2] by Aaron Beach on mobile social networks that are linked to artefacts (e.g. clothing) in the real world. This is really interesting, and I think we should look into this more…

[1] Aaron Beach. University of Colorado. Whozthat: Mobile Social Networks. Whoz touching me? Whoz Music? Whoz Watching? Who Cares?

[2] Beach, A.; Gartrell, M.; Akkala, S.; Elston, J.; Kelley, J.; Nishimoto, K.; Ray, B.; Razgulin, S.; Sundaresan, K.; Surendar, B.; Terada, M.; Han, R., “WhozThat? Evolving an ecosystem for context-aware mobile social networks”, IEEE Network, vol. 22, no. 4, pp. 50-55, July-Aug. 2008.

Visit to NEC labs in Heidelberg

In the afternoon I gave a talk at NEC Labs in Heidelberg on ubiquitous display networks. Over the last year we developed a number of ideas and prototypes of interactive public display systems. We ran a lab class (Fallstudien) on pervasive computing technologies and advertising together with colleagues from marketing. In another class (Projektseminar) we investigated how to facilitate interaction between interactive surfaces (e.g. a multi-touch table) and mobile devices. One of the prototypes will be shown as a poster at MobileHCI 2009 in Bonn. In some thesis projects we introduced the notion of mobile contextual displays and their potential applications in advertising, see [1] and [2].

Seeing the work at NEC, and based on the discussion, I really think there is a great deal of potential in ubiquitous display networks – at the same time there are many challenges, including privacy, which always ensures discussion 😉 It would be great to have another bachelor or master thesis to address some of them – perhaps jointly with people from NEC. To understand the information needs in a particular display environment (at the University of Duisburg-Essen) we are currently running a survey to better understand requirements. If you read German you are welcome to participate in the survey.

Predicting the future usually features in my talks – and interestingly I got a recommendation from Miquel Martin for a book that takes its own angle on that: Predictably Irrational by Dan Ariely (the stack of books is slowly getting too large – time for holidays).

[1] Florian Alt, Albrecht Schmidt, Christoph Evers: Mobile Contextual Displays. Pervasive Advertising Workshop @ Pervasive 2009. Nara, Japan 2009.

[2] Florian Alt, Christoph Evers, Albrecht Schmidt: Users’ View on Car Advertisements. In: Proceedings of the Seventh International Conference on Pervasive Computing, Pervasive’09. Springer Berlin / Heidelberg Nara, Japan 2009.

Human Computer Confluence – Information Day in Brussels

At the end of the month FET Open will launch the call for the proactive initiative on Human Computer Confluence. The term is new, and hopefully it will really lead to new ideas. Today there was already an information day on the upcoming proactive initiatives. I arrived the evening before, and it is always a treat to take a walk in the city.

The presentations were not really surprising, and the short intros by the participants also remained very generic. Seeing the call that is now finalized, and having been at the consultation meetings, it seems to me that the focus is rather broad for a proactive initiative… but with many people wanting a piece of the cake this seems inevitable.

I presented a short idea on “breaking space and time boundaries” – the idea is related to a previous post on predicting the future. The main idea is that with massive sensing (by a large number of people) and with uniform access to this information – independent of time and space – we will be able to create a different view of our reality. We are thinking of putting together a consortium for an IP. Interested? Then give me a call.

Andreas Riener visits our lab

Andreas Riener from the University of Linz came to visit us for three days. In his research he works on multimodal and implicit interaction in the car. We talked about several ideas for new multimodal user interfaces. Andreas had a pressure mat with him, and we could try out what sensor readings we get in different setups. It seems that providing redundancy in the controls in particular could create interesting opportunities – hopefully we find the means to explore this further.

Meeting on public display networks

Sunday night I travelled to Lugano for a meeting on public display networks. I figured out that going there by night train was the best option – leaving Karlsruhe at midnight and arriving there at 6am. As I planned to sleep the whole way, my assumption was that the felt travel time would be zero. But I made my plan without reckoning with the rail company… the train was 2 hours late, and I walked up and down the platform in Karlsruhe for 2 hours – and interestingly, the problem would have been far less annoying if the public displays had provided the relevant information… The most annoying thing was that passengers had no information on whether or when the train would come, and no one could tell us (there was no one at the station, and no one was taking calls at the hotline).
The public display – really nice state-of-the-art hardware – showed nothing for one hour; then it showed that the train was one hour late (when it was already more than one hour past the scheduled time); and finally the train arrived 2 hours late (with the display still showing a 1-hour delay). How hard can it be to provide this information? It seems with current approaches it is too hard…

On my way back I observed a further example of the shortcomings of content on public displays. In the bus office they had a really nice 40-50 inch screen showing the teletext departure pages. The problem was that it showed the teletext for the evening, as the staff had to switch the pages manually. Here too it is very clear: the information is available, but current delivery systems are not well integrated.

In summary, it is really a pity how poorly public display infrastructures are used. There have been a lot of advances on the hardware side, but little on the content delivery, software, and system side.

Offline Tangible User Interface

When shopping for a sofa I used an interesting tangible user interface – magnetic stickers. For each of the sofa systems, customers can create their own configuration using these magnetic stickers on a background (everything at a scale of 1:50).

After the user is happy with the configuration, the shop assistant makes a Xerox copy (I said I do not need a black-and-white copy, I make my own color copy with the phone), calculates the price, and writes up an order. The interaction with the pieces is very good, and it also works great as a shared interface – much nicer than comparable systems that are screen based. I could imagine that with a bit of effort one could create a phone application that scans the customer's design, calculates the price, and provides a rendered image of the configuration – in the chosen color (in our case green ;-). Could be an interesting student project…

App store of a car manufacturer? Or the future of cars as application platform.

When preparing my talk for the BMW research colloquium I realized once more how much potential there is in the automotive domain (if you look at it from a CS perspective). My talk was on the interaction of the driver with the car and the environment, and I assessed the potential of the car as a platform for interactive applications (slides in PDF). Thinking of the car as a mobile terminal that offers transportation is quite exciting…

I showed some of our recent projects in the automotive domain:

  • enhancing communication in the car: basically studying the effect of a video link between driver and passenger on driving performance and on communication
  • handwritten text input: where would you put the input and the output? Input on the steering wheel and visual feedback in the dashboard is a good guess – see [1] for more details.
  • making it easier to interrupt tasks while driving: we have some ideas for minimizing the cost of interrupting secondary tasks for the driver and explored them with a navigation task
  • multimodal interaction, and in particular tactile output: we looked at how to present navigation information using a set of vibro-tactile actuators. We will publish more details on this at Pervasive 2009 in a few weeks.

Towards the end of my talk I invited the audience to speculate with me on future scenarios. The starting point was: imagine you store all the information that goes over the bus systems in the car permanently, and you transmit it wirelessly over the network to backend storage. Then imagine 10% of the users are willing to share this information publicly. That really opens up a whole new world of applications. Thinking this a bit further, one question is what the application store of a car manufacturer will look like in the future. What can you buy online? (Better fuel efficiency? More power in the engine? A new layout for your dashboard? …) Seems like an interesting thesis topic.

[1] Kern, D., Schmidt, A., Arnsmann, J., Appelmann, T., Pararasasegaran, N., and Piepiera, B. 2009. Writing to your car: handwritten text input while driving. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI EA ’09. ACM, New York, NY, 4705-4710. DOI= http://doi.acm.org/10.1145/1520340.1520724

Impact of colors – hints for ambient design?

There is a study that looked at how performance in solving certain cognitive/creative tasks is influenced by the background color [1]. In short: to make people alert and to increase performance on detail-oriented tasks, use red; to get people into creative mode, use blue. Lucky us – our corporate desktop background is mainly blue! Perhaps this could be interesting for ambient colors, e.g. in the automotive context…

[1] Ravi Mehta and Rui (Juliet) Zhu (2009). “Blue or Red? Exploring the Effect of Color on Cognitive Task Performances”. Science, 27 February 2009, Vol. 323, no. 5918, pp. 1226-1229. DOI: 10.1126/science.1169144

The next big thing – let’s look into the future

At the Nokia Research Center in Tampere I gave a talk with the title “Computing Beyond Ubicomp – Mobile Communication changed the world – what else do we need?”. My main argument is that the next big thing is a device that allows us to predict the future – on a system level as well as on a personal level. This is obviously very tricky, as we have free will and hence the future is not completely predictable – but extrapolating from the technologies we see now, it seems not far-fetched to create a device that enables predictions of the future in various contexts.

My argument goes as follows: the following points are technologically feasible in the near future:

  1. each car, bus, train, truck, …, object is tracked in real-time
  2. each person is tracked (location, activity, …, food intake, eye-gaze) in real-time
  3. environmental conditions are continuously sensed – globally and locally sensed
  4. we have a complete (3D) model of our world (e.g. buildings, street surfaces, …)

Having this information, we can use data mining, learning, statistics, and models (e.g. a physics engine) to predict the future. If you wonder whether I forgot to think about privacy – I did not (but it takes longer to explain; in short: the set of people who benefit or who do not care is large enough).
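
On the system level, the simplest instance of such a prediction is extrapolating a tracked object's trajectory. A least-squares sketch over made-up position samples (real prediction would layer maps, physics engines, and learned behavior on top of this):

```python
# Trajectory prediction sketch: fit a constant-velocity model to recent
# (time, position) samples by least squares and extrapolate forward.
# The samples are invented; think of a tracked vehicle on a straight road.

def fit_line(samples):
    """samples: list of (t, x) -> (velocity, start) minimizing squared error."""
    n = len(samples)
    mt = sum(t for t, _ in samples) / n
    mx = sum(x for _, x in samples) / n
    v = (sum((t - mt) * (x - mx) for t, x in samples)
         / sum((t - mt) ** 2 for t, _ in samples))
    return v, mx - v * mt

def predict(samples, t_future):
    v, x0 = fit_line(samples)
    return x0 + v * t_future

track = [(0, 0.0), (1, 9.8), (2, 20.1), (3, 30.2)]  # roughly 10 m/s
print(round(predict(track, 5), 1))
```

The interesting part is not the fitting itself but that, with points 1-4 above, such models could be fed continuously for every car, bus, and person.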

Considering this, it becomes very clear that in the medium term there is great potential in having control over the access terminal to the virtual world, e.g. a phone… just think how rich your profile on Facebook/Xing/LinkedIn could be if it took into account all the information you implicitly generate on the phone.

Visit to Nokia Research Center Tampere, SMS, Physiological sensors

This trip was my first time in Tampere (nice to see a new place for a change). After arriving yesterday night I got a quick cultural refresher course. I even met a person who was giving a presentation to the president of Kazakhstan today (and someone made a copy using a phone – I hope he got back to Helsinki OK after the great time in the bar).

In the morning I met a number of people in Jonna Hakkila's group at the Nokia Research Center. The team has a great mix of backgrounds, and it was really interesting to discuss the projects, ranging from new UI concepts to new hardware platforms – just half a day is much too short… When Ari was recently visiting us in Essen, he and Ali started to implement a small piece of software that (hopefully) improves the experience when receiving an SMS (to Ali/Ari – the TODOs for the beta release we identified are: sound design, screen design with statistics and the exit button in the menu, recognizing Ok and oK, autostart on reboot, volume level controllable and respecting silent mode). In case you have not helped us with our research yet, please fill in the questionnaire: http://www.pcuie.uni-due.de/uieub/index.php?sid=74887#

I gave a talk (see the separate post on the next big thing) and had the chance to meet Jari Kangas. We discovered some common interest in using physiological sensing in the user interface context. I think the next steps in integrating physiological sensors into devices are smaller than expected. My expectation is that, at least in the very near future, we will rather detect simple events like “surprise” than complex emotions. We will see where it goes – perhaps we should put some more students on the topic…

Doctoral Seminar in Bommerholz, CS Career and new Ideas

Monday and Tuesday I organized, together with Gernot A. Fink, a PhD away day for computer science students of the universities of Bochum, Dortmund and Duisburg-Essen. With about 30 PhD students and some professors we went to Bommerholz, where the University of Dortmund has a small retreat.

The program included talks about career possibilities after the PhD including talks by:
  • Dr. Heiner Stüttgen, Vice President, NEC Laboratories Europe: “Industrial research – what is a PhD good for?”
  • Dr. Olaf Zwintzscher, CEO, W3L GmbH: “Adventure Spin-off – starting a company after graduation”
  • Dr. Wiltrud Christine Radau, Deutscher Hochschulverband: “career opportunities in universities”
Overall it became very clear that computer science is still the subject to study! The career opportunities are interesting, exciting and very good. Nevertheless there is always a downside to things – whatever way you choose you have to work hard 🙂
We had a further talk, “Gutenberg over? The metamorphosis of scientific publishing”, by Herrmann Engesser from Springer-Verlag. He showed in an interesting way how much publishing has changed in the last 40 years. The example of the Encyclopedia Britannica and the Brockhaus Encyclopedia demonstrates impressively that it is impossible to ignore changes in technology and stay successful in business. Looking at many newspapers one can only wonder when they will realize it.

Over coffee we discussed the added value that is provided by a publisher and by digital libraries like SpringerLink, the ACM DL or the IEEE library. And here too there are many more open questions than answers. One clear direction is to look more into scientific communities. One idea that I find quite interesting is to search for publications from my scientific community, e.g. “give me all papers that have haptic in the title and that are published by people I am linked to on facebook, xing, and linkedin, or by their contacts”. Sounds like an interesting project 🙂
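Just to make the idea a bit more concrete, here is a minimal sketch of such a community-filtered search. All the names, the data layout, and the function are my own invention for illustration – a real system would of course pull the contact graph from the social networks’ APIs and the papers from a digital library:

```python
# Hypothetical sketch: find papers matching a keyword whose authors are
# within a few hops of me in my social graph. All data is made up.

def community_search(papers, contacts, keyword, depth=2):
    """Return papers whose title contains `keyword` and whose authors
    are reachable within `depth` hops of "me" in the contact graph."""
    reachable = set()
    frontier = {"me"}
    for _ in range(depth):
        # Expand one hop: contacts of everyone in the current frontier.
        frontier = {c for person in frontier
                    for c in contacts.get(person, set())}
        reachable |= frontier
    return [p for p in papers
            if keyword.lower() in p["title"].lower()
            and any(a in reachable for a in p["authors"])]

contacts = {"me": {"alice"}, "alice": {"bob"}}
papers = [
    {"title": "Haptic Feedback Sliders", "authors": ["bob"]},    # 2 hops away
    {"title": "Haptic Rendering", "authors": ["carol"]},         # not connected
]
print(community_search(papers, contacts, "haptic"))
```

With `depth=2` only the paper by bob (a contact of a contact) is returned; carol’s paper matches the keyword but falls outside the community.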

Besides the invited talks we had three poster sessions. In each session 9 students presented their work. We started with 90-second presentations and then had discussions over the posters. As we had topics from all areas of computer science I first expected that this might be pretty boring – but it was surprisingly interesting. I learned a lot about bio-informatics, learning algorithms, data mining, robotics and security over the last two days. Things I would never have read – but getting them explained in the context of a concrete PhD project was fun.
Our evening program was centered on movies. We first showed a number of snippets from movies (including James Bond, Harry Potter, Star Trek, and Minority Report) where cool technology features. Then the students had 45 minutes to create new ideas for believable technology gadgets for two films, one set in 2011 and the other in 2060. The ideas were fun, ranging from manipulated insects, to smart dust, to the exploitation of social networks. If you are Steven Spielberg or someone else who plans a movie feel free to call me – we have a lot of ideas 😉

Poster on mobile advertising displays at HotMobile 2009

We put together a poster discussing some of our recent work on mobile displays for HotMobile. While presenting the poster I got a number of interesting ideas and concerns. One idea is to widen the notion of advertising and fuse it with traditional classified ads by private people (e.g. advertising a flat or telling the world that you lost your cat). The big question is really how to measure audience exposure and eventually conversion. There are several ideas for how to do this – but it looks more like another master project on the topic than an overnight hack 😉

The abstract for the poster:
In recent years many conventional public displays were replaced by electronic displays, hence enabling novel forms of advertising and information dissemination. This includes mainly stationary displays, e.g. in billboards and street furniture, and currently the first mobile displays on cars appear. Yet, current approaches are mostly static since they consider neither the mobility and context in which they are used nor the context of the viewer. In our work we explore how mobile public displays, which rapidly change their own context, can gather and process information about their context. Data about location, time, weather, and people in the vicinity can be used to react accordingly by displaying related content such as information or advertisements.
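The core mechanism from the abstract – a display reacting to context data – could be sketched roughly like this. The rules, field names, and scoring are invented for illustration; our actual prototype is not shown here:

```python
# Hypothetical sketch: a mobile display picks the ad whose conditions
# best match its current context. All rules and fields are made up.

def select_content(context, ads):
    """Return the ad with the highest number of matching context conditions."""
    def score(ad):
        # Count how many of the ad's conditions hold in the current context.
        return sum(context.get(key) == value
                   for key, value in ad["conditions"].items())
    return max(ads, key=score)

ads = [
    {"name": "umbrella ad", "conditions": {"weather": "rain"}},
    {"name": "ice cream ad", "conditions": {"weather": "sunny",
                                            "time": "afternoon"}},
]
ctx = {"location": "city center", "weather": "sunny", "time": "afternoon"}
print(select_content(ctx, ads)["name"])  # ice cream ad
```

A real system would obviously need richer matching (location regions, time ranges, audience sensing) rather than exact equality, but the reactive principle is the same.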

When spending some time in Mountain View I was surprised how few electronic screens I saw compared to Germany or Asia. But nevertheless they have their own ways of creating attention… see the video below 🙂
Some time back in Munich we looked at how interaction modalities can affect the attention of bystanders, see [1] for a short overview of the work.

[1] Paul Holleis, Enrico Rukzio, Friderike Otto, Albrecht Schmidt. Privacy and Curiosity in Mobile Interactions with Public Displays. Poster at CHI 2007 workshop on Mobile Spatial Interaction. San Jose, California, USA. 28 April 2007.

Andreas Riener defends his PhD in Linz

After a stop-over in Stansted/Cambridge at the TEI conference I was today in Linz, Austria, as external examiner for the PhD defense of Andreas Riener. He did his PhD with Alois Ferscha and worked on implicit interaction in the car. The set and size of experiments he did is impressive and he has two central results: (1) using tactile output in the car can really improve the car-to-driver communication and reduce reaction time, and (2) by sensing the force pattern a body creates on the seat, driving-related activities can be detected and to some extent driver identification can be performed. For more details it makes sense to have a look into the thesis 😉 If you mail Andreas he will probably send you the PDF…
One of the basic assumptions of the work was to use implicit interaction (on input and output) to lower the cognitive load while driving – which is definitely a valid approach. Recently however we have also discussed the issues that arise when the cognitive load for drivers is too low (e.g. due to assistive systems in the car such as ACC and lane keeping assistance). There is an interesting phenomenon, the Yerkes-Dodson law (see [1]), that provides the foundation for this. Basically, as the car provides more sophisticated functionality and requires less attention of the driver, the risk increases because the basic activation of the driver is lower. Here I think looking into multimodality to activate the driver more quickly in situations where he or she is required to take over responsibility could be interesting – perhaps we find a student interested in this topic.
[1] http://en.wikipedia.org/wiki/Yerkes-Dodson_law (there is a link to the 1908 publication by Yerkes & Dodson)
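The inverted-U shape behind this argument can be illustrated with a toy model. The quadratic form and the numbers below are my own choice for illustration and are not from the original Yerkes-Dodson publication; the point is only that performance falls off on both sides of an optimal activation level:

```python
# Illustrative only: a simple inverted-U model of the Yerkes-Dodson relation.
# Form and constants are invented; not an empirical model.

def performance(arousal, optimum=0.5):
    """Toy performance score in [0, 1]: peaks at `optimum` arousal
    and decreases symmetrically on both sides."""
    return max(0.0, 1.0 - 4.0 * (arousal - optimum) ** 2)

# A highly assistive car keeps driver activation low (left side of the
# curve) -> performance drops, which is exactly the risk discussed above.
for arousal in (0.1, 0.5, 0.9):
    print(arousal, round(performance(arousal), 2))
```

Under-activation (0.1) and over-activation (0.9) score equally poorly here, while moderate activation (0.5) is optimal – the driver-assistance problem lives on the left slope.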

Voice interaction – Perhaps it works …

Today we visited Christian Müller at DFKI in Saarbrücken. He organized a workshop on Automotive User Interfaces at IUI last week. My talk was on new directions for user interfaces, in particular arguing for a broad view on multimodality. We showed some of our recent projects on car user interfaces. Dagmar gave a short overview of CARS, our simulator for evaluating driving performance and driver distraction, and we discussed options for potential extensions as well as shortcomings of the Lane Change Task.
Being a long-time skeptic about voice interfaces I was surprised to see a convincing demo of a multimodal user interface combining voice and a tactile controller in the car. I think this could really be an interesting option for future interfaces.
Classical voice-only interfaces usually lack basic properties of modern interactive systems, e.g. as stated in Shneiderman’s Golden Rules or in Norman’s action cycle. In particular the following points are most often not well realized in voice-only systems:
  • State of the system is always visible
  • Interactions with the system provide immediate and appropriate feedback
  • Actions are easily reversible
  • Opportunities for interaction are always visible 
By combining a physical controller with voice and having at the same time the objects of interaction visible to the user (as part of the physical system that is controlled, e.g. window, seat) these problems are addressed in a very interesting way. I am looking forward to seeing more along these lines – perhaps we should no longer ignore speech interaction in our projects 😉

Towards interaction that is begreifbar

Since last year we have in Germany a working group on graspable/tangible interaction in mixed realities.
In German the key term we use is “begreifbar” or “begreifen”, which means to acquire a deep understanding of something; the word’s basic meaning is to touch. Basically, understanding by touching – but in a more fundamental sense than grasping or getting a grip. Hence the list of translations for “begreifen” given in the dictionary is quite long.
Perhaps we should push more for the word in the international community – towards interaction that is begreifbar (English has too few foreign terms anyway 😉)

This meeting was organized by Reto Wettach in Potsdam and the objective was to have two days to invent things together. The group mainly included people from computer science and design. It is always amazing how many ideas come up if you put 25 people in a room for a day 🙂 This week we followed up on some of the ideas related to new means for communication – there are definitely interesting student projects on this topic.

In the evening we had a half pecha-kucha (each person 10 slides of 20 seconds – 3:20 in total; the original is 20 slides), see http://www.pecha-kucha.org/. It is a great way of quickly getting to know the work, research, ideas, and background of other people. It could be a format we use more in teaching, and perhaps for ad-hoc sessions at a new conference we plan (e.g. http://auto-ui.org)… I prepared my slides on the train in the morning – and it is more challenging than expected to get a set of meaningful pictures together for 10 slides.

Overall the workshop showed that there is a significant interest and expertise in Germany moving from software ergonomics to modern human computer interaction.
There is a new person on our team (starting next week) – perhaps you can spot him on the pics.
For a set of pictures see my photo gallery and the photos on flickr.

Rating your professor, teacher, doctor, or fellow students?

This morning, coming back from Munich* on the train, I got a phone call from a journalist from Radio Essen (http://www.102.2radioessen.de/). As their studio is very close to the railway station in Essen I went there spontaneously before going back to the university.

We talked a little about web services for students to rate their profs (e.g. meinProf.de). The number of ratings most professors have received so far is extremely small (in comparison to the number of students we teach) and hence you get interesting effects that are far from representative or in many cases even meaningful. Last term I registered my course and we proactively sent a mail to all students who completed the course with the request to rate the lectures. This seems to be a good way to generate a positive selection 🙂
There are many of these services out there – rating teachers, doctors, shops, etc. Thinking a little more about the whole concept of rating others, one could imagine many interesting services – all of them creating a clear benefit (for someone) and massively reduced privacy for others.
To make it more specific I offer you one idea: rate your fellow students’ professional capabilities and academic performance. Students typically have a very good insight into the real qualities of their peers (e.g. technical skills, social compatibility, creativity, mental resilience, ability to cope with workload, diligence, honesty, etc.). Having this information combined with the official degree (and the transcript the university offers), a potential employer would get a really interesting picture… We discussed this with students last term and the reactions were quite diverse – as one can imagine.
Obviously such a service would create a lot of criticism (which lowers the cost of marketing) and one would have to carefully think about in which countries it would be legal to run it. An interesting question would also be what verification one would employ to ensure that the ratings are real – or perhaps we would not need to care? Interested in the topic? Perhaps we should get 5 people together, implement it in a week, and get rich 😉
The direction such rating systems are taking is very clear – and it seems that they will come to many areas of our life. Perhaps there is some real research in it… how will these technologies change the way we live together?

* travelling from Munich (leaving at 22:30) and arriving in Essen (or Darmstadt) in the morning works fairly well if you stay in a hotel in Stuttgart 😉 – it is surprisingly a real alternative to a night train or an early morning flight…

Will we have face-2-face PC meetings in the future?

On Thursday morning I flew to Boston for the CHI 2009 PC meeting. The review and selection process was organized very professionally and efficiently. We discussed all papers in one and a half days – and I think an interesting program came out, and I learned a lot about what values my colleagues do or do not see in papers. On Friday afternoon I flew back to the UK for the Pervasive 2009 PC meeting in Cambridge (with the same crew on the plane).

Nevertheless the question remains how sustainable it is that 100 people fly to a face-2-face meeting. In what way could we do such a meeting remotely? Video conferencing still does not really work well for larger group discussions (I am just collecting experience here in Cambridge during the Pervasive PC meeting)… Can it be so difficult to make a reasonable video link between two meeting rooms? How could we recreate the social aspects (like a joint dinner or walking back through the city with Gregory) as well as the side conversations in the meetings? We probably should try harder – it cannot be that difficult, there has been a massive amount of work in CSCW research. Perhaps we should try linking two rooms at different universities as a group project next term?

Male (88%), writing like Oscar Wilde (35%)

I was looking into Paul Rayson’s blog and discovered an interesting link: http://www.genderanalyzer.com. It is a web form where you can put in a URL and you get an estimate whether the author of this text is male or female. For me it worked great 😉 It says that the text I wrote in my blog is with 88% probability written by a male. I tried it with a few more of my pages and it worked. Then I looked at some pages of some of my female colleagues and to my surprise it seems they do not write their web pages themselves (as the program indicated a 95% male writer) – they probably all have a hidden male assistant 😉

While I was in Lancaster I shared an office with Paul for most of the time. During this time I learned a lot of interesting things about corpus linguistics and phenomena in language in general – just by sharing the office. One fact that at the time was surprising to me: if you take 6 words from an arbitrary text in the exact order in which they appear in the text and you search on the web for the exact phrase, it is likely that you will only find this text. How many hits do you get for the phrase “I was at Trinity College reading” on Google? Try it out 😉 [to students: that is why not getting caught when you plagiarize is really hard]
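Extracting the candidate 6-word phrases from a text is trivial, which is exactly why plagiarism checking this way works so well. A minimal sketch (the tokenization is deliberately naive; the sample sentence completes the quoted fragment purely for illustration):

```python
# Sketch: extract all 6-word phrases from a text, as one would paste
# into a web search to check for plagiarism. Naive tokenization.
import re

def six_grams(text, n=6):
    """Return every run of n consecutive words as a single phrase string."""
    words = re.findall(r"[A-Za-z']+", text)
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

sample = "I was at Trinity College reading for my degree"
for phrase in six_grams(sample):
    print('"%s"' % phrase)
```

A checker would simply issue each phrase as an exact-match query; for an original text almost every phrase will return at most one hit.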

From http://www.genderanalyzer.com I came to http://www.ofaust.com and to my great surprise I write like Oscar Wilde (35%) and Friedrich Nietzsche (30%). Thinking of social networks (and in particular the use of language within closed groups), such technologies could become an interesting enabling technology for novel applications. Perhaps I should visit Paul again in Lancaster…
PS: and I nearly forgot I am a thinker / INTJ – The Scientists (according to http://www.typealyzer.com/)
PPS (2008-11-17): a further URL on the gender topic, contributed by my colleagues: http://www.mikeonads.com/2008/07/13/using-your-browser-url-history-estimate-gender/

What can your alarm clock do? Platform for the bedside table

I have learned about Chumby, an interesting platform that is designed to replace devices on your bedside table. I am looking forward to getting one (or a few) the next time I fly to the US.

For a design competition at the Appliance Design conference I did a design concept for a networked alarm clock [1], assuming that networked devices will soon be cheaply available. Maybe we should look at the paper again and think about how to push such ideas forward now that the devices are on the market…

[1] Schmidt, A. 2006. Network alarm clock (The 3AD International Design Competition). Personal Ubiquitous Computing Journal. 10, 2-3 (Jan. 2006), 191-192. DOI= http://dx.doi.org/10.1007/s00779-005-0022-y

Workshops at Informatik 2008 in Munich, e-ink prediction

Yesterday there was a workshop on Mobile and Embedded Interaction as part of Informatik 2008 in Munich. The talks and discussions were very interesting. Lucia and Thomas raised interesting issues on a new notion of personal computing, where the mobile device becomes the center of a personal computing infrastructure. This idea has been around for some time (e.g. Roy Want’s Personal Server [1]) but the new ideas and the feasibility with current hardware make it really an exciting topic. On the general topic there are many open questions, as visible on the slide.

After the workshop, when swapping business cards, we started a discussion about when in the future we will have business cards (in larger quantities, to give away) that have active display elements (e.g. e-ink) included. Everyone gave a prediction of in how many years we will have them (Lucia Terrenghi: never; Raimund Dachselt: 7; Thomas Lang: business cards will disappear; Albrecht Schmidt: 9; Heiko Drewes: 10; Florian Echtler: 5; Michael Rohs: 5; Paul Holleis: 5). Let’s get back in 5 years and see… In September 2008 Esquire Magazine featured an e-ink cover page – I have not seen it myself 🙁 but there is a video: http://www.esquire.com/the-side/video/e-ink-cover-video

Today we organized a workshop on Software, Services and Platforms for new infrastructures in telecommunication. We had a set of really interesting talks. As I did my PhD on context-awareness I was quite impressed by work on context oriented programming and the advances over the last years in this domain (good starting point on the topic with some publications [2]).

At the end of the workshop I gave the following scenario as an impulse for discussion: imagine there are 10 million facebook users that continuously stream the video of what they see into the net, e.g. using eagle-i. The discussion raised many technical as well as social challenges!

[1] Want, R., Pering, T., Danneels, G., Kumar, M., Sundar, M., and Light, J. 2002. The Personal Server: Changing the Way We Think about Ubiquitous Computing. In Proceedings of the 4th international Conference on Ubiquitous Computing (Göteborg, Sweden, September 29 – October 01, 2002). G. Borriello and L. E. Holmquist, Eds. Lecture Notes In Computer Science, vol. 2498. Springer-Verlag, London, 194-209.

[2] http://www.swa.hpi.uni-potsdam.de/cop/

PS: there are few photos as someone in the workshop today objected to being on the net…