Silvia Miksch talking about time-oriented visual analytics

It seems we picked a good slot for the lecture this term. On Thursday we had Prof. Silvia Miksch from Vienna University of Technology visiting our institute. We took this chance for another guest lecture in my advanced HCI class. Silvia presented a talk with the title “A Matter of Time: Interactive Visual Analytics of Time-Oriented Data and Information”. She first introduced the notion of interactive visual analytics and then systematically showed how time-oriented data can be visually presented.

I really liked how Silvia motivated visual analytics and could not resist adapting her example with a Christmas theme. The picture shows three representations: (1) the numbers, always grouped in threes, (2) a plot of the numbers, where the first is the label and the second and third are coordinates, and (3) a line connecting the labels in order. Her example was much nicer, but I missed taking a photo. And obviously you would not put all three on the same slide… Nevertheless I think even this simple Christmas tree example shows the power of visual analytics. This will go into my slide set for presentations in schools 😉
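The three steps can be sketched in a few lines of code (my own reconstruction of the idea in Python; the numbers are made up for illustration, not the ones from Silvia's slide):

```python
# Sketch of the three representations: (1) raw numbers in groups of three,
# (2) labeled points (label, x, y), (3) the order in which to connect them.

def parse_triples(numbers):
    """Group a flat list into (label, x, y) triples."""
    assert len(numbers) % 3 == 0
    return [(numbers[i], numbers[i + 1], numbers[i + 2])
            for i in range(0, len(numbers), 3)]

def connect_in_order(triples):
    """Return the (x, y) points sorted by label, i.e. the line to draw."""
    return [(x, y) for _, x, y in sorted(triples)]

numbers = [2, 0.0, 3.0,   # label 2 at (0.0, 3.0)
           1, -1.0, 1.0,  # label 1 at (-1.0, 1.0)
           3, 1.0, 1.0]   # label 3 at (1.0, 1.0)

points = connect_in_order(parse_triples(numbers))
print(points)  # [(-1.0, 1.0), (0.0, 3.0), (1.0, 1.0)]
```

Connecting these three points draws a simple triangle – the tip of the "tree"; the point of the exercise is that the shape is invisible in representation (1) and obvious in representation (3).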

If you are interested in more details on the visualization of time-oriented data, have a look at the following book: Visualization of Time-Oriented Data, by Wolfgang Aigner, Silvia Miksch, Heidrun Schumann, and Christian Tominski. Springer, 2011. http://www.timeviz.net [2]. After the talk there was an interesting discussion about the relationship and the fundamental differences between time and space. I think this is worth further discussion.

Another direction to follow up is tangible (visual) analytics. It would be interesting to assess how further modalities, e.g. haptics and sound, contribute to understanding when interactively exploring data. Some years back Martin Schrittenloher (one of my students in Munich) visited Morten Fjeld for his project thesis and experimented with force feedback sliders [1] … perhaps we should have this as a project topic again! One approach would be to look specifically at how well users understand data when force feedback is provided on certain dimensions.

References
[1] Jenaro, J., Shahrokni, A., Schrittenloher, M., and Fjeld, M. 2007. One-Dimensional Force Feedback Slider: Digital Platform. In Proc. Workshop at the IEEE Virtual Reality 2007 Conference: Mixed Reality User Interfaces: Specification, Authoring, Adaptation (MRUI07), 47-51.
[2] Wolfgang Aigner, Silvia Miksch, Heidrun Schumann, and Christian Tominski. Visualization of Time-Oriented Data. Springer, 2011. http://www.timeviz.net

Bryan Reimer: Opening keynote at Auto-UI 2011 in Salzburg

Bryan started his keynote talk at the automotive user interface conference (auto-ui.org) in Salzburg by reminding us that controversial discussions about the HMI in the car are not new. Quoting a newspaper article from the 1930s on the introduction of the radio in the car and its impact on the driver, he picked an interesting example that can be seen as the root of many issues we now have with infotainment systems in the car.

The central question he raised is: how do we create user interfaces that fit human users? He made an important point: humans are not “designed” to drive at high speed in complex environments; perception has evolved for walking and running in natural environments. In addition to the basic limitations of human cognition, there is great variety in drivers' capabilities, skills, and cognitive abilities (e.g. the influence of age). One implication of the global change in demographics is that the average capabilities of drivers will be reduced – basically because many more older people will be driving…

Over the last 100 years cars have changed significantly! Looking more closely, Bryan argues that much of the change happened in the last 10 years; there was little change from the 1950s to the 1990s with regard to the car user interface.

It is apparent that secondary tasks are becoming more important to the user. Users will interact more while driving because they can. It is, however, not obvious that they are capable of it.

Even given these developments, it is apparent that driving has become safer. Passive safety has been improved massively, and this has made driving much safer. There seems to be a drawback as well: people may take greater risks because they feel safer. The next step is to avoid accidents in the first place. Bryan argues that the interaction between driver, environment, and vehicle is very important here. He suggests that we should make more of an effort to create systems that fit the driver.

The Yerkes-Dodson Law helps us understand how to design systems that keep people's attention in the optimal performance range. He made an important point: there are certain issues that cannot be solved by design, e.g. if someone is tired there is very little we can do – the driver will need to rest. We should make sure that we take these things into account when designing systems.

Visual distraction is an obvious factor and was much discussed in the papers at the conference – but Bryan argued that “eyes on the road” is not equal to “mind on the road”. I think this is a very important point. Ensuring that people keep their eyes on the road and see things is not enough. The big resulting question is how to keep or get people focused on the street and the environment. It seems there is some more research to do…

The variety of interfaces and interaction metaphors built into cars opens up more choices but at the same time creates problems, as people need to learn and understand them. A simple question such as “How do you switch the car off?” may be hard to answer (Bryan had the example of a car with a push-button starter, where you cannot remove the key). I think there are simple lessons to be learned from industry and production machines… add an emergency stop button and make it mandatory 😉

If you are interested in more of Bryan's work, have a look at his webpage or his page at the MIT AgeLab, or at one of his recent publications [1] in the IEEE Pervasive Computing magazine's special issue on automotive computing; see [2] for an introduction to the special issue.

Sorry for the poor quality photos … back row and an iPhone…

[1] Joseph F. Coughlin, Bryan Reimer, and Bruce Mehler. 2011. Monitoring, Managing, and Motivating Driver Safety and Well-Being. IEEE Pervasive Computing 10, 3 (July 2011), 14-21. DOI=10.1109/MPRV.2011.54 http://dx.doi.org/10.1109/MPRV.2011.54

[2] Albrecht Schmidt, Joseph Paradiso, and Brian Noble. 2011. Automotive Pervasive Computing. IEEE Pervasive Computing 10, 3 (July 2011), 12-13. DOI=10.1109/MPRV.2011.45 http://dx.doi.org/10.1109/MPRV.2011.45

CHI 2010 – Opening and Keynote

2343 attendees came to CHI 2010 in Atlanta this year. Participants are from 43 countries, and the colored map suggested that a good number came from Germany. Outside it really feels like spring 🙂

Overall, CHI 2010 received 2220 submissions across 13 categories, of which 699 were accepted. In the papers and notes categories there were 1345 submissions, of which 302 were accepted (22% acceptance rate).

Genevieve Bell from Intel is a cultural anthropologist, and she presented the CHI opening keynote with the title “Messy Futures: culture, technology and research”. She is a great storyteller and showed exemplarily the value of ethnographic and anthropological research. One very graphic example was the picture of real consumers – typically not living in a perfect environment, but rather living in clutter and mess…

A further issue she briefly addressed was demographic shifts and urbanization (soon three quarters of all people will live in cities). This led on to an argument for designing for real people and for their real needs (in contrast to the idea of designing for women by “shrinking and pinking it”).

Genevieve Bell discussed critical domains that drive technology: politics, religion, sex, and sports. She argued that CHI and Ubicomp have not really looked at these topics – or at least did not publish on them in CHI 😉 Her examples were quite entertaining and made the keynote fun to listen to – but created little controversy.

NSF/EU workshop in Mannheim

Mohan Kumar and Marco Conti organized an EU/NSF workshop on Future Directions in Pervasive Computing and Social Networking for Emerging Applications. They managed to get an interesting set of people together; the discussions in the breakout sessions were very enjoyable, and I got a number of ideas about what the challenges to come really are.

The position statements are on the web page, and at some point the identified grand challenges will be available there as well.

PS: blackboards are still highly effective 😉

Visit to TU Dortmund: Impressive Demos on Vision and Audio

After several tries we finally managed to travel to Dortmund (half an hour on the S-train) to visit Gernot A. Fink‘s group at the Technical University Dortmund. Bastian Pfleging did his master's thesis with this group before he joined us. The research focus of the group is on signal processing and computer vision. They also follow an experimental approach – building systems that work (which we saw in the demos). In their lab space they have set up a building (basically a house inside a house – impressive!).

I learned about a new location technology based on passive infrared sensors. The idea is to pick up the heat emitted by people and combine the output from several sensors to localize a person. The technology is very simple, potentially cheap, and privacy preserving. Some time back we thought of a project topic using thermal imaging (not really cheap or privacy preserving) for context-awareness – but so far no student has wanted to do it. Perhaps we should try again to find a student.
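I do not know the details of their algorithm, but one simple way to combine the readings of several sensors is a weighted centroid of the sensor positions (a toy sketch with made-up positions and readings; the real system may work quite differently):

```python
# Hypothetical sketch: localize a person from several passive infrared
# sensors by taking the centroid of the sensor positions, weighted by
# each sensor's activation level. All values here are made up.

def weighted_centroid(sensors):
    """sensors: list of ((x, y), activation) pairs; returns estimated (x, y)."""
    total = sum(a for _, a in sensors)
    if total == 0:
        return None  # nobody detected
    x = sum(p[0] * a for p, a in sensors) / total
    y = sum(p[1] * a for p, a in sensors) / total
    return (x, y)

# Four sensors in the corners of a 4m x 4m room; the two sensors on the
# right wall respond strongly, so the estimate ends up near that wall.
readings = [((0.0, 0.0), 0.1), ((4.0, 0.0), 0.9),
            ((0.0, 4.0), 0.1), ((4.0, 4.0), 0.9)]
print(weighted_centroid(readings))  # (3.6, 2.0)
```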

The other demos were situated in a meeting room that is equipped with several cameras and microphones. It was interesting to see how robustly several of the vision prototypes managed to track people in the room and to detect pointing actions. One basic mechanism they use to detect interesting regions in an image is saliency based on different features – and it works well.

The audio demo used two arrays of 8 microphones each; the arrays are nicely integrated into a ceiling panel. Using these signals they can calculate the energy that originates from a certain spatial region in the room. Looking at the complexity of the hardware and software for sound localization, it does not seem far off that this could become ubiquitous. We talked about the work James Scott did on sound localization (snapping your fingers to control a light switch) – here is the reference [1].
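The principle, as I understand it, is delay-and-sum beamforming: shift each microphone signal by the delay expected for a candidate source position and sum; sound from that position adds up coherently and yields high energy. A toy sketch with two microphones and synthetic signals (not their implementation, which uses 2×8 microphones):

```python
# Toy delay-and-sum beamforming: shift each signal by the per-microphone
# delay for a candidate position, sum, and measure the energy of the result.

def steered_energy(signals, delays):
    """Sum the signals shifted by the given sample delays; return the energy."""
    n = min(len(s) - d for s, d in zip(signals, delays))
    summed = [sum(s[d + i] for s, d in zip(signals, delays)) for i in range(n)]
    return sum(v * v for v in summed)

pulse = [0, 0, 1, 1, -1, -1, 0, 0, 0, 0]
mic_a = pulse                   # the source reaches microphone A first...
mic_b = [0, 0] + pulse[:-2]     # ...and microphone B two samples later

on_target = steered_energy([mic_a, mic_b], [0, 2])   # correct delays
off_target = steered_energy([mic_a, mic_b], [0, 0])  # wrong delays
print(on_target > off_target)  # True: steering at the source gives more energy
```

Scanning a grid of candidate positions and picking the one with maximal energy gives the location estimate.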

The room is equipped with sensors, lights, switches, and a UI panel that are linked over a commercial bus system (KNX). Some time ago we had a bachelor project in Essen that looked at EnOcean (another home networking technology). We discussed how well these systems are positioned in comparison to web technologies.

I personally think that in the medium term we will move – at least on the control and user interface level – to web protocols. The moment you use web protocols it is so much easier to create user interfaces (e.g. using a web browser as frontend), and it is simple to integrate with existing systems (e.g. Facebook). It would be interesting to assess how easy it is to use RESTful services to replicate some of the features of home automation systems. Sounds like an interesting project topic. There is a workshop on the Web of Things at PerCom in Mannheim – I am curious what will come up there.
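To make the idea concrete, here is a minimal sketch of what such a RESTful control layer could look like (the resource names and the state model are hypothetical; a real bridge would translate these calls to the KNX or EnOcean bus):

```python
# Minimal sketch of a RESTful control layer for home automation.
# The in-memory dict stands in for the actual bus; in a real system the
# handler would forward state changes to KNX/EnOcean devices.

lights = {"1": "off", "2": "off"}

def handle(method, path, body=None):
    """Dispatch 'GET /lights/1' style requests to the light states."""
    parts = path.strip("/").split("/")
    if len(parts) == 2 and parts[0] == "lights" and parts[1] in lights:
        if method == "GET":
            return 200, lights[parts[1]]
        if method == "PUT" and body in ("on", "off"):
            lights[parts[1]] = body
            return 200, body
    return 404, None

print(handle("PUT", "/lights/1", "on"))  # (200, 'on')
print(handle("GET", "/lights/1"))        # (200, 'on')
print(handle("GET", "/lights/9"))        # (404, None)
```

Once the devices are addressable like this, any web browser or web mashup can act as the frontend.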

[1] James Scott, Boris Dragovic: Audio Location: Accurate Low-Cost Location Sensing. Pervasive Computing: Third International Conference, PERVASIVE 2005, Munich, Germany, May 8-13, 2005. Springer LNCS 3468/2005. pp 1-18. http://dx.doi.org/10.1007/11428572_1

Visiting TU-Berlin and T-Labs

We have a number of student projects that look at novel applications and novel application platforms on mobile phones. As Michael Rohs from T-Labs is also teaching a course on mobile HCI, we thought it would be a good opportunity to meet and discuss some application ideas.

I gave a talk in Michael’s lecture discussing the concept of user interfaces beyond the desktop, context as an enabling technology, and future applications in mobile, wearable, and ubiquitous computing. We had an interesting discussion – and in the end it always comes down to privacy and the impact on society. I see this as a very positive development, as it shows that the students are not just techies but that they see the bigger picture – and the impact (be it good or bad) they may have with their developments. I mentioned two books that are interesting to read: The Transparent Society [1] and Total Recall [2].

In the afternoon we discussed two specific projects. One was an application for informal social interaction while watching TV (based on a set of iconic communication elements) that can be used to generate metadata on the program shown. The other is a platform that allows web developers to create distributed mobile applications making use of all the sensors on mobile phones. It is essentially a platform and API that provides access to all functions available on S60 phones over a RESTful API, e.g. you can use an HTTP call to take a photo on someone’s phone. We hope to release some of the software soon.
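As an illustration of the style of API (the host, port, path, and parameter names below are hypothetical placeholders – the software is not released yet, so this is not the actual interface):

```python
# Illustration only: what a call to the phone's RESTful API could look like.
# URL structure and parameters are invented for this example.

def camera_request_url(phone_host, resolution="640x480"):
    """Build the URL for a hypothetical 'take a photo' call on a phone."""
    return "http://%s:8080/camera/photo?resolution=%s" % (phone_host, resolution)

url = camera_request_url("192.168.0.42")
print(url)  # http://192.168.0.42:8080/camera/photo?resolution=640x480
# In practice one would issue this as a plain HTTP GET, e.g. with urllib,
# which is exactly what makes the platform accessible to web developers.
```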

In the coffee area at T-Labs there was a printout with the 10+1 innovation principles – I could not resist taking a photo 😉 Seems innovation is really trivial – just follow the 11 rules and you are there 😉

[1] David Brin. The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom. Basic Books. 1999. ISBN-13: 978-0738201443. Amazon-link. Webpage: http://www.davidbrin.com/transparent.htm

[2] Gordon Bell, Jim Gemmell. Total Recall: How the E-Memory Revolution Will Change Everything. Dutton Adult. 2009. ISBN-13: 978-0525951346. Amazon-link. Webpage: http://totalrecallbook.com/

Papers are all similar – Where are the tools to make writing more effective?

Yesterday we discussed (again during the evening event of MobileHCI 2009) how hard it would be to support the process of writing a high-quality research paper or essay. Many conferences have a well-defined style that you need to follow, specific things to include, and certain ways of presenting information. This obviously depends on the type of contribution, but within one contribution type a lot of help could probably be provided to create the skeleton of the paper… Sounds like another project idea 😉


Workshop at MobileHCI: Context-Aware Mobile Media and Mobile Social Networks

Together with colleagues from Nokia, VTT, and CMU we organized a workshop on Context-Aware Mobile Media and Mobile Social Networks at MobileHCI 2009.

The topic came up in discussions some time last year. It is very clear that social networks have moved towards mobile scenarios and that utilizing context and contextual media adds a new dimension. The workshop program is very diverse and ranges from studies of usage practices to novel technological solutions for contextual media and applications.

One topic that would be interesting to look at further is using (digital) social networks for health care. Looking back in history, it is evident that your direct social group was the set of people who helped you in case of illness or accident. For conditions and illnesses that cause a loss of mobility or memory, it could be interesting to build applications on top of digital social networks to provide help. Seems this could be a project topic.

In one discussion we explored what would happen if we changed our default communication behavior from closed/secret (e.g. email and SMS) to public (e.g. bulletin boards). I took the example of organizing this workshop: our communication was largely by email and was not public. If it had been open (e.g. a public forum), we probably would have organized the workshop in the same way, but at the same time we would have provided an example of how one can organize a workshop, and thereby perhaps useful information for future workshop chairs. In this case there are few privacy concerns, but imagine all communication were public? We would learn a lot about how the world works…

About 10 years ago we published a paper entitled “There is more to context than location” [1]. However, looking at our workshop it seems: location is still the dominant context people think of. Many of the presentations and discussions included the term context, but the examples focused on location. Perhaps we only need location? Or perhaps we should look more closely to find the benefits of other contexts?

[1] A. Schmidt, M. Beigl, H.W. Gellersen (1999) There is more to context than location, Computers & Graphics, vol. 23, no. 6, pp. 893-901.

More surface interaction using audio: Scratch input

After my talk at the Minerva School, Roy Weinberg pointed me to a paper by Chris Harrison and Scott Hudson [1] – it also uses audio to create an interactive surface. The novelty on the technical side is limited, but the approach is nevertheless interesting and appealing because of its simplicity and its potential (e.g. just think beyond a fingernail on a table to any contact movement on surfaces – pushing toy cars, walking, pushing a shopping trolley…). Perhaps by having a closer look at this approach a generic location system could be created (e.g. using special shoe soles that make a certain noise).
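The detection side can be sketched very simply (my own toy version, not Harrison and Hudson's implementation): slide a short window over the microphone samples and report segments whose energy crosses a threshold as scratch events.

```python
# Toy scratch detection: slide a short window over audio samples and
# report the start of each run of windows whose energy exceeds a threshold.
# Window size and threshold are arbitrary; the real system additionally
# exploits the characteristic high-frequency content of scratch sounds.

def detect_events(samples, window=4, threshold=1.0):
    """Return start indices of energy bursts in the sample stream."""
    events, inside = [], False
    for i in range(len(samples) - window + 1):
        energy = sum(s * s for s in samples[i:i + window])
        if energy > threshold and not inside:
            events.append(i)
            inside = True
        elif energy <= threshold:
            inside = False
    return events

quiet, scratch = [0.01] * 8, [0.8, -0.7, 0.9, -0.8]
signal = quiet + scratch + quiet
print(detect_events(signal))  # [6] – one event, where the energy first rises
```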

There is a youtube movie: http://www.youtube.com/watch?v=2E8vsQB4pug

Besides his studies, Roy develops software for the Symbian platform and sells a set of interesting applications.

[1] Harrison, C. and Hudson, S. E. 2008. Scratch input: creating large, inexpensive, unpowered and mobile finger input surfaces. In Proceedings of the 21st Annual ACM Symposium on User interface Software and Technology (Monterey, CA, USA, October 19 – 22, 2008). UIST ’08. ACM, New York, NY, 205-208. DOI= http://doi.acm.org/10.1145/1449715.1449747

Taking pictures during sports – ideas for an appliance


If you do sports, it typically requires another person to take photos of you. Having the evening off in Haifa, Keith, Antonio, and I went climbing at http://www.shafan-hasela.com/. It was not easy to get there – we took the typical route: first, take a bus to a random place (not intentionally); second, realize that the bus went somewhere you did not want to go; third, take a taxi to where you actually wanted to go.

Being three people, it was very easy to take pictures while climbing – and as I climb a grade below Antonio and Keith, I had a lot of time to take pictures 😉

Being computer scientists, you always think about cool, challenging, and exciting projects. So we wondered if we could build an autonomous flying object with a camera that follows you (at a defined distance) and takes exciting photos. We have an idea how this could be done – let me know if you would be interested in the project (e.g. bachelor/master) – it may even be done in collaboration with Lancaster.