3DUI Technologies for Interactive Content by Prof. Yoshifumi Kitamura

In the context of multimodal interaction in ubiquitous computing, Professor Yoshifumi Kitamura presented a SimTech guest lecture on 3D user interface technologies. His research goal is to create 3D display technologies that allow multi-user direct interaction. Users should be able to move in front of the display, and different users should have different perspectives according to their location in front of it. He showed a set of rotating displays (volumetric displays) that allow for visual presentation, but not for interaction.

His approach is based on the IllusionHole, which allows for multiple users and direct manipulation. The idea is to have different projections for different users, each invisible to the others, which together create the illusion of interacting with a single object. It uses a display mask that physically limits the view of each user. Have a look at their SIGGRAPH paper for more details [1]. More recent work on this can be found on Yoshifumi Kitamura’s web page [2].

Example of the IllusionHole from [2].

Over 10 years ago they worked on tangible user interfaces based on blocks. Their system consists of a set of small electronic components with input and output functionality that can be connected to create larger structures. See [3] and [4] for details and applications of Cognitive Cubes and Active Cubes.

He showed examples of interaction with a map based on the concept of elastic materials. Elastic scroll and elastic zoom allow users to navigate maps in an apparently intuitive way. The mental model is straightforward, as users can imagine the surface as an elastic material, see [5].

One really cool new display technology, presented at last year’s ITS, is a furry multi-touch display [6]. This is a must-read paper!

The furry display prototype – from [6].

References
[1] Yoshifumi Kitamura, Takashige Konishi, Sumihiko Yamamoto, and Fumio Kishino. 2001. Interactive stereoscopic display for three or more users. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’01). ACM, New York, NY, USA, 231-240. DOI=10.1145/383259.383285 http://doi.acm.org/10.1145/383259.383285
[2] http://www.icd.riec.tohoku.ac.jp/project/displays-and-interface/index.html
[3] Ehud Sharlin, Yuichi Itoh, Benjamin Watson, Yoshifumi Kitamura, Steve Sutphen, and Lili Liu. 2002. Cognitive cubes: a tangible user interface for cognitive assessment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’02). ACM, New York, NY, USA, 347-354. DOI=10.1145/503376.503438 http://doi.acm.org/10.1145/503376.503438
[4] Ryoichi Watanabe, Yuichi Itoh, Masatsugu Asai, Yoshifumi Kitamura, Fumio Kishino, and Hideo Kikuchi. 2004. The soul of ActiveCube: implementing a flexible, multimodal, three-dimensional spatial tangible interface. Comput. Entertain. 2, 4 (October 2004), 15-15. DOI=10.1145/1037851.1037874 http://doi.acm.org/10.1145/1037851.1037874
[5] Kazuki Takashima, Kazuyuki Fujita, Yuichi Itoh, and Yoshifumi Kitamura. 2012. Elastic scroll for multi-focus interactions. In Adjunct proceedings of the 25th annual ACM symposium on User interface software and technology (UIST Adjunct Proceedings ’12). ACM, New York, NY, USA, 19-20. DOI=10.1145/2380296.2380307 http://doi.acm.org/10.1145/2380296.2380307
[6] Kosuke Nakajima, Yuichi Itoh, Takayuki Tsukitani, Kazuyuki Fujita, Kazuki Takashima, Yoshifumi Kitamura, and Fumio Kishino. 2011. FuSA touch display: a furry and scalable multi-touch display. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS ’11). ACM, New York, NY, USA, 35-44. DOI=10.1145/2076354.2076361 http://doi.acm.org/10.1145/2076354.2076361

SIGCHI Rebuttals – Some suggestions for writing them

The ACM SIGCHI review process gives authors the opportunity to respond to the reviewers’ comments. I find this a good thing, and to me it has two main functions:

  1. The reviewers are usually more careful in what they write, as they know they have to face a response from the authors.
  2. Authors can clarify points that they did not get across in the original submission.

We usually write a rebuttal for all submissions with an average score over 2.0. For lower-ranked submissions it may still be worthwhile if we think we have a chance to counter arguments that we believe are wrong or unfair.

For the rebuttal it is most critical to address the meta-review as well as possible. The primary reviewer will be in the PC meeting, and if the rebuttal wins this person over, the job is well done. The other reviews should be addressed, too.

For all the papers where we write a rebuttal I suggest the following steps (a table may be helpful; a minimal sketch of such a table follows the list):

  1. read all reviews in detail
  2. copy out all statements that contain questions, criticism, or suggestions for improvement from each review
  3. for each of these statements make a short version (bullet point, short sentence) in your own words
  4. sort all the extracted statements by topic
  5. combine all statements that address the same issue
  6. order the combined statements according to priority (highest priority to the primary reviewer)
  7. for each combined statement decide whether the criticism is justified, based on a misunderstanding, or unjustified
  8. write a response for each combined statement
  9. create a rebuttal that addresses as many points as possible without becoming too terse (there is a trade-off between the number of issues addressed and the detail one can give for each)
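
As a minimal sketch of such a table (my own illustration, not a prescribed format), the extracted statements could be tracked, grouped, and ordered roughly like this:

```python
# Hypothetical sketch of the "rebuttal table" suggested above; field names and entries are made up.
from collections import defaultdict

statements = [
    # one entry per extracted reviewer statement (steps 2 and 3)
    {"reviewer": "1AC", "topic": "study design", "issue": "only 12 participants", "verdict": "justified", "priority": 1},
    {"reviewer": "R2",  "topic": "study design", "issue": "no baseline condition", "verdict": "misunderstood", "priority": 2},
    {"reviewer": "R3",  "topic": "statistics",   "issue": "test seen as inappropriate", "verdict": "unjustified", "priority": 3},
]

# steps 4 to 6: group by topic, then order by priority (primary reviewer first)
by_topic = defaultdict(list)
for s in statements:
    by_topic[s["topic"]].append(s)

for topic, items in sorted(by_topic.items(), key=lambda kv: min(s["priority"] for s in kv[1])):
    print(f"== {topic} ==")
    for s in sorted(items, key=lambda s: s["priority"]):
        print(f"  [{s['reviewer']}, {s['verdict']}] {s['issue']}")
```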

Point 8 is the core…
There are three basic options:

  • if justified: acknowledge that this is an issue and propose how to fix it
  • if misunderstood: explain again and state that you will improve the explanation in the final version
  • if unjustified: explain that this point may be disputed and provide additional evidence for why you think it should be as it is

The unjustified ones are the trickiest. We had cases where reviewers stated that the method we used was not appropriate; here a response could be to cite other work that used this method in the same context. Similarly, we had reviewers arguing that the statistical tests we used could not be applied to our data; here we explained in more detail the distribution of the data and why the test is appropriate. Sometimes it may be better to ignore cases where the criticism is unjustified – especially if it is not from the primary reviewer.

Some additional points

  • be respectful to the reviewers – they put work into reviewing the papers
  • if the reviewers did not understand something – we probably did not communicate it well
  • do not promise unrealistic things in the rebuttal
  • try to answer direct questions with precise and direct answers
  • if you suspect that one reviewer did not read the paper – do not write this directly – try to address the points (and perhaps add a hint that it is in the paper, e.g. “… as we already outline in Section X”)

If you do not research it – it will not happen?

Over the last days, plans to do research on the use of public data from social networks to calculate someone’s credit risk made big news (e.g. DW). The public (as voiced by journalists) and politicians showed strong opposition and declared that something like this should not be done – or, more specifically, that such research should not be done.

I am astonished by the reaction. Do people really think that if there is no research within universities this will not (or does not) happen? If you look at the value of Facebook (even after the last few weeks), it must be very obvious that there is value in social network data which people hope to extract over time…

Personal credit risk assessment (in Germany, Schufa) is widely used – from selling you a phone contract to lending you money when buying a house. If you believe that we need personal credit risk assessment, why would you argue that it should work on very incomplete data? Will that make it better? I think the logical consequence of the discussion would be to prohibit pricing based on personal credit risk ratings – but this, too, would be very unfair (at least to the majority). Hence the consequence we see now (the research is not done in universities) is probably not doing much good… it just pushes it into a place where the public sees little of it (and the companies will not publish it in a few years…).

Visiting the Culture Lab in Newcastle

While in the north of England I stopped by the Culture Lab in Newcastle. If the CHI conference is a measure of quality in human-computer interaction research, the Culture Lab is currently one of the places to be – if you are not convinced, have a look at Patrick Olivier’s publications. The lab is one of the few places where I think a real ubicomp spirit is left – people develop new hardware and devices (e.g. mini data acquisition boards, specific wireless sensors, embedded actuators), and interdisciplinary research plays a central role. This is very refreshing to see, especially as so many others in ubicomp have moved to mainly creating software on phones and tablets…

Diana, one of our former students from Duisburg-Essen, is currently working on her master’s thesis in Newcastle. She looks into new tangible forms of interaction on tabletop UIs. The actuation of controls is a central question. The approach she uses for moving things is, compared to other approaches, e.g. [1], very simple but effective – I am looking forward to reading the paper on the technical details (I promised not to tell any details here). The example application she has developed is in chemistry education.

Some years back, on a visit to the Culture Lab, I had already seen some of the concepts and ideas for the kitchen. Over the last years this has progressed, and the current state is very appealing. I really think the screens behind glass in the black design make a huge difference. Using a set of small sensors they have implemented a set of aware kitchen utensils [2]. Matthias Kranz (back in our group in Munich) worked on a similar idea and created a knife that knows what it cuts [3]. It seems worthwhile to explore the aware-artifacts vision further…

References
[1] Gian Pangaro, Dan Maynes-Aminzade, and Hiroshi Ishii. 2002. The actuated workbench: computer-controlled actuation in tabletop tangible interfaces. In Proceedings of the 15th annual ACM symposium on User interface software and technology (UIST ’02). ACM, New York, NY, USA, 181-190. DOI=10.1145/571985.572011 http://doi.acm.org/10.1145/571985.572011 

[2] Wagner, J., Ploetz, T., Halteren, A. V., Hoonhout, J., Moynihan, P., Jackson, D., Ladha, C., et al. (2011). Towards a Pervasive Kitchen Infrastructure for Measuring Cooking Competence. Proc Int Conf Pervasive Computing Technologies for Healthcare (pp. 107-114). PDF

[3] Matthias Kranz, Albrecht Schmidt, Alexis Maldonado, Radu Bogdan Rusu, Michael Beetz, Benedikt Hörnler, and Gerhard Rigoll. 2007. Context-aware kitchen utilities. In Proceedings of the 1st international conference on Tangible and embedded interaction (TEI ’07). ACM, New York, NY, USA, 213-214. DOI=10.1145/1226969.1227013 http://doi.acm.org/10.1145/1226969.1227013 (PDF)

Our Research at CHI2012 – usable security and public displays

This year we have the chance to share some of our research with the community at CHI2012. The work focuses on usable security ([1] and [2]) and public display systems [3]. Florian, together with the researchers from T-Labs, received a best paper award for [3].

Please have a look at the papers… I think it is really worthwhile.

Increasing the security of gaze-based graphical passwords [1]
“With computers being used ever more ubiquitously in situations where privacy is important, secure user authentication is a central requirement. Gaze-based graphical passwords are a particularly promising means for shoulder-surfing-resistant authentication, but selecting secure passwords remains challenging. In this paper, we present a novel gaze-based authentication scheme that makes use of cued-recall graphical passwords on a single image. In order to increase password security, our approach uses a computational model of visual attention to mask those areas of the image that are most likely to attract visual attention. We create a realistic threat model for attacks that may occur in public settings, such as filming the user’s interaction while drawing money from an ATM. Based on a 12-participant user study, we show that our approach is significantly more secure than a standard image-based authentication and gaze-based 4-digit PIN entry.” [1]

Assessing the vulnerability of magnetic gestural authentication [2]

“Secure user authentication on mobile phones is crucial, as they store highly sensitive information. Common approaches to authenticate a user on a mobile phone are based either on entering a PIN, a password, or drawing a pattern. However, these authentication methods are vulnerable to the shoulder surfing attack. The risk of this attack has increased since means for recording high-resolution videos are cheaply and widely accessible. If the attacker can videotape the authentication process, PINs, passwords, and patterns do not even provide the most basic level of security. In this project, we assessed the vulnerability of a magnetic gestural authentication method to the video-based shoulder surfing attack. We chose a scenario that is favourable to the attacker. In a real world environment, we videotaped the interactions of four users performing magnetic signatures on a phone, in the presence of HD cameras from four different angles. We then recruited 22 participants and asked them to watch the videos and try to forge the signatures. The results revealed that with a certain threshold, i.e., th=1.67, none of the forging attacks was successful, whereas at this level all eligible login attempts were successfully recognized. The qualitative feedback also indicated that users found the magnetic gestural signature authentication method to be more secure than PIN-based and 2D signature methods.” [2] There is also a YouTube video: http://www.youtube.com/watch?v=vhwURyTp_jY

Looking glass: a field study on noticing interactivity of a shop window [3]
“In this paper we present our findings from a lab and a field study investigating how passers-by notice the interactivity of public displays. We designed an interactive installation that uses visual feedback to the incidental movements of passers-by to communicate its interactivity. The lab study reveals: (1) Mirrored user silhouettes and images are more effective than avatar-like representations. (2) It takes time to notice the interactivity (approx. 1.2s). In the field study, three displays were installed during three weeks in shop windows, and data about 502 interaction sessions were collected. Our observations show: (1) Significantly more passers-by interact when immediately showing the mirrored user image (+90%) or silhouette (+47%) compared to a traditional attract sequence with call-to-action. (2) Passers-by often notice interactivity late and have to walk back to interact (the landing effect). (3) If somebody is already interacting, others begin interaction behind the ones already interacting, forming multiple rows (the honeypot effect). Our findings can be used to design public display applications and shop windows that more effectively communicate interactivity to passers-by.” [3]

References
[1] Andreas Bulling, Florian Alt, and Albrecht Schmidt. 2012. Increasing the security of gaze-based cued-recall graphical passwords using saliency masks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 3011-3020. DOI=10.1145/2208636.2208712 http://doi.acm.org/10.1145/2208636.2208712
[2] Alireza Sahami Shirazi, Peyman Moghadam, Hamed Ketabdar, and Albrecht Schmidt. 2012. Assessing the vulnerability of magnetic gestural authentication to video-based shoulder surfing attacks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 2045-2048. DOI=10.1145/2208276.2208352 http://doi.acm.org/10.1145/2208276.2208352
[3] Jörg Müller, Robert Walter, Gilles Bailly, Michael Nischt, and Florian Alt. 2012. Looking glass: a field study on noticing interactivity of a shop window. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 297-306. DOI=10.1145/2207676.2207718 http://doi.acm.org/10.1145/2207676.2207718

Introduction to the special issue on interaction beyond the desktop

After coming back from CHI2012 in Austin I found my paper copy of the April 2012 issue of IEEE Computer magazine in my letter box. This is our special issue on interaction beyond the desktop. Having the physical copy is always nice (probably because I grew up with paper magazines ;-).

The guest editors’ introduction [1] is an experiment, as we include photos from all papers on the theme. The rationale is that most people will probably not have the paper copy in their hands, and with the digital version it is harder to get an overview of the papers; that is why we think including the photos helps make readers curious to look at the papers in the issue. Please let us know if you think this is a good idea…

[1] Albrecht Schmidt and Elizabeth Churchill. Interaction Beyond the Keyboard. IEEE Computer, April 2012, pp. 21–24. (PDF). Link to the article in Computing Now.

Keynote at Percom 2012: Andy Hopper from Cambridge on Computing for the Future of the Planet

In his keynote “Computing for the Future of the Planet” Andy Hopper brought up four topics and touched briefly on each of them: (1) Optimal digital infrastructure – green computing, (2) Sense and optimize – computing for green, (3) Predict and react – assured computing, and (4) Digital alternatives to physical activities.

At the beginning of his talk he discussed an interesting (and, once he said it, very obvious) option for green computing: move computing towards the energy source, as it is easier to transmit data than to transmit power. Thinking about this, I could imagine Google’s server farms being moved to a sunny desert, with the calculations done while the sun is shining… and the cold of the night used to cool down… This could be extended to storage: storing data is easier than storing energy – this should open some opportunities.

As an example of an embedded sensing system, Andy Hopper presented a shoe with built-in force sensing (FSR) that measures contact time, which helps to work out speed. Their initial research was targeted at athletes; see Rob Harle’s page for details. It is, however, easy to imagine the potential this has if regular shoes can sense movement in everyday use. He hinted at the options if one could go to the doctor and analyze the change in one’s walking pattern over the last year.

In various examples Andy showed how Ubisense is used in commercial applications, production, and training. It seems that medium-resolution tracking (e.g. below 1 meter accuracy) can be reliably achieved with such off-the-shelf systems, even in harsh environments. He mentioned that the university installations of the system at an early product stage were helpful to improve the product and grow the company. This is interesting advice, and could be a strategy for other pervasive computing products, too. For close observers of the slides there were some interesting insights into the different production methods of BMW and Aston Martin and the required quality 😉

Power usage is a central topic in his lab’s work, and he showed several examples of how to monitor power usage in different scenarios. One example is monitoring power usage on the phone, implemented as an app that looks at how power is consumed and how re-charging is done. This data is then collected and shared – currently over 8,000 people are participating. For more details see Daniel T. Wagner’s page. A further example is the global personal energy meter. He envisions that infrastructure, e.g. trains and buildings, broadcasts information about its energy use and provides information about an individual’s share of it.

With the increasing proliferation of mobile phones, the users’ privacy becomes a major issue. He showed in his talk an example where privacy is provided by faking data. In this approach fake data, e.g. for calendar events, location data, and the address book, is provided to apps on the phone. By these means you can alter what an application sees (e.g. the location accuracy).
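
As a rough illustration of that last point (my own sketch, not the system Andy described), reducing the location accuracy an app gets to see could be as simple as coarsening the coordinates:

```python
# Hypothetical sketch: degrade location accuracy before an app sees it.
# Rounding to two decimal places corresponds to roughly 1 km resolution.
def fuzz_location(lat: float, lon: float, decimals: int = 2):
    """Return a coarsened position instead of the true one."""
    return round(lat, decimals), round(lon, decimals)

true_position = (52.520008, 13.404954)   # example coordinates, not from the talk
print(fuzz_location(*true_position))     # -> (52.52, 13.4)
```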

For more details and papers see the website of the digital technology group: http://www.cl.cam.ac.uk/research/dtg/www/

Opening talk at the Social Media for Insurances Symposium

I was invited to Leipzig to talk about social networks in the context of insurance companies (http://www.versicherungsforen.net/social-media). The main focus of the talk was to show what people currently do in social networks and to speculate about why they do it (I used a picture of the seven deadly sins as an illustration…). Additionally, I discussed some prototypes of activity recognition and their potential once integrated into social media.

My talk was entitled “500 Freunde (auf Facebook): Wozu noch eine Versicherung?“ – “500 friends (on Facebook) – Is there still a need for insurance?” and discussed how ubiquitous capture and social media may shape the next community [1]. The slides are in German.

The event was very interesting, and I would expect that there is great potential out there for insurance companies to tap into. Looking back at the original idea of insurance (e.g. old fire insurance communities, or sharing the risk of hail in farming communities) can give interesting inspiration for peer-to-peer insurance models. It will be exciting to see whether new products and services come out of the “big players” or whether new players will come into the game. To me the central issue to address is how to make insurance products more visible – and I think a user-centered design approach could be very interesting…

In the future I would expect that finding the right value mix (privacy, price, safety, etc.) will be essential, as we argued for other services in [2]. Some years back we wrote in an article about RFID [3] that “privacy is sacred but cheap”, and the more services we see, the more I am convinced that this is more than a slogan. If you can create a service that is of immediate value to the user, I would expect that privacy will be a lesser concern to most. On the other hand, if you reduce privacy without any value in exchange, there is always an outcry…

[1] “500 Freunde (auf Facebook): Wozu noch eine Versicherung?“ – Ermöglichen allgegenwärtige Aufzeichnungs-technologien und digitale soziale Netze die nächste Solidargemeinschaft? Slides as PDF (in German)
[2] Albrecht Schmidt, Marc Langheinrich, Kristian Kersting, “Perception beyond the Here and Now,” Computer, vol. 44, no. 2, pp. 86-88, Feb. 2011, doi:10.1109/MC.2011.54 (final version at IEEE, free draft version)
[3] Schmidt, A.; Spiekermann, S.; Gershman, A.; Michahelles, F., “Real-World Challenges of Pervasive Computing“, IEEE Pervasive Computing, vol. 5, no. 3, pp. 91-93, c3, July-Sept. 2006. DOI=10.1109/MPRV.2006.57

Facebook – a platform to spot when companies go bankrupt? Real world example.

In Germany the drugstore chain Schlecker announced that it is insolvent; see the Reuters news post. If you look at the company’s Facebook page and scan the comments from the last 4 weeks, it is apparent that some people in the crowd, as well as employees, expected this already last year.
Schlecker is a large drugstore chain with probably over 10,000 outlets in Europe and more than 30,000 employees.

The following screen shots show some selected examples I took from the following page: http://www.facebook.com/schlecker.drogerie 
The posts are in German – the minimal summary should give you some idea…

In this one the company wishes everyone a happy Christmas and reminds people of a chance to win a car. The first replies echo the holiday greetings, but then one complains that they let their shops bleed out (run empty) and that the ordered goods do not arrive (probably posted by an employee). One further speculates that the company is close to bankruptcy. (over 3 weeks before the official note of insolvency)


The company announces a 2 euro discount on a product. Then employees post that they would like to sell the goods to the customers but that they do not get the goods for their shops. Additionally they complain that the goods they get from other closed-down shops are not what they need. One says we want to work but we can’t (as they are running out of stock). (over 2 weeks before the official note of insolvency)

The company announces price reductions on some goods. Someone says that is great – but it would be much better if these goods were actually in the shops to buy. (9 days before the official note of insolvency)

Overall I think this is an instructive real-world example of the information that can be found in social networks about the health and value of companies. In particular, the mix of customers and employees posting makes it a good example to study. I would expect that companies will learn lessons from this with regard to guidelines for employees… and about transparency and openness… To understand how reliable such posts are, we probably need to do some more research – let us know if you are interested in working on this with us.

Doktorandenkolleg

Welcome to the VIS(US) Doktorandenkolleg!

The Institute for Visualization and Interactive Systems (VIS) invites doctoral students to a scientific exchange and to learn about post-PhD career perspectives in industry and academia at the Doktorandenkolleg 2012.

When? 6–8 February 2012
Where? Waldhotel Zollernblick, Freudenstadt
Who? Doctoral students of VIS(US)
Organizers: Tom Ertl, Martin Fuchs, Albrecht Schmidt, Daniel Weiskopf
Institute for Visualization and Interactive Systems (VIS)
Visualization Institute of the University of Stuttgart (VISUS)

Preliminary program

Day 1 – 6 February 2012
Skiing if there is enough snow (= a day of vacation 😉 )
Travel to Freudenstadt (organized by arrangement)
6 pm: Joint dinner
8 pm: Prof. Dr. Rul Gunzenhäuser: Theses and predictions from the field of computer science
8:30 pm: Albrecht Schmidt: “Die Welt in 100 Jahren” (the world in 100 years) – a look back at a book written by scientists in 1910; we will develop scenarios for the next 100 years

Day 2 – 7 February 2012
08:45 Introduction and opening
09:00 Talk: Andrés Bruhn
Introduction of his research area and the new working group
10:00 “FastForward” poster session 1
Short presentations (elevator talks) – 90 seconds (strict!) per person
20 presentations, mixed across topics
Goal: explain your dissertation topic and research area so that any computer scientist can understand it, and make people curious about your poster
10:30 – 11:30 Coffee break, poster exhibition and discussions at the posters
11:30 – 12:30 Tracks A, B, and C: Session 1
3 talks @ 10 minutes per track
12:30 – 14:00 Lunch
14:00 – 15:00 Tracks A, B, and C: Session 2
3 talks @ 10 minutes per track
15:30 – 16:00 “FastForward” poster session 2
Short presentations (elevator talks) – 90 seconds (strict!) per person
20 presentations, mixed across topics
Goal: explain your dissertation topic and research area so that any computer scientist can understand it, and make people curious about your poster
16:00 – 17:00 Coffee break, poster exhibition and discussions at the posters
17:00 – 18:00 Free time 🙂
6 pm Joint dinner
8 pm Studying computer science – what makes it attractive?
How should we design our degree programmes?
How do we attract the best students?
Discussion and group work
Day 3 – 8 February 2012
08:30 – 10:30 Career paths after the PhD

  • Profiles and requirements
  • Academic career abroad (e.g. USA, UK)
  • Consulting
  • Developer (e.g. at Google)
  • Management
  • Professor at a university of applied sciences
  • Professor at a university
  • Founding a company
  • Researcher in a research lab

Discussion

10:30 – 11:00 Coffee break
11:00 – 12:00 Tracks A, B, and C: Session 3
2 talks @ 10 minutes per track, discussion, publication strategy
12:00 Lunch
Departure, return to Stuttgart
Possibly skiing (if there is snow and interest…)

Submission of contributions

Starting now: register by e-mail to anja.mebus@vis.uni-stuttgart.de (subject: “DOKO-2012”; please include your address, the working title of your dissertation project, and the name of your advisor)
By 24 January 2012: submission of a short abstract of your contribution to the Doktorandenkolleg
(max. 1 page, following the guidelines below)
Note: The number of places at the Doktorandenkolleg is limited. If the number of registrations exceeds the available capacity, the organizers will decide on the acceptance of contributions!
By 2 February 2012: feedback
By 5 February 2012: submission of the final version of the contribution

Registration of contributions

With the doctoral colloquium we want to motivate everyone doing a PhD at VIS and VISUS to report on and discuss their dissertation topic. By 24 January 2012, each participant should write a contribution of about one page (templates below) containing the following sections:

Problem description and research question

  • What problem do you want to solve with your research?
  • Why is it important to solve this problem?
  • Why should anyone pay for research on this question?
  • What is the central research question, and what concretely do you want to find out?
  • What is the expected gain in knowledge?

Approach and method

  • How do you carry out your research? Is your research theoretical, experimental, or empirical?
  • How do you verify or evaluate your results?
  • How do you ensure the correctness and quality of your results?
  • Briefly explain your approach and justify why it is appropriate for your research. Which alternative approaches would be possible, and why do you not use them?
  • Which methods do you use?

Related work

  • What are the three most important works by other research groups that your research relates to?
  • How have these works influenced you?
  • What do you do better than previous work? Where does your work contribute something new?

Preliminary results

  • What have you found out so far? Describe your preliminary results.
  • Why should we trust these results? How have you verified them?
  • What further results do you expect?

Next steps

  • What are the next steps in your work? What is still missing for the work to become a dissertation?
  • Where do you need further (external) expertise? Where would collaborations be helpful?

Template and submission
Please use the following template for your submission. Please send your contribution as a PDF to anja.mebus@vis.uni-stuttgart.de (subject: “DOKO-2012-Beitrag”)

Example: PDF
LaTeX template: ZIP archive
MS Word 97-2003 template: DOC
MS Word 2007 template: DOCX

Auto-UI 2012 in the US, looking for hosts for 2013

The next and fourth international conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI 2012) will be in Portsmouth, New Hampshire in the USA. The dates for the conference are 17–19 October 2012: the first day is for workshops and tutorials, followed by two days for the main conference. Portsmouth is about a one-hour drive from Boston and the timing is great (fall foliage – the photos of the colorful forests looked good 😉 )

The steering committee (sc@auto-ui.org) is inviting proposals for Auto-UI 2013 from the community of researchers in the field. The conference was held in 2009 in Essen (Germany), 2010 in Pittsburgh (USA), 2011 in Salzburg (Austria), and will be in 2012 in Portsmouth (USA). Keeping this cycle between Europe and North America, 2013 should be in Europe.

Bryan Reimer: Opening keynote at Auto-UI 2011 in Salzburg

Bryan started his keynote talk at the automotive user interface conference (auto-ui.org) in Salzburg by reminding us that having controversial discussions about the HMI in the car is not new. Quoting a newspaper article from the 1930s on the introduction of the radio in the car and its impact on the driver, he picked an interesting example that can be seen as the root of many issues we now have with infotainment systems in the car.

The central question he raised is: how do we create user interfaces that fit human users? He made an important point: humans are not “designed” to drive at high speed in complex environments; perception has evolved for walking and running in natural environments. In addition to the basic limitations of human cognition, there is great variety in drivers’ capabilities, skills, and cognitive abilities (e.g. the influence of age). An implication of the global change in demographics is that the average capabilities of drivers will be reduced – basically because many more drivers will be older people…

Over the last 100 years cars have changed significantly! Looking more closely, Bryan argues that much of the change happened in the last 10 years; there was little change from the 1950s to the 1990s with regard to the car user interface.

It is apparent that secondary tasks are becoming more important to the user. Users will interact more while driving because they can. It is, however, not obvious that they are capable of it.

Even given these developments, it is apparent that driving has become safer. Passive safety has been improved massively, and this has made driving much safer. There seems to be a drawback to this as well, as people may take greater risks because they feel safer. The next step is really to avoid accidents in the first place. Bryan argues that the interaction between driver, environment, and vehicle is very important in this. He suggests that we should make more of an effort to create systems that fit the drivers.

The Yerkes-Dodson Law helps to understand how to design systems that keep people’s attention at the level of optimal performance. He made an important point: there are certain issues that cannot be solved, e.g. if someone is tired there is very little we can do – the driver will need to rest. We should make sure that we take these things into account when designing systems.

Visual distraction is an obvious factor and was much discussed in the papers at the conference – but Bryan argued that “eyes on the road” is not equal to “mind on the road”. I think this is a very important point. Ensuring that people keep their eyes on the road and see things is not enough. The big resulting question is how to keep or get people focused on the street and the environment. It seems there is some more research to do…

The variety of interfaces and interaction metaphors built into cars opens more choices, but at the same time creates problems, as people need to learn and understand them. A simple question such as “How do you switch the car off?” may be hard to answer (Bryan had the example of a car with a push-button starter where you cannot remove the key). I think there are simple things that can be learned from industry and production machines… add an emergency stop button and make it mandatory 😉

If you are interested in more of Bryan’s work, have a look at his webpage or his page at the MIT AgeLab, or at one of his recent publications [1] in the IEEE Pervasive Computing Magazine’s special issue on automotive computing; see [2] for an introduction to the special issue.

Sorry for the poor quality photos … back row and an iPhone…

[1] Joseph F. Coughlin, Bryan Reimer, and Bruce Mehler. 2011. Monitoring, Managing, and Motivating Driver Safety and Well-Being. IEEE Pervasive Computing 10, 3 (July 2011), 14-21. DOI=10.1109/MPRV.2011.54 http://dx.doi.org/10.1109/MPRV.2011.54

[2] Albrecht Schmidt, Joseph Paradiso, and Brian Noble. 2011. Automotive Pervasive Computing. IEEE Pervasive Computing 10, 3 (July 2011), 12-13. DOI=10.1109/MPRV.2011.45 http://dx.doi.org/10.1109/MPRV.2011.45

Guests in my multimodal interaction class

Today I brought three more professors with me to teach the class on multimodal interaction (a trick I learned from Hans). As we had the pd-net project meeting, Nigel Davies, Marc Langheinrich, and Rui José were in Stuttgart and ‘volunteered’ to give a talk.

Nigel talked about the work in Lancaster on the use of mobile computing technology to support sustainable travel. He explained the experiments they conducted for collecting and sharing travel-related information. In the 6th Sense Transport project they look beyond understanding the current context towards predictions and eventually ‘time travel’ 😉

Marc presented a one-hour version of his tutorial on privacy, introducing the terminology and explaining the many facets this topic has. We discussed the ‘NTHNTF’ argument (Nothing To Hide, Nothing To Fear), and Marc used the example of AOLstalker.com to show the weaknesses of this argument. Marc suggested some reading if you want to dive into the topic, see [1,2,3,4].

Rui focused his lecture on pervasive public displays. He gave an overview of typical architectures for digital signage systems and the resulting limitations. The pd-net approach aims at creating an open platform that allows many different applications and use cases. He showed one concept of using virtual pin badges to trigger content and to express interest in a certain topic.

There is more information on the pd-net project on http://pd-net.org

[1] David Brin. The Transparent Society. Perseus Publishing, 1999.
[2] Simson Garfinkel: Database Nation – The Death of Privacy in the 21st Century. O’Reilly, 2001.
[3] Lawrence Lessig: Code and Other Laws of Cyberspace. Basic Books, 2006. http://codev2.cc/
[4] Waldo, Lin, Millett (eds.): Engaging Privacy and Information Technology in a Digital Age. National Academies Press, 2007.

Call for Papers: Symposium on Pervasive Display Networks

Rui José and Elaine Huang are chairing an international symposium on pervasive displays in Portugal. The conference will be held June 4-5 2012 in Porto. The submission deadline for full papers is January 16th, 2012.

With our research in the PD-net project we encounter many interesting research questions and have met many other researchers interested in the topic. The many real deployments of electronic displays are fueling ideas and make it obvious that research is required to understand the properties of this new, upcoming medium. The call states: “As digital displays become pervasive, they become increasingly relevant in many areas, including advertising, art, sociology, engineering, computer science, interaction design, and entertainment.”

We hope that this symposium will bring together researchers and practitioners as well as users to share research results and generate new ideas.

Submissions that report on cutting-edge research in the broad spectrum of pervasive digital displays are invited, ranging from large interactive walls to personal projection, from tablets and mobile phone screens to 3-D displays and tabletops. Topics include:

  • Novel technologies
  • Architecture
  • Applications
  • Domains and formative studies
  • Evaluations and deployments
  • Interfaces and interaction techniques
  • Content design

Have a look at the webpage and the call for papers at http://pervasivedisplays.org/cfp.php

Closing Keynote at AMI2011, Beyond Ubicomp – Computing is Changing the Way we Live

On Friday afternoon I had the privilege to present the closing keynote at AMI2011 in Amsterdam with the title ‘Beyond Ubicomp – Computing is Changing the Way we Live’. The conference featured research in Ambient Intelligence ranging from networking and system architecture to interfaces and ethnography. It brought an interesting set of people together and it was good to see many students and young researchers presenting their work.

In my closing keynote I talked about my experience of the last 13 years in this field and about a vision of the future. My vision is based on a basic technology assessment – essentially looking at which technologies will (in my view) definitely come over the next 20 years and at the implications of this. I started out with a short reference to Mark Weiser’s now 20-year-old article [1]. The upcoming issue of IEEE Pervasive Magazine will have an in-depth analysis of the 20 years since Weiser’s article – we also have an article in there on how interaction has evolved.

The vision part of the talk looked at “Perception beyond the Here and Now” [2] from three different angles:

  • Paradigm Shift in Communication
    Here I argue that the default communication in the future will be public communication, and only if something is secret will we try to use a non-public channel. First indicators of this are the switch from email to Twitter and Facebook. I used a cake-baking example to highlight the positive aspects of this shift.
  • Steep Increase in Media Capture
    The second angle is simply observing and extrapolating the increase in the capture of media information. If you go on YouTube already now, you will find information about many things (baking a cake, repairing a bike, etc.). The implication of this increase in media capture will be virtually unlimited access to the experiences other people share.
  • Transformation of Experienced Perception
    The final angle is that this creates a new way of perceiving the world. We will extend perception beyond the here and now, and this brings a completely new way of creating and accessing information. I used the example of enquiring about buying an international train ticket at the station in Amsterdam: if you can look there through other people’s eyes, the question becomes trivial.

My overall argument is that we are in for a major transformation of our knowledge and information culture. I would expect that this shift is as radical as the shift from oral tradition to written societies – but the transition will be much quicker and in the context of a globalized and competitive world.

The main conclusion from this is: Ethics and values are the central design material of this century.

Looking at Twitter, it seems it got across to some in the audience 😉 If you are interested, too, have a look at the slides from the keynote.

[1] Mark Weiser. The computer for the 21st century. Scientific American, Vol. 265, No. 3. (1991)
[2] Albrecht Schmidt, Marc Langheinrich, and Kristian Kersting. 2011. Perception beyond the Here and Now. Computer 44, 2 (February 2011), 86-88. DOI=10.1109/MC.2011.54 http://dx.doi.org/10.1109/MC.2011.54

hci4schools

The day started at 9 am with a meeting. We talked about several projects and also about what I would get to do during the week. That sounded very promising.

Keynote: Steve Benford talking on “Designing Trajectories Through Entertainment Experiences”

On Tuesday morning Steve Benford presented the entertainment interfaces keynote. He is interested in how to use computer technology to support performances. Steve works a lot with artist groups, where the university is involved in implementing, running, and studying the experiences. The studies are typically done by means of ethnography. The goal of this research is to uncover the basic mechanisms that make these performances work and potentially transfer the findings to human-computer interaction more generally.

I particularly liked the example of “Day of the Figurines“. Steve showed the video of the experience they created and discussed the observations and findings in detail. He related this work to the notion of trajectories [1], [2]. He made the point that historic trajectories are especially well suited to support spectators.

Some years back I worked with Steve in the Equator project, and we even have a joint publication [3] 🙂 When looking for these references I came across another interesting paper – related to thrill and excitement – which he discussed in the final part of the talk [4].

PS: we had a great party on Monday night, but the attendance was still extremely good 🙂

[1] Benford, S. and Giannachi, G. 2008. Temporal trajectories in shared interactive narratives. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 73-82. DOI= http://doi.acm.org/10.1145/1357054.1357067

[2] Benford, S., Giannachi, G., Koleva, B., and Rodden, T. 2009. From interaction to trajectories: designing coherent journeys through user experiences. In Proceedings of the 27th international Conference on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI ’09. ACM, New York, NY, 709-718. DOI= http://doi.acm.org/10.1145/1518701.1518812

[3] Benford, S., Schnädelbach, H., Koleva, B., Anastasi, R., Greenhalgh, C., Rodden, T., Green, J., Ghali, A., Pridmore, T., Gaver, B., Boucher, A., Walker, B., Pennington, S., Schmidt, A., Gellersen, H., and Steed, A. 2005. Expected, sensed, and desired: A framework for designing sensing-based interaction. ACM Trans. Comput.-Hum. Interact. 12, 1 (Mar. 2005), 3-30. DOI= http://doi.acm.org/10.1145/1057237.1057239

[4] Schnädelbach, H., Rennick Egglestone, S., Reeves, S., Benford, S., Walker, B., and Wright, M. 2008. Performing thrill: designing telemetry systems and spectator interfaces for amusement rides. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 1167-1176. DOI= http://doi.acm.org/10.1145/1357054.1357238

Opening Keynote of Mensch&Computer 2010 by Ed H. Chi

Ed H. Chi from PARC presented the opening keynote for Mensch&Computer 2010. In the motivation of the talk he showed a document on “Applied Information processing psychology” from 1971 – probably very few had seen this before. It makes an argument for an experimental science that is related to augmented cognition. The basic idea is very similar to Vannevar Bush’s Memex – to extend the human cognitive power by machines (and especially computer technology). It is apparent that these ideas became the backdrop of the many innovations that happened at PARC in the early days.

Ed stressed that there is still a lot of potential for the application of psychological phenomena and models to human computer interaction research. As an example he used the idea that speech output in a navigation system could use your name in an important situation making use of the attenuation theory of attention (the cocktail party effect). By hearing your name you are more likely to listen – even if you are yourself in a conversation. The effect may be stronger if the voice is your mother’s voice 😉

The main part of the talk centered on model-driven research in HCI. Using the ScentHighlights [1] example he outlined the process. I very much liked the broad view Ed has on models and the various uses of models he suggested, e.g. generative models that generate ideas, or behavioral models that lead to additional functionalities (as an example he used: people are sharing search results in Google, hence sharing should be a basic function in a search tool). Taking the example of Wikipedia he showed how models can be used to predict interaction and growth. I found the question on the growth of knowledge very exciting. I think it is definitely not finite 😉 otherwise research is a bad career choice. Looking at the Wikipedia example it is easy to imagine that the carrying capacity is a linear function, and hence one could use a predictive function where a logistic growth curve is overlaid with a linear function.
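
As a sketch of that last idea (my own notation, not from the talk): standard logistic growth with a carrying capacity that itself grows linearly over time could be written as

```latex
\frac{dN}{dt} = r\,N(t)\left(1 - \frac{N(t)}{K(t)}\right), \qquad K(t) = K_0 + c\,t
```

where N(t) is the number of articles, r the growth rate, and K(t) the linearly increasing carrying capacity, so the curve flattens locally like a logistic function but keeps rising with the linear term.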

Random link from the talk: http://mrtaggy.com/

Ed discussed Yahoo’s social pattern library:
http://developer.yahoo.com/ypatterns/social/people/reputation/
This pattern library is pretty interesting; I found the reputation patterns quite comprehensive. It seems that this library is now mature enough to use for real projects and in teaching.

[1] Chi, E. H., Hong, L., Gumbrecht, M., and Card, S. K. 2005. ScentHighlights: highlighting conceptually-related sentences during reading. In Proceedings of the 10th international Conference on intelligent User interfaces (San Diego, California, USA, January 10 – 13, 2005). IUI ’05. ACM, New York, NY, 272-274. DOI= http://doi.acm.org/10.1145/1040830.1040895

Talk at the University of New Hampshire, Durham

Andrew Kun invited me to give a talk at the University of New Hampshire in Durham on my way back from CHI. The talk was on “Embedding Interaction – Human Computer Interaction in the Real World”. In the afternoon I got to see interesting projects in the automotive domain as well as an application on a multi-touch table. At CHI we ran a SIG on Automotive User Interfaces [1].

Seeing the implementation of Project54 live was very exciting. I came across the project first at Pervasive 2005 in Munich [2]. This project is an interesting example of how fast research can become deployed on a large scale.

Andrew chairs together with Susanne Boll the 2nd Int. Conf. on Automotive User Interfaces and Interactive Vehicular Applications – check out the call for papers on http://auto-ui.org/! (deadline 2nd of July 2010)

PS: if you ever stay in Durham – here is my favorite hotel: Three Chimneys Inn Durham.

[1] Schmidt, A., Dey, A. K., Kun, A. L., and Spiessl, W. 2010. Automotive user interfaces: human computer interaction in the car. In Proceedings of the 28th of the international Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI EA ’10. ACM, New York, NY, 3177-3180. DOI= http://doi.acm.org/10.1145/1753846.1753949

[2] Laslo Turner and Andrew L. Kun, “Evaluating the Project54 speech user interface,” Third International Conference on Pervasive Computing (Adjunct Proceedings), Munich, Germany, May 8-13, 2005

Our Paper and Note at CHI 2010

Over the last year we looked more closely into the potential of eye gaze for implicit interaction. Gazemarks is an approach where the user’s gaze is continuously monitored; when the gaze leaves a screen or display, the last active gaze area is determined and stored [1]. When the user looks back at this display, this region is highlighted. By this, the time for attention switching between displays was reduced in our study from about 2000 ms to about 700 ms. See the slides or the paper for details. This could make the difference that enables people to safely read in the car… but before that, more studies are needed 🙂
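
The following is a rough, hypothetical sketch of the Gazemarks idea as described above – not the actual implementation from [1]; the tracker callback and function names are made up:

```python
# Remember the last gaze region per display and highlight it when the gaze returns.
last_gaze_area = {}      # display id -> (x, y) of the last fixation on that display
current_display = None   # display the user is currently looking at

def on_gaze_sample(display_id, x, y):
    """Called for every gaze sample delivered by a (hypothetical) eye tracker."""
    global current_display
    if display_id != current_display:
        # the gaze arrived at a different display: if we stored a gazemark for it,
        # highlight that region to ease re-orientation
        if display_id in last_gaze_area:
            highlight(display_id, *last_gaze_area[display_id])
        current_display = display_id
    # continuously remember the last active gaze area of the display we look at
    last_gaze_area[display_id] = (x, y)

def highlight(display_id, x, y):
    print(f"highlight region around ({x}, {y}) on display {display_id}")
```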

Together with Nokia Research Center in Finland we looked at how we can convey the basic message of an incoming SMS already with the notification tone [2]. Try the Emodetector application for yourself or see the previous post.

[1] Kern, D., Marshall, P., and Schmidt, A. 2010. Gazemarks: gaze-based visual placeholders to ease attention switching. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ’10. ACM, New York, NY, 2093-2102. DOI= http://doi.acm.org/10.1145/1753326.1753646

[2] Sahami Shirazi, A., Sarjanoja, A., Alt, F., Schmidt, A., and Hkkilä, J. 2010. Understanding the impact of abstracted audio preview of SMS. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ’10. ACM, New York, NY, 1735-1738. DOI= http://doi.acm.org/10.1145/1753326.1753585

PS: the social event was at the aquarium in Atlanta – amazing creatures! Again surprised how well the N95 camera works even under difficult light conditions…

Work in Progress at CHI 2010

It seems we have a lot of work in progress 🙂 and CHI is a great place to get feedback and talk to people about it.

Florian Alt and others from the summer school in Haifa pushed one of the ideas they developed there further. It is about interactions and technologies to motivate a more thoughtful handling of trash in urban areas [1].

Tanja Döring and Bastian Pfleging developed, with Chris Kray in Nottingham, the idea of tangible devices that have a functional core and a passive shell [2]. By this we imagine that future tangible products can be created by designers and developers with no need for the production of electronics. As a side effect, this approach could make consumer electronics more sustainable – even if you often like new gadgets.

Together with people from DFKI Saarbrücken we explored the potential of a multi-touch steering wheel [3]. What gestures would you do to switch on your radio? How to interact with the navigation system? Such questions are empirically explored and presented in this paper.

How many people have a phone? How many people have a PC? It is very clear that more people have a phone than a PC, and in particular, in the non-industrialized parts of the world, for many people the only computing technology available is the phone. We think there are ways to efficiently develop software using a phone for the phone. In the paper we explored a paper-based and computer-vision-based approach for software development on the phone [4].

Elba did field studies in Panama to assess children’s access to phones as an educational tool [5]. She compared different parts of the country and did interviews with teachers.

[1] Reif, I., Alt, F., Hincapié Ramos, J., Poteriaykina, K., and Wagner, J. 2010. Cleanly: trashducation urban system. In Proceedings of the 28th of the international Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI EA ’10. ACM, New York, NY, 3511-3516. DOI= http://doi.acm.org/10.1145/1753846.1754010

[2] Doering, T., Pfleging, B., Kray, C., and Schmidt, A. 2010. Design by physical composition for complex tangible user interfaces. In Proceedings of the 28th of the international Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI EA ’10. ACM, New York, NY, 3541-3546. DOI= http://doi.acm.org/10.1145/1753846.1754015

[3] Pfeiffer, M., Kern, D., Schöning, J., Döring, T., Krüger, A., and Schmidt, A. 2010. A multi-touch enabled steering wheel: exploring the design space. In Proceedings of the 28th of the international Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI EA ’10. ACM, New York, NY, 3355-3360. DOI= http://doi.acm.org/10.1145/1753846.1753984

[4] Pfleging, B., Valderrama Bahamondez, E. d., Schmidt, A., Hermes, M., and Nolte, J. 2010. MobiDev: a mobile development kit for combined paper-based and in-situ programming on the mobile phone. In Proceedings of the 28th of the international Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI EA ’10. ACM, New York, NY, 3733-3738. DOI= http://doi.acm.org/10.1145/1753846.1754047

[5] Valderrama Bahamóndez, E. d. and Schmidt, A. 2010. A survey to assess the potential of mobile phones as a learning platform for panama. In Proceedings of the 28th of the international Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI EA ’10. ACM, New York, NY, 3667-3672. DOI= http://doi.acm.org/10.1145/1753846.1754036

CHI 2010 – My Random Pick of the Day

Harrison, C., Tan, D., and Morris, D. 2010. Skinput: appropriating the body as an input surface. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ’10. ACM, New York, NY, 453-462. DOI= http://doi.acm.org/10.1145/1753326.1753394

Kramer, A. D. 2010. An unobtrusive behavioral model of “gross national happiness”. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ’10. ACM, New York, NY, 287-290. DOI= http://doi.acm.org/10.1145/1753326.1753369

Brandt, J., Dontcheva, M., Weskamp, M., and Klemmer, S. R. 2010. Example-centric programming: integrating web search into the development environment. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ’10. ACM, New York, NY, 513-522. DOI= http://doi.acm.org/10.1145/1753326.1753402

Sheng, S., Holbrook, M., Kumaraguru, P., Cranor, L. F., and Downs, J. 2010. Who falls for phish?: a demographic analysis of phishing susceptibility and effectiveness of interventions. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ’10. ACM, New York, NY, 373-382. DOI= http://doi.acm.org/10.1145/1753326.1753383

EmoDetector App is online – Hear The Feeling of SMS

EmoDetector, by the University of Duisburg-Essen and Nokia Research Center, is an application that provides auditory cues in addition to the notification tone upon receiving an SMS, based on a real-time analysis of the message’s contents, see [1]. The application currently responds to the following character sets (a small rule-based sketch follows the list):

  • :-) or :)
  • :-( or :(
  • ;-) or ;)
  • ok (case insensitive)
  • ?
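
As an illustration of the kind of lightweight, rule-based matching such an app could use, here is a minimal sketch. This is my own illustrative code, not the actual EmoDetector implementation; the cue names, rule ordering, and fallback behavior are assumptions.

```python
import re

# Ordered list of (cue, pattern) pairs; the first match wins.
CUE_RULES = [
    ("happy",    re.compile(r":-?\)")),          # :-) or :)
    ("sad",      re.compile(r":-?\(")),          # :-( or :(
    ("wink",     re.compile(r";-?\)")),          # ;-) or ;)
    ("ok",       re.compile(r"\bok\b", re.I)),   # "ok", case insensitive
    ("question", re.compile(r"\?")),             # any question mark
]

def classify_sms(text: str) -> str:
    """Return the auditory cue to play in addition to the normal notification tone."""
    for cue, pattern in CUE_RULES:
        if pattern.search(text):
            return cue
    return "neutral"  # no extra cue for messages without a recognized pattern

print(classify_sms("Are we still meeting at 8?"))   # -> question
print(classify_sms("ok, see you there :-)"))        # -> happy (smiley rule is checked first)
```

The ordered first-match-wins design keeps the mapping predictable when a message contains several of the patterns at once.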

There are versions for Nokia Series 60 phones and for Android; see the download site. Have a look at the website http://www.emodetector.tom-lab.de/ for more information.

[1] Sahami Shirazi, A., Sarjanoja, A., Alt, F., Schmidt, A., Häkkilä, J.: Understanding the Impact of Abstracted Audio Preview of SMS. In Proceedings of CHI 2010, April 10-15, Atlanta, GA, USA

CHI 2010 – Opening and Keynote

2343 attendees came to CHI 2010 in Atlanta this year. Participants come from 43 countries, and the colored map suggested that a good number came from Germany. Outside it really feels like spring 🙂

Overall, CHI 2010 received 2220 submissions across 13 categories, of which 699 were accepted. In the papers and notes categories there were 1345 submissions, of which 302 were accepted (a 22% acceptance rate).

Genevieve Bell from Intel is a cultural anthropologist, and she presented the CHI opening keynote with the title “Messy Futures: culture, technology and research”. She is a great storyteller and showed, by example, the value of ethnographic and anthropological research. One very graphic example was a picture of how real consumers actually live – typically not in a perfect environment, but rather amid clutter and mess …

A further issue she briefly addressed was demographic shifts and urbanization (soon three quarters of people will live in cities). This led on to an argument for designing for real people and their real needs (in contrast to the idea of designing for women by “shrinking and pinking it”).

Genevieve Bell discussed critical domains that drive technology: politics, religion, sex, and sports. She argued that CHI and Ubicomp have not really looked at these topics – or at least people did not publish on them in CHI 😉 Her examples were quite entertaining and made the keynote fun to listen to – but it created little controversy.

Zeitgeist, GNOME Activity Journal etc. – Workshop at CHI

On Saturday there was a workshop on monitoring, logging, and reflecting: Know Thyself: Monitoring and Reflecting on Facets of One’s Life. In the workshop we discussed technologies and concepts for monitoring and using personal information. I started out by asking the question: who knows what about you? The list grows quickly (e.g. telecom provider, travel agent, supermarket, bank, mail provider, Facebook, etc.), and so does the set of information these entities hold about you. It becomes clear that they keep a better record about an individual than the individuals themselves. Hence our central suggestion is that the user – the one person who could have easy access to all of this information – should make more of it and benefit from it; for more see the paper [1] and the slides from the talk.

Zeitgeist Magic from Seif Lotfy on Vimeo.

There is more information available about the workshop and the topic in general.

My pick of the contributions is the Dunbar email mining system from Stanford.

PS: CHI is good for your health 🙂

[1] Thorsten Prante, Jens Sauer, Seif Lotfy, Albrecht Schmidt. Personal Experience Trace: Orienting Oneself in One’s Activities and Experiences. CHI 2010 workshop on Know Thyself: Monitoring and Reflecting on Facets of One’s Life.

Full Paper and Work in Progress at Percom 2010

Together with Matthias Kranz and Carl Fischer we had a full paper at Percom 2010 – and I had the honor of presenting it [1]. The paper reports work that explored using the existing DECT infrastructure (the cordless phone standard, available especially in Europe) as a basic technology for localization. We compared DECT and WiFi, and it is interesting that in most places you see more DECT base stations than WiFi access points. Overall it is a really interesting alternative to WLAN-based localization.
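
Signal-strength fingerprinting is a common baseline technique for this kind of indoor localization, regardless of whether the beacons are WLAN access points or DECT base stations. Below is a minimal nearest-neighbour fingerprinting sketch; the station IDs, RSSI values, and distance metric are illustrative assumptions, not the setup evaluated in the paper.

```python
import math

# Offline phase: signal-strength fingerprints recorded at known locations.
# Keys are hypothetical base-station IDs, values are RSSI readings in dBm.
FINGERPRINTS = {
    "office_1": {"bs_a": -48, "bs_b": -71, "bs_c": -85},
    "office_2": {"bs_a": -62, "bs_b": -55, "bs_c": -80},
    "corridor": {"bs_a": -70, "bs_b": -66, "bs_c": -60},
}

def signal_distance(scan, reference, missing=-100):
    """Euclidean distance in signal space; stations not seen get a floor value."""
    stations = set(scan) | set(reference)
    return math.sqrt(sum(
        (scan.get(s, missing) - reference.get(s, missing)) ** 2 for s in stations
    ))

def locate(scan):
    """Online phase: return the stored location whose fingerprint is closest to the live scan."""
    return min(FINGERPRINTS, key=lambda loc: signal_distance(scan, FINGERPRINTS[loc]))

print(locate({"bs_a": -50, "bs_b": -70, "bs_c": -88}))  # -> office_1
```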

From the joint work with Docomo-Eurolabs in Munich in the project AmbiVis we presented a work in progress poster. In the project we look at different options for visualizing context information – especially in ambient ways [2]. As display technologies we employed the Nabaztag and a digital picture frame.

[1] Matthias Kranz, Carl Fischer, Albrecht Schmidt: A Comparative Study of DECT and WLAN Signals for Indoor Localization. In: 8th Annual IEEE International Conference on Pervasive Computing and Communications (Percom 2010). IEEE, Mannheim, Germany, 2010, pp. 235-243.

[2] Florian Alt, Alireza Sahami Shirazi, Andreas Kaiser, Ken Pfeuffer, Emre Gürkan, Albrecht Schmidt, Paul Holleis, Matthias Wagner: Exploring Ambient Visualizations of Context Information (Work in Progress). In: WIP, Proceedings of the Eighth Annual IEEE International Conference on Pervasive Computing and Communications, PerCom 2010. IEEE, Mannheim, Germany 2010

Keynote by Pertti Huuskonen: Ten Views to Context Awareness

Pertti Huuskonen from Nokia presented his keynote at Percom in Mannheim. I worked with Pertti in 1999 on the European project TEA – creating context-aware phones [1].

After telling us about CERN and some achievements in physics, he raised the point that an essential skill of humans is that they are context-aware. Basically, culture is context-awareness – learning how to behave appropriately in life is essential to being accepted. We do this by looking at other people and learning how they act and how others react. By “knowing how to behave” we become fit for social life; this questions the notion of intuitive use, as it seems that most of it is learned or copied from others.

He gave a nice overview of where context-awareness is useful. One very simple example he showed is that people typically create context at the start of a phone call.

One example of a future to come may be ubiquitous spam – where context may be the enabler, but also the means for blocking such adverts. He also showed the potential of context in the large, see Nokoscope. His keynote was refreshing – and, as was clearly visible, he has a good sense of humor 😉

[1] Schmidt, A., Aidoo, K. A., Takaluoma, A., Tuomela, U., Laerhoven, K. V., and Velde, W. V. 1999. Advanced Interaction in Context. In Proceedings of the 1st international Symposium on Handheld and Ubiquitous Computing (Karlsruhe, Germany, September 27 – 29, 1999). H. Gellersen, Ed. Lecture Notes In Computer Science, vol. 1707. Springer-Verlag, London, 89-101.

Opening of Percom, Keynote by Kurt Rothermel

About 300 people are at Percom 2010, which is held in the palace in Mannheim – an amazing location! The conference had 233 submissions and is truly international (1/3 of the papers come from Europe, 1/3 from the Americas, and 1/3 from Asia/Pacific) and highly competitive (an acceptance rate of about 12%).

Kurt Rothermel from the University of Stuttgart presented the opening keynote on large-scale context management. He presented a set of interesting examples from Nexus (Collaborative Research Center 627, Spatial World Models for Mobile Context-Aware Applications) that showed the challenges in large-scale systems. The size of the problem can easily be seen when considering that half the population of the planet is using a mobile device and hence needs to be located… Now imagine everyone contributing sensor data at a rate of one update per minute… For more details on their work see their 2009 Percom paper [1]. In his talk he also gave some references to other interesting research platforms in this space: SensorWeb/SensorMap by Microsoft [2] and SensorPlanet by Nokia [3].
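
To get a feel for that scale, here is a back-of-envelope calculation; the population figure (~6.8 billion in 2010) is my own assumption, not a number from the talk.

```python
# Rough estimate of the update load implied by "half the planet, one update per minute".
devices = 6.8e9 / 2                 # ~3.4 billion mobile devices (assumed population figure)
updates_per_second = devices / 60   # one position update per device per minute
print(f"{updates_per_second:,.0f} updates per second")  # roughly 57 million updates per second
```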

[1] Lange, R., Cipriani, N., Geiger, L., Grossmann, M., Weinschrott, H., Brodt, A., Wieland, M., Rizou, S., and Rothermel, K. 2009. Making the World Wide Space happen: New challenges for the Nexus context platform. In Proceedings of the 2009 IEEE international Conference on Pervasive Computing and Communications (March 09 – 13, 2009). PERCOM. IEEE Computer Society, Washington, DC, 1-4. DOI= http://dx.doi.org/10.1109/PERCOM.2009.4912782

[2] Kansal, A., Nath, S., Liu, J., and Zhao, F. 2007. SenseWeb: An Infrastructure for Shared Sensing. IEEE MultiMedia 14, 4 (Oct. 2007), 8-13. DOI= http://dx.doi.org/10.1109/MMUL.2007.82

[3] Abdelzaher, T., Anokwa, Y., Boda, P., Burke, J., Estrin, D., Guibas, L., Kansal, A., Madden, S., and Reich, J. 2007. Mobiscopes for Human Spaces. IEEE Pervasive Computing 6, 2 (Apr. 2007), 20-29. DOI= http://dx.doi.org/10.1109/MPRV.2007.38