3DUI Technologies for Interactive Content by Prof. Yoshifumi Kitamura

In the context of multimodal interaction in ubiquitous computing, Professor Yoshifumi Kitamura presented a SimTech guest lecture on 3D user interface technologies. His research goal is to create 3D display technologies that allow multi-user direct interaction. Users should be able to move in front of the display, and different users should see different perspectives according to their location. He showed a set of rotating (volumetric) displays that allow for visual presentation, but not for interaction.

His approach is based on an IllusionHole that allows for multiple users and direct manipulation. The idea is to give each user a separate projection that is not visible to the others but that creates the illusion of interacting with a single object. It uses a display mask that physically limits the view of each user. Have a look at their SIGGRAPH paper for more details [1]. More recent work on this can be found on Yoshifumi Kitamura’s web page [2].

Example of the IllusionHole from [2].

Over 10 years ago they worked on tangible user interfaces based on blocks. Their system is a set of small electronic components with input and output functionality that can be connected to create larger structures. See [3] and [4] for details and applications of Cognitive Cubes and Active Cubes.

He showed examples of interaction with a map based on the concept of elastic materials. Elastic scroll and elastic zoom allow users to navigate maps in an apparently intuitive way. The mental model is straightforward, as users can imagine the surface as an elastic material, see [5].
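The elastic mental model can be made concrete with a toy mapping from drag distance to scroll offset. This is my own illustrative sketch, not the mapping from [5]; the saturating tanh curve and the stiffness constant are assumptions:

```python
import math

def elastic_offset(drag_px: float, stiffness: float = 100.0) -> float:
    """Map a raw drag distance (pixels) to the displayed scroll offset.

    Near zero the mapping is almost 1:1; large drags saturate, as if
    stretching an elastic sheet that pulls back. The tanh shape and the
    stiffness constant are illustrative assumptions, not the formula
    from the paper.
    """
    sign = 1.0 if drag_px >= 0 else -1.0
    return sign * stiffness * math.tanh(abs(drag_px) / stiffness)

# A small drag passes through almost unchanged; a huge drag is capped.
print(elastic_offset(10.0), elastic_offset(1000.0))
```

The saturation is what makes the surface feel like a stretched material: the user always gets feedback, but the view never runs away.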

One really cool new display technology, presented at last year’s ITS, is a furry multi-touch display [6]. This is a must-read paper!

The furry display prototype – from [6].

References
[1] Yoshifumi Kitamura, Takashige Konishi, Sumihiko Yamamoto, and Fumio Kishino. 2001. Interactive stereoscopic display for three or more users. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’01). ACM, New York, NY, USA, 231-240. DOI=10.1145/383259.383285 http://doi.acm.org/10.1145/383259.383285
[2] http://www.icd.riec.tohoku.ac.jp/project/displays-and-interface/index.html
[3] Ehud Sharlin, Yuichi Itoh, Benjamin Watson, Yoshifumi Kitamura, Steve Sutphen, and Lili Liu. 2002. Cognitive cubes: a tangible user interface for cognitive assessment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’02). ACM, New York, NY, USA, 347-354. DOI=10.1145/503376.503438 http://doi.acm.org/10.1145/503376.503438
[4] Ryoichi Watanabe, Yuichi Itoh, Masatsugu Asai, Yoshifumi Kitamura, Fumio Kishino, and Hideo Kikuchi. 2004. The soul of ActiveCube: implementing a flexible, multimodal, three-dimensional spatial tangible interface. Comput. Entertain. 2, 4 (October 2004), 15-15. DOI=10.1145/1037851.1037874 http://doi.acm.org/10.1145/1037851.1037874
[5] Kazuki Takashima, Kazuyuki Fujita, Yuichi Itoh, and Yoshifumi Kitamura. 2012. Elastic scroll for multi-focus interactions. In Adjunct proceedings of the 25th annual ACM symposium on User interface software and technology (UIST Adjunct Proceedings ’12). ACM, New York, NY, USA, 19-20. DOI=10.1145/2380296.2380307 http://doi.acm.org/10.1145/2380296.2380307
[6] Kosuke Nakajima, Yuichi Itoh, Takayuki Tsukitani, Kazuyuki Fujita, Kazuki Takashima, Yoshifumi Kitamura, and Fumio Kishino. 2011. FuSA touch display: a furry and scalable multi-touch display. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS ’11). ACM, New York, NY, USA, 35-44. DOI=10.1145/2076354.2076361 http://doi.acm.org/10.1145/2076354.2076361

SIGCHI Rebuttals – some suggestions for writing them

ACM SIGCHI’s review process gives authors the opportunity to respond to the reviewers’ comments. I find this a good thing, and to me it has two main functions:

  1. The reviewers are usually more careful in what they write, as they know they will have to face a response from the authors.
  2. Authors can clarify points that they did not get across in the original submission.

We usually write a rebuttal for all submissions with an average score over 2.0. For lower-ranked submissions it may still be worthwhile if we think we have a chance to counter arguments that we believe are wrong or unfair.

For the rebuttal it is most critical to address the meta-review as well as possible. The primary reviewer will be in the PC meeting, and if the rebuttal wins this person over, the job is well done. The other reviews should be addressed, too.

For all papers where we write a rebuttal I suggest the following steps (a table may be helpful):

  1. read all reviews in detail
  2. copy out all statements that contain questions, criticism, or suggestions for improvement from each review
  3. for each of these statements make a short version (bullet points, a short sentence) in your own words
  4. sort all the extracted statements by topic
  5. combine all statements that address the same issue
  6. order the combined statements by priority (highest priority to the primary reviewer)
  7. for each combined statement decide whether the criticism is justified, misunderstood, or unjustified
  8. write a response for each combined statement
  9. create a rebuttal that addresses as many points as possible without becoming superficial (there is a trade-off between the number of issues addressed and the detail one can give)
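Steps 2–6 are essentially data wrangling; instead of a table, one can keep the extracted statements in a small script. A sketch of the bookkeeping (the data layout and reviewer labels are my own, nothing prescribed by SIGCHI):

```python
from collections import defaultdict

# Each extracted reviewer statement: (reviewer, topic, short paraphrase).
# "1AC" marks the primary / meta-review, which gets highest priority.
statements = [
    ("1AC", "evaluation", "study has too few participants"),
    ("R2", "evaluation", "sample size seems small"),
    ("R3", "related work", "missing comparison to system X"),
    ("R2", "clarity", "section 4 hard to follow"),
]

# Steps 4-5: group statements that address the same topic.
by_topic = defaultdict(list)
for reviewer, topic, text in statements:
    by_topic[topic].append((reviewer, text))

# Step 6: order topics, with the primary reviewer's concerns first.
def priority(item):
    topic, entries = item
    return 0 if any(r == "1AC" for r, _ in entries) else 1

for topic, entries in sorted(by_topic.items(), key=priority):
    print(topic, "->", [t for _, t in entries])
```

The grouping step is where the character limit is won: two reviewers raising the same concern get one combined answer, not two.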

Point 8 is the core…
There are three basic options:

  • if justified: acknowledge that this is an issue and propose how to fix it
  • if misunderstood: explain the point again and promise to improve the explanation in the final version
  • if unjustified: explain that this point may be disputed and provide additional evidence for why you think it should be as it is

The unjustified ones are the trickiest. We had cases where reviewers stated that the method we used is not appropriate; here a response could be to cite other work that used this method in the same context. Similarly, we had reviewers argue that the statistical tests we used cannot be applied to our data; here we explained in more detail the distribution of the data and why the test is appropriate. Sometimes it may be better to ignore cases where the criticism is unjustified – especially if it is not from the primary reviewer.

Some additional points

  • be respectful to the reviewers – they put work into reviewing the papers
  • if the reviewers did not understand something – we probably did not communicate it well
  • do not promise unrealistic things in the rebuttal
  • try to answer direct questions with precise and direct answers
  • if you suspect that a reviewer did not read the paper – do not write this directly – try to address the points (and perhaps add a hint that it is in the paper, e.g. “ANSWER, as we already outline in section X”)

If you do not research it – it will not happen?

Over the last days, plans to do research on the use of public data from social networks to calculate someone’s credit risk made big news (e.g. DW). The public (as voiced by journalists) and politicians showed strong opposition and declared that something like this should not be done – or, more specifically, that such research should not be done.

I am astonished by the reaction. Do people really think that if there is no research within universities, this will not (or does not) happen? If you look at the value of Facebook (even after the last few weeks), it must be very obvious that there is value in social network data which people hope to extract over time…

Personal credit risk assessment (in Germany, the Schufa) is widely used – from selling you a phone contract to lending you money when buying a house. If you believe that we need personal credit risk assessment, why would you argue that it should work on very incomplete data? Will that make it better? I think the logical consequence of the discussion would be to prohibit pricing based on personal credit risk ratings – but this, too, would be very unfair (at least to the majority). Hence the consequence we see now (the research is not done at universities) is probably not doing much good… it just pushes the work into places where the public sees little of it (and the companies will not publish it in a few years…).

Visiting the Culture Lab in Newcastle

While in the north of England I stopped by the Culture Lab in Newcastle. If the CHI conference is a measure of quality in human–computer interaction research, Culture Lab is currently one of the places to be – if you are not convinced, have a look at Patrick Olivier’s publications. The lab is one of the few places where I think a real ubicomp spirit is left – people develop new hardware and devices (e.g. mini data acquisition boards, specific wireless sensors, embedded actuators), and interdisciplinary research plays a central role. This is very refreshing to see, especially as so many others in ubicomp have moved to mainly creating software on phones and tablets…

Diana, one of our former students from Duisburg-Essen, is currently working on her master’s thesis in Newcastle. She looks into new tangible forms of interaction on tabletop UIs; in particular, the actuation of controls is a central question. The approach she uses for moving things is, compared to other approaches, e.g. [1], very simple but effective – I am looking forward to reading the paper on the technical details (I promised not to tell any details here). The example application she has developed is in chemistry education.

Some years back, at a visit to the Culture Lab, I had already seen some of the concepts and ideas for the kitchen. Over the last years this has progressed, and the current state is very appealing. I really think the screens behind glass in the black design make a huge difference. Using a set of small sensors, they have implemented a set of aware kitchen utensils [2]. Matthias Kranz (back in our group in Munich) worked on a similar idea and created a knife that knows what it cuts [3]. It seems worthwhile to explore the aware artifacts vision further…

References
[1] Gian Pangaro, Dan Maynes-Aminzade, and Hiroshi Ishii. 2002. The actuated workbench: computer-controlled actuation in tabletop tangible interfaces. In Proceedings of the 15th annual ACM symposium on User interface software and technology (UIST ’02). ACM, New York, NY, USA, 181-190. DOI=10.1145/571985.572011 http://doi.acm.org/10.1145/571985.572011 

[2] Wagner, J., Ploetz, T., Halteren, A. V., Hoonhout, J., Moynihan, P., Jackson, D., Ladha, C., et al. (2011). Towards a Pervasive Kitchen Infrastructure for Measuring Cooking Competence. In Proceedings of the International Conference on Pervasive Computing Technologies for Healthcare (pp. 107-114). PDF

[3] Matthias Kranz, Albrecht Schmidt, Alexis Maldonado, Radu Bogdan Rusu, Michael Beetz, Benedikt Hörnler, and Gerhard Rigoll. 2007. Context-aware kitchen utilities. In Proceedings of the 1st international conference on Tangible and embedded interaction (TEI ’07). ACM, New York, NY, USA, 213-214. DOI=10.1145/1226969.1227013 http://doi.acm.org/10.1145/1226969.1227013 (PDF)

Our Research at CHI2012 – usable security and public displays

This year we have the chance to share some of our research with the community at CHI2012. The work focuses on usable security ([1] and [2]) and public display systems [3]. Together with researchers from T-Labs, Florian received a best paper award for [3].

Please have a look at the papers… I think they are really worthwhile.

Increasing the security of gaze-based graphical passwords [1]
“With computers being used ever more ubiquitously in situations where privacy is important, secure user authentication is a central requirement. Gaze-based graphical passwords are a particularly promising means for shoulder-surfing-resistant authentication, but selecting secure passwords remains challenging. In this paper, we present a novel gaze-based authentication scheme that makes use of cued-recall graphical passwords on a single image. In order to increase password security, our approach uses a computational model of visual attention to mask those areas of the image that are most likely to attract visual attention. We create a realistic threat model for attacks that may occur in public settings, such as filming the user’s interaction while drawing money from an ATM. Based on a 12-participant user study, we show that our approach is significantly more secure than a standard image-based authentication and gaze-based 4-digit PIN entry.” [1]
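The masking step can be sketched as follows: given a saliency map from some computational model of visual attention, black out the most salient fraction of the image so users have to pick less predictable gaze points. The random stand-in data and the 30% cutoff are my assumptions; the paper’s actual saliency model and parameters may differ:

```python
import numpy as np

def mask_salient_regions(image: np.ndarray, saliency: np.ndarray,
                         fraction: float = 0.3) -> np.ndarray:
    """Black out the `fraction` most salient pixels of a grayscale image.

    `saliency` has the same shape as `image`; higher values mean the
    region is more likely to attract visual attention. Masking those
    regions pushes users toward less predictable password points.
    """
    cutoff = np.quantile(saliency, 1.0 - fraction)
    masked = image.copy()
    masked[saliency >= cutoff] = 0
    return masked

# Stand-in data: a random "photo" and a random saliency map.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
sal = rng.random((64, 64))
out = mask_salient_regions(img, sal)
```

In the real scheme the mask is applied during password creation only, so the user cannot choose the obvious hotspots an attacker would guess first.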

Assessing the vulnerability of magnetic gestural authentication [2]

“Secure user authentication on mobile phones is crucial, as they store highly sensitive information. Common approaches to authenticate a user on a mobile phone are based either on entering a PIN, a password, or drawing a pattern. However, these authentication methods are vulnerable to the shoulder surfing attack. The risk of this attack has increased since means for recording high-resolution videos are cheaply and widely accessible. If the attacker can videotape the authentication process, PINs, passwords, and patterns do not even provide the most basic level of security. In this project, we assessed the vulnerability of a magnetic gestural authentication method to the video-based shoulder surfing attack. We chose a scenario that is favourable to the attacker. In a real world environment, we videotaped the interactions of four users performing magnetic signatures on a phone, in the presence of HD cameras from four different angles. We then recruited 22 participants and asked them to watch the videos and try to forge the signatures. The results revealed that with a certain threshold, i.e., th=1.67, none of the forging attacks was successful, whereas at this level all eligible login attempts were successfully recognized. The qualitative feedback also indicated that users found the magnetic gestural signature authentication method to be more secure than PIN-based and 2D signature methods.” [2] There is also a YouTube video: http://www.youtube.com/watch?v=vhwURyTp_jY
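The acceptance rule quoted above (a threshold th=1.67 separating genuine logins from forgeries) can be sketched as a simple distance test. The mean-absolute-difference metric and the example traces below are illustrative stand-ins; the paper’s actual signature-matching procedure is not specified here:

```python
TH = 1.67  # acceptance threshold reported in the paper's evaluation

def signature_distance(a: list[float], b: list[float]) -> float:
    """Mean absolute difference between two equally long magnetometer
    traces. The real system's distance measure is not specified here;
    this per-sample metric is an illustrative stand-in."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def accept(template: list[float], attempt: list[float]) -> bool:
    """Accept a login attempt if its distance to the enrolled template
    is below the threshold."""
    return signature_distance(template, attempt) < TH

template = [0.2, 1.1, 2.3, 1.8, 0.4]   # enrolled magnetic signature
genuine  = [0.3, 1.0, 2.4, 1.7, 0.5]   # small deviations: accepted
forgery  = [2.0, 3.5, 0.1, 4.0, 2.2]   # shape is off: rejected
print(accept(template, genuine), accept(template, forgery))
```

The study’s point is exactly this trade-off: at th=1.67 the threshold was tight enough to reject all video-based forgeries while still accepting all eligible genuine attempts.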

Looking glass: a field study on noticing interactivity of a shop window [3]
“In this paper we present our findings from a lab and a field study investigating how passers-by notice the interactivity of public displays. We designed an interactive installation that uses visual feedback to the incidental movements of passers-by to communicate its interactivity. The lab study reveals: (1) Mirrored user silhouettes and images are more effective than avatar-like representations. (2) It takes time to notice the interactivity (approx. 1.2s). In the field study, three displays were installed during three weeks in shop windows, and data about 502 interaction sessions were collected. Our observations show: (1) Significantly more passers-by interact when immediately showing the mirrored user image (+90%) or silhouette (+47%) compared to a traditional attract sequence with call-to-action. (2) Passers-by often notice interactivity late and have to walk back to interact (the landing effect). (3) If somebody is already interacting, others begin interaction behind the ones already interacting, forming multiple rows (the honeypot effect). Our findings can be used to design public display applications and shop windows that more effectively communicate interactivity to passers-by.” [3]

References
[1] Andreas Bulling, Florian Alt, and Albrecht Schmidt. 2012. Increasing the security of gaze-based cued-recall graphical passwords using saliency masks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 3011-3020. DOI=10.1145/2208636.2208712 http://doi.acm.org/10.1145/2208636.2208712
[2] Alireza Sahami Shirazi, Peyman Moghadam, Hamed Ketabdar, and Albrecht Schmidt. 2012. Assessing the vulnerability of magnetic gestural authentication to video-based shoulder surfing attacks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 2045-2048. DOI=10.1145/2208276.2208352 http://doi.acm.org/10.1145/2208276.2208352
[3] Jörg Müller, Robert Walter, Gilles Bailly, Michael Nischt, and Florian Alt. 2012. Looking glass: a field study on noticing interactivity of a shop window. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 297-306. DOI=10.1145/2207676.2207718 http://doi.acm.org/10.1145/2207676.2207718

Introduction to the special issue on interaction beyond the desktop

After coming back from CHI2012 in Austin, I found my paper copy of the April 2012 issue of IEEE Computer magazine in my letter box. This is our special issue on interaction beyond the desktop. Having the physical copy is always nice (probably because I grew up with paper magazines ;-).

This guest editors’ introduction [1] is an experiment, as we include photos from all papers on the theme. The rationale is that most people will probably not have the paper copy in their hands, and in the digital version it is harder to get an overview of the papers; we think that including the photos helps to make readers curious enough to look at the papers in the issue. Please let us know if you think this is a good idea…

[1] Albrecht Schmidt and Elizabeth Churchill. Interaction Beyond the Keyboard. IEEE Computer, April 2012, pp. 21–24. (PDF). Link to the article in Computing Now.

Keynote at Percom 2012: Andy Hopper from Cambridge on Computing for the Future of the Planet

In his keynote “Computing for the Future of the Planet” Andy Hopper brought up four topics and touched briefly on each of them: (1) Optimal digital infrastructure – green computing, (2) Sense and optimize – computing for green, (3) Predict and react – assured computing, and (4) Digital alternatives to physical activities.

At the beginning of his talk he discussed an interesting (and, once he said it, very obvious) option for green computing: move computing towards the energy source, as it is easier to transmit data than to transmit power. Thinking about this, I could imagine Google’s server farms being moved to a sunny desert, doing the calculations while the sun is shining… and using the cold of the night to cool down… This could be extended to storage: storing data is easier than storing energy – this should open some opportunities.

As an example of an embedded sensing system, Andy Hopper presented a shoe with built-in force sensing (FSR) that allows measuring contact time, which in turn helps to work out speed. Their initial research was targeted at athletes; see Rob Harle’s page for details. It is, however, easy to imagine the potential this has if regular shoes can sense movement in everyday use. He hinted at the options if one could go to the doctor and analyze the change in one’s walking pattern over the last year.
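The contact-time idea can be sketched in a few lines: segment the force trace into stance phases where the FSR reading exceeds a threshold, and take each phase’s duration (shorter ground contact generally corresponds to faster running). The sampling rate and threshold below are invented values, not parameters of the Cambridge prototype:

```python
def contact_times(force: list[float], dt: float = 0.01,
                  threshold: float = 50.0) -> list[float]:
    """Extract ground-contact durations (seconds) from an FSR trace
    sampled every `dt` seconds. A sample counts as "foot on ground"
    while force exceeds `threshold` (sensor units; both values here
    are illustrative assumptions)."""
    times, run = [], 0
    for f in force:
        if f > threshold:
            run += 1
        elif run:
            times.append(run * dt)
            run = 0
    if run:  # trace ended mid-stance
        times.append(run * dt)
    return times

# Two stance phases of 5 and 3 samples (10 ms each):
trace = [0, 0, 80, 90, 85, 70, 60, 0, 0, 75, 80, 70, 0]
print(contact_times(trace))  # two contacts, ~0.05 s and ~0.03 s
```

On real sensor data one would additionally debounce the threshold crossing and discard implausibly short phases, but the principle stays this simple.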

In various examples, Andy showed how Ubisense is used in commercial applications, production, and training. It seems that medium-resolution tracking (e.g. below 1 meter accuracy) can be reliably achieved with such off-the-shelf systems, even in harsh environments. He mentioned that the university installations of the system at an early product stage were helpful to improve the product and grow the company. This is interesting advice, and could be a strategy for other pervasive computing products, too. For close observers of the slides there were some interesting insights into the different production methods of BMW and Aston Martin and the required quality 😉

Power usage is a central topic in his lab’s work, and he showed several examples of how to monitor power usage in different scenarios. One example is monitoring power usage on the phone, implemented as an app that looks at how power is consumed and how re-charging is done. This data is then collected and shared – currently over 8,000 people are participating. For more details see Daniel T. Wagner’s page. A further example is the global personal energy meter. He envisions that infrastructure, e.g. trains and buildings, broadcasts information about its energy use and provides information about each individual’s share of it.
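The personal energy meter boils down to simple apportioning: infrastructure broadcasts its consumption, and each user is attributed a share. A back-of-the-envelope sketch with invented numbers (a real system would need a fairer allocation than an equal split):

```python
def personal_share(total_kwh: float, occupants: int) -> float:
    """Naive equal split of an infrastructure's broadcast energy use
    among its current occupants."""
    return total_kwh / occupants

# Invented example: a commuter train leg using 600 kWh with 400 riders,
# so 1.5 kWh is attributed to each rider for that leg.
print(personal_share(600.0, 400))
```

Summing such shares across the trains and buildings one passes through during a day would yield the personal energy total the vision describes.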

With the increasing proliferation of mobile phones, the users’ privacy becomes a major issue. He showed in his talk an example where privacy is provided by faking data. In this approach, fake data, e.g. for calendar events, location data, and the address book, is provided to apps on the phone. By these means you can alter what an application sees (e.g. the location accuracy).
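A minimal sketch of the faking/coarsening idea for location data: the app receives a plausible but degraded fix. The grid-snapping approach and the grid size are my own illustration, not the mechanism from the talk:

```python
def blur_location(lat: float, lon: float,
                  grid: float = 0.05) -> tuple[float, float]:
    """Snap a GPS fix to a coarse grid (roughly 5 km at grid=0.05
    degrees) before handing it to an untrusted app. The app still
    gets a plausible location, just a less precise one; the grid
    size is an illustrative choice."""
    return (round(lat / grid) * grid, round(lon / grid) * grid)

# An app asking for the user's position receives only the coarse fix:
print(blur_location(52.52437, 13.41053))
```

The same interception point could return entirely fabricated calendar entries or contacts, which is the more radical variant Hopper described.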

For more details and papers see the website of the digital technology group: http://www.cl.cam.ac.uk/research/dtg/www/

Opening talk at the Social Media for Insurances Symposium

I was invited to Leipzig to talk about social networks in the context of insurance companies (http://www.versicherungsforen.net/social-media). The main focus of the talk was to show what people currently do in social networks and to speculate why they do it (I used a picture of the seven deadly sins as an illustration…). Additionally, I discussed some prototypes of activity recognition and their potential once integrated into social media.

My talk was entitled “500 Freunde (auf Facebook): Wozu noch eine Versicherung?” – “500 friends (on Facebook): is there still a need for insurance?” – and discussed how ubiquitous capture and social media may shape the next community [1]. The slides are in German.

The event was very interesting, and I would expect that there is great potential out there for insurance companies to tap into. Looking back at the original idea of insurance (e.g. old fire insurance communities, or sharing the risk of hail in farming communities) can give interesting inspiration for peer-to-peer insurance models. It will be exciting to see whether new products and services come out of the “big players” or whether new players will come into the game. To me the central issue to address is how to make insurance products more visible – and I think a user-centered design approach could be very interesting…

In the future I would expect that finding the right value mix (privacy, price, safety, etc.) will be essential, as we argued for other services in [2]. Some years back we wrote in an article about RFID [3] that “privacy is sacred but cheap”, and the more services we see, the more I am convinced that this is more than a slogan. If you can create a service that is of immediate value to the user, I would expect that privacy will be a lesser concern to most. On the other hand, if you reduce privacy without any value in exchange, there is always an outcry…

[1] “500 Freunde (auf Facebook): Wozu noch eine Versicherung?“ – Ermöglichen allgegenwärtige Aufzeichnungstechnologien und digitale soziale Netze die nächste Solidargemeinschaft? Slides as PDF (in German)
[2] Albrecht Schmidt, Marc Langheinrich, Kristian Kersting, “Perception beyond the Here and Now,” Computer, vol. 44, no. 2, pp. 86-88, Feb. 2011, doi:10.1109/MC.2011.54 (final version at IEEE, free draft version)
[3] Schmidt, A.; Spiekermann, S.; Gershman, A.; Michahelles, F., “Real-World Challenges of Pervasive Computing”, IEEE Pervasive Computing, vol. 5, no. 3, pp. 91-93, July-Sept. 2006. DOI=10.1109/MPRV.2006.57

Facebook – a platform to spot when companies go bankrupt? A real-world example.

In Germany the drug store chain Schlecker announced its insolvency; see the Reuters news post. If you look at the company’s Facebook page and scan the comments from the last 4 weeks, it is apparent that some people in the crowd, and some employees, expected it already last year.
Schlecker is a large drug store chain with probably over 10,000 outlets in Europe and more than 30,000 employees.

The following screenshots show some selected examples I took from the following page: http://www.facebook.com/schlecker.drogerie
The posts are in German – the minimal summary should give you some idea…

In this one, the company wishes everyone a happy Christmas and reminds people of a chance to win a car. The first replies echo the holiday greetings, but then one complains that they let their shops bleed out (run empty) and that the ordered goods do not arrive (probably posted by an employee). Another speculates that the company is close to bankruptcy. (Over 3 weeks before the official note of insolvency.)


The company announces a 2-euro discount on a product. Then employees post that they would like to sell the goods to the customers but that they do not get the goods for their shops. Additionally, they complain that the goods they get from other closed-down shops are not what they need. One says “we want to work but we can’t” (as they are running out of stock). (Over 2 weeks before the official note of insolvency.)

The company announces price reductions on some goods. Someone says that is great – but it would be much better if these goods were actually in the shops to buy. (9 days before the official note of insolvency.)

Overall I think this is an instructive real-world example of the information that can be found in social networks about the health/value of companies. In particular, the mix of customers and employees posting makes it a good example to study. I would expect that companies will learn lessons from this with regard to guidelines for employees… and about transparency and openness… To understand how reliable such posts are, we probably need to do some more research. Let us know if you are interested in working on this with us.