Home

Congratulations to Dr. Florian Alt (No. 6)

Florian Alt defended his PhD thesis “A Design Space for Pervasive Advertising on Public Displays” at the University of Stuttgart. Over the last years Florian has worked at the crossroads of interactive public displays and pervasive advertising. His research output over these years, in particular while working on the http://pd-net.org project, was amazing; see his DBLP entry.

The dissertation will soon be available online. If you are curious about his work right now, there are a few papers and a book you should read. A high-level description of the findings appears in a paper published in IEEE Computer on advertising on public display networks [1]. The initial paper that paved the way towards understanding the design space of public displays [2] provides a comprehensive description of ways to interact with public displays. One of the highlights of the experimental research is the paper “Looking glass: a field study on noticing interactivity of a shop window” [3], which was done during Florian’s time at Telekom Innovation Laboratories in Berlin (it received a best paper award at CHI 2012). Towards the end of a thesis everyone realizes that evaluation is a most tricky thing, hence there is one paper on “How to evaluate public displays” [4]. If you are more interested in the advertising side, have a look at the book he co-edited with Jörg Müller and Daniel Michelis: Pervasive Advertising, Springer Verlag, 2011, also available as a Kindle version on Amazon.

Florian joined my research group as a student researcher back in Munich, where we explored ubiquitous computing technologies in a hospital environment [5]. He followed me to Fraunhofer IAIS to do his MSc thesis, where he created a web annotation system that allowed parasitic applications on the WWW [6]. I nearly believed him lost when he moved to New York – but he came back to start his PhD in Duisburg-Essen… and after one more move in 2011 to the University of Stuttgart he graduated last week! Congratulations! He is no. 6, following Dagmar Kern, Heiko Drewes, Paul Holleis, Matthias Kranz, and Enrico Rukzio. The photo shows the current team in Stuttgart – looking at the picture, it seems there are more to come soon 😉

References
[1] Florian Alt, Albrecht Schmidt, and Jörg Müller. 2012. Advertising on Public Display Networks. IEEE Computer 45, 5 (May 2012), 50-56. DOI: 10.1109/MC.2012.150, URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6193076&isnumber=6197765
[2] Jörg Müller, Florian Alt, Daniel Michelis, and Albrecht Schmidt. 2010. Requirements and design space for interactive public displays. In Proceedings of the international conference on Multimedia (MM ’10). ACM, New York, NY, USA, 1285-1294. DOI=10.1145/1873951.1874203 http://doi.acm.org/10.1145/1873951.1874203
[3] Jörg Müller, Robert Walter, Gilles Bailly, Michael Nischt, and Florian Alt. 2012. Looking glass: a field study on noticing interactivity of a shop window. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 297-306. DOI=10.1145/2207676.2207718 http://doi.acm.org/10.1145/2207676.2207718
[4] Florian Alt, Stefan Schneegaß, Albrecht Schmidt, Jörg Müller, and Nemanja Memarovic. 2012. How to evaluate public displays. In Proceedings of the 2012 International Symposium on Pervasive Displays (PerDis ’12). ACM, New York, NY, USA, Article 17, 6 pages. DOI=10.1145/2307798.2307815 http://doi.acm.org/10.1145/2307798.2307815
[5] A. Schmidt, F. Alt, D. Wilhelm, J. Niggemann, and H. Feussner. 2006. Experimenting with ubiquitous computing technologies in productive environments. Journal Elektrotechnik und Informationstechnik, 135-139.
[6] Florian Alt, Albrecht Schmidt, Richard Atterer, and Paul Holleis. 2009. Bringing Web 2.0 to the Old Web: A Platform for Parasitic Applications. In Proceedings of the 12th IFIP TC 13 International Conference on Human-Computer Interaction: Part I (INTERACT ’09). Springer-Verlag, Berlin, Heidelberg, 405-418. DOI=10.1007/978-3-642-03655-2_44 http://dx.doi.org/10.1007/978-3-642-03655-2_44

Call for Papers: Augmented Human Conference 2013 (AH2013)

In 2013 the 4th Augmented Human Conference will take place in Stuttgart, Germany. The submission deadline is January 8, 2013, and the conference is in cooperation with ACM SIGCHI. The papers will be published in the ACM Digital Library. Andreas Bulling and Christian Holz are the program chairs, and there is a fabulous technical program committee.

With AH2013 we continue a conference that over the last years has ventured beyond the usual topics in human-computer interaction and pervasive computing. Improving and augmenting human abilities is at the core of the conference, ranging from navigation systems, to actuators that support human movement, to improved or novel senses. This may include hardware, sensors, actuators, and software, such as web-based applications or mobile apps.

We are curious about technologies and solutions that make humans smarter and augment human capabilities. Over the last years the conference has highly valued novel contributions, inspiring ideas, forward-thinking applications, and new concepts. Originality, ingenuity, creativity, and novelty come in this context before rigorous evaluations and flawless statistical analysis of study data. We are looking forward to your contributions. Please see the web page at http://www.hcilab.org/ah2013/

Thanks to Patrick Lühne for the great designs!

3DUI Technologies for Interactive Content by Prof. Yoshifumi Kitamura

In the context of multimodal interaction in ubiquitous computing, Professor Yoshifumi Kitamura presented a SimTech guest lecture on 3D user interface technologies. His research goal is to create 3D display technologies that allow multi-user direct interaction. Users should be able to move in front of the display, and different users should see different perspectives according to their location in front of the display. He showed a set of rotating (volumetric) displays that allow for visual presentation, but not for interaction.

His approach is based on the IllusionHole, which allows for multiple users and direct manipulation. The idea is to render different projections for different users, each invisible to the others, so that together they create the illusion of interacting with a single object. It uses a display mask that physically limits the view of each user. Have a look at their SIGGRAPH paper for more details [1]. More recent work on this can be found on Yoshifumi Kitamura’s web page [2].

Example of the IllusionHole from [2].

Over 10 years ago they worked on tangible user interfaces based on blocks. Their system consists of a set of small electronic components with input and output functionality that can be connected and used to create larger structures. See [3] and [4] for details and applications of Cognitive Cubes and Active Cubes.

He showed examples of interaction with a map based on the concept of elastic materials. Elastic scroll and elastic zoom allow users to navigate maps in an apparently intuitive way. The mental model is straightforward, as users can imagine the surface as an elastic material; see [5].

One really cool new display technology, presented at last year’s ITS, is a furry multi-touch display [6]. This is a must-read paper!

The furry display prototype – from [6].

References
[1] Yoshifumi Kitamura, Takashige Konishi, Sumihiko Yamamoto, and Fumio Kishino. 2001. Interactive stereoscopic display for three or more users. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’01). ACM, New York, NY, USA, 231-240. DOI=10.1145/383259.383285 http://doi.acm.org/10.1145/383259.383285
[2] http://www.icd.riec.tohoku.ac.jp/project/displays-and-interface/index.html
[3] Ehud Sharlin, Yuichi Itoh, Benjamin Watson, Yoshifumi Kitamura, Steve Sutphen, and Lili Liu. 2002. Cognitive cubes: a tangible user interface for cognitive assessment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’02). ACM, New York, NY, USA, 347-354. DOI=10.1145/503376.503438 http://doi.acm.org/10.1145/503376.503438
[4] Ryoichi Watanabe, Yuichi Itoh, Masatsugu Asai, Yoshifumi Kitamura, Fumio Kishino, and Hideo Kikuchi. 2004. The soul of ActiveCube: implementing a flexible, multimodal, three-dimensional spatial tangible interface. Comput. Entertain. 2, 4 (October 2004), 15-15. DOI=10.1145/1037851.1037874 http://doi.acm.org/10.1145/1037851.1037874
[5] Kazuki Takashima, Kazuyuki Fujita, Yuichi Itoh, and Yoshifumi Kitamura. 2012. Elastic scroll for multi-focus interactions. In Adjunct proceedings of the 25th annual ACM symposium on User interface software and technology (UIST Adjunct Proceedings ’12). ACM, New York, NY, USA, 19-20. DOI=10.1145/2380296.2380307 http://doi.acm.org/10.1145/2380296.2380307
[6] Kosuke Nakajima, Yuichi Itoh, Takayuki Tsukitani, Kazuyuki Fujita, Kazuki Takashima, Yoshifumi Kitamura, and Fumio Kishino. 2011. FuSA touch display: a furry and scalable multi-touch display. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS ’11). ACM, New York, NY, USA, 35-44. DOI=10.1145/2076354.2076361 http://doi.acm.org/10.1145/2076354.2076361

SIGCHI Rebuttals – Some suggestions for writing them

ACM SIGCHI’s review process gives authors the opportunity to respond to the reviewers’ comments. I find this a good thing, and to me it has two main functions:

  1. The reviewers are usually more careful in what they write, as they know they will have to face a response from the authors.
  2. Authors can clarify points that they did not get across in the original submission.

We usually write a rebuttal for all submissions with an average score above 2.0. For lower-ranked submissions it may still be worthwhile if we think we have a chance to counter some of the arguments that we believe are wrong or unfair.

For the rebuttal it is most critical to address the meta-review as well as possible. The primary reviewer will be in the PC meeting, and if the rebuttal wins this person over, the job is well done. The other reviews should be addressed, too.

For all papers where we write a rebuttal I suggest the following steps (a table may be helpful; see the sketch after the list):

  1. read all reviews in detail
  2. copy out all statements that have questions, criticism, suggestions for improvement from each review
  3. for each of these statements make a short version (bullet point, short sentence) in your own words
  4. sort all the extracted statements by topic
  5. combine all statements that address the same issue
  6. order the combined statements according to priority (highest priority to primary reviewer)
  7. for each combined statement decide if the criticism is justified, misunderstood, or unjustified
  8. make a response for each combined statement
  9. create a rebuttal that addresses as many points as possible without becoming too terse (there is a trade-off between the number of issues to address and the detail one can give)
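To keep the overview, the table mentioned above can be as simple as a spreadsheet or a tiny script. Below is a minimal sketch in Python of how the extracted statements could be grouped by topic and ordered by priority (steps 4-6); all field names and the example entries are hypothetical and only illustrate the bookkeeping, not any official format.

```python
# Minimal sketch of the rebuttal "table": group extracted reviewer statements
# by topic and order them so issues raised by the primary come first.
# All field names and entries are hypothetical examples.
from collections import defaultdict

statements = [  # steps 2-3: one short entry per extracted statement
    {"reviewer": "1AC", "topic": "evaluation", "short": "sample size too small"},
    {"reviewer": "R2", "topic": "evaluation", "short": "missing baseline condition"},
    {"reviewer": "R3", "topic": "related work", "short": "misses prior display studies"},
]

by_topic = defaultdict(list)  # steps 4-5: combine statements on the same issue
for s in statements:
    by_topic[s["topic"]].append(s)

def priority(topic):
    # step 6: topics raised by the primary ("1AC") get the highest priority
    raised_by_primary = any(e["reviewer"] == "1AC" for e in by_topic[topic])
    return (0 if raised_by_primary else 1, topic)

for topic in sorted(by_topic, key=priority):
    merged = "; ".join(e["short"] for e in by_topic[topic])
    reviewers = ", ".join(sorted({e["reviewer"] for e in by_topic[topic]}))
    print(f"[{topic}] ({reviewers}) {merged}")  # one combined statement per line (steps 7-8)
```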

Point 8 is the core…
There are three basic options:

  • if justified: acknowledge that this is an issue and propose how to fix it
  • if misunderstood: explain again and propose how you will improve the explanation in the final version
  • if unjustified: explain that this point may be disputed and provide additional evidence why you think it should be as it is

The unjustified ones are the trickiest. We had cases where reviewers stated that the method we used was not appropriate. Here a response could be to cite other work that used this method in the same context. Similarly, we had reviewers arguing that the statistical tests we used could not be applied to our data; here we explained in more detail the distribution of the data and why the test is appropriate. Sometimes it may be better to ignore cases where the criticism is unjustified – especially if it is not from the primary reviewer.

Some additional points

  • be respectful to the reviewers – they put work into reviewing the papers
  • if the reviewers did not understand – we probably did not communicate well
  • do not promise unrealistic things in the rebuttal
  • try to answer direct questions with precise and direct answers
  • if you suspect that a reviewer did not read the paper – do not write this directly – try to address the points (and perhaps add a hint that it is in the paper, e.g. “as we already outline in Section X”)

Karin Bee has defended her dissertation.

Karin Bee (née Leichtenstern) has defended her dissertation at the University of Augsburg. In her dissertation she worked on methods and tools to support a user-centered design process for mobile applications that use a variety of modalities. There are some papers that describe her work, e.g. [1] and [2]. To me it was particularly interesting that she revisited the experiment done in her master’s thesis in a smart home in Essex [3] and reproduced some of it in her hybrid evaluation environment.

It is great to see that most of our students (HiWis and project students) who worked with us in Munich on the Embedded Interaction Project have now finished their PhDs (there are some who still need to hand in – Florian? Raphael? Gregor? You have enough papers – finish it 😉).

In the afternoon I got to see some demos. Elisabeth André has a great team of students. They work on various topics in human-computer interaction, including public display interaction, physiological sensing and emotion detection, and gesture interaction. I am looking forward to a joint workshop of both groups. Elisabeth has an impressive set of publications, which is always a good starting point for affective user interface technologies.

[1] Karin Leichtenstern, Elisabeth André, and Matthias Rehm. 2011. Tool-Supported User-Centred Prototyping of Mobile Applications. IJHCR, 1-21.

[2] Karin Leichtenstern and Elisabeth André. 2010. MoPeDT: features and evaluation of a user-centred prototyping tool. In Proceedings of the 2nd ACM SIGCHI symposium on Engineering interactive computing systems (EICS ’10). ACM, New York, NY, USA, 93-102. DOI=10.1145/1822018.1822033 http://doi.acm.org/10.1145/1822018.1822033

[3] Enrico Rukzio, Karin Leichtenstern, Vic Callaghan, Paul Holleis, Albrecht Schmidt, and Jeannette Chin. 2006. An experimental comparison of physical mobile interaction techniques: touching, pointing and scanning. In Proceedings of the 8th international conference on Ubiquitous Computing (UbiComp’06), Paul Dourish and Adrian Friday (Eds.). Springer-Verlag, Berlin, Heidelberg, 87-104. DOI=10.1007/11853565_6 http://dx.doi.org/10.1007/11853565_6

MobiSys 2012, Keynote by Paul Jones on Mobile Health Challenges

This year’s ACM MobiSys conference is in the Lake District in the UK. I really love this region. Already 15 years back, when I studied in Manchester, I often came up over the weekend to hike in the mountains here. The setting of the conference hotel is brilliant, overlooking Lake Windermere.
The opening keynote of MobiSys 2012 was presented by Dr. Paul Jones, the NHS Chief Technology Officer, who talked about “Mobile Challenges in Health”. Health is very dear to people, and approaches to health care differ greatly around the world.

The NHS is a unique institution that provides healthcare to everyone in the UK. It is funded through taxation, and with its budget of 110 billion pounds per year it is one of the cheaper (and yet efficient) health care systems in the world. The UK spends about 7% of its gross national product on health care, whereas the US and Germany spend nearly double this percentage. Besides its economic size, the NHS is also one of the biggest employers in the world, similar in size to the US Department of Defense and the Chinese People’s Liberation Army. The major difference from other large employers is that a large part of the staff in the NHS is highly educated (e.g. doctors) and does not easily take orders.

Paul started out with the statement: technology is critical to providing health care in the future. Doing healthcare as it is currently done will not work in the future; carrying on as is would create costs society cannot pay. In general, information technology in the health sector is helping to create more efficient systems. He had several examples showing that often very simple systems help to make a difference. In one case he explained that changing a hospital’s scheduling practice from paper-based diaries to a computer-based system reduced waiting times massively (from several months to weeks, without additional personnel). In another case laptops were provided to community nurses. This saved 6 hours per week, freeing nearly an extra day of work, as it reduced their need to travel back to the office. Paul argued that this is only a starting point and not the best we can do. Mobile computing has the potential to create solutions that fit the real working environment of users and patients better than a laptop does. One further example dealt with a patient’s vital signs. Traditionally these are measured, and when they degrade a nurse calls a junior doctor, who has to respond within a certain time. In reality nurses have to ask more often and doctors may be delayed. Here they introduced a system and mobile device to page/call the doctors and document the call (instead of nurses calling the doctors directly). It improved the response times of doctors – and the main reason is that actions are tracked and performance is measured (and in the medical field nobody wants to be the worst).

Paul shared a set of challenges and problems with the audience – in the hope that researchers take inspiration and solve some of the problems 😉

One major challenge is the fragmented nature of the way health care is provided. Each hospital has established processes, and doctors have a way they want to do certain procedures. These processes differ from each other – not a lot in many cases, but enough that the same software is not going to work everywhere. It is not easy to streamline this, as doctors usually know best, and many of them make a case why their solution is the only one that does the job properly. Hence general solutions are unlikely to work, and solutions need to be customizable to specific needs.

Another interesting point was about records and paper. Paul argued that the amount of paper records in hospitals is massive and that they are less reliable and safe than many think. It is common that a significant portion of the paper documentation is lost or misplaced. Here a digital solution (even if imperfect) is most certainly better. From my own experience I agree with the observation, but I think it is really hard to convince people of it.

The common element throughout the talk was that it is key to create systems that fit the requirements. To achieve this, having multidisciplinary teams that understand the user and patient needs seems indispensable. Paul’s examples were based on his experience of seeing users and patients in context. He made the firsthand observation that real-world environments often do not permit the use of certain technologies or lead to sub-optimal solutions. It is crucial that the needs are understood by the people who design and implement the systems. It may be useful to go beyond the multidisciplinary team and have each developer spend one day in the environment they design for.

Some further problems he discussed are:

  • How to move the data around to the places where it is needed? Patients are transferred (e.g. ambulance to ER, ER to surgeons, etc.), and hence data needs to be handed over. This handover has to work across time (from one visit to the next) and across departments and institutions.
  • Personal mobile devices (“bring your own device”) are a major issue. It seems easy for an individual to use them (e.g. a personal tablet to make notes), but on a system level they create huge problems, from back-up to security. In the medical field another issue arises: the validity of the data is not guaranteed, and hence the data gathered is not useful in the overall process.

A final and very interesting point was: if you are not seriously ill, being in a hospital is a bad idea. Paul argued that the care you get at home or in the community is likely to be better and that you are less likely to be exposed to additional risks. From this the main challenge for the MobiSys community arises: it will be crucial to provide mobile and distributed information systems that work in the context of home care and within the community.

PS: I liked one of the side comments: can we imagine doing a double-blind study on jumbo jet safety? This argument hinted that some of the approaches to research in the medical field are not always the most efficient way to prove the validity of an approach.

If you do not research it – it will not happen?

Over the last days, plans to do research on the use of public data from social networks to calculate someone’s credit risk made big news (e.g. DW). The public (as voiced by journalists) and politicians showed strong opposition and declared that something like this should not be done – or more specifically, that such research should not be done.

I am astonished and a bit surprised by the reaction. Do people really think that if there is no research within universities this will not (does not) happen? If you look at the value of Facebook (even after the last few weeks), it must be very obvious that there is value in social network data which people hope to extract over time…

Personal credit risk assessment (in Germany: Schufa) is widely used – from selling you a phone contract to lending you money when buying a house. If you believe that we need personal credit risk assessment – why would you argue that it should work on very incomplete data? Would that make it better? I think the logical consequence of the discussion would be to prohibit pricing based on personal credit risk ratings – but this, too, would be very unfair (at least to the majority). Hence the consequence we see now (the research is not done in universities) is probably not doing much good… it just pushes it into a place where the public sees little of it (and the companies will not publish it in a few years…).

Keynote at the Pervasive Displays Symposium: Kenton O’Hara

Kenton O’Hara, a senior researcher in the Socio-Digital Systems group at Microsoft Research in Cambridge, presented the keynote at the Pervasive Displays Symposium in Porto on the topic “Social context and interaction proxemics in pervasive displays”. He highlighted the importance of the spatial relationship between the users and the interactive displays and the different opportunities for interaction that are available when looking at the interaction context.

Using examples from the medical field (the operating theater), he showed the issues that arise from the need for sterile interaction and hence from avoiding touch and moving towards a touchless interaction mode. A prototype that uses a Microsoft Kinect sensor allows the surgeon to interact with information (e.g. an X-ray image) while working on the patient. It was interesting to see that gestural interaction in this context is not straightforward, as surgeons use tools (and hence do not have their hands free) or gesture as a part of the communication in the team.

Another example is a public space game: there are many balls on a screen and a camera looking at the audience. Users can move the balls by body movement, based on a simple edge detection video tracking mechanism, and when two balls touch they form a bigger ball. Kenton argues that “body-based interaction becomes a public spectacle” and interactions of an individual are clearly visible to others. This visibility can lead to inhibition and may reduce the motivation of users to interact. For the success of this game, the design of the deliberately simplistic tracking algorithm is one major factor. By tracking edges/blobs, users can play together (e.g. holding hands, parents with their kids in their arms), and hence a wide range of interaction proxemics is supported. He presented some further examples of public display games on BBC large screens, also showing that the concept of interaction proxemics can be used to explain interaction.
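To make the idea of such deliberately simple tracking more concrete, here is a small illustrative sketch (assuming Python with OpenCV 4 and a webcam). This is my own toy reconstruction of the idea, not the system shown in the keynote, and the ball-merging behaviour is omitted: foreground blobs detected in the camera image simply push circles around on the screen, so anything that moves – one person, two people holding hands, a parent with a child – acts as a single blob.

```python
# Toy sketch: foreground blobs from a webcam push circles around.
# Illustrates blob-based (rather than per-person) tracking; not the keynote system.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # camera looking at the audience
bg = cv2.createBackgroundSubtractorMOG2()      # simple foreground extraction
balls = [np.array([200.0, 150.0]), np.array([400.0, 300.0])]  # ball centers (px)
RADIUS = 40

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = cv2.medianBlur(bg.apply(frame), 5)  # foreground mask, denoised
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:           # ignore small noise blobs
            continue
        (x, y), _ = cv2.minEnclosingCircle(c)
        for ball in balls:
            d = ball - np.array([x, y])
            dist = np.linalg.norm(d)
            if 0 < dist < 3 * RADIUS:          # a nearby blob pushes the ball away
                ball += (d / dist) * 5.0
    for ball in balls:
        center = (int(ball[0]), int(ball[1]))
        cv2.circle(frame, center, RADIUS, (0, 255, 0), 2)
    cv2.imshow("balls", frame)
    if cv2.waitKey(1) & 0xFF == 27:            # ESC quits
        break

cap.release()
cv2.destroyAllWindows()
```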

TVs have changed eating behavior. More recent research on displays in the context of food consumption has, in contrast, been mainly pragmatic (corrective, problem-solving). Kenton argued that we should look at the cultural value of meals and see shared eating as a social practice. Using the example of eating in front of the television (even as a family), he discussed the implications for communication and interaction (basically, the communication is not happening). Looking at more recent technologies such as phones, laptops, and tablets and their impact on social dynamics, many of us have probably realized that this already affects our daily lives (or who is not taking their phone to the table?). It is very obvious that social relationships and culture change with these technologies. He showed “4Photos” [1], a piece of technology designed to be put in the center of the table, showing four photographs. Users can interact with it from all sides. It is designed to stimulate rather than inhibit communication and to provide opportunities for conversation. It introduces interaction with technology as a social gesture.

Interested in more? Kenton published a book on public displays in 2003 [2] and has a set of relevant publications in the space of the symposium.

References

[1] Martijn ten Bhömer, John Helmes, Kenton O’Hara, and Elise van den Hoven. 2010. 4Photos: a collaborative photo sharing experience. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries (NordiCHI ’10). ACM, New York, NY, USA, 52-61. DOI=10.1145/1868914.1868925 http://doi.acm.org/10.1145/1868914.1868925

[2] Kenton O’Hara, Mark Perry, Elizabeth Churchill, Dan Russell. Public and Situated Displays: Social and Interactional Aspects of Shared Display Technologies. Kluwer Academic, 2003

Visiting the Culture Lab in Newcastle

While in the north of England I stopped by the Culture Lab in Newcastle. If the CHI conference is a measure of quality in human-computer interaction research, the Culture Lab is currently one of the places to be – if you are not convinced, have a look at Patrick Olivier’s publications. The lab is one of the few places where I think a real ubicomp spirit is left – people develop new hardware and devices (e.g. mini data acquisition boards, specific wireless sensors, embedded actuators), and interdisciplinary research plays a central role. This is very refreshing to see, especially as so many others in ubicomp have moved to mainly creating software on phones and tablets…

Diana, one of our former students from Duisburg-Essen, is currently working on her master’s thesis in Newcastle. She looks into new tangible forms of interaction on tabletop UIs. The actuation of controls in particular is a central question. The approach she uses for moving things is, compared to other approaches, e.g. [1], very simple but effective – I am looking forward to reading the paper on the technical details (I promised not to tell any details here). The example application she has developed is in chemistry education.

Some years back, at a visit to the Culture Lab, I had already seen some of the concepts and ideas for the kitchen. Over the last years this has progressed, and the current state is very appealing. I really think the screens behind glass in the black design make a huge difference. Using a set of small sensors, they have implemented a set of aware kitchen utensils [2]. Matthias Kranz (back in our group in Munich) worked on a similar idea and created a knife that knows what it cuts [3]. It seems worthwhile to explore the aware-artifacts vision further…

References
[1] Gian Pangaro, Dan Maynes-Aminzade, and Hiroshi Ishii. 2002. The actuated workbench: computer-controlled actuation in tabletop tangible interfaces. In Proceedings of the 15th annual ACM symposium on User interface software and technology (UIST ’02). ACM, New York, NY, USA, 181-190. DOI=10.1145/571985.572011 http://doi.acm.org/10.1145/571985.572011 

[2] Wagner, J., Ploetz, T., Halteren, A. V., Hoonhout, J., Moynihan, P., Jackson, D., Ladha, C., et al. (2011). Towards a Pervasive Kitchen Infrastructure for Measuring Cooking Competence. Proc Int Conf Pervasive Computing Technologies for Healthcare (pp. 107-114). PDF

[3] Matthias Kranz, Albrecht Schmidt, Alexis Maldonado, Radu Bogdan Rusu, Michael Beetz, Benedikt Hörnler, and Gerhard Rigoll. 2007. Context-aware kitchen utilities. In Proceedings of the 1st international conference on Tangible and embedded interaction (TEI ’07). ACM, New York, NY, USA, 213-214. DOI=10.1145/1226969.1227013 http://doi.acm.org/10.1145/1226969.1227013 (PDF)

Media art, VIS Excursion to ZKM in Karlsruhe

This afternoon we (over 40 people from VIS and VISUS at the University of Stuttgart) went to Karlsruhe to visit the ZKM. We got guided tours of the panorama laboratory, the historic video laboratory, the SoundArt exhibition, and some parts of the regular exhibition. Additionally, Prof. Gunzenhäuser gave a short introduction to the Zuse Z22 that is on show there, too.

The ZKM is a leading center for digital and media art that includes a museum for media art and modern art, several research institutes, and an art and design school. The approach is to bring media artists, works of art, research in media art, and teaching in this field close together (within a single large building). The exhibitions include major media art works from the last 40 years.

The panorama laboratory is a 360-degree (minus a door) projection. Even though the resolution of the powerwall at VISUS [1] is higher and its presentation is in 3D, the 360-degree 10-megapixel panorama screen results in an exciting immersion. Even without 3D, being surrounded by media creates a feeling of being in the middle of something that happens around you. Vivien described the sensation of movement as similar to sitting in a train: the moment another train pulls out of the station, you have a hard time telling who is moving. I think such immersive environments could become very common once we have digital display wallpaper.

The historic video laboratory is concerned with “rescuing” old artistic video material. We sometimes complain about the variety of video codecs, but looking at the many different formats for tapes and cassettes, this problem has a long tradition. Looking at historic split-screen videos that were created using analog technologies, one appreciates the virtues of digital video editing… There are two amazing films by Zbigniew Rybczyński: Nowa Książka (New Book): http://www.youtube.com/watch?v=46Kt0HmXfr4 and Tango: http://vodpod.com/watch/3791700-zbigniew-rybczynski-tango-1983

The current SoundArt exhibition is worthwhile. There are several indoor and outdoor installations on sound. In the yard there is a monument built of speakers (in analogy to the oracle of Delphi) that you can call from anywhere (+49 721 81001818) to get 3 minutes of time to talk to whoever is in the vicinity of the installation. Another exhibit sonified electromagnetic fields from different environments in an installation called The Cloud.

[1] Powerwall at VISUS at the University of Stuttgart (6 m by 2.20 m, 88 million pixels, 44 million pixels per eye for 3D). http://www.visus.uni-stuttgart.de/institut/visualisierungslabor/technischer-aufbau.html