CHI '17 Workshop on Amplification and Augmentation of Human Perception

Amplification of human perception, physiology, and cognition aims at using technical systems to enhance existing human abilities or to create new capabilities. Technical sensors, such as cameras and microphones, surpass the human senses of vision and hearing in temporal and spatial resolution. Artificial intelligence outperforms humans in games such as chess and Go. These facts are often used to paint a dark vision of a future in which machines take over the world. In this workshop, we want to bring together people who instead explore opportunities to use technology to amplify and augment human perception, so that humans keep up with technical advances. Here, the interaction between humans and technology and their interplay are the core scientific challenges. The CHI community has demonstrated the feasibility of amplifying human perception by extending human vision (e.g., sensory substitution [1] or changing the perspective [2]) and by adding novel sensors that users learn to use as new senses (e.g., color vision [3] and SpiderSense [4]). Amplification can be extended beyond perception to cognition and physiology: adding control over muscle movement (e.g., of the hand [5, 6] or while walking [7]) makes it possible to create new motor abilities. This growing research area poses fundamental questions in human-technology collaboration and raises tough technical, ethical, and societal challenges.

Topics of Interest

The workshop covers a range of topics, from application scenarios to implementations of novel interaction concepts and novel sensing and actuation methods. We also highly welcome reflections and discussions. Topics of interest include, but are not limited to:

  • Amplifying Human Senses
  • Creating Novel Senses
  • Actuating Humans through EMS
  • Brain-Computer Interfaces and Eyewear Computers for Implicit Input
  • Superhuman Sports
  • Augmented Reality for Increasing Human Perception
  • Creating Seamless Interactions with Novel Senses
  • Alternative or Novel Feedback Techniques
  • Interacting with Augmented Reality
  • Models, Theories, and Concepts of Digitally Augmented Human Perception
  • Ethical Implications of Amplified Senses

Organizers

Albrecht Schmidt is a professor of human-computer interaction at the University of Stuttgart. His primary research interest is at the crossroads of human-computer interaction and human perception. Albrecht received his PhD in computer science from Lancaster University in the UK.

Stefan Schneegass is a research associate in the Human-Computer Interaction group at the University of Stuttgart. His current research centers on ubiquitous computing and human-computer interaction (HCI), particularly wearable computing.

Kai Kunze works as an associate project professor at Keio Media Design, Keio University. Before that, he held an assistant professorship at Osaka Prefecture University. His major research contributions are in pervasive computing, especially sensing and physical and cognitive activity recognition. Recently, he has focused on tracking knowledge-acquisition activities, especially reading.

Jun Rekimoto is a professor at the University of Tokyo and director of the Interaction Lab at Sony Computer Science Laboratories. He was appointed to the SIGCHI Academy in 2007. Rekimoto's research interests include human-computer interaction, computer-augmented environments, and computer-augmented humans (human-computer integration).

Woontack Woo is a professor in the Graduate School of Culture Technology (GSCT) at the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea. The main thrust of his research has been implementing ubiquitous virtual reality in smart spaces, which includes context-aware augmented reality, 3D vision, HCI, and culture technology.

References

[1] Jamie Ward and Peter Meijer. 2010. Visual experiences in the blind induced by an auditory sensory substitution device. Consciousness and Cognition 19, 1 (2010), 492–500. DOI: http://dx.doi.org/10.1016/j.concog.2009.10.006
[2] Shunichi Kasahara, Mitsuhito Ando, Kiyoshi Suganuma, and Jun Rekimoto. 2016. Parallel Eyes: Exploring Human Capability and Behaviors with Paralleled First Person View Sharing. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 1561–1572. DOI: http://dx.doi.org/10.1145/2858036.2858495
[3] Neil Harbisson. 2012. I listen to color. TEDx Talk (2012). http://www.ted.com/talks/neil_harbisson_i_listen_to_color
[4] Victor Mateevitsi, Brad Haggadone, Jason Leigh, Brian Kunzer, and Robert V. Kenyon. 2013. Sensing the Environment Through SpiderSense. In Proceedings of the 4th Augmented Human International Conference (AH ’13). ACM, New York, NY, USA, 51–57. DOI: http://dx.doi.org/10.1145/2459236.2459246
[5] Emi Tamaki, Takashi Miyaki, and Jun Rekimoto. 2011. PossessedHand: Techniques for Controlling Human Hands Using Electrical Muscles Stimuli. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11). ACM, New York, NY, USA, 543–552. DOI: http://dx.doi.org/10.1145/1978942.1979018
[6] Pedro Lopes, Patrik Jonell, and Patrick Baudisch. 2015. Affordance++: Allowing Objects to Communicate Dynamic Use. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 2515–2524. DOI: http://dx.doi.org/10.1145/2702123.2702128
[7] Max Pfeiffer, Tim Dünte, Stefan Schneegass, Florian Alt, and Michael Rohs. 2015. Cruise Control for Pedestrians: Controlling Walking Direction Using Electrical Muscle Stimulation. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ’15). ACM, New York, NY, USA, 2505–2514. DOI: http://dx.doi.org/10.1145/2702123.2702190