Smart Garments

In contrast to wearable gadgets, smart garments enable even more sensing and actuation possibilities due to their closeness to the user's body. Humans naturally use garments for several reasons, such as protection or aesthetics. Many parts of the human body are naturally covered by garments, which can be enriched with technology. Smart garments can thereby either implicitly sense information about the user or be used for direct input and output. This works similarly to wearable gadgets, but without the need to attach additional sensors: the user only wears enriched clothing. In addition, the area of the user's body that can be used for interaction increases, since clothing covers more locations than wearable gadgets do. We expect that, with further advances in smart garments, regular garments will eventually be substituted by smart garments. As soon as smart garments can be produced at similar cost and offer wearability comparable to regular cloth, smart clothing will become pervasive. Every piece of garment will incorporate technology that can be used for designing novel ways of interacting. This fundamental change enables novel interaction means that can be exploited to enrich the interaction with mobile devices. However, several challenges still need to be tackled.

Extending the Input Space of Smartwatches

Smartwatches provide quick and easy access to information. Due to their wearable nature, users can perceive the information while stationary or on the go. The main drawback of smartwatches, however, is their limited input capabilities. They use input methods similar to those of smartphones but suffer from a smaller form factor. To extend the input space of smartwatches, we present GestureSleeve, a sleeve made of touch-enabled textile. It is capable of detecting different gestures, such as stroke-based gestures or taps, with which the user can control various smartwatch applications. To explore the performance of the GestureSleeve approach, we conducted a user study with a running application as the use case. In this study, we show that input using the GestureSleeve outperforms touch input on the smartwatch. In the future, the GestureSleeve could be integrated into regular clothing and used for controlling various smart devices.
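To make the input handling concrete, here is a minimal sketch of how a trace of touch points reported by a touch-sensitive sleeve could be classified as a tap or a directional stroke. The event format, thresholds, and gesture labels are illustrative assumptions, not the actual GestureSleeve implementation.

```python
from dataclasses import dataclass

@dataclass
class TouchPoint:
    t: float  # timestamp in seconds
    x: float  # normalized 0..1 along the sleeve
    y: float  # normalized 0..1 across the sleeve

def classify(points, tap_dist=0.05, tap_time=0.25):
    """Classify a touch trace as a 'tap' or a directional stroke."""
    dx = points[-1].x - points[0].x
    dy = points[-1].y - points[0].y
    duration = points[-1].t - points[0].t
    # A short, nearly stationary contact counts as a tap
    if abs(dx) < tap_dist and abs(dy) < tap_dist and duration < tap_time:
        return "tap"
    # Otherwise pick the dominant movement axis
    if abs(dx) >= abs(dy):
        return "stroke right" if dx > 0 else "stroke left"
    return "stroke down" if dy > 0 else "stroke up"

# Example: a quick horizontal swipe along the forearm
trace = [TouchPoint(0.00, 0.2, 0.5), TouchPoint(0.10, 0.5, 0.5), TouchPoint(0.18, 0.8, 0.5)]
print(classify(trace))  # -> "stroke right"
```

A recognized gesture could then be mapped to a smartwatch command, for example skipping a track in the running application.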

 

Publications

Stefan Schneegass. 2016. Enriching Mobile Interaction with Garment-Based Wearable Computing Devices. Dissertation, University of Stuttgart.

Stefan Schneegass and Oliver Amft. 2017. Introduction to Smart Textiles. Smart Textiles, Springer, 1-15.

Stefan Schneegass, Sven Mayer, Thomas Olsson, and Kristof Van Laerhoven. 2016. Mobile Interactions Augmented by Wearable Computing: a Design Space and Vision. International Journal of Mobile Human Computer Interaction (IJMHCI), 8(4), 104-114.

Stefan Schneegass and Alexandra Voit. 2016. GestureSleeve: Using Touch Sensitive Fabrics for Gestural Input on the Forearm for Controlling Smartwatches. Proceedings of the 2016 ACM International Symposium on Wearable Computers.

Stefan Schneegass, Mariam Hassib, Bo Zhou, et al. 2015. SimpleSkin: Towards Multipurpose Smart Garments. Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers, ACM, 241–244. http://doi.org/10.1145/2800835.2800935

Implicit Feedback

Providing feedback mainly involves presenting visual or auditory cues to the user. These cues need to be perceived and understood before the user can react to them. While this works well for most information retrieval tasks, several other tasks can be realized implicitly, so that the user is only passively involved in the interaction. One of the key technologies enabling implicit feedback is electrical muscle stimulation (EMS), which allows computing systems to actuate parts of the user's body.

Implicit Navigation


Pedestrian navigation systems require users to perceive, interpret, and react to navigation information. This can tax cognition, as navigation information competes with information from the real world. We propose actuated navigation, a new kind of pedestrian navigation in which the user does not need to attend to the navigation task at all. An actuation signal is sent directly to the human motor system to influence walking direction. To achieve this goal, we stimulate the sartorius muscle using electrical muscle stimulation. The rotation occurs during the swing phase of the leg and can easily be counteracted, so the user stays in control. We discuss the properties of actuated navigation and present a lab study identifying basic parameters of the technique as well as an outdoor study in a park. The results show that our approach changes a user's walking direction by about 16°/m on average and that the system can successfully steer users in a park with crowded areas, distractions, obstacles, and uneven ground.
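The core control idea can be sketched in a few lines: during the swing phase of a leg, an EMS pulse whose intensity depends on the heading error nudges the walking direction toward the target. The swing-phase input, the `ems.pulse` interface, the dead band, and the mapping from error sign to stimulated side are all assumptions for illustration; they stand in for the gait sensing and stimulation hardware of the actual system.

```python
def steering_step(current_heading, target_heading, in_swing_phase, ems):
    """One control-loop iteration: decide whether and how strongly to stimulate.

    `ems` is a hypothetical driver object exposing pulse(muscle, side, intensity).
    """
    # Wrap the heading error to [-180, 180) degrees
    error = (target_heading - current_heading + 180.0) % 360.0 - 180.0
    # Only actuate during the swing phase, and only if the user is off course
    if not in_swing_phase or abs(error) < 5.0:
        return
    # Which leg to stimulate for which rotation direction is an assumption here
    side = "right" if error > 0 else "left"
    intensity = min(abs(error) / 45.0, 1.0)  # saturate at maximum intensity
    ems.pulse(muscle="sartorius", side=side, intensity=intensity)
```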

 

Emotion Actuation

The human body reveals emotional and bodily states through measurable signals, such as body language and electroencephalography. However, such manifestations are difficult to communicate to others remotely. We propose EmotionActuator, a proof-of-concept system to investigate the transmission of emotional states in which the recipient performs emotional gestures to understand and interpret the state of the sender. We call this kind of communication embodied emotional feedback and present a prototype implementation. To realize our concept, we chose four emotional states: amused, sad, angry, and neutral. We designed EmotionActuator through a series of studies to assess emotional classification via EEG and to create an EMS gesture set by comparing composed gestures from the literature to sign-language gestures. In a final study with the end-to-end prototype, interviews revealed that participants like implicit sharing of emotions and find the embodied output immersive, but want to have control over which emotions are shared and with whom. This work contributes a proof-of-concept system and a set of design recommendations for designing embodied emotional feedback systems.
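The overall pipeline shape, classify the sender's state and then play back a pre-designed EMS gesture on the recipient, can be sketched as follows. The four state labels follow the project; the gesture encoding and the `ems_play` callback are hypothetical placeholders, not the studied gesture set.

```python
# Sender side: a classifier (e.g., trained on EEG features) yields one of
# four states. Recipient side: each state maps to an EMS gesture, encoded
# here as (channel, intensity 0..1, duration in seconds) pulse sequences.
STATES = ("amused", "sad", "angry", "neutral")

GESTURES = {  # hypothetical placeholder gestures, not the actual set
    "amused": [("forearm", 0.5, 0.4), ("forearm", 0.5, 0.4)],
    "sad": [("upper_arm", 0.3, 1.0)],
    "angry": [("forearm", 0.8, 0.6)],
    "neutral": [],
}

def actuate(state, ems_play):
    """Play back the gesture for a classified state via a hardware callback."""
    if state not in STATES:
        raise ValueError(f"unknown state: {state}")
    for channel, intensity, duration in GESTURES[state]:
        ems_play(channel, intensity, duration)

# Example with a stub that logs instead of stimulating
actuate("angry", lambda ch, i, d: print(f"pulse {ch}: {i:.1f} for {d}s"))
```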

 

 

Publication

Max Pfeiffer, Tim Dünte, Stefan Schneegass, Florian Alt, and Michael Rohs. 2015. Cruise Control for Pedestrians: Controlling Walking Direction using Electrical Muscle Stimulation. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, ACM, 2505–2514. http://doi.org/10.1145/2702123.2702190

In the Press

New Scientist http://www.newscientist.com/article/dn27295-human-cruise-control-app-steers-people-on-their-way.html#.VSPlaxdEF4c

Heise online http://www.heise.de/newsticker/meldung/Navigation-fuer-Fussgaenger-Elektrische-Muskelstimulation-als-Richtungsgeber-2602905.html

Wired https://www.wired.de/collection/latest/navigation-durch-elektro-schocks-am-bein

Wired http://www.wired.co.uk/news/archive/2015-04/14/electric-shock-pedestrian-gps

Wired http://www.wired.com/2015/04/scientists-using-electrodes-remote-control-people/

BBC http://www.bbc.com/news/technology-34424843

The Telegraph http://www.telegraph.co.uk/news/science/science-news/11531190/Human-Sat-Nav-guides-tourists-through-streets-by-controlling-leg-muscles.html

Health Tech Event http://www.healthtechevent.com/sensor/connected-muscles-control-walking-direction-with-your-smartphone/

Electronics Weekly http://www.electronicsweekly.com/news/research/electro-stimulation-app-gives-humans-steer-2015-04/

The Huffington Post http://www.huffingtonpost.com/2015/04/14/human-sat-nav-video_n_7054312.html

CNET http://www.cnet.com/news/human-cruise-control-zaps-legs-to-send-users-in-the-right-direction/

Gizmodo http://gizmodo.com/12-fascinating-projects-from-the-bleeding-edge-of-inter-1700656949

MIT Technology Review http://www.technologyreview.com/news/536646/researchers-use-electrodes-for-human-cruise-control/

Yahoo https://www.yahoo.com/tech/in-the-future-your-phone-could-use-electric-smart-116400304834.html?src=rss

Digital Trends http://www.digitaltrends.com/wearables/cruise-control-for-pedestrians-wearable-pants-news/

Discovery Channel http://news.discovery.com/tech/robotics/remote-control-humans-are-here-150416.htm

Big Think http://bigthink.com/ideafeed/human-cruise-control-uses-electrodes-to-steer-people-in-the-right-direction

Belfast Telegraph http://www.belfasttelegraph.co.uk/technology/human-sat-nav-zaps-peoples-legs-with-electrodes-to-guide-them-through-streets-31138658.html

Mirror http://www.mirror.co.uk/news/technology-science/technology/pedestrian-cruise-control-uses-electric-5518684

Atmel http://blog.atmel.com/2015/04/15/pedestrian-cruise-control-will-steer-your-muscles-in-the-right-direction/

GoExplore http://www.goexplore.net/best-of-the-web/news/pedestrian-cruise-control-maps/

Celebnew http://www.azgossip.com/pedestrian-cruise-control-uses-electric-shocks-to-steer-you-home

Tune In https://www.youtube.com/watch?v=GWt3koXifd8&feature=youtu.be

Usable Security

Current authentication systems strive to be as secure as possible. This poses several challenges to the user, such as remembering complex alphanumeric passwords or changing the password every other month. Since average users need to remember many different passwords, making each of them secure against brute-force attacks (i.e., making them long and complex) no longer scales. As a result, users write down their passwords or use keyboard patterns as passwords. To tackle this issue, we developed different systems that authenticate users without forcing them to remember complex passwords.

 

Implicit Authentication using Wearable Devices

Secure user identification is important for the increasing number of eyewear computers, but limited input capabilities pose significant usability challenges for established knowledge-based schemes such as passwords or PINs. We present SkullConduct, a biometric system that uses bone conduction of sound through the user's skull together with a microphone readily integrated into many of these devices, such as Google Glass. At the core of SkullConduct is a method to analyze the characteristic frequency response created by the user's skull using a combination of Mel-frequency cepstral coefficient (MFCC) features and a computationally lightweight 1NN classifier. We report on a controlled experiment with 10 participants showing that this frequency response is person-specific and stable, even when taking off and putting on the device multiple times, and thus serves as a robust biometric. We show that our method can identify users with 97.0% accuracy and authenticate them with an equal error rate of 6.9%, thereby bringing biometric user identification to eyewear computers equipped with bone conduction technology.
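As a rough sketch of this kind of pipeline, the following computes an averaged MFCC feature vector per recording and identifies users with a 1-nearest-neighbor classifier. It uses the standard librosa and scikit-learn APIs; the enrollment file names and labels are placeholders, and the actual SkullConduct feature extraction and playback signal differ in detail.

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def mfcc_fingerprint(wav_path, n_mfcc=13):
    """Average the MFCC frames of a recording into one feature vector."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Enrollment recordings and labels (placeholder file names)
enrollment = [("alice_1.wav", "alice"), ("bob_1.wav", "bob")]
X = np.array([mfcc_fingerprint(path) for path, _ in enrollment])
labels = [label for _, label in enrollment]

clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)

# Identify who is wearing the device from a new probe recording
print(clf.predict([mfcc_fingerprint("probe.wav")])[0])
```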

 

A Novel Mobile Authentication Method

Touch-enabled user interfaces have become ubiquitous on portable devices. At the same time, authentication using touch input is problematic, since finger smudge traces may allow attackers to reconstruct passwords. We present SmudgeSafe, an authentication system that uses random geometric image transformations, such as translation, rotation, scaling, shearing, and flipping, to increase the security of cued-recall graphical passwords. We describe the design space of these transformations and report on two user studies: a lab-based security study involving 20 participants attacking user-defined passwords using high-quality pictures of real smudge traces captured on a mobile phone display, and an in-the-field usability study with 374 participants who generated more than 130,000 logins on a mobile phone implementation of SmudgeSafe. The results show that SmudgeSafe significantly increases security compared to authentication schemes based on PINs and lock patterns, and exhibits very high learnability, efficiency, and memorability.
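The randomization step itself is straightforward to sketch. The following applies a random flip, rotation, scaling, and a shearing/translating affine transform to the password image before each login, so that smudge traces from one session do not align with the next. It uses the standard Pillow API; the parameter ranges and file names are illustrative assumptions, not the values studied in the paper.

```python
import random
from PIL import Image

def randomize(img: Image.Image) -> Image.Image:
    """Apply one random geometric transformation chain to the login image."""
    if random.random() < 0.5:                      # random horizontal flip
        img = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
    img = img.rotate(random.uniform(-30, 30))      # random rotation (degrees)
    scale = random.uniform(0.8, 1.2)               # random scaling
    w, h = img.size
    img = img.resize((int(w * scale), int(h * scale)))
    shear = random.uniform(-0.2, 0.2)              # random shear + translation
    tx, ty = random.uniform(-20, 20), random.uniform(-20, 20)
    return img.transform(img.size, Image.Transform.AFFINE,
                         (1, shear, tx, 0, 1, ty))

# Show a differently transformed image at every unlock attempt
randomize(Image.open("password_picture.jpg")).save("login_view.jpg")
```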

 

 

 

Exploring User Behavior in the Wild

Common user authentication methods on smartphones, such as lock patterns, PINs, or passwords, impose a trade-off between security and password memorability. Image-based passwords have been proposed as a secure and usable alternative. As of today, however, it remains unclear how such schemes are used in the wild. We present the first study to investigate how image-based passwords are used over long periods of time in the real world. Our analyses are based on data from 2318 unique devices collected over more than one year using a custom application released in the Google Play Store. We present an in-depth analysis of what kinds of images users select, how they define their passwords, and how secure these passwords are. Our findings provide valuable insights into real-world use of image-based passwords and inform the design of future graphical authentication schemes.

Publications

Stefan Schneegass, Youssef Oualil, and Andreas Bulling. 2016. SkullConduct: Biometric User Identification on Eyewear Computers Using Bone Conduction Through the Skull. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (to be published), ACM.

Stefan Schneegass, Frank Steimle, Andreas Bulling, Florian Alt, and Albrecht Schmidt. 2014. SmudgeSafe: Geometric Image Transformations for Smudge-resistant User Authentication. Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ACM, 775–786. http://doi.org/10.1145/2632048.2636090

Florian Alt, Stefan Schneegass, Alireza Sahami Shirazi, Mariam Hassib, and Andreas Bulling. 2015. Graphical Passwords in the Wild: Understanding How Users Choose Pictures and Passwords in Image-based Authentication Schemes. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI ’15). ACM, New York, NY, USA, 316-322. http://dx.doi.org/10.1145/2785830.2785882

 

Ubiquitous Interaction

We explored different technologies and mechanisms to enable interaction in ubiquitous environments.

Exploiting Thermal Reflection for Interactive Systems

Thermal cameras have recently drawn the attention of HCI researchers as a new sensory system enabling novel interactive systems. They are robust to illumination changes and make it easy to separate human bodies from the image background. Far-infrared radiation, however, has another characteristic that distinguishes thermal cameras from their RGB or depth counterparts: thermal reflection. Common surfaces reflect thermal radiation differently than visible light and can be perfect thermal mirrors. In this paper, we show that through thermal reflection, a thermal camera can sense areas beside and even behind its direct field of view. We investigate how thermal reflection can increase the interaction space of projected surfaces using camera-projection systems. We moreover discuss the reflection characteristics of common surfaces in our vicinity in both the visible and thermal radiation bands. Using a proof-of-concept prototype, we demonstrate the increased interaction space for hand-held camera-projection systems. Furthermore, we depict a number of promising application examples that can benefit from the thermal reflection characteristics of surfaces.
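Geometrically, a reflective surface gives the camera a second, virtual viewpoint: mirroring the camera pose across the surface plane yields the position from which the reflected part of the scene is observed. A minimal sketch of that reflection, with illustrative plane parameters:

```python
import numpy as np

def reflect_point(p, plane_point, plane_normal):
    """Mirror point p across the plane given by a point on it and its normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(p - plane_point, n)   # signed distance from the plane
    return p - 2 * d * n

camera = np.array([0.0, 1.5, 2.0])   # real camera position (meters)
wall_pt = np.array([0.0, 0.0, 0.0])  # a point on the reflective wall
wall_n = np.array([0.0, 0.0, 1.0])   # wall normal
print(reflect_point(camera, wall_pt, wall_n))  # virtual camera at z = -2.0
```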

 

Modeling Distant Pointing for Compensating Systematic Displacements

Distant pointing at objects and persons is a highly expressive gesture that is widely used in human communication. Pointing is also used to control a range of interactive systems. For determining where a user is pointing, different ray-casting methods have been proposed. In this paper we assess how accurately humans point over distance and how to improve it. Participants pointed at projected targets on a wall display from 2 m and 3 m while standing and sitting. Testing three common ray-casting methods, we found that even with the most accurate one the average error is 61.3 cm. We found that all tested ray-casting methods are affected by systematic displacements. Therefore, we trained a polynomial to compensate for this displacement. We show that using a user-, pose-, and distance-independent quartic polynomial can reduce the average error by 37.3%.
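The compensation idea can be sketched with a least-squares fit: model the systematic offset between the raw ray-cast hit position and the true target as a quartic polynomial, then subtract the predicted offset at runtime. The calibration data below is synthetic; the actual model in the paper is fit to the study data.

```python
import numpy as np

# Calibration data: raw ray-cast hit positions and the (synthetic) systematic
# offset between hit and true target position, both in centimeters
raw = np.linspace(-150, 150, 31)
displacement = 0.2 * raw + 5e-6 * raw**3

coeffs = np.polyfit(raw, displacement, deg=4)  # quartic offset model

def compensate(x_raw):
    """Corrected estimate of the pointed-at position."""
    return x_raw - np.polyval(coeffs, x_raw)

print(compensate(100.0))  # 100 cm raw hit -> roughly 75 cm corrected
```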

Publications

Alireza Sahami Shirazi, Yomna Abdelrahman, Niels Henze, Stefan Schneegass, Mohammadreza Khalilbeigi, and Albrecht Schmidt. 2014. Exploiting thermal reflection for interactive systems. Proceedings of the 32nd annual ACM conference on Human factors in computing systems - CHI ’14: 3483–3492. http://doi.org/10.1145/2556288.2557208

Sven Mayer, Katrin Wolf, Stefan Schneegass, and Niels Henze. 2015. Modeling Distant Pointing for Compensating Systematic Displacements. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM.

Cognitive Effects of Interactivity

The way we interact with technology has a huge impact on our cognitive processes, and the interaction technique is one way to influence them. Thus, different ways of interacting influence important factors such as recall or recognition of the displayed content.

Cognitive Effects of Interactive Public Display Applications

Many public displays are nowadays equipped with different types of sensors. Such displays allow engaging and persistent user experiences to be created, e.g., in the form of gesture-controlled games or content exploration using direct touch at the display. However, as digital displays replace traditional posters and billboards, display owners are reluctant to deploy interactive content and rather adapt traditional, non-interactive content. The main reason is that the benefit of such interactive deployments is not obvious. Our hypothesis is that interactivity has a cognitive effect on users and therefore increases their ability to remember what they have seen on the screen, which is beneficial both for the display owner and the user. In this paper we systematically investigate the impact of interactive content on public displays on the users' cognition in different situations. Our findings indicate that overall memorability is positively affected as users interact. Based on these findings we discuss design implications for interactive public displays.

What People Really Remember – Understanding Cognitive Effects When Interacting with Large Displays

This paper investigates how common interaction techniques for large displays affect recall in learning tasks. Our work is motivated by results of prior research in different areas that attribute a positive effect on cognition to interactivity. We present findings from a controlled lab experiment with 32 participants comparing mobile phone-based interaction, touch interaction, and full-body interaction to a non-interactive baseline. In contrast to prior findings, our results reveal that more movement can negatively influence recall. In particular, we show that designers face an inherent trade-off between creating immersive, engaging user experiences and memorable content.

Publications

Philipp Panhey, Tanja Doering, Stefan Schneegass, Dirk Wenig, and Florian Alt. 2015. What People Really Remember – Understanding Cognitive Effects When Interacting with Large Displays. Proceedings of the 2015 International Conference on Interactive Tabletops & Surfaces, ACM New York, NY, USA, 103–106. http://doi.org/10.1145/2817721.2817732  

Florian Alt, Stefan Schneegass, Michael Girgis, and Albrecht Schmidt. 2013. Cognitive effects of interactive public display applications. Proceedings of the 2nd ACM International Symposium on Pervasive Displays - PerDis ’13. http://doi.acm.org/10.1145/2491568.2491572

Florian Alt and Stefan Schneegass. 2013. Towards Understanding the Cognitive Effects of Interactivity. Experiencing Interactivity in Public Spaces (EIPS): Workshop at CHI’13, April 28, 2013, Paris, France: 102–106. http://www.cs.tut.fi/ihte/EIPS_workshop_CHI13/ProceedingsWorkshopMaterials/EIPS_Proceedings_CHI13.pdf

Interacting with 3D Displays

3D displays are becoming more and more ubiquitous. While 3D displays cost roughly the same as their 2D counterparts, the understanding of how the third dimension can be utilized for interaction is still limited. We explore different application scenarios and mechanisms to exploit the third dimension for interaction.

Using Eye-Tracking to Support Interaction with Layered 3D Interfaces on Stereoscopic Displays

In this paper, we investigate the concept of gaze-based interaction with 3D user interfaces. We currently see stereo vision displays becoming ubiquitous, particularly as autostereoscopy enables the perception of 3D content without the use of glasses. As a result, application areas for 3D beyond entertainment in cinema or at home emerge, including work settings, mobile phones, public displays, and cars. At the same time, eye tracking is hitting the consumer market with low-cost devices. We envision eye trackers in the future to be integrated with consumer devices (laptops, mobile phones, displays), hence allowing the user's gaze to be analyzed and used as input for interactive applications. A particular challenge when applying this concept to 3D displays is that current eye trackers provide the gaze point in 2D only (x and y coordinates). In this paper, we compare the performance of two methods that use the eye's physiology for calculating the gaze point in 3D space, hence enabling gaze-based interaction with stereoscopic content. Furthermore, we provide a comparison of gaze interaction in 2D and 3D with regard to user experience and performance. Our results show that with current technology, eye tracking on stereoscopic displays is possible with similar performance as on standard 2D screens.
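One natural way to lift 2D gaze to 3D uses the physiology of binocular vision: cast a ray from each eye through its on-screen gaze point and take the midpoint of the rays' closest approach (vergence). The sketch below implements that triangulation; the eye positions and gaze samples are illustrative, and no claim is made that this matches either of the paper's two methods in detail.

```python
import numpy as np

def gaze_point_3d(e_left, g_left, e_right, g_right):
    """Midpoint of closest approach between two gaze rays (assumes non-parallel rays)."""
    d1, d2 = g_left - e_left, g_right - e_right   # ray directions
    n = np.cross(d1, d2)
    n1, n2 = np.cross(d1, n), np.cross(d2, n)
    t1 = np.dot(e_right - e_left, n2) / np.dot(d1, n2)
    t2 = np.dot(e_left - e_right, n1) / np.dot(d2, n1)
    p1, p2 = e_left + t1 * d1, e_right + t2 * d2  # closest points on each ray
    return (p1 + p2) / 2

eye_l = np.array([-0.032, 0.0, 0.6])   # eye positions relative to screen (m)
eye_r = np.array([ 0.032, 0.0, 0.6])
gaze_l = np.array([ 0.01, 0.0, 0.0])   # 2D gaze points on the screen plane z=0
gaze_r = np.array([-0.01, 0.0, 0.0])
print(gaze_point_3d(eye_l, gaze_l, eye_r, gaze_r))  # ~[0, 0, 0.14]: in front of the screen
```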

Evaluating Stereoscopic 3D for Automotive User Interfaces in a Real-World Driving Study

This paper reports on the use of in-car 3D displays in a real-world driving scenario. Today, stereoscopic displays are becoming ubiquitous in many domains such as mobile phones or TVs. Instead of using 3D for entertainment, we explore the 3D effect as a means to spatially structure user interface (UI) elements. To evaluate the potentials and drawbacks of in-car 3D displays, we mounted an autostereoscopic display as the instrument cluster in a vehicle and conducted a real-world driving study with 15 experts in automotive UI design. The results show that the 3D effect increases the perceived quality of the UI and enhances the presentation of spatial information (e.g., navigation cues) compared to 2D. However, the effect should be used with care to avoid spatial clutter, which can increase the system's complexity.

Publications

Florian Alt, Stefan Schneegass, Jonas Auda, Rufat Rzayev, and Nora Broy. 2014. Using eye-tracking to support interaction with layered 3D interfaces on stereoscopic displays. Proceedings of the 19th international conference on Intelligent User Interfaces - IUI ’14: 267–272. http://doi.org/10.1145/2557500.2557518

Nora Broy, Stefan Schneegass, Mengbing Guo, Florian Alt, and Albrecht Schmidt. 2015. Evaluating Stereoscopic 3D for Automotive User Interfaces in a Real-World Driving Study. Adjunct Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM.

Creative Engagement in Museums

Many museums keep physical archives of artifacts and documents as well as digital repositories filled with directly related and meta information. Naturally, the number of objects filed away or kept in remote locations exceeds the physical exhibition space. Many objects cannot be presented to visitors and thus remain concealed. Hence, one of the challenges curators face is the selection and composition of artifacts and their combination with information into a congruent story line. Common practice dictates that this creation is limited to a set of curators working together, so special interests of visitors may often be neglected.
The growing availability of digital copies of cultural heritage artifacts and related information, for example through Europeana, opens up possibilities to use new technologies to include digital content in museums. Showrooms can present a larger diversity of objects, and virtual artifacts can include pieces from other museums. In contrast to physical exhibits, digital objects can be (re-)arranged without much effort, which allows a more dynamic design of showrooms. Through ad-hoc arrangement of cultural heritage artifacts, visitors take on an active role and contribute to the exhibition design.

 

User Defined Exhibitions – Exploring Possibilities to Involve Visitors in the Design of Museum Exhibitions

When connecting the physical experience of museums and exhibitions with relevant digital information, creating engaging exhibitions is a challenge shared by both curators and interaction designers. There is a wealth of artifacts scattered across museums, their archives, and remote locations. Additionally, online repositories hold a multitude of digital cultural heritage content about these artifacts. The challenge of creating an exhibition lies in selecting related artifacts, combining them with available or proprietary information, and arranging them in a way that serves an intended story line. Common practice dictates that this creation is limited to a set of curators working together. We propose a visitor-centered design approach including a physical interaction space where visitors can browse archived objects and put together their own exhibitions with regard to a unique or given story line. We conducted an initial field study and report findings regarding the likes and dislikes of visitors.

 

Parallel Exhibits: Combining Physical and Virtual Exhibits

People have a special fascination for original physical objects, their texture, and visible history. However, the digitization of exhibits and the use of these data is a current challenge for museums. We believe that museums need to capitalize on the affordances of physical exhibits to help users navigate their more extensive virtual collections. Although lacking materiality, virtual objects have other advantages: They can easily be manipulated, rearranged, duplicated, and moved. This offers new opportunities for visitors to engage with museum collections and the curatorial process in a creative way. In this paper, we propose a concept designed to make use of existing digital content in combination with physical exhibits in museums, which we call Parallel Exhibits.
Parallel Exhibits is a system that enables museum visitors to interact with traditional museum collections and virtual objects at the same time. It is an interactive exhibition space where visitors and curators enter a design dialogue mediated by technology. Curators display a selection of physical objects and invite visitors to complete the exhibition with virtual objects from the museum's collections or elsewhere. The ever-changing display can be augmented with digital text labels and messages. We implemented Parallel Exhibits as a web application, which bears the advantage of easily running the application on different platforms. We tested the system both in a museum, using an interactive table and a projection wall, and as part of an online survey reaching a broader audience. In the field study we observed that visitors like to share their ideas and thoughts while using the table. The results of the online survey indicate that visitors like to contribute to exhibitions. In this paper, we describe the technical design of Parallel Exhibits, as well as the outcomes of the on-site study and online survey.

 

Publications

Lars Lischke, Tilman Dingler, Stefan Schneegass, Merel Van Der Vaart, Pawel Wozniak, and Albrecht Schmidt. 2014. Parallel Exhibits: Combining Physical and Virtual Exhibits. Nodem 2014: 149–156.

Lars Lischke, Stefan Schneegass, Tilman Dingler, and Albrecht Schmidt. 2014. User Defined Exhibitions - Exploring Possibilities to Involve Visitors in the Design of Museum Exhibitions. Poster session at the 8th International Conference on Tangible, Embedded and Embodied Interaction.

Prototyping 3D Interfaces

In this project, we identify design guidelines for stereoscopic 3D (S3D) user interfaces and present the MirrorBox and the FrameBox, two user interface prototyping tools for S3D displays. As autostereoscopy becomes available for the mass market, we believe the design of 3D UIs for devices such as mobile phones, public displays, or car dashboards will rapidly gain importance. A benefit of such UIs is that they can group and structure information in a way that makes it easily perceivable for the user. For example, important information can be shown in front of less important information. This work identifies core requirements for designing S3D UIs and derives concrete guidelines. The requirements also serve as a basis for two depth layout tools we built with the aim to overcome limitations of traditional prototyping when sketching S3D UIs. We evaluated the tools with usability experts and compared them to traditional paper prototyping.

FrameBox

The core idea behind the FrameBox is to allow users to work with a large variety of materials, including paper, foil, and 3D mockups created with a laser cutter, and to spatially position the different elements. Hence, we designed a cubic box made of acrylic glass with a number of slots that represent the different depth layers and allow for positioning UI elements on the z-axis in discrete steps. Within each slot, UI elements can easily be moved in the x-direction; positioning on the y-axis can be achieved by means of paper clips. In accordance with the specified requirements, we built a series of FrameBoxes for different application areas: one aimed at the design of automotive UIs and two at the design of mobile phone UIs (one for landscape and one for portrait mode).

Download the *.svg layouts here.

 

MirrorBox

As a second prototyping tool, we designed the MirrorBox. We use a number of semi-transparent mirrors in the front and a surface-coated mirror in the back to allow users to see the mirrored image of a UI element projected from below. The mirrors are aligned one after another on top of a light source. Foils can be used to design UI elements, which are then slid in between the mirrors and the light source to make them visible to the user in front of the MirrorBox.

Pervasive Displays

After years in the lab, interactive public displays are finding their way into public spaces, shop windows, cinemas, and users' homes. They are equipped with a multitude of sensors as well as (multi-)touch surfaces, allowing not only the audience to be sensed but also the displays' effectiveness to be measured. The lack of generally accepted design guidelines for public displays and the fact that there are many different objectives (e.g., increasing attention, optimizing interaction times, finding the best interaction technique) make it a challenging task to pick the most suitable evaluation method. Based on a literature survey and our own experiences, this paper provides an overview of study types, paradigms, and methods for evaluation both in the lab and in the real world. Following a discussion of design challenges, we provide a set of guidelines to be applied by researchers and practitioners alike when evaluating public displays.

Let me catch this! Experiencing Interactive 3D Cinema through Collecting Content with a Mobile Phone

The entertainment industry is going through a transformation, and technology development is affecting how we can enjoy and interact with entertainment media content in new ways. In our work, we explore how to enable interaction with content in the context of 3D cinemas by means of a mobile phone. Hence, viewers can use their personal devices to retrieve, for example, information on the artist of the soundtrack currently playing or a discount coupon on the watch the main actor is wearing. We are particularly interested in the user experience of the interactive 3D cinema concept, and how different interactive elements and interaction techniques are perceived. We report on the development of a prototype application utilizing smartphones and on an evaluation in a cinema context with 20 participants. The results emphasize that designing for interactive cinema experiences should strive for holistic and positive user experiences. Interactive content should be tied together with the actual video content, but integrated into contexts where it does not conflict with the immersive experience of the movie.

Midair Displays: Free-Floating Pervasive Displays

Due to advances in technology, displays could replace literally any surface in the future, including walls, windows, and ceilings. At the same time, midair remains a relatively unexplored domain for the use of displays as of today, particularly in public space. Nevertheless, we see large potential in the ability to make displays appear at any possible point in space, both indoors and outdoors. Such displays, which we call midair displays, could control large crowds in emergency situations, be used during sports for navigation and feedback on performance, or serve as group displays. We see midair displays as a complementary technology to wearable displays. In contrast to statically deployed displays, they allow information to be brought to the user anytime and anywhere. We explore the concept of midair displays and show that with current technology, e.g., copter drones, such displays can easily be built. A study on the readability of such displays showcases the potential and feasibility of the concept and provides early insights.

An Interactive Curtain for Media Usage in the Shower

Access to digital information has become almost ubiquitous. There are only few situations left in which digital media cannot be accessed. Showering is probably the only regular and common activity that does not allow users to access and interact with digital media. Based on a large-scale survey, we identified potential applications that users want to use in the shower and designed a system that augments the user's showering experience to provide pervasive media access. We developed a projection-based system that augments shower curtains from the back side and recognizes user input using a thermal camera. Through a user study in a running shower, we collected feedback from potential users and evaluated different algorithms to recognize touch input on a shower curtain. Our results show that participants are enthusiastic about accessing and controlling media using an interactive shower curtain. Furthermore, we identified two algorithms that are robust enough to be used in challenging environments such as a shower.
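As an illustration of how touch might be segmented in such a setup, the sketch below thresholds an 8-bit thermal frame for warm blobs (a fingertip pressed against the curtain heats the fabric locally) and returns their centers via OpenCV contours. This is only one plausible approach under assumed parameters; the paper evaluates several algorithms, and the threshold and minimum blob area here are made-up values.

```python
import cv2
import numpy as np

def detect_touches(thermal_frame, warm_threshold=200, min_area=30):
    """Return (x, y) centers of warm blobs in an 8-bit thermal image."""
    _, mask = cv2.threshold(thermal_frame, warm_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:  # ignore small noise blobs
            m = cv2.moments(c)
            touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return touches

frame = np.zeros((240, 320), dtype=np.uint8)  # stand-in for a camera frame
cv2.circle(frame, (160, 120), 6, 255, -1)     # synthetic warm fingertip spot
print(detect_touches(frame))                  # -> [(160.0, 120.0)]
```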

Publications

Florian Alt, Stefan Schneegass, Albrecht Schmidt, Jörg Müller, and Nemanja Memarovic. 2012. How to evaluate public displays. Proceedings of the 1st International Symposium on Pervasive Displays, 17.

Jonna R. Häkkilä, Maaret Posti, Stefan Schneegass, Florian Alt, Kunter Gultekin, and Albrecht Schmidt. 2014. Let me catch this! Experiencing Interactive 3D Cinema through Collecting Content with a Mobile Phone. Proceedings of the 32nd annual ACM conference on Human factors in computing systems - CHI ’14: 1011–1020. http://doi.org/10.1145/2556288.2557187

Stefan Schneegass, Florian Alt, Jürgen Scheible, and Albrecht Schmidt. 2014. Midair Displays: Concept and First Experiences with Free-Floating Pervasive Displays. Proceedings of The International Symposium on Pervasive Displays (PerDis’14): 27–31. http://doi.org/10.1145/2611009.2611013

Markus Funk, Stefan Schneegass, Michael Behringer, Niels Henze, and Albrecht Schmidt. 2015. An Interactive Curtain for Media Usage in the Shower. Proceedings of the 4th International Symposium on Pervasive Displays (PerDis’15): 225–231.

Automotive User Interfaces

Driving a car is becoming increasingly complex. Many new features (e.g., for communication or entertainment) that can be used in addition to the primary task of driving increase the driver's workload. Assessing the driver's workload, however, is still a challenging task. A variety of means have been explored, but they focus on experimental conditions rather than on real-world scenarios (e.g., questionnaires). We focus on physiological data that may be assessed in a non-obtrusive way in the future and is therefore applicable in the real world. Hence, we conducted a real-world driving experiment with 10 participants, measuring a variety of physiological data as well as running a post-hoc video rating session. We recorded all parameters and release the dataset to be publicly available for other research projects.

As a first analysis (see citation below), we used this data to analyze differences in workload across road types as well as at especially important parts of the route, such as exits and on-ramps. Furthermore, we investigated the correlation between the objectively assessed and the subjectively measured data.
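As a starting point for working with the dataset, a correlation analysis of this kind could look as follows. The file name and column names are hypothetical placeholders; consult the schema shipped with the dataset for the actual field names.

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("participant_01.csv")         # assumed file name

# Aggregate an objective physiological measure and the subjective post-hoc
# video ratings per road segment (all column names are assumptions)
per_segment = df.groupby("road_segment").agg(
    skin_conductance=("skin_conductance", "mean"),
    subjective_workload=("video_rating", "mean"),
)

r, p = pearsonr(per_segment["skin_conductance"],
                per_segment["subjective_workload"])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```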

Citation / First Paper

A first publication about this data collection project was published at AutomotiveUI '13; see the Publications section below for the full reference.

Important Note and Copyright

Disclaimer: This data is property of the hciLab Group, Institute for Visualization and Interactive Systems, University of Stuttgart, Germany. If you use this dataset, you agree to the license information available in the license file license.txt (and accompanying files) delivered with the dataset. If you would like to use the dataset but need a different license for your specific use case, please contact us! You may use this data for scientific, non-commercial purposes, provided that you give credit to the owners when publishing any work based on this data, as defined in the provided license files. We would also be very interested to hear back from you if you use our data in any way and are happy to answer any questions or address any remarks related to it.

hciLab Driving Dataset: Copyright © 2012-2013 hciLab, Institute for Visualization and Interactive Systems (VIS), University of Stuttgart, Pfaffenwaldring 5a, 70569 Stuttgart, Germany

This hciLab Driving Dataset is made available under the Open Database License (ODbL 1.0): http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License (DbCL 1.0): http://opendatacommons.org/licenses/dbcl/1.0/.

This material is Open Knowledge. You are free to copy, distribute, use, modify, transform, build upon, and produce derived works from our data as long as you attribute any use of the data, or works produced from the data, in the manner specified in the licenses. Read the full ODbL 1.0 and DbCL 1.0 license texts for the exact terms that apply. The licenses are courtesy of the Open Knowledge Foundation.

Download the Dataset

The dataset is available as a ZIP file (37.9 MB): hciLab Driving Dataset

Publications

Stefan Schneegass, Bastian Pfleging, Nora Broy, Albrecht Schmidt, and Frederik Heinrich. 2013. A data set of real world driving to assess driver workload. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI ’13). ACM, New York, NY, USA, 150-157. http://doi.acm.org/10.1145/2516540.2516561

Bastian Pfleging, Stefan Schneegass, Dagmar Kern, and Albrecht Schmidt. 2014. Vom Transportmittel zum rollenden Computer - Interaktion im Auto. Informatik-Spektrum 37, 5: 418–422. http://doi.org/10.1007/s00287-014-0804-6
