Mobile Applications
Christine Lv
Pedestrians have difficulty noticing hybrid vehicles (HVs) and electric vehicles (EVs) quietly approaching from behind. We propose a vehicle detection scheme that uses a smartphone carried by a pedestrian; a notification of an approaching vehicle can be delivered to wearable devices such as Google Glass. We exploit the high-frequency switching noise generated by the motor unit in HVs and EVs. Although people are less sensitive to these high-frequency ranges, these sounds are prominent even on a busy street, and a smartphone can detect them. The ambient sound, captured at 48 kHz, is converted to a feature vector in the frequency domain, and a J48 classifier implemented on the smartphone determines whether an EV or HV is approaching. We have collected a large amount of vehicle data at various locations. The false-positive and false-negative rates of our detection scheme are 1.2% and 4.95%, respectively, and the first alarm was raised as early as 11.6 s before the vehicle reached the observer. The scheme can also determine vehicle speed and type.
Masaru Takagi, Kosuke Fujimoto, Yoshihiro Kawahara, Tohru Asami
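A minimal sketch of the detection pipeline described above, assuming an illustrative window length, a 15 kHz band cutoff, and scikit-learn's decision tree standing in for the J48 (C4.5) classifier; the training data here is synthetic:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier  # stand-in for J48 (C4.5)

    FS, FRAME = 48_000, 4096  # 48 kHz capture per the abstract; window length assumed

    def feature_vector(frame):
        # Log-power spectrum restricted to the high-frequency band where
        # motor/inverter switching noise is prominent (cutoff is an assumption).
        spec = np.abs(np.fft.rfft(frame * np.hanning(FRAME)))
        freqs = np.fft.rfftfreq(FRAME, d=1.0 / FS)
        return 10 * np.log10(spec[freqs >= 15_000] + 1e-12)

    rng = np.random.default_rng(0)
    tone = np.sin(2 * np.pi * 16_000 * np.arange(FRAME) / FS)  # synthetic switching tone
    ev = [feature_vector(0.1 * tone + 0.05 * rng.standard_normal(FRAME)) for _ in range(50)]
    bg = [feature_vector(0.05 * rng.standard_normal(FRAME)) for _ in range(50)]
    X, y = np.vstack(ev + bg), np.array([1] * 50 + [0] * 50)

    clf = DecisionTreeClassifier(max_depth=5).fit(X, y)
    test = feature_vector(0.1 * tone + 0.05 * rng.standard_normal(FRAME))
    print("EV/HV approaching:", bool(clf.predict([test])[0]))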
The goal of this work is to provide an abstraction of ideal sound environments to an emerging class of Mobile Multi-speaker Audio (MMA) applications. It is typically challenging for MMA applications to implement advanced sound features (e.g., surround sound) accurately in mobile environments, especially due to unknown, irregular loudspeaker configurations. To create the illusion that MMA applications run over a specific loudspeaker configuration (i.e., speaker type and layout), this work proposes AMAC, a new Adaptive Mobile Audio Coordination system that senses the acoustic characteristics of mobile environments and controls individual loudspeakers adaptively and accurately. A prototype of AMAC implemented on commodity smartphones coordinates sound arrival times to within several tens of microseconds and substantially reduces the variance in sound level.
Hyosu Kim, SangJeong Lee, Jung-Woo Choi, Hwidong Bae, Jiyeon Lee, Junehwa Song, Insik Shin
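The abstract does not specify AMAC's coordination algorithm; below is a minimal sketch of one standard building block, cross-correlation-based arrival-time estimation followed by per-speaker delay compensation, with synthetic path delays:

    import numpy as np

    FS = 48_000  # assumed sampling rate
    rng = np.random.default_rng(0)

    def arrival_delay(reference, recording):
        # Lag (in samples) at which the reference best aligns with the recording.
        corr = np.correlate(recording, reference, mode="full")
        return int(np.argmax(corr)) - (len(reference) - 1)

    probe = rng.standard_normal(2048)               # wideband probe signal
    rec_a = np.concatenate([np.zeros(300), probe])  # speaker A: 300-sample path delay
    rec_b = np.concatenate([np.zeros(420), probe])  # speaker B: 420-sample path delay

    d_a, d_b = arrival_delay(probe, rec_a), arrival_delay(probe, rec_b)
    lead = d_b - d_a  # play speaker A this many samples later so wavefronts align
    print(f"delay speaker A by {lead} samples ({lead / FS * 1e6:.0f} us)")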
Quality improvement in mobile applications should consider several factors, such as the diversity of users' spatio-temporal usage patterns and the device's resource usage, including battery life. Although application tuning should account for these practical issues, doing so during the development stage is difficult due to the lack of information about real application usage. This paper proposes a user interaction-based profiling system to overcome the limitations of development-level application debugging. Our system analyzes both device behavior and energy consumption through fine-grained, process-level application monitoring, and the resulting information on user interaction, system behavior, and power consumption supports meaningful analysis for application tuning. The proposed method does not require the application's source code and uses a web-based framework so that users can easily contribute their usage data. Our case study with several popular applications demonstrates that the proposed system is practical and useful for application tuning.
Seokjun Lee, Chanmin Yoon, Hojung Cha
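A toy sketch of the kind of correlation such a profiler enables, attributing process-level energy to the user interaction that preceded it; the log formats, package name, and sampling period are all hypothetical:

    from collections import defaultdict

    # Hypothetical logs: user-interaction events and 0.5 s process-level power samples.
    interactions = [(0.0, "app_launch"), (5.0, "tap:play"), (60.0, "tap:stop"), (70.0, "end")]
    power_log = [(i * 0.5, "com.example.player", 0.8 if 5.0 <= i * 0.5 < 60.0 else 0.2)
                 for i in range(140)]

    def energy_joules(log, start, end, proc, period=0.5):
        # Integrate power samples (watts) over the interval following an interaction.
        return sum(w * period for t, p, w in log if p == proc and start <= t < end)

    report = defaultdict(float)
    for (t0, event), (t1, _) in zip(interactions, interactions[1:]):
        report[event] += energy_joules(power_log, t0, t1, "com.example.player")
    print(dict(report))  # energy attributable to each interaction segment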
We propose SENSeTREAM, a novel technique that aggregates multiple streams generated by entirely different types of sensors into a visually enhanced video stream. This paper presents the major features of SENSeTREAM and demonstrates how it enhances user experience in an online live music event. Since a SENSeTREAM is a video stream with sensor values encoded in a two-dimensional graphical code, it can transmit multiple sensor data streams while maintaining their synchronization. A SENSeTREAM can be transmitted via existing live streaming services and saved to existing video archive services. We implemented a prototype SENSeTREAM generator and deployed it at an online live music event. Through this pilot study, we confirmed that SENSeTREAM works with popular streaming services and provides a new media experience for live performances. We also outline future directions for visual stream aggregation and its applications.
Takuro Yonezawa, Masaki Ogawa, Yutaro Kyono, Hiroki Nozaki, Jin Nakazawa, Osamu Nakamura, Hideyuki Tokuda
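A minimal sketch of the core encoding idea, assuming a QR code as the two-dimensional graphical code (the abstract does not name the specific code type) and the third-party qrcode package; compositing the returned image onto each video frame keeps sensor data and video synchronized frame by frame:

    import json, time
    import qrcode  # third-party package; any 2-D barcode generator would do

    def frame_overlay(sensor_samples):
        # Pack one frame's worth of sensor readings into a 2-D graphical code.
        payload = json.dumps({"t": time.time(), "sensors": sensor_samples})
        return qrcode.make(payload)  # PIL image to composite into a frame corner

    img = frame_overlay({"accel": [0.01, -0.02, 9.81], "temp_c": 22.5})
    img.save("overlay_frame0001.png")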
Wearable Input/Output
Kent Lyons
The Tongue and Ear Interface: A Wearable System for Silent Speech Recognition
We address the problem of performing silent speech recognition where vocalized audio is not available (e.g., due to a user's medical condition) or is highly noisy (e.g., during firefighting or combat). We describe our wearable system for capturing tongue and jaw movements during silent speech. The system has two components: the Tongue Magnet Interface (TMI), which uses the 3-axis magnetometer aboard Google Glass to measure the movement of a small magnet glued to the user's tongue, and the Outer Ear Interface (OEI), which measures the deformation in the ear canal caused by jaw movements using proximity sensors embedded in a set of earmolds. We collected a dataset of 1901 utterances of 11 distinct phrases silently mouthed by six able-bodied participants. Recognition relies on hidden Markov model-based techniques to select one of the 11 phrases. We present encouraging results for user-dependent recognition.
Himanshu Sahni, Abdelkareem Bedri, Gabriel Reyes, Pavleen Thukral, Zehua Guo, Thad Starner, Maysam Ghovanloo
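A minimal sketch of HMM-based phrase selection in the spirit described above, using hmmlearn and synthetic stand-ins for the TMI magnetometer and OEI proximity features; the phrase set shown is an illustrative subset:

    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # one HMM per phrase

    rng = np.random.default_rng(0)

    def synthetic_utterance(phrase_id, n=60):
        # Stand-in for a 4-dimensional TMI/OEI feature sequence.
        base = np.sin(np.linspace(0, 3 + phrase_id, n))[:, None]
        return np.hstack([base + 0.05 * rng.standard_normal((n, 1)) for _ in range(4)])

    PHRASES = ["water", "help", "yes"]  # illustrative subset of the 11 phrases
    models = {}
    for pid, phrase in enumerate(PHRASES):
        train = [synthetic_utterance(pid) for _ in range(5)]
        X, lengths = np.vstack(train), [len(u) for u in train]
        models[phrase] = GaussianHMM(n_components=4).fit(X, lengths)

    test = synthetic_utterance(1)
    best = max(PHRASES, key=lambda p: models[p].score(test))
    print("recognized phrase:", best)  # the phrase whose HMM scores highest wins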
Hands-Free Gesture Control with a Capacitive Textile Neckband
We present a novel sensing modality for hands-free, gesture-controlled user interfaces based on active capacitive sensing. Four capacitive electrodes are integrated into a textile neckband, allowing continuous, unobtrusive monitoring of head movement. We explore the capability of the proposed system to recognize head gestures and postures. A study involving 12 subjects was carried out, recording data for 15 head gestures and 19 different postures. We present a quantitative evaluation on this dataset, achieving an overall accuracy of 79.1% for head gesture recognition and 40.4% for distinguishing between head postures (69.9% when the most adjacent positions are merged). These results indicate that our approach is promising for hands-free control interfaces. An example application of this technology is the control of an electric wheelchair by people with motor impairments, where recognized gestures or postures are mapped to control commands.
Marco Hirsch, Jingyuan Cheng, Attila Reiss, Mathias Sundholm, Paul Lukowicz, Oliver Amft
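A minimal sketch of windowed feature extraction over the four capacitive channels followed by classification; the abstract does not name a classifier, so an SVM is assumed here, and the gesture model is synthetic:

    import numpy as np
    from sklearn.svm import SVC  # classifier choice is an assumption

    rng = np.random.default_rng(1)

    def window_features(window):
        # Per-electrode statistics over a window of 4-channel capacitance readings.
        return np.concatenate([window.mean(0), window.std(0),
                               window.max(0) - window.min(0)])

    def synthetic_gesture(kind, n=50):
        # Toy model: a nod perturbs the front electrodes, a turn the side ones.
        w = 0.1 * rng.standard_normal((n, 4))
        cols = slice(0, 2) if kind == "nod" else slice(2, 4)
        w[:, cols] += np.sin(np.linspace(0, np.pi, n))[:, None]
        return window_features(w)

    X = np.array([synthetic_gesture(k) for k in ["nod", "turn"] * 30])
    y = np.array([0, 1] * 30)
    clf = SVC().fit(X, y)
    print("predicted:", ["nod", "turn"][int(clf.predict([synthetic_gesture("turn")])[0])])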
FabriTouch: Exploring Flexible Touch Input on Textiles
Touch-sensitive fabrics let users operate wearable devices unobtrusively and with rich input gestures similar to those on modern smartphones and tablets. While hardware prototypes exist in the DIY crafting community, HCI designers and researchers have little data about how well these devices actually work in realistic situations. FabriTouch is the first flexible touch-sensitive fabric that provides such scientifically validated information. We show that placing a FabriTouch pad onto clothing and the body instead of a rigid support surface significantly reduces input speed but still allows for basic gestures. We also show the impact of sitting, standing, and walking on horizontal and vertical swipe gesture performance in a menu navigation task. Finally, we provide the details necessary to replicate our FabriTouch pad, to enable both the DIY crafting community and HCI researchers and designers to build on our work.
Florian Heller, Stefan Ivanov, Chat Wacharamanotham, Jan Borchers
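A minimal sketch of the kind of swipe/tap discrimination such a menu-navigation task needs, over normalized (x, y) touch samples from a pad; the displacement threshold is illustrative, not FabriTouch's:

    # Label a touch trace as a horizontal/vertical swipe or a tap.
    def classify_swipe(points, min_dist=0.2):
        (x0, y0), (x1, y1) = points[0], points[-1]
        dx, dy = x1 - x0, y1 - y0
        if max(abs(dx), abs(dy)) < min_dist:
            return "tap"
        if abs(dx) >= abs(dy):
            return "swipe right" if dx > 0 else "swipe left"
        return "swipe down" if dy > 0 else "swipe up"

    print(classify_swipe([(0.1, 0.5), (0.4, 0.52), (0.8, 0.5)]))  # swipe right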
SwitchBack: An On-Body RF-Based Gesture Input Device
We present SwitchBack, a novel e-textile input device that can register multiple forms of input (tapping and bi-directional swiping) with minimal calibration. The technique is based on measuring the input impedance of a 7 cm microstrip short-circuit stub, consisting of a strip of conductive fabric separated from a ground plane (also made of conductive fabric) by a layer of denim. The input impedance is calculated from the stub's reflection coefficient, measured using a simple RF reflectometer circuit operating at 900 MHz. The input impedance of the stub is affected by the dielectric properties of the surrounding material and changes in a predictable manner when touched. We present the theoretical formulation, device and circuit design, and experimental results. Future work is also discussed.
Dana T Hughes, Halley P Profita, Nikolaus J Correll
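The impedance computation follows the standard transmission-line relation Z_in = Z0 (1 + Γ)/(1 − Γ); a minimal sketch with a conventional 50-ohm reference and hypothetical reflectometer readings:

    # Input impedance from a measured reflection coefficient (gamma).
    def input_impedance(gamma, z0=50.0):  # z0 = 50 ohm is an assumed reference
        return z0 * (1 + gamma) / (1 - gamma)

    # Hypothetical 900 MHz readings: untouched vs. touched stub.
    for label, gamma in [("untouched", 0.42 + 0.31j), ("touched", 0.18 + 0.05j)]:
        z = input_impedance(gamma)
        print(f"{label}: Z_in = {z.real:.1f} {z.imag:+.1f}j ohm")

A touch or swipe can then be detected by thresholding or tracking the resulting impedance change.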
Wearable Jamming Mitten for Virtual Environment Haptics
This paper presents a new mitten that incorporates vacuum layer jamming technology to provide haptic feedback to the wearer. We demonstrate that layer jamming technology can be successfully applied to a mitten, and discuss the advantages its low-profile form factor offers as a wearable technology. Jamming differs from traditional wearable haptic systems in that it restricts the user's movement rather than applying an actuation force to the user's body; this restriction is achieved by varying the stiffness of wearable items such as gloves. In a pilot study, qualitative results showed that users found the haptic sensation of the jamming mitten similar to grasping the physical counterpart of a virtual object.
Timothy M Simon, Ross T Smith, Bruce H Thomas
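A toy sketch of the control idea, stiffening the mitten in proportion to how far the user's virtual hand closes on a virtual object; the pump interface and the linear grasp-to-vacuum mapping are both hypothetical:

    class DemoPump:
        MAX_KPA = 80.0  # assumed maximum vacuum level
        def set_vacuum(self, kpa):
            print(f"vacuum set to {kpa:.0f} kPa")

    def update_mitten(grasp_overlap, pump):
        # grasp_overlap: 0..1 penetration of the fingers into the virtual object.
        if grasp_overlap > 0.0:
            pump.set_vacuum(min(1.0, grasp_overlap) * pump.MAX_KPA)  # stiffen
        else:
            pump.set_vacuum(0.0)  # compliant again once the object is released

    update_mitten(0.6, DemoPump())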
MagicWatch: Interacting & Segueing
The search for friendlier, more efficient, and more effective ways of human-computer interaction is a perennial topic. This video demonstrates MagicWatch, a device that can sense user gestures, understand user intentions, and carry out the intended tasks, supported by its underlying core techniques and a back-end context-aware smart system on a cloud platform. The MagicWatch can act as a pointer, a remote controller, and an information portal. Using only hand gestures, a user can point at a building, a person, or a screen; control a device, for instance changing the TV channel, adjusting the temperature, or switching slides; and retrieve relevant information from the cloud. The video also highlights the MagicWatch's seamless interaction with objects in its surroundings and its smooth segueing between cyber-physical spaces.
Feng Yang, Shijian Li, Runhe Huang, Shugang Wang, Gang Pan
Health & Children
Inseok Hwang
The recent emergence of comfortable wearable sensors has focused almost entirely on monitoring physical activity, ignoring opportunities to monitor more subtle phenomena such as the quality of social interactions. We argue that it is compelling to ask whether physiological sensors can shed light on the quality of social interactive behavior. This work leverages a wearable electrodermal activity (EDA) sensor to recognize how easily children engage during a social interaction with an adult. In particular, we monitored 51 child-adult dyads in a semi-structured play interaction and used Support Vector Machines to automatically identify children who had been rated by the adult as more or less difficult to engage. We report the classification value of several features extracted from the child's EDA responses, as well as several features capturing the physiological synchrony between the child and the adult.
Javier Hernandez, Ivan Riobo, Agata Rozga, Gregory D. Abowd, Rosalind W. Picard
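A minimal sketch of the classification setup, with illustrative EDA features (tonic level plus phasic peak count and amplitude; the paper's feature set is richer and includes child-adult synchrony) and synthetic traces:

    import numpy as np
    from sklearn.svm import SVC  # Support Vector Machines, as in the study

    rng = np.random.default_rng(2)

    def eda_features(signal):
        tonic = np.median(signal)  # baseline skin conductance level
        phasic = signal - tonic
        mid = phasic[1:-1]
        peaks = (mid > phasic[:-2]) & (mid > phasic[2:]) & (mid > 0.05)
        return [tonic, peaks.sum(), mid[peaks].mean() if peaks.any() else 0.0]

    easy = [eda_features(2 + 0.1 * rng.standard_normal(240)) for _ in range(30)]
    hard = [eda_features(2 + 0.1 * rng.standard_normal(240)
                         + 0.5 * (rng.random(240) < 0.05)) for _ in range(30)]
    X, y = np.array(easy + hard), np.array([0] * 30 + [1] * 30)  # 1 = harder to engage
    clf = SVC().fit(X, y)
    print(clf.predict([eda_features(2 + 0.1 * rng.standard_normal(240))]))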
This paper describes the design of a digital fork and a mobile interactive, persuasive game for young children who are picky eaters and/or easily distracted during mealtime. The system employs Ubicomp technology to educate children on the importance of a balanced diet while motivating proper eating behavior. To sense a child's eating behavior, we designed and prototyped a sensor-embedded digital fork, called the Sensing Fork. Furthermore, we developed a storybook and persuasive smartphone game, called Hungry Panda, that capitalizes on the capabilities of the Sensing Fork to interact with children and modify their eating behavior during mealtime. We report the results of a real-life study involving mother-child pairs that tested the effectiveness of the Sensing Fork and the Hungry Panda game in addressing children's eating problems. Our findings show positive effects on children's eating behavior.
Azusa Kadomura, Cheng-Yuan Li, Koji Tsukada, Hao-Hua Chu, Itiro Siio
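A toy sketch of eating-action detection from two hypothetical Sensing Fork channels (a food-contact electrode and a mouth-contact electrode; the channel names and logic are assumptions, and the real fork senses more):

    def fork_event(mouth_contact, food_contact, prev_state):
        if food_contact and not mouth_contact:
            return "picked food"
        if mouth_contact:
            return "bite"  # the game can react here, e.g. feed the panda
        return "idle" if prev_state != "bite" else "swallowed"

    stream = [(0, 1), (0, 1), (1, 0), (0, 0)]  # (mouth, food) samples over time
    state = "idle"
    for mouth, food in stream:
        state = fork_event(mouth, food, state)
        print(state)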
Health sensing through smartphones has received considerable attention in recent years because of the devices' ubiquity and their promise to lower the barrier for tracking medical conditions. In this paper, we focus on using smartphones to monitor newborn jaundice, which manifests as a yellow discoloration of the skin. Although a degree of jaundice is common in healthy newborns, early detection of extreme jaundice is essential to prevent permanent brain damage or death. Current detection techniques, however, require clinical tests with blood samples or other specialized equipment. Consequently, jaundice screening at home often relies on visual assessment of the newborn's skin color, which is known to be unreliable. To this end, we present BiliCam, a low-cost system that uses smartphone cameras to assess newborn jaundice. We evaluated BiliCam on 100 newborns, yielding a 0.85 rank-order correlation with the gold-standard blood test. We also discuss usability challenges and design solutions that make the system practical.
Lilian de Greef, Mayank Goel, Min Joon Seo, Eric C Larson, James W Stout, James A Taylor, Shwetak N Patel
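A minimal sketch of the calibration-plus-regression idea, white-balancing the skin patch against a color card of known values before regressing to a bilirubin level; the single linear model and all data here are simplifications of the actual BiliCam pipeline:

    import numpy as np
    from sklearn.linear_model import LinearRegression  # simplified regressor

    rng = np.random.default_rng(3)
    card_true = np.array([200.0, 200.0, 200.0])  # known calibration-card color

    def color_features(skin_rgb, card_rgb):
        # Cancel unknown illumination using the card's known values.
        return skin_rgb * (card_true / card_rgb)

    X, y = [], []
    for _ in range(100):
        bili = rng.uniform(0, 20)  # mg/dL, synthetic ground truth
        skin = np.array([180 + 2 * bili, 160 + 2 * bili, 150 - 3 * bili])
        light = rng.uniform(0.8, 1.2)  # unknown illumination gain
        X.append(color_features(skin * light, card_true * light))
        y.append(bili)

    model = LinearRegression().fit(np.array(X), y)
    test = color_features(np.array([200.0, 180.0, 120.0]) * 0.9, card_true * 0.9)
    print(f"estimated bilirubin: {model.predict([test])[0]:.1f} mg/dL")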
In this work, we present ChildSafe, a classification system that exploits human skeletal features collected using a 3D depth camera to distinguish children from adults. ChildSafe analyzes histograms of training samples and implements a bin boundary-based classifier. We train and evaluate ChildSafe using a large dataset of visual samples collected from 150 elementary school children and 43 adults, ranging in age from 7 to 50. Our results suggest that ChildSafe successfully detects children with a correct classification rate of up to 97%, a false-negative rate as low as 1.82%, and a low false-positive rate of 1.46%. We envision this work as an effective sub-system for designing various child protection applications.
Can Basaran, Hee Jung Yoon, Ho-Kyeong Ra, Taejoon Park, Sang Hyuk Son, JeongGil Ko
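A minimal sketch of a bin boundary-based classifier in the spirit described above, using a single synthetic skeletal feature (estimated stature from depth-camera joints) in place of the paper's feature set:

    import numpy as np

    rng = np.random.default_rng(4)
    child_h = rng.normal(1.25, 0.10, 150)  # meters, synthetic training data
    adult_h = rng.normal(1.70, 0.08, 43)

    def best_boundary(children, adults, bins=64):
        # Pick the histogram bin edge minimizing training misclassifications.
        edges = np.histogram(np.concatenate([children, adults]), bins=bins)[1]
        errs = [np.sum(children >= e) + np.sum(adults < e) for e in edges]
        return edges[int(np.argmin(errs))]

    threshold = best_boundary(child_h, adult_h)
    is_child = lambda h: h < threshold
    print(f"boundary = {threshold:.2f} m; 1.4 m -> child? {is_child(1.4)}")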
SoberDiary: A Phone-based Support System for Assisting Recovery from Alcohol Dependence
Alcohol dependence is a chronic disorder associated with severe harm in multiple areas, and relapse is common despite treatment. After alcohol-dependent patients complete alcohol-withdrawal treatment and return to their regular lives, they face further challenges in maintaining sobriety. This study proposes SoberDiary, a phone-based support system that enables alcohol-dependent patients to self-monitor and self-manage their drinking behavior and remain sober in their daily lives. Results from a 4-week user study involving 11 clinical patients show that, using SoberDiary, patients can self-monitor and self-manage their alcohol use behavior, reducing their total alcohol consumption and the number of drinking or heavy-drinking days following the intervention.
Kuo-Cheng Wang, Yi-Hsuan Hsieh, Chi-Hsien Yen, Chuang-Wen You, Yen-Chang Chen, Ming-Chyi Huang, Seng-Yong Lau, Hsin-Liu Cindy Kao, Hao-Hua Chu