UbiComp / ISWC 2024
Workshops and Symposia

UbiComp/ISWC 2024 features 11 Full-Day Workshops, 2 Half-Day Workshops, and 1 Half-Day Tutorial, which will run on Saturday, October 5 and Sunday, October 6, before the start of the main conference.

Workshops and Symposia provide an effective forum for attendees with common interests and are a great opportunity for community building. They can vary in program length, size, and format depending on their specific objectives.

*Workshops and Symposia, like any other track at UbiComp/ISWC 2024, will be in-person only. More details on exceptions can be found on the conference website.

Summary of Key Dates

  • April 29, 2024: Distribution of all accepted workshop CFPs
  • May 24, 2024: Deadline for the camera-ready version of the workshop description (from the proposal) for inclusion in the ACM DL
  • June 7, 2024: Submission deadline for Workshop papers
  • June 28, 2024: Notification of Workshop papers by each accepted Workshop
  • July 26, 2024: Deadline for camera-ready version of papers to include in the ACM DL
  • October 5-6, 2024: Workshops in Melbourne, Australia

 

Note: the following schedule is tentative. Workshops may be shuffled between the two days depending on registration numbers and room availability. Details will be confirmed at a later date.

Workshops (Oct 5, 2024)

In the advancing ubiquitous computing age, computing technology has already spread into many aspects of our daily lives, such as office work, home and housekeeping, health management, transportation, and even entire cities. We have seen that these technologies both contribute to a better quality of life (QoL) in our individual and organizational lives and, at the same time, cause new types of stress and pain. The term “well-being” has recently gained attention as one that covers our general happiness as well as more concrete good conditions in our lives, such as physical, psychological, and social wellness.

An increasing number of researchers and engineers are paying attention to how their work can contribute to better quality of life, social good, and well-being. Despite recent activities in academia and society, unified academic research on computing and well-being has yet to be established within the ubicomp research community. Active research not only in the HCI domain but also in various other ubicomp research areas (systems, mobile/wearable sensing, mobile computing, persuasive applications and services, behavior change, etc.) is needed to draw the big picture of “computing for well-being” from different viewpoints and layers of computing. For example, adding the viewpoint of users’ well-being to activity recognition research may lead to new types of applications that comprehensively cover the recognition of users’ physical, mental, and social activities. Ever since Mark Weiser introduced the term ubiquitous computing, the ubiquity of computing in our daily lives and society has steadily progressed. Now it is time for the community to envision more seriously the benefits that such computing technologies can bring.

Users of digital devices are increasingly confronted with a tremendous number of notifications that appear on multiple devices and screens in their environment. If a user owns a smartphone, a tablet, a smartwatch, and a laptop, and an e-mail client is installed on all of these devices, an incoming e-mail produces up to four notifications – one on each device. In the future, we will receive notifications from all our ubiquitous devices. Therefore, we need smart attention management for incoming notifications. One approach to less interruptive attention management could be the use of ambient representations of incoming notifications.

Following our six successful workshops (WellComp 2018, 2019, 2020, 2021, 2022, and 2023), this year we will bring together people from industry and academia who are active in the areas of activity recognition, mental health, social good, context-awareness, and ubiquitous computing. The main objective of WellComp 2024 is to share the latest research in various areas of computing related to users’ physical, mental, and social well-being. This year, special attention will be given to “challenges for physical, social and mental well-being monitoring using ubicomp technologies”. Relevance to such topics will be considered in the paper review and selection process. Furthermore, the workshop aims to identify future research challenges, research opportunities, and applications of our research outcomes to society.

The topics of interest include, but are not limited to, the following:

  • Measurement and representation of physical, mental, and social well-being with ubicomp technologies.
  • Design and implementation of platforms for collecting, processing, and interpreting health and well-being data.
  • Design and development of computational models predictive of one or several aspects of well-being.
  • Leveraging large foundation models to improve computing for well-being.
  • Unsupervised, semi-supervised, and supervised representation learning for well-being.
  • Classification, regression, and clustering problems related to well-being aspects.
  • Approaches addressing challenges in wearable sensor data (e.g., missing and noisy data, irregular sampling rates, few labels, out-of-distribution inputs, etc) used for well-being monitoring.
  • Development of explainable, robust, privacy-aware, and trustworthy pipelines for well-being monitoring.
  • Multi-modal approaches integrating information from several data sources (e.g., physiological, behavioral, audio, video).
  • Fairness in computing systems for well-being.
  • Ethical considerations across data collection, system development, and deployment.
  • Computing systems for promoting well-being-awareness.
  • Innovative well-being applications and diverse target populations (e.g., children, patients, or elderly people).

Submission Details

We will accept two types of submissions: long and short papers. Long papers may be up to 6 pages and short papers up to 4 pages. Both types of papers should use the SIGCHI Master Article Template and will be reviewed by at least two workshop organizers. Successful submissions will have the potential to raise discussion, provide insights for other attendees, and illustrate open challenges and potential solutions. All accepted publications will be published on the workshop website and in the ACM Digital Library. At least one author of each accepted paper needs to register for the conference and the workshop itself. During the workshop, each paper will be presented briefly by one of the authors. In addition, there will be room for demonstrations as well as discussions. All papers need to be anonymized.

Organizing Committee

  • Ting Dang (University of Melbourne)
  • Shkurta Gashi (ETH AI Center, ETH Zürich)
  • Dimitris Spathis (Nokia Bell Labs / University of Cambridge)
  • Alexander Hoelzemann (University of Siegen)

Website

https://wellcomp2024.github.io/

Feedback modalities are an essential aspect of the success and effectiveness of wearable systems used during mobile activities. Over the past decades, researchers have explored a variety of feedback and feed-forward modalities for mobile interaction. Dynamic activities such as sports generally inhibit interaction with devices, but they also offer opportunities for novel interaction experiences. The choice of modalities is essential to provide feedback that is understandable, timely, and does not interfere with the sports activity.

This hands-on workshop aims to bring together practitioners and researchers working on, and interested in, mobile and wearable systems. Workshop participants will be offered a platform to collectively discuss and explore current approaches, methods, and tools related to feedback modalities for mobile interactions.

Apply by e-mailing an Expression of Interest (UPDATED)

As the deadline for submission via PCS (with inclusion of position papers in the ACM Digital Library) has passed, we offer the following option to apply for the workshop.

Send an e-mail to Vincent.vanrheden@plus.ac.at with an expression of interest including:

  • Background: describe your experience using wearable, mobile, or interactive systems in sports or movement-centered HCI practices, as well as your previous research in the area.
  • (Optional) Sport systems and experiences: two good and two bad examples of modality usage in sports, with arguments for the choice of examples. For each example, describe how the feedback modality was utilized and the type of feedback that was given; argue why this was a good or bad approach; consider alternative modalities; and provide key insights and challenges. If possible, add a representative image. These examples can be industry or research projects, including one’s own. Note: participants are still expected to present this in the workshop.
  • (Optional) Participants are encouraged to bring material for quick-and-dirty prototyping to explore novel feedback modalities (e.g. actuators, wearables, mobile systems that can be repurposed). Consider providing a short (visual) description of these materials and how they can be used.

  • Application deadline: 15.09.2024
  • Notification to authors: 20.09.2024
  • Workshop date: 05.10.2024

Submissions will be reviewed by the organizers and selected according to their relevance to the workshop and their likelihood of sparking discussion and inspiring novel feedback approaches and modalities. Please note that at least one author of each accepted submission must attend the workshop, and UbiComp/ISWC 2024 is in-person only. For more information, visit: https://exertiongameslab.org/workshops-events/ubicomp-iswc-2024-multimodal-sports-interaction-wearables-and-hci-in-motion or feel free to reach out to Vincent.vanrheden@plus.ac.at.

Website

https://exertiongameslab.org/workshops-events/ubicomp-iswc-2024-multimodal-sports-interaction-wearables-and-hci-in-motion

This workshop deals with the challenges of designing reproducible experimental setups, running large-scale dataset collection campaigns, designing activity and context recognition methods that are robust and adaptive, and evaluating systems in the real world. We wish to reflect on future methods, such as lifelong learning approaches that allow open-ended activity recognition. 

The objective of this workshop is to share the experiences among current researchers around the challenges of real-world activity recognition, the role of datasets and tools, and breakthrough approaches towards open-ended contextual intelligence.

Topics of interest include but are not limited to:

  •  Data collection / Corpus construction
  •  Effectiveness of Data / Data-Centric Research
  •  Tools and Algorithms for Activity Recognition
  •  Real World Application and Experiences
  •  Sensing Devices and Systems
  •  Mobile experience sampling, experience sampling strategies
  •  Unsupervised pattern discovery
  •  Dataset acquisition and annotation through crowd-sourcing, web-mining
  •  Transfer learning, semi-supervised learning, lifelong learning

Submission Guidelines

The correct template for submission is a double-column Word Submission Template or a double-column LaTeX Template. The maximum paper length is 6 pages, including references. Anonymization is not required.

Please see https://www.ubicomp.org/ubicomp-iswc-2024/authors/formatting/ for more details on submission format and templates.

Submit your papers via PCS https://new.precisionconference.com/submissions

Please select SIGCHI -> UbiComp/ISWC 2024 -> UbiComp/ISWC 2024 12th Workshop on HASCA 2024

Organisers

  • Kazuya MURAO (Ritsumeikan University, Japan)
  • Yu ENOKIBORI (Nagoya University, Japan)
  • Hristijan GJORESKI (Ss. Cyril and Methodius University, N. Macedonia)
  • Paula LAGO (Concordia University, Canada)
  • Tsuyoshi OKITA (Kyushu Institute of Technology, Japan)
  • Pekka SIIRTOLA (University of Oulu, Finland)
  • Kei HIROI (Kyoto University, Japan)
  • Philipp M. SCHOLL (University of Freiburg, Germany)
  • Mathias CILIBERTO (University of Sussex, UK)
  • Kenta URANO (Nagoya University, Japan)
  • Marius BOCK  (University of Siegen, Germany)

Contact

hasca-organizer@ml.hasc.jp

Website

http://hasca2024.hasc.jp

Human-Information Interaction (HII) has become increasingly ubiquitous. While it is crucial to understand and improve the user experience in HII, several challenges remain from a ubiquitous computing perspective, such as discrepancies in how the cognitive activities involved in HII are defined and the lack of standard practice for experimental task design and for physiological methods to measure cognitive activities during interaction.

In this workshop, we seek to form a common understanding and community standards of quantifying the cognitive aspects of user experience in HII.

We invite researchers and practitioners who use physiological data to measure user experience in HII to submit their contributions as a short research summary or position paper (4 pages in the SIGCHI one-column format, excluding references) discussing one or more of the workshop themes. Accepted submissions will be invited to give a talk at our workshop and included in the ACM DL (as part of the UbiComp/ISWC ’24 Adjunct Proceedings).

For more details, please visit https://hii-biosignal.github.io/ubi24/ or get in touch with the workshop organizers via biosignal.ubicomp24@gmail.com.

Submission Details

To submit your contribution, please go to PCS (https://new.precisionconference.com/submissions), select conference “UbiComp/ISWC 2024” and select track “UbiComp/ISWC 2024: Workshop on Physiological Methods for HII”.

Website

https://hii-biosignal.github.io/ubi24/

Heads-Up Computing is an emerging interaction paradigm within Human-Computer Interaction (HCI) that focuses on seamlessly integrating computing systems into the user’s natural environment and daily activities. The goal is to deliver information and computing capabilities in an unobtrusive manner that complements ongoing tasks without interfering with users’ natural engagement with the real-world context.


To better understand the concept of Heads-Up Computing, let’s use a cooking analogy to explore its components:

Imagine you’re preparing to cook a meal. The first decision is selecting the right hardware; this could be a wok, a steamer, or a barbecue rack depending on what you’re planning to cook. Next, consider the ingredients. If you’re a vegetarian, your choices will naturally exclude meat, focusing instead on vegetables and plant-based products. Finally, the cooking method comes into play. Each cuisine, such as French or Chinese Sichuan, has its distinct techniques and methods that define its flavors and outcomes.

So, what are the hardware, ingredients, and strategies in the context of Heads-Up Computing? 


1) Hardware: Body-Compatible Hardware Components

Traditional devices like mobile phones often distract users, turning them into so-called “smartphone zombies” because they require concentrated interaction. In contrast, Heads-Up Computing leverages a distributed design that aligns with human capabilities. While numerous hardware design possibilities exist, achieving a balance between compatibility, convenience, practicality, and existing technological constraints is crucial. We anticipate that, at least in the near future (5-10 years), the hardware platform for Heads-Up Computing will primarily consist of two fundamental components: a head-piece and a hand-piece. In the future, we also anticipate a body-piece in the form of a robot that can further enhance the capability of the heads-up hardware platform.

Head-piece responsibilities:

  • Provides real-time visual and aural feedback.
  • Understands the user’s visual perspective, auditory environment, facial gestures, and emotions.
  • Recognizes speech input and user attention.

Hand-piece responsibilities:

  • Offers real-time haptic feedback.
  • Tracks hand position, posture, and movements.
  • Facilitates additional interaction commands.

While some systems, such as Apple’s Vision Pro, integrate the head-piece and hand-piece into a single device, this approach compromises wearability, resulting in a device that is too bulky for everyday use. Consequently, a two-piece solution is more likely to achieve greater portability and thus to serve as an everyday device. For example, systems like Eyeditor, GlassMessaging, and PandaLens use smart glasses as the head-piece and a wearable ring mouse as the hand-piece to balance functionality and portability. Note that the hand-piece used in these examples is a basic one that achieves only part of the functionality of an ideal hand-piece, which would provide comprehensive tracking and feedback capabilities.

2) Ingredients: Multimodal Voice, Gaze, and Gesture Interaction

For effective interaction during daily activities, Heads-Up Computing utilizes complementary communication channels, as most tasks involve sight and manual activities (a toy sketch of how these channels might work together follows the list):

  • Voice Control: Facilitates hands-free device interaction. Projects like EDITalk and Eyeditor have made strides in voice interactions, significantly enhancing user experience when combined with smart glasses.
  • Gaze Tracking: Directs computing experiences through eye movements. This technology is awaiting advancements like those anticipated with Apple’s Vision Pro to overcome the limitations of mobile usage.
  • Micro-Gesture Recognition: Employs subtle gestures for interaction without disrupting other activities. Research has identified gestures suitable for Heads-Up Computing, improving the practicality of technologies such as the wearable ring mouse.
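To make the division of labor among these channels concrete, here is a purely illustrative sketch, not drawn from any of the systems cited above: a hypothetical dispatcher in which gaze selects a target while voice or micro-gestures act on it. All class, event, and payload names are our own assumptions.

```python
# Illustrative only: a toy event loop showing how a heads-up system might
# fuse complementary input channels. In a real system, events would come
# from device drivers on the head-piece and hand-piece.
from dataclasses import dataclass
from typing import Optional
import queue


@dataclass
class InputEvent:
    channel: str      # "voice", "gaze", or "gesture" (hypothetical labels)
    payload: str      # e.g., recognized command, fixation target, gesture id
    timestamp: float  # seconds


class HeadsUpDispatcher:
    """Routes events so each channel plays to its strength:
    gaze selects a target; voice and micro-gestures act on it."""

    def __init__(self) -> None:
        self.events: "queue.Queue[InputEvent]" = queue.Queue()
        self.gaze_target: Optional[str] = None

    def push(self, event: InputEvent) -> None:
        self.events.put(event)

    def step(self) -> Optional[str]:
        """Process one pending event; return an action string if one fires."""
        try:
            ev = self.events.get_nowait()
        except queue.Empty:
            return None
        if ev.channel == "gaze":
            self.gaze_target = ev.payload   # remember what the user looks at
            return None
        if ev.channel in ("voice", "gesture") and self.gaze_target:
            return f"{ev.payload} -> {self.gaze_target}"
        return None


if __name__ == "__main__":
    d = HeadsUpDispatcher()
    d.push(InputEvent("gaze", "notification_card", 0.1))
    d.push(InputEvent("voice", "dismiss", 0.4))
    d.step()                     # records the gaze target
    print(d.step())              # prints: dismiss -> notification_card
```

The point of the sketch is the complementarity: no single channel has to carry the whole interaction, mirroring the head-piece/hand-piece split described earlier.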

3) Strategies: Static and Dynamic Interface & Interaction Design Approaches

Designing interface and interaction strategies for Heads-Up Computing presents unique challenges, as it requires minimal interference with the user’s current activities. This necessitates the use of transparent displays that adapt as the user moves, and the avoidance of traditional input methods like keyboards, mice, and touch interactions, which demand significant attention and resources.


To create heads-up friendly interfaces and interactions, two main approaches can be considered:

a) Static Interface & Interaction Design: Environmentally Aware and Fragmented Attention Friendly


This approach aims to design interfaces that are suited for environments requiring fragmented attention, such as multitasking scenarios. Examples of research work in this category include:

  • Adapting text spacing and presentation for readability on the go
  • Utilizing icons instead of text for unobtrusive notifications 
  • Redesigning the presentation of dynamic information, such as videos, to accommodate mobile multitasking 
  • Glanceable interfaces 

In addition, tools like VidAdapter are instrumental in adapting existing media to these new interfaces, taking into account both the physical and cognitive availability of the user.

b) Dynamic Interface & Interaction Design: Resource-Aware

Instead of one-size-fits-all interface solutions, one can also design interfaces that dynamically respond to the user’s current cognitive and physical state. This is what we call the “resource-aware interaction” approach, which adjusts the system’s behavior and generates context-sensitive interfaces and interactions, providing a more personalized and efficient user experience. An example of such an interface has been proposed by Lindlbauer’s group. Such interfaces, however, require the system to understand the environment, the user’s cognitive state, and the device constraints in real time, which is much harder to achieve. Nevertheless, this is a research direction worth further investigation, and Heads-Up Multitasker is one attempt to model users’ cognition in heads-up computing scenarios.


Heads-Up Computing signifies a transformative shift towards a human-centric approach, where technology is designed to augment rather than hinder user engagement with the real world. Although there has been some initial progress in this domain, much more exploration is needed to fully realize its potential. We view this workshop as a valuable opportunity to outline a research roadmap that will direct our future endeavors in this exciting field. This roadmap will help us identify key areas of focus, address current challenges, and explore innovative solutions that enhance user interaction seamlessly.

Topics of Interest

We are looking for participants with a research background in AR, MR, wearable computing, and/or intelligent assistants. Interested academic participants are asked to submit a 2-4 page position paper or research summary on topics including but not limited to:

  • Interfaces and Interactions: As smart glasses usher us into a new age, they bring forth the question of designing interactions that are intuitive, seamless, and socially acceptable. How can we meld technology with human instincts?
  • Mobility/Multitasking: The mobility that smart glasses bring is undeniable. The design nuances of catering to a user on the move, be it walking, driving, or merely existing in public spaces, deserve detailed discussion.
  • Ergonomics and Comfort: Functionality does not necessarily warrant comfort. Balancing capability with user comfort will be a pivotal area of exploration.
  • Inclusive and trustworthy Information Access: Information empowers people’s lives. With a constant influx of information, users stand at the risk of being overwhelmed. This theme will dissect the impact of information accessibility and how to manage and interact with information without jeopardizing safety.
  • Privacy and Ethics: In an age where user data holds high value for various parties, wearable technologies walk a fine line between being informative and invasive. The ethical implications of data collection, storage, and usage will be a prime area of focus.
  • Abuse and Addiction: Every technological marvel comes with its own set of pitfalls. The potential misuse, both by vendors and individuals, will be scrutinized. Delving into these dark patterns will help us forecast and possibly prevent misuse.

Submission guidelines

To submit your workshop paper for Ubicomp24p, please ensure your documents are formatted as PDF files. You can upload your proposals through the following link: Ubicomp24p Submission Portal.

For Academic Participants, you can submit:

  • Position Paper: Focus on a specific issue within the realm of Heads-Up Computing.
  • Research Summary: Provide a comprehensive overview of multiple projects you are involved in.



Once accepted, all position and research summary papers will be compiled into the workshop proceedings and made accessible on arXiv.

For Industry Participants:

  • If you do not have previous publications in this area but wish to attend the workshop, please submit a 1-page cover letter. In your letter, describe your background and outline what you hope to learn and contribute during the workshop.

In addition to this standard format, we ask everyone to submit a simple online form with the following information: 

  • A brief introduction to your research area.
  • Past and ongoing research topics.
  • What you want to get out of the workshop.
  • Perceived major issues with the next interaction paradigm of wearable intelligent assistants.
  • Insights or solutions you might have in mind.


It is imperative that at least one author of each accepted submission attend the workshop. Furthermore, all participants must register both for the workshop and for a minimum of one day of the main conference. We eagerly await your valuable contributions and insights. Together, let’s shape the future of human-computer interaction.

Organizers

  • Shengdong Zhao: Professor, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong, China
  • Ian Oakley: Professor, KAIST, Daejeon, South Korea
  • Yun Huang: Associate Professor, University of Illinois at Urbana-Champaign, Rono-Hills, Urbana-Champaign, Illinois, USA
  • Haiming Liu: Associate Professor, University of Southampton, Southampton, UK
  • Can Liu: Associate Professor, City University of Hong Kong, 18 Tat Chee Avenue, Kowloon, Hong Kong, China

If you have any questions, please contact us at ubicomp24p@precisionconference.com.

Website

https://sites.google.com/view/heads-up-computing-ubicomp/home?authuser=1

We aim for FairComp to be an interdisciplinary forum that goes beyond presenting papers, bringing together academia and industry. Notably, we reach out to researchers and practitioners whose work lies within the ACM SIGCHI domains (e.g., UbiComp, HCI, CSCW), as well as FAccT, ML & AI, social sciences, philosophy, law, psychology, and others. The workshop organizers are actively engaged in the aforementioned themes and will encourage their network of colleagues and students to participate. In particular, the goal of this workshop is to collaboratively:

  • Assess the evolving socio-technical themes and concerns in relation to fairness across ubiquitous technologies, ranging from health, behavioral, and emotion sensing to human-activity recognition, mobility, and navigation.
  • Map the space of ethical risks and possibilities regarding technological interventions (e.g., input modalities, learning paradigms, design choices).
  • Envision new sensing and data-acquisition paradigms to fairly and accurately gather ubiquitous physical, physiological, and experiential qualities.
  • Explore novel methods for generalization, domain adaptation, and bias mitigation, and investigate their suitability for diverse ubiquitous case studies.
  • Initiate a discourse around the future of “ubiquitous fairness” and co-create research agenda(s) to meaningfully address it.
  • Consolidate an international network of researchers to further develop these research agendas through funding proposals and through steering future funding instruments.

The topics of interest include, but are not limited to, the following:

  •   New definitions, metrics, and criteria of fairness and robustness, tailored for ubiquitous computing.
  •   Indirect notions of fairness on devices (e.g., unfair resource allocation, energy, connectivity).
  •   New methods for bias identification and mitigation.
  •   Bias, discrimination, and measurement errors in data, labels, and under-represented input modalities.
  •   New benchmark datasets for fairness and robustness evaluation (e.g., sensor data with protected attributes).
  •   Geographical equity across datasets and applications (e.g., WEIRD research, Global South).
  •   New user study methodologies beyond conventional protocols (e.g., Fairness-by-Design).
  •   Robustness (e.g., out-of-distribution generalization, uncertainty quantification) of ML models in high-stake and real-world applications.
  •   Investigation of fairness trade-offs (e.g., fairness vs. accuracy, privacy, resource efficiency, generalizability).
  •   Implications of regulatory frameworks for UbiComp.

Submission details

We invite complete and ongoing research works, use cases, field studies, reviews, and position papers of 4-6 pages (excluding references). Submissions should follow UbiComp’s publication vendor instructions and be submitted through PCS. Specifically, the correct template for submission is the double-column Word Submission Template or the double-column LaTeX Template, and the correct template for publication (i.e., after conditional acceptance) is the single-column Word Submission Template or the double-column LaTeX template. Each article will be reviewed by two reviewers from a panel of experts consisting of external reviewers and organizers. To ensure accessibility, all authors should adhere to SIGCHI’s Accessible Submission Guidelines. All accepted publications will be published on the workshop website and in the ACM Digital Library as part of the UbiComp 2024 proceedings. At least one author of each accepted paper needs to register for the conference and the workshop itself. During the workshop, each paper will be presented in person by one of the authors. All papers need to be anonymized. Any questions should be mailed to faircomp.workshop@gmail.com.

For submissions, please go to the website: https://new.precisionconference.com/submissions (Society: SIGCHI > Conference: Ubicomp/ISWC 2024 > Track: Ubicomp/ISWC 2024 Workshop: FairComp).   

Organizing Committee

  • Lakmal Meegahapola (ETH Zurich)
  • Dimitris Spathis (Nokia Bell Labs | University of Cambridge)
  • Marios Constantinides (Nokia Bell Labs | University of Cambridge)
  • Han Zhang (University of Washington)
  • Sofia Yfantidou (Aristotle University of Thessaloniki)
  • Niels van Berkel (Aalborg University)
  • Anind K. Dey (University of Washington)

Website

https://faircomp-workshop.github.io/2024/

We will solicit three categories of papers:

  • Full papers (up to 6 pages including references) should report reasonably mature work with earables and are expected to demonstrate concrete and reproducible results, although their scale may be limited.
  • Experience papers (up to 4 pages including references) present extensive experiences with the implementation, deployment, and operation of earable-based systems. Desirable papers are expected to contain real data as well as descriptions of the practical lessons learned.
  • Short papers (up to 2 pages including references) are encouraged to report novel and creative ideas that have yet to produce concrete research results but are at a stage where community feedback would be useful.

Moreover, we will have a special submission category – “Dataset Paper” – soliciting a 1-2 page document describing a well-curated and labeled dataset collected with earables (ideally accompanied by the dataset itself). Full research papers should use the two-column ACM sigconf template, and accepted papers will be included in the ACM Digital Library.

All papers will be digitally available through the workshop website and the UbiComp adjunct proceedings. We will offer “Best Paper” and “Best Dataset” awards sponsored by Nokia Bell Labs. In addition, depending on the quality and depth of the submissions, we might consider producing a book on “Earable Computing”, contributed by the authors of the papers and edited by the workshop organizers.

Topics of interest are:

  • Acoustic Sensing with Earables
  • Kinetic Sensing with Earables
  • Multi-Modal Learning with Earables
  • Multi-Task Learning with Earables
  • Active Learning with Earables
  • Low-Power Sensing Systems for Earables
  • Authentication & Trust mechanisms for Earables
  • Quality-Aware Data Collection with Earables
  • Experience Sampling with Earables
  • Crowd Sourcing with Earables
  • Novel UI and UX for Earables
  • Auditory Augmented Reality Application with Earables
  • Lightweight Deep Learning on Earables
  • Tiny Machine Learning on Earables
  • Health and Wellbeing Applications of Earables
  • Emerging applications of Earables

Website

https://www.esense.io/earcomp2024/

Tutorial (Oct 6, 2024)

Feature extraction remains the core challenge in Human Activity Recognition (HAR) – the automated inference of activities being performed from sensor data. Over the past few years, the community has witnessed a shift from manual feature engineering using statistical metrics and distribution-based representations to feature learning via neural networks. In particular, self-supervised learning methods, which leverage large-scale unlabeled data to train powerful feature extractors, have gained significant traction. Recently, the advent of Large Language Models (LLMs) and multi-modal foundation models has unveiled a promising direction by leveraging well-understood data modalities. This tutorial covers existing representation learning work, from single-sensor approaches to cross-device and cross-modality pipelines. Furthermore, we will provide an overview of recent developments in multi-modal foundation models, which originated in language and vision learning but have recently started incorporating inertial measurement unit (IMU) and time-series data. This tutorial will offer an important forum for researchers in the mobile sensing community to discuss future research directions in representation learning for HAR, and in particular, to identify potential avenues to incorporate the latest advancements in multi-modal foundation models, aiming to finally solve the long-standing activity recognition problem.
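As a concrete illustration of the kind of self-supervised feature learning the tutorial surveys, here is a minimal SimCLR-style contrastive pretraining sketch for unlabeled IMU windows in PyTorch. It is our own toy example, not tutorial material; the augmentations, network architecture, and data sizes are placeholder assumptions.

```python
# Minimal sketch: SimCLR-style contrastive pretraining on unlabeled IMU
# windows. Two augmented views of each window are pulled together in the
# embedding space; all other windows in the batch act as negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F


def augment(x: torch.Tensor) -> torch.Tensor:
    """Cheap augmentations for (batch, channels, time) IMU windows:
    random jitter plus per-channel scaling (illustrative choices)."""
    noise = 0.05 * torch.randn_like(x)
    scale = 1.0 + 0.1 * torch.randn(x.size(0), x.size(1), 1)
    return scale * x + noise


class Encoder(nn.Module):
    """Small 1D-CNN feature extractor for multi-channel sensor windows."""
    def __init__(self, channels: int = 6, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """NT-Xent contrastive loss over two augmented views (each n x d)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau                       # cosine similarity logits
    n = z1.size(0)
    sim.fill_diagonal_(float("-inf"))           # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)


# One pretraining step on fake data: 128 six-channel (acc + gyro)
# windows of 100 samples each, no labels required.
x = torch.randn(128, 6, 100)
enc = Encoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
opt.zero_grad()
loss = nt_xent(enc(augment(x)), enc(augment(x)))
loss.backward()
opt.step()
print(f"contrastive loss: {loss.item():.3f}")
```

In practice, the pretrained encoder would then be frozen or fine-tuned on a small labeled set for the downstream activity recognition task.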

Organizing Committee

  • Harish Haresamudram (Georgia Institute of Technology)
  • Chi Ian Tang (Nokia Bell Labs)
  • Sungho Suh (DFKI and RPTU)
  • Paul Lukowicz (DFKI and RPTU)
  • Thomas Ploetz (Georgia Institute of Technology)

Website

https://sites.google.com/view/soar-tutorial-ubicomp2024/home

Workshops (Oct 6, 2024)

The goal of this workshop is to provide a platform for researchers, software and medical practitioners, and designers to share and debate both the pros and cons of applying Large Language Models (LLMs) and the Internet of Things (IoT) for diagnosis and personalized training for autistic children. Through multiple activities during the half-day workshop, including oral presentations, demos, and a panel discussion, we hope to use this opportunity to build a network of experts dedicated to benefiting children with special needs, and to further inspire research on harnessing emerging technologies for this under-privileged group of users, their caregivers, and special education teachers.

This workshop explores the benefits, challenges, and future directions for involving creative interactive design using LLMs and IoT with/for autistic children in diagnosis and personalized training. By engaging in presentations, demonstrations, and group discussions, participants will have the chance to exchange their related experiences and insights.

Submissions of position papers, work-in-progress reports, or demonstration papers for a short presentation or demonstration related to interactive design with autistic children using LLMs and IoT for diagnosis and personalized training, or relevant fields, are welcome. Specifically, the proposed workshop expects:

  • Position papers (2-4 pages) discussing research questions, opportunities, benefits, or challenges.
  • Work-in-progress reports (2-4 pages) highlighting current research.
  • Demonstration papers (1 page) illustrating a leading-edge system in use, under development, or at a testing stage.

The suggested topics include (but are not limited to):

  • Optimized and Personalized Training for Special Education
  • AI, IoT, and/or Smart Sensors for Special Education
  • Large Language Models (LLMs), and/or Large Vision Models (LVM) for Special Education
  • Technology-Based Intervention (TBI) for Special Education
  • Interactive Design with Children
  • Emerging Applications for Special Education and/or Healthcare

Authors of accepted works will be invited to present their submissions in a dedicated presentation or demo session.

Submission Guidelines

Submissions should be in the UbiComp/ISWC 2024 Proceedings Formats and submitted via UbiComp PCS. The submission portal will open in May 2024.

Website

https://idwac.github.io/

For the 2024 UbiComp Mental Health Sensing and Intervention workshop, we invite paper submissions at the intersection of mental health, well-being, ubiquitous computing, and human-centered design.

This year, we are adding a special call for workshop papers that inspire new research directions. These papers should include initial findings that are valuable to the community, but are not fully publishable or finished contributions. Based upon prior years’ work, these papers could include methods and/or topics such as:

  • Ethical deployments of ubiquitous computing systems in historically underserved communities.
  • Ethical frameworks for developing and implementing ubiquitous technologies for mental health.
  • Experience reports from clinical studies in any phase, from early pilot studies to large-scale clinical trials.
  • Experience reports of clinical implementation from any perspective in the healthcare system.
  • Identification of opportunities for ubiquitous computing technologies to help solve global issues that impact mental health, like climate change.
  • Integration of ubiquitous technologies into existing healthcare infrastructures (e.g., payment models, regulatory frameworks) and policy.
  • Investigation of new methodologies for intervention (e.g., conversational agents, AR/VR applications).
  • Proposals of novel frameworks to implement and sustain ubiquitous computing technologies in mental healthcare.
  • Reflections on implementing ubiquitous computing-based technologies to improve mental health and well-being in both clinical and general populations.

We still encourage submissions from other topics, including but not limited to (in alphabetical order):

  • Analyses of fairness and bias in mental health–ubiquitous computing technologies.
  • Design and implementation of computational platforms (e.g., mobile phones, instrumented homes, skin-patch sensors) to collect health and well-being data.
  • Design and implementation of feedback or decision-support (e.g., reports, visualizations, proactive behavioral interventions, subtle or subconscious interventions etc.) for both patients and caregivers towards improved mental health.
  • Design of privacy-preserving strategies for data collection, analysis, and management.
  • Development of methods for sustaining user adherence and engagement over the course of an intervention.
  • Development of robust models that can handle data sparsity and mislabeling issues within mobile sensing and mental health data.
  • Identification of opportunities for UbiComp approaches (e.g., digital phenotyping, predictive modeling, micro-randomized intervention trials, adaptive interventions) to better understand factors related to substance abuse.
  • Integration of multimodal data (with potentially clinical data) from various sensor streams for predicting or measuring mental health and well-being.

We are soliciting five types of contributions for the workshop:

  • Scientific papers describing novel technologies, approaches, and studies related to ubiquitous computing and mental health. We encourage these submissions to focus on learnings that are beneficial for the community, and not finished contributions.
  • Challenge papers, in which authors describe a specific challenge to be pitched and discussed at the workshop. These papers often lead to a lively discussion during the workshop.
  • Demonstrations, to facilitate authors demonstrating developed technologies and early systems at the workshop.
  • Experience reports that can introduce novel perspectives on real-world implementation, such as in clinical settings, or historically underserved communities.
  • Critical reflections of one’s own research or existing research at the intersection of ubiquitous computing and mental healthcare. We expect critical reflection papers to contribute towards better research practices in the community.

We will accept submissions up to 6 pages, including figures and references. The 6 pages are not a requirement; shorter submissions (e.g., 3 pages) are welcome. Papers should be submitted using the UbiComp/ISWC 2024 proceedings format; see the UbiComp website for more details: https://ubicomp.org/ubicomp-iswc-2024/authors/formatting/

All submitted papers will be reviewed and judged on originality, technical correctness, relevance, and quality of presentation. We explicitly invite submissions of papers that describe preliminary results or work-in-progress, including early clinical experience. The accepted papers will appear in the UbiComp supplemental proceedings and in the ACM Digital Library. Authors of accepted papers will be invited to present their work in-person in Melbourne and receive feedback from workshop attendees.

Submission Guidelines

Submit your papers at https://new.precisionconference.com/user/login (please select SIGCHI -> UBICOMP2024 -> Ubicomp 2024 Mental Health).

Website

https://ubicomp-mental-health.github.io/

The OpenWearables 2024 workshop aims to address the challenges and opportunities in the field of open-source wearable technology. We invite submissions from researchers, developers, and innovators on topics such as open-source designs of wearable devices, applications and evaluations of open-source wearables, software that supports the design and development of open wearables, and frameworks. Submissions should be concise, limited to a maximum of 4 pages excluding references, and should demonstrate how the open hardware, software, or system is used, built, and interfaced with. Papers should use pictures, graphs, and functional diagrams wherever possible to explain the work. An essential requirement is that the projects presented adhere to open-source principles. Papers will be selected based on adherence to these principles and the clarity of the paper. During the workshop, authors will be required to present their research paper and also provide a demonstration of their open wearable work to showcase the practical applications and potential impact of their research.

The workshop will feature a mix of keynote speeches, paper presentations, demo sessions, and group discussions, providing a platform for participants to showcase their work, share insights, and foster collaboration within the open wearables community. We particularly encourage demonstrations of open source wearable projects during the hands-on demo sessions, in addition to the paper.

All accepted papers will be considered for inclusion in a special position paper summarising the results of the workshop, which will be published in the proceedings. We will also make all workshop materials available on open-wearables.org and GitHub, creating a lasting resource for the community.

Join us at OpenWearables 2024 to help democratise wearable technology, accelerate innovation and establish standards for open wearables. Together, we can create a future where wearable technologies are accessible, interoperable and impactful across applications and industries.

Website

open-wearables.org

Rapid technological advancements are expanding the scope of virtual reality and augmented reality (VR/AR) applications; however, users must contend with a lack of sensory feedback and limitations on input modalities by which to interact with their environment. Gaining an intuitive understanding of any VR/AR application requires the complete immersion of users in the virtual environment, which can only be achieved through the adoption of realistic sensory feedback mechanisms. This workshop brings together researchers in UbiComp and VR/AR to investigate alternative input modalities and sensory feedback systems with the aim of developing coherent and engaging VR/AR experiences mirroring real-world interactions.

Submission Guidelines 

Online Submission System (PCS): https://new.precisionconference.com/.

Please select “SIGCHI” as the Society, “Ubicomp/ISWC 2024” as the Conference/Journal, and “UbiComp/ISWC 2024 Workshop MIMSVAI” as the track on the submission page. All papers need to be anonymized. Please submit papers with a maximum length of 5 pages (4 pages of content + 1 page of references) in the two-column ACM SIGCHI sigconf template. Please contact us (ubicomp.mimsvai@gmail.com) if you have any problems preparing your submission. The accepted papers will be published in the UbiComp/ISWC Adjunct Proceedings, which will be included in the ACM Digital Library as part of the UbiComp conference supplemental proceedings.

Contact

All questions about submissions should be emailed to ubicomp.mimsvai@gmail.com.

Best Paper Award

The Best Paper Award will be conferred upon the most outstanding paper presented at the MIMSVAI 2024 workshop.

List of Topics

Papers may include, but are not limited to, the following topics:

  • 2D/3D and volumetric display and projection technology
  • Immersive analytics and visualization
  • Modeling and simulation
  • Multimodal capturing and reconstruction
  • Scene description and management issues
  • Storytelling
  • Tracking and sensing
  • Embodied agents and self-avatars
  • Haptic and tactile interfaces, wearable haptics, passive haptics, pseudo haptics
  • Mediated and diminished reality
  • Multimodal input and output
  • Multisensory rendering, registration, and synchronization
  • Perception and cognition
  • Presence, body ownership, and agency
  • Teleoperation and telepresence
  • 3D user interaction
  • 3D user interface metaphors
  • Collaborative interactions
  • Human factors and ergonomics

Website

https://mimsvai.github.io

AI is increasingly integrated with physical entities for sensing and actuation, directly impacting our daily lives. This integration spans from routine goods to specialized AI-infused products, from small actuators to large electro-mechanical systems with ubiquitous intelligence. The complexity of these systems and their direct impact on our physical reality pose unique challenges in designing interpretable and inclusive interactions. As immersive technologies blur the boundaries between the physical and digital worlds, there are new opportunities to augment the capabilities of AI-infused physical systems. This workshop aims to explore the challenges and opportunities in designing interpretable, inclusive, and immersive interactions with ubiquitous AI-infused physical systems, considering their physical exertion and expanding capabilities. We invite research that addresses the research questions below:

  • RQ1: How can we design interpretable interactions for ubiquitous AI-infused physical systems that bridge the gap between human understanding, anticipation, and actual system behavior to ensure user trust and adoption?
  • RQ2: What are the key challenges and opportunities in designing interactions for ubiquitous AI-infused physical systems that are adaptive and responsive to diverse user needs and preferences while promoting long-term user well-being?
  • RQ3: How can we leverage emerging technologies and extended reality to enable new forms of natural and intuitive interaction with ubiquitous AI-infused physical systems?
  • RQ4: What are the ethical, social, and cultural implications of ubiquitous AI-infused physical systems, and how can we develop best practices, design guidelines, and principles for creating interpretable, inclusive, and immersive interactions that align with human values and expectations?
  • RQ5: How can we design inclusive and accessible interactions for ubiquitous AI-infused physical systems that accommodate diverse user groups and abilities?

Track 1: Research Contributions

  • Artifacts and prototypes showcasing interaction with networked or embedded intelligence in ‘physical’ systems, including immersive solutions to augment their capabilities.
  • Position papers on novel interaction paradigms, design principles, and frameworks for AI-infused ‘physical’ systems, pushing the envelope between the physical and virtual worlds.
  • Case studies and empirical evaluations assessing the interaction quality and user experience of AI-infused ‘physical’ systems in specific application areas, including the impact of immersive technologies on user experience.
  • Enabling technologies, platforms, and infrastructures supporting the development of AI-infused ‘physical’ systems and their integration with immersive technologies.
  • User studies of user needs and preferences regarding physical exertion and immersive experience with AI-infused ‘physical’ systems, given the interpretability and inclusivity challenges.

Track 2: Algorithmic Contributions

Artifact contributions may require time and resources that exceed the constraints of a workshop deadline. To provide wider opportunities for junior researchers or those with limited access to facilities, we also encourage algorithmic contributions that align with the workshop’s scope and objectives, leveraging existing open datasets. While we are open to all types of open datasets, for those released by the organizers (listed below) we can provide guidance and feedback throughout the workshop, although all contributions will undergo the same review process. Exemplary datasets include but are not limited to:

  • Engagnition: A multi-dimensional dataset for engagement recognition of children with autism spectrum disorder
  • MultiSenseBadminton: Wearable Sensor–Based Biomechanical Dataset for Evaluation of Badminton Performance

Submission Guidelines

Submission and review processes are the same for both research and algorithmic contributions. 

Submission Formatting and Procedure:

  • Extended abstracts can be up to 4 pages long, in the two-column ACM Proceedings format, excluding references.
  • Submissions should be made via the Precision Conference system (PCS).
  • Submissions should follow UbiComp 2024’s guidelines for accessible materials.
  • All accepted papers will be included in the UbiComp/ISWC Adjunct Proceedings, which will be indexed in the ACM DL.

Review Criteria:

  • All submissions will be peer-reviewed by at least two reviewers, including organizers, steering committee members, and external reviewers.
  • As the workshop aims to stimulate discussions on future research agendas, the review will prioritize relevance, novelty, and ideas while also considering soundness and clarity.

Organizers

  • Gwangbin Kim, GIST
  • Minwoo Seong, GIST
  • Dohyeon Yeo, GIST
  • Yumin Kang, GIST
  • SeungJun Kim, GIST

Website

https://sites.google.com/view/i4u2024

We invite submissions of original research, insightful case studies, and work in progress that address XAI applications within Ubiquitous and Wearable Computing, including but not limited to:

  • XAI in time-series and multimodal data analysis: techniques and challenges in interpreting complex data streams from wearable and ubiquitous computing devices.
  • User-centered explanations for AI-driven systems: designing explanations that are meaningful and accessible to end-users.
  • Deployment and evaluation of XAI tools in real-world scenarios: case studies and empirical research on the effectiveness of XAI applications.
  • Multimodal XAI for behavior analysis: leveraging diverse data sources for comprehensive behavior analysis.
  • Interconnected ML components in wearable and ubiquitous computing: strategies for explaining the dynamics and decisions of interconnected AI systems and models.
  • Ethical considerations and user privacy in XAI: addressing the ethical implications and privacy concerns of deploying XAI in ubiquitous computing.
  • Multimodal XAI in affective computing: techniques for understanding and interpreting human emotions through AI.
  • Empirical evaluation methods: methods for assessing the effectiveness and impact of XAI and multimodal AI systems.

Submission Guidelines

Submissions should be anonymized and up to 4 pages (including references). ACM requires UbiComp/ISWC 2024 workshop submissions to use the double-column template. Please check the UbiComp website for more details about the template.

Submissions can be made via PCS at http://new.precisionconference.com/sigchi. The submission site opens in May 2024. On the submissions tab, please select SIGCHI society, the UbiComp/ISWC 2024 conference, and the “UbiComp/ISWC 2024 XAI for U” track.

Website

https://ubicomp-xai.github.io/

