UbiComp / ISWC 2025
Keynotes
Keynote Speakers

Guoying Zhao
Guoying Zhao received the Ph.D. degree in computer science from the Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China, in 2005. She is currently an Academy Professor and full Professor with the University of Oulu and a visiting professor with Aalto University. She is a Fellow of IEEE, IAPR, and ELLIS, and a member of Academia Europaea. Her publications have attracted 34,100+ citations on Google Scholar, with an h-index of 89. She is Associate Editor-in-Chief for Computer Vision and Image Understanding (CVIU) and serves or has served as an associate editor for IEEE Trans. on Multimedia, Pattern Recognition, IEEE Trans. on Circuits and Systems for Video Technology, Image and Vision Computing, and Frontiers in Psychology. She serves or has served as program co-chair for ECCV 2028 and ICMI 2021, general co-chair for ACII 2025, tutorial chair for ICPR 2024, panel chair for FG 2023, and publicity chair for SCIA 2023 and FG 2018. Her current research interests include affective computing, computer vision, machine learning, and biometrics. Her research has been covered by Finnish TV programs, newspapers, and MIT Technology Review.
Emotion-Aware Interfaces: Bridging Human and Intelligent Machines
Emotions are fundamental to human communication and play a critical role in shaping interactions. As Artificial Intelligence advances, there is a growing demand for systems that can not only process information but also understand and respond to human emotions. Emotion-aware interfaces represent an essential step toward this goal, enabling machines to recognize users' affective states and adapt their responses in more natural and meaningful ways.
This talk introduces emotion-aware interfaces and outlines their potential impact across diverse domains, including human-robot collaboration, conversational agents, healthcare, customer engagement, and security. Our research contributions in areas such as facial (micro-)expression recognition, emotional gesture analysis, and remote heart rate estimation from video are presented, together with a discussion of emerging opportunities and key challenges that must be addressed to fully realize the promise of emotionally intelligent human-machine interaction.
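As a rough illustration of the signal behind remote heart rate estimation from video, here is a minimal, textbook-style rPPG sketch; it is not the speaker's published method, and the function name, filter band, and synthetic example are illustrative assumptions.

```python
# A minimal rPPG-style sketch (illustrative, not the speaker's method):
# estimate heart rate from the mean green-channel intensity of a face
# region tracked over time.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(green_trace, fps):
    """green_trace: 1-D array of per-frame mean green values of a face ROI."""
    x = green_trace - np.mean(green_trace)               # remove DC component
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)   # plausible pulse band: 42-240 bpm
    x = filtfilt(b, a, x)                                 # zero-phase band-pass filter
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    f_peak = freqs[band][np.argmax(spectrum[band])]       # dominant pulse frequency
    return 60.0 * f_peak                                  # beats per minute

# Example: a synthetic 72-bpm pulse sampled at 30 fps
fps = 30
t = np.arange(0, 20, 1.0 / fps)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
print(round(estimate_heart_rate(trace, fps)))             # ~72
```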

Moritz Simon Geist
Moritz Simon Geist is a music producer and researcher working with sound, robotics and algorithms. Beginning his academic career in semiconductor sciences as a PhD student, Geist made a career shift to focus on art and music, where he now merges sound with robotics and algorithms. His approach to electronic music, which involves creating sound through mechanical robots, has earned him international recognition. In 2012, Geist’s first work, the “Drum Robot MR-808,” went viral, and he has since explored making and producing electronic music with robots and mechanics, releasing many influential and widely shared works. Geist’s compositions are influenced by a broad range of musical styles, including various electronic music genres and classical music, creating a unique and experimental sound. Over the years, he has collaborated with a range of renowned artists, including Mouse On Mars, Tyondai Braxton, Robert Lippok, and Thies Mynther. His work has been showcased at many international venues and events, such as the Venice Biennale, South by Southwest (SXSW), the Philharmonie de Paris, and the Elbphilharmonie in Hamburg. He has also presented his work in Japan, Australia, and South Korea. From 2017 to 2020 he taught a master class at NYU Berlin. His contributions to the field have been recognized with numerous awards, including recognition at the ISEA Conference 2024, the Working Grant ZER01NE Seoul in 2023, the German Pop Music Prize 2022, and the VIA VUT Award in 2019.
Techno, Art, and Music Robots
Geist has a background in electrical engineering and a passion for hands-on sound creation, and his work is driven by a desire to interact physically with music. His robotic instruments are crafted using advanced technologies such as 3D printing, CNC milling, and laser cutting, and have been shown around the world.
In this talk, Geist will give insight into his art practice, share how he stopped working with human musicians and started working with music robots, and explain why AI music robots will not replace human musicians (soon).

Visa Koivunen
Visa Koivunen (IEEE Fellow, EURASIP Fellow, AAIA Fellow) received the D.Sc. degree (Hons.) in electrical engineering from the Department of Electrical Engineering, University of Oulu. He was a Visiting Researcher with the University of Pennsylvania, Philadelphia, USA, from 1991 to 1995. Since 1999 he has been a Full Professor of signal processing with Aalto University (formerly Helsinki University of Technology), Finland, and an Aalto Distinguished Professor since 2020. He was an Academy Professor from 2010 to 2014. From 2003 to 2006, he was an Adjunct Full Professor with the University of Pennsylvania, USA. He has spent two full sabbaticals with Princeton University and has made mini-sabbatical visits there each year. During his sabbatical year from 2022 to 2023, he was a Visiting Professor with EPFL, Lausanne, Switzerland.
His research interests include statistical signal processing, wireless communications, radar, multisensor systems, data science, and machine learning. He was awarded the 2015 European Association for Signal Processing (EURASIP) Technical Achievement Award for fundamental contributions to statistical signal processing and its applications in wireless communications, radar, and related fields. He has co-authored multiple papers that received best paper awards, including the IEEE Signal Processing Society Best Paper Award for 2007 (with J. Eriksson) and 2017 (with Zoubir, Muma, and Chakhchouk). He has served on the editorial boards of the Proceedings of the IEEE and IEEE Signal Processing Magazine, among other journals.
Integrated Radar Sensing and Communications: convergence, co-design and learning
Integrated sensing and communications (ISAC) systems operate within shared, congested, or even contested spectrum, aiming to deliver high performance in both wireless communications and radio frequency (RF) sensing. Communications and sensing functionalities are co-designed for mutual benefit. ISAC systems share hardware and antenna resources, use joint waveforms, operate within shared spectrum, and exchange awareness about their radio environments to jointly optimize performance. ISAC technology has opened genuinely new lines of research and development in emerging 6G systems rather than merely being an evolution of 5G. ISAC takes advantage of the ongoing, parallel convergence of modern radar and multi-antenna communications technologies. Sensing may be a service, can aid communications, and conversely, communications may aid sensing. ISAC is expected to provide unprecedented quality of experience toward reliable, resilient, and ubiquitous connectivity, beyond the high data-rate transmission of bits. This talk focuses on the waveform design, signal processing, optimization, adaptation, and reinforcement learning techniques employed to achieve the desired sensing and communications performance in multicarrier (MC) and multiantenna ISAC systems. Examples of waveform design, optimization, and resource allocation in multi-user and multi-target scenarios are provided.
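To make the multicarrier ISAC idea concrete, here is a hedged, textbook-style sketch (not taken from the talk) of how a known OFDM payload can be reused to form a radar-style range-Doppler map; the toy scenario and variable names are illustrative assumptions.

```python
# Illustrative sketch: reuse known OFDM data symbols for sensing by dividing
# out the payload and taking FFTs across subcarriers (range) and symbols (Doppler).
import numpy as np

def range_doppler_map(D_rx, D_tx):
    """D_rx, D_tx: (num_subcarriers, num_symbols) complex OFDM symbol matrices."""
    F = D_rx / D_tx                     # strip the communication payload; keep channel phases
    r = np.fft.ifft(F, axis=0)          # IFFT across subcarriers -> range (delay) profile
    rd = np.fft.fft(r, axis=1)          # FFT across symbols      -> Doppler (velocity) profile
    return np.abs(rd)

# Toy example: one target at delay bin 12 and Doppler bin 5, random QPSK payload.
K, M = 64, 32
D_tx = np.exp(1j * np.pi / 2 * np.random.randint(0, 4, (K, M)))
k, m = np.arange(K)[:, None], np.arange(M)[None, :]
D_rx = D_tx * np.exp(-2j * np.pi * k * 12 / K) * np.exp(2j * np.pi * m * 5 / M)
print(np.unravel_index(np.argmax(range_doppler_map(D_rx, D_tx)), (K, M)))  # (12, 5)
```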

Daqing Zhang
Daqing Zhang is a Chair Professor at IP Paris and Peking University. His research interests include ubiquitous computing, mobile computing, big data analytics, and AIoT. He has published more than 300 technical papers in leading conferences and journals, with over 33,200 citations and an h-index of 94. He developed the OWL-based context model and the Fresnel zone-based wireless sensing theory, which are widely used by the pervasive computing, mobile computing, wireless networks, and service computing communities. He won the Ten Years CoMoRea Impact Paper Award at IEEE PerCom 2013, the Ten Years Most Influential Paper Award at IEEE UIC 2019 and FCS 2023, the Best Paper Award Runner-up at ACM MobiCom 2022, and the Distinguished Paper Award of IMWUT (UbiComp 2021), among others. He is currently on the editorial boards of ACM IMWUT, ACM TOSN, and CCF TPCI. Daqing Zhang is a Fellow of IEEE and a Member of the Academy of Europe.
From WiFi Sensing to Quantum Sensing: Toward a Ubiquitous Sensing Theory
WiFi/4G/5G-based wireless sensing has attracted a lot of attention from both academia and industry in the last decade. However, most of the work has focused on developing effective techniques for a particular application; very few works have attempted to explore the fundamental sensing theory and answer fundamental questions about the sensing model, sensing limit, sensing boundary, and sensing quality of WiFi/4G/5G signals. In this talk, I will first introduce the Fresnel zone model we proposed at UbiComp 2016 as a generic theoretical basis for contactless human sensing with WiFi/4G/5G signals, revealing the relationship among the received CSI signal, the distance between the two transceivers, the location and heading of the sensing target with respect to the transceivers, and the environment. Then I will present the Sensing Signal-to-Noise Ratio (SSNR) as a new metric to characterize the sensing limit, sensing boundary, and sensing signal quality of WiFi/4G/5G-based human sensing systems. To further increase the SSNR and push the wireless sensing limit, we explore the Rydberg theory of how the Rydberg atom interacts with RF signals and develop the world's first Rydberg quantum sensing system, which senses tiny variations of RF signals caused by human activities and shows a significant performance increase compared to WiFi- or mmWave-radar-based sensing systems.
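For readers new to the geometry, the following minimal sketch (illustrative only, not the UbiComp 2016 model itself; positions and values are assumptions) shows how to compute which Fresnel zone a target occupies relative to a WiFi transmitter-receiver pair.

```python
# Hedged sketch of Fresnel-zone geometry: zone boundaries are confocal ellipses
# where the reflected path exceeds the line-of-sight path by n * lambda / 2.
import numpy as np

def fresnel_zone_index(tx, rx, target, wavelength):
    """Return the index n of the Fresnel zone containing the target."""
    tx, rx, target = map(np.asarray, (tx, rx, target))
    los = np.linalg.norm(rx - tx)                        # line-of-sight path length
    reflected = np.linalg.norm(target - tx) + np.linalg.norm(rx - target)
    excess = reflected - los                             # extra path length via the target
    return int(np.ceil(excess / (wavelength / 2)))       # boundaries at n * lambda / 2

# 5 GHz WiFi (wavelength ~6 cm), transceivers 2 m apart, target 30 cm off the LoS midpoint
wavelength = 3e8 / 5e9
print(fresnel_zone_index((0, 0), (2, 0), (1.0, 0.30), wavelength))  # -> 3
```
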
Luca Barbieri
Luca Barbieri received the M.Sc. degree in telecommunication engineering and the Ph.D. degree in information technology from Politecnico di Milano, Italy, in 2019 and 2023, respectively. In 2022, he was a visiting researcher at the King’s Communications, Learning & Information Processing (KCLIP) lab at King’s College London, London, UK. From 2023 to 2024, he was a postdoctoral researcher at the Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB) of Politecnico di Milano. He is currently a Researcher at Nokia Bell Labs in Stuttgart, Germany, working at the intersection of unlicensed-spectrum communications and AI/ML strategies.
What will Wi-Fi 8 be? A primer on next generation Wi-Fi for communication and sensing
Wi-Fi technology has traditionally focused on providing higher data rates through its standards and evolutions. However, recent applications and use cases have been pushing for highly reliable communications, demanding that Wi-Fi accommodate such requirements starting with Wi-Fi 8. This keynote presents an overview of Wi-Fi 8, highlighting the changes and advancements brought by the standard to provide Ultra High Reliability (UHR) communication features. Beyond the new features, we will also discuss the increasingly important role of complementary functionalities, like Wi-Fi sensing, focusing on the techniques enabling Access Points (APs) and User Equipments (UEs) to initiate and/or carry out sensing procedures. Finally, we will conclude by presenting research questions and open challenges to truly deliver UHR communications and to enhance sensing functionalities in Wi-Fi networks.

Takayuki Hoshi
Takayuki Hoshi received a Ph.D. in Information Science and Technology from the University of Tokyo in 2008. After working as a JSPS Research Fellow for Young Scientists (DC2/PD, 2007-2009) and as an assistant professor at Kumamoto University (2009-2011), Nagoya Institute of Technology (2011-2016), and the University of Tokyo (2016-2017), he founded Pixie Dust Technologies, Inc. in 2017. He is an expert in wave control technology that makes full use of physics and mathematics. He developed the world's first scannable prototype of an airborne ultrasound tactile display in 2008 and demonstrated the world's first 3D acoustic manipulation in 2013. In 2014 he was recognized by NISTEP, MEXT, Japan for a significant contribution to science and technology. He is currently working on the social implementation of wave control technology through industry-academia collaboration and open innovation.
The Trajectory of Ultrasonic Haptics: From Its History to Healthcare Applications, and Its Influence on Acoustic Levitation
Ultrasonic haptics has emerged as a powerful modality in mid-air tactile feedback, enabling contactless interactions that enrich immersive VR/AR experiences. This keynote traces its trajectory from its early conceptualization to the development of practical phased-array systems, culminating in real-world applications in healthcare. The talk will further explore how these techniques have influenced adjacent domains, particularly acoustic levitation, where precise wave control enables dynamic object manipulation in mid-air. By bridging fundamental physics with user-centered design, ultrasonic haptics demonstrates how wave-based modalities can serve as a versatile interface layer in multisensory environments. This journey offers not only a retrospective of technological evolution, but also a prospective look at cross-modal fusion and societal impact, providing insights for researchers aiming to integrate novel modalities into next-generation XR systems.
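As a simplified illustration of the phased-array principle underlying both ultrasonic haptics and acoustic levitation, the sketch below computes conjugate-phase drive signals that focus ultrasound at a mid-air point; the array geometry and parameter values are assumptions, not the speaker's hardware.

```python
# Hedged sketch of phased-array focusing: each transducer's drive phase cancels
# its own travel delay so all waves arrive in phase at the focal point.
import numpy as np

SPEED_OF_SOUND = 343.0     # m/s in air
FREQ = 40e3                # typical 40 kHz airborne-ultrasound transducers
WAVELENGTH = SPEED_OF_SOUND / FREQ

def focusing_phases(transducer_xy, focus_xyz):
    """Phase (radians) to drive each transducer so the waves add up at focus_xyz."""
    positions = np.column_stack([transducer_xy, np.zeros(len(transducer_xy))])  # array in z=0 plane
    distances = np.linalg.norm(focus_xyz - positions, axis=1)
    k = 2 * np.pi / WAVELENGTH
    return (-k * distances) % (2 * np.pi)   # conjugate-phase focusing

# 16 x 16 grid with 1 cm pitch, focusing 20 cm above the array centre
grid = np.stack(np.meshgrid(np.arange(16), np.arange(16)), -1).reshape(-1, 2) * 0.01
phases = focusing_phases(grid - grid.mean(0), np.array([0.0, 0.0, 0.20]))
print(phases.shape)  # (256,)
```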

Yunqi Guo
Dr. Guo Yunqi is a Postdoctoral Fellow at The Chinese University of Hong Kong (CUHK), working with Prof. Guoliang Xing in the CUHK AIoT Lab. His research spans augmented reality (AR)/mobile systems, visual-language interaction, and AIoT for accessibility and eldercare. He focuses on Assistive AR technologies that enhance human-human and human-environment interactions, and AI-driven sensing systems for privacy-preserving elderly monitoring. Dr. Guo earned his B.S. from Shanghai Jiao Tong University, and his M.S. and Ph.D. in Computer Science from the University of California, Los Angeles (UCLA).
Dr. Guo's work bridges academic research and real-world deployment, collaborating with Deaf communities, eldercare organizations, and industry partners to translate innovation into meaningful social impact. He is the founder of AnySign, an interdisciplinary team developing real-time, bidirectional sign language translation for AR glasses, supported by AWS, OpenAI, and other partners. The team has sponsored accessibility at international conferences and deployed their technologies in the field, most notably supporting MobiCom and CPS-IoT Week.
Assistive AR: Bridging Human Sensing and Everyday Assistance
Augmented Reality (AR) glasses have evolved from mere entertainment gadgets to practical wearable devices that enhance everyday life. This talk examines how AR can transform the way we interact with others and perceive our environment. Dr. Yunqi Guo will introduce Assistive AR, a class of AR systems his team is developing to support daily activities by improving both human-human and human-environment interactions. These systems combine human-like sensing capabilities, efficient cross-device computation, and intuitive generative displays. He will highlight their work on two key applications: breaking communication barriers for Deaf users through real-time, bidirectional sign language translation with AR glasses, and enhancing social interactions with proactive large language models (LLMs) that understand conversational context and offer response suggestions. Through AnySign, a team Dr. Guo founded during his PhD, this research is extending beyond academia to real-world assistance for end users. Looking ahead, AR's ability to align closely with human sensing will enable a new generation of versatile, personalized assistance tools that seamlessly integrate into daily life.

Gari Clifford
From foundation models to tiny ML – the path ahead for ambient neuropsychiatry
Data is the new oil. AI is the new electricity. Every American will be wearing a wearable within four years. The clichés surrounding AI are endless. Perhaps there is some partial truth in them, but the real question is whether this suddenly fashionable tool will be useful and cost-effective. In the domain of health, we will need to overcome the barriers inherent in analyzing human physiology and activity, including distrust, noncompliance, laziness, misleading labels, misaligned incentives, limited resources, and climate armageddon. That seems like a lot to overcome, but I’m going to have a stab at outlining some promising pathways forward with a focus on neuropsychiatry.

Heli Koskimäki
Wearables 101: What Changes (and Doesn't) as Your Data Scales
Every startup has many stories. This talk tells Oura's through its data and the algorithms built on it, seen through my lens. I'll show how scientific and product-facing approaches meet, and how evidence thresholds, ground truth, and validation shift when the target is user value rather than publication. We'll look at what changes as signals, users, and expectations grow, and what doesn't. The aim is a practical lens on scaling ring-based data, through algorithm highlights grounded in real user stories. I'll also share what I wouldn't change: from principles that held up at scale to the collaboration that makes it possible.