A figure demonstrates the function of Travelogue (our proposed system): a person moves between different locations in a room while a digital picture frame on the wall renders informative art based on the person's indoor trajectory.

Travelogue: Representing Indoor Trajectories as Informative Art

Yunzhi Li, Tingyu Cheng, and Ashutosh Dhekne - CHI 2022 Late-Breaking Work


In this work, we explore whether informative art can represent a user’s indoor trajectory and promote users’ self-reflection, creating a new type of interactive space. Under the assumption that the simplicity of a digital picture frame can be an appealing way to represent indoor activities and further create a dyadic relationship between users and the space they occupy, we present Travelogue, a picture-frame-like, self-contained system that senses human movement using wireless signal reflections in a device-free manner. Breaking away from traditional dashboard-based visualization techniques, Travelogue renders only the high-level extent and location of users’ activities as informative art. Our preliminary user study with 12 participants shows that most users found Travelogue intuitive, unobtrusive, and aesthetically pleasing, as well as a desirable tool for self-reflection on indoor activity.

A figure demonstrates the sensing mechanism of the flexible computational photodetectors we designed and fabricated.

Flexible Computational Photodetectors for Self-powered Activity Sensing

Dingtian Zhang, Canek Fuentes-Hernandez, Raaghesh Vijayan, Yang Zhang, Yunzhi Li, Jung Wook Park, Yiyang Wang, Yuhui Zhao, Nivedita Arora, Ali Mirzazadeh, Youngwook Do, Tingyu Cheng, Saiganesh Swaminathan, Thad Starner, Trisha L. Andrew, and Gregory D. Abowd - npj Flexible Electronics

[PDF] [Video]

Conventional vision-based systems, such as cameras, have demonstrated their enormous versatility in sensing human activities and developing interactive environments. However, these systems have long been criticized for incurring privacy, power, and latency issues due to their underlying structure of pixel-wise analog signal acquisition, computation, and communication. In this research, we overcome these limitations by introducing in-sensor analog computation through the spatial distribution of interconnected photodetectors, each with a weighted responsivity, to create what we call a computational photodetector. Computational photodetectors can extract mid-level vision features as a single continuous analog signal measured via a two-pin connection. We develop computational photodetectors using thin and flexible low-noise organic photodiode arrays coupled with a self-powered wireless system, and demonstrate a set of designs that capture position, orientation, direction, speed, and identification information in a range of applications, from explicit interactions on everyday surfaces to implicit activity detection.

A figure demonstrates the hardware components of Duco (our proposed system) and its potential applications.

Duco: Autonomous Large-Scale Direct-Circuit-Writing (DCW) on Vertical Everyday Surfaces Using A Scalable Hanging Plotter

Tingyu Cheng, Bu Li, Yang Zhang, Yunzhi Li, Charles Ramey, Eui Min Jung, Yepu Cui, Saiganesh Swaminathan, Youngwook Do, Manos Tentzeris, Gregory D. Abowd, and HyunJoo Oh - UbiComp 2021

[PDF] [Video]

We present Duco, a large-scale electronics fabrication robot that enables room-scale and building-scale circuitry, adding interactivity to vertical everyday surfaces. Duco negates the need for any human intervention by leveraging a hanging robotic system that automatically sketches multi-layered circuitry to enable novel large-scale interfaces. Our technical evaluation shows that Duco’s mechanical system works robustly on various surface materials with a wide range of roughness and surface morphologies. We demonstrate our system with five application examples, including an interactive piano, an IoT coffee maker controller, an FM energy harvester printed on a large glass window, a human-scale touch sensor, and a 3D interactive lamp.

A figure demonstrates the applications of OptoSense (our proposed system), including on/off sensing and 1D and 2D touch sensing.

OptoSense: Towards Ubiquitous Self-Powered Ambient Light Sensing Surfaces

Dingtian Zhang, Jung Wook Park, Yang Zhang, Yuhui Zhao, Yiyang Wang, Yunzhi Li, Tanvi Bhagwat, Wen-Fang Chou, Xiaojia Jia, Bernard Kippelen, Canek Fuentes-Hernandez, Thad Starner, and Gregory D. Abowd - UbiComp 2020

[PDF] [Video]

We present OptoSense, a general-purpose self-powered sensing system that senses ambient light at the surface level of everyday objects as a high-fidelity signal to infer user activities and interactions. To situate the novelty of OptoSense among prior work and highlight the generalizability of the approach, we propose a design framework for ambient light sensing surfaces, enabling implicit activity sensing and explicit interactions in a wide range of use cases with varying sensing dimensions (0D, 1D, 2D), fields of view (wide, narrow), and perspectives (egocentric, allocentric). OptoSense supports this framework through example applications ranging from object use and indoor traffic detection to liquid sensing and multitouch input. Additionally, the system achieves high detection accuracy while being self-powered by ambient light.

A figure demonstrates a sample flow graph obtained by our explorative system during one user study, showing changes in the students' sensed flow states.

How Presenters Perceive and React to Audience Flow Prediction In-situ: An Explorative Study of Live Online Lectures

Yunzhi Li*, Wei Sun*, Feng Tian, Xiangmin Fan, and Hongan Wang (*joint first authors) - CSCW 2019


The degree and quality of instructor-student interactions are crucial for students' engagement, retention, and learning outcomes. However, such interactions are limited in live online lectures, where instructors no longer have access to important cues such as raised hands or facial expressions while teaching. This project presents an explorative study investigating how presenters perceive and react to audience flow prediction when giving live-streamed lectures, a question that has not been examined before. The study was conducted with an experimental system that predicts the audience’s psychological states (e.g., anxiety, flow, boredom) through real-time facial expression analysis and provides aggregated views illustrating the flow experience of the whole group.

Human-AI Interaction in Healthcare: Three Case Studies About How Patients and Doctors Interact with AI in a Multi-Tier Healthcare Network

Yunzhi Li, Liuping Wang, Shuai Ma, Xiangmin Fan, Zijun Wang, Junfeng Jiao, Dakuo Wang - CHI 2019 Workshop


This position paper presents three ongoing research projects that aim to study how to design, develop, and evaluate systems supporting human-AI interaction in the healthcare domain. Collaborating with local government administrators, hospitals, clinics, and doctors, we gained a valuable opportunity to study and improve how AI-empowered technologies are changing people’s lives as they provide or receive healthcare services in a suburban district of Beijing, China.