
TorCHI

CHI'2020 Talks | Students from U of T [PDF, VIDEO]

  • 20 Apr 2020
  • 7:00 PM - 9:00 PM
  • Online - see event details


TITLE: BlyncSync: Enabling Multimodal Smartwatch Gestures with Synchronous Touch and Blink

ABSTRACT: Input techniques have drawn sustained attention as personal computers continue to miniaturize. In this paper, we present BlyncSync, a novel multimodal gesture set that leverages the synchronicity of touch and blink events to augment the input vocabulary of smartwatches with a rapid gesture while, at the same time, offering a solution to the false-activation problem of blink-based input. BlyncSync contributes the concept of a mutual delimiter, where two modalities are used to jointly delimit the intention of each other's input. A study shows that BlyncSync is 33% faster than a baseline input delimiter (a physical smartwatch button), with only 150 ms of overhead compared to traditional touch events. Furthermore, our data indicates that the gesture can be tuned to elicit a true positive rate of 97% and a false positive rate of 1.68%.
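
The mutual-delimiter idea can be illustrated with a short sketch (not taken from the paper): a touch is treated as a BlyncSync gesture only when a blink lands inside a short synchronicity window around it, so each modality confirms the intent of the other. The window width, event structure, and function names below are illustrative assumptions, not the authors' implementation.

    from dataclasses import dataclass

    # Illustrative sketch of a "mutual delimiter": a touch only triggers the
    # augmented gesture when a blink occurs within a short window around it.
    # The 150 ms window and the event fields are assumptions for illustration,
    # not values or code from the BlyncSync paper.

    SYNC_WINDOW_MS = 150  # assumed touch-blink synchronicity tolerance

    @dataclass
    class Event:
        kind: str          # "touch" or "blink"
        timestamp_ms: float

    def is_blyncsync_gesture(touch: Event, recent_blinks: list[Event]) -> bool:
        """Return True if any blink is synchronous with the given touch."""
        return any(
            abs(blink.timestamp_ms - touch.timestamp_ms) <= SYNC_WINDOW_MS
            for blink in recent_blinks
        )

    # Example: a blink 40 ms after the touch counts as synchronous, so the
    # touch would be routed to the augmented command set; otherwise it is
    # handled as an ordinary touch.
    print(is_blyncsync_gesture(Event("touch", 1000.0), [Event("blink", 1040.0)]))  # True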


BIO: Bryan Wang holds a CS MSc from the University of Toronto, supervised by Prof. Tovi Grossman at the Dynamic Graphics Project Lab (DGP), and will be working with Google Research this summer. He has published at top-tier venues in both HCI and AI and has done research on wearable interaction, circuit prototyping tools, and DL-based audio music generation. He currently focuses on AI-infused interactive systems supporting creativity and skill acquisition. Prior to his graduate studies at U of T, he completed his BS in CS at National Taiwan University, working with MusicAI Lab and NTU HCI Lab. He received the Best Talk Award at ACM UIST 2016.

http://www.dgp.toronto.edu/~bryanw/


TITLE: Designing Voice Interfaces: Back to the (Curriculum) Basics

ABSTRACT: Voice user interfaces (VUIs) are rapidly increasing in popularity in the consumer space. This has led to a concurrent explosion of available applications for such devices, with many industries rushing to offer voice interactions for their products. This pressure is then transferred to interface designers; however, a large majority of designers have only been trained to handle the usability challenges specific to Graphical User Interfaces (GUIs). Since VUIs differ significantly in design and usability from GUIs, in this paper we investigate the extent to which current educational resources prepare designers to handle the specific challenges of VUI design. To do so, we conducted a preliminary scoping scan and syllabi meta-review of HCI curricula at more than twenty top international HCI departments, revealing that the current offering of VUI design training within HCI education is rather limited. Based on this, we advocate for updating HCI curricula to incorporate VUI design and for developing VUI-specific pedagogical artifacts to be included in new curricula.


BIO: Christine Murad is a PhD student at the Technologies for Aging Gracefully Lab in the Department of Computer Science at the University of Toronto. Her research examines the usability and design of conversational voice interfaces and explores the development of tools and resources that support intuitive, user-friendly conversational voice interaction. She also studies how to improve VUI design training in HCI education.

PRESENTATION SLIDES [PDF]


Agenda

6:50 - The web meeting waiting room opens

7:00 - Presentation via Zoom

8:00 to 8:30 - Networking on Zoom


Presentation on YouTube


