Projects

...

PERSONAL OBJECT RECOGNIZER FOR PEOPLE WITH VISUAL IMPAIRMENTS

The Motivation

Blind people often need to identify objects around them, from food packages to items of clothing. Automatic object recognition still provides only limited assistance in such tasks because models tend to be trained on images taken by sighted people, which differ from photos taken by blind users in background clutter, scale, viewpoint, occlusion, and image quality.

Related Publication(s)

Kacorri, H., Kitani, K. M., Bigham, J. P., & Asakawa, C. (2017, May). People with Visual Impairment Training Personal Object Recognizers: Feasibility and Challenges. CHI'17 (pp. 5839-5849). ACM. [PDF]

...

IMPROVING SELFIE EXPERIENCES FOR BLIND PEOPLE

The Motivation

Selfies have been a major social trend for many years. However, it can be challenging for people with visual impairments to take part in this activity, although their use of social media is as high as that of sighted people. Thus, we designed and developed a mobile application that helps people with visual impairments take and manage selfies.

Related Publication(s)

Yunjung Lee, Hajung Kim, Hyeji Jang, Yujin Han, & Uran Oh. (2019). Selfer: Selfie Guidance Mobile Application for the Blind. Proceedings of HCI Korea 2019 (pp. 971-975).

...

YOLO-BASED WALKING ASSISTANCE FOR BLIND PEOPLE

The Motivation

People with visual impairments walk along braille blocks. In particular, dot-type warning blocks signal where to stop at potentially dangerous points. However, braille blocks on Korean roads are often damaged or missing. Thus, we implemented a mobile walking assistance app for people with visual impairments that announces the user's current location, the location of the sidewalk, and bus information through verbal feedback.
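
To illustrate the core detection step, here is a minimal sketch assuming a YOLO model fine-tuned to recognize braille blocks; the weights file (braille_blocks.pt), class names, and confidence threshold are illustrative placeholders, not the app's actual configuration.

    # A minimal sketch of the detection loop; the weights file and class
    # names below are hypothetical, not the app's actual ones.
    import cv2
    from ultralytics import YOLO

    model = YOLO("braille_blocks.pt")  # hypothetical fine-tuned weights

    def detect_and_announce(frame, speak):
        """Run one detection pass and announce confident findings."""
        results = model(frame, verbose=False)[0]
        for box in results.boxes:
            label = model.names[int(box.cls)]
            if float(box.conf) < 0.5:  # assumed confidence threshold
                continue
            if label == "warning_block":
                speak("Warning block ahead: prepare to stop.")
            elif label == "guiding_block":
                speak("Guiding block detected: path continues.")

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        detect_and_announce(frame, speak=print)  # print stands in for TTS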

Related Publication(s)

Jiwon Yoo, DongHee Han, Chayoung Hur, & Uran Oh. (2019). YOLO-based Walking Assistance Application for Blind People. Korea Computer Congress 2019. Participation Award in the Student Paper Competition.

NAVCOG: INDOOR NAVIGATION ASSISTANCE FOR PEOPLE WITH VISUAL IMPAIRMENTS

The Motivation

When in an unfamiliar place, people typically use a walking navigation system on their device, comparing the map location with the surrounding views. However, visually impaired people cannot check the map or the surrounding scenery to bridge the gap between their actual position and a rough GPS estimate. NavCog aims to provide a high-accuracy walking navigation system that combines BLE beacons with various kinds of sensors in a new localization algorithm that works both indoors and outdoors. [A link to the project]
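
As a rough illustration of beacon-based positioning (not NavCog's actual localization algorithm, which is described in the publications below), the sketch converts RSSI readings to approximate distances with a log-distance path-loss model and takes a weighted centroid of the beacon positions:

    # A toy illustration of BLE-beacon positioning, not NavCog's algorithm.
    def rssi_to_distance(rssi, tx_power=-59, path_loss_exp=2.0):
        """Estimate distance in meters from RSSI; constants are assumptions."""
        return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))

    def weighted_centroid(beacons):
        """beacons: list of (x, y, rssi). Closer beacons get larger weights."""
        weights = [1.0 / max(rssi_to_distance(r), 0.1) for _, _, r in beacons]
        total = sum(weights)
        x = sum(w * bx for w, (bx, _, _) in zip(weights, beacons)) / total
        y = sum(w * by for w, (_, by, _) in zip(weights, beacons)) / total
        return x, y

    # Example: three beacons at known indoor coordinates with observed RSSIs.
    print(weighted_centroid([(0.0, 0.0, -55), (5.0, 0.0, -70), (0.0, 5.0, -72)]))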

Related Publication(s)

Ahmetovic, D., Gleason, C., Ruan, C., Kitani, K., Takagi, H., & Asakawa, C. (2016, September). NavCog: a navigational cognitive assistant for the blind. MobileHCI'16 (pp. 90-99). ACM. [PDF]

NavCog3: An Evaluation of a Smartphone-Based Blind Indoor Navigation Assistant with Semantic Features in a Large-Scale Environment [PDF]

NONVISUAL ON-BODY INTERACTION

The Motivation

For users with visual impairments, who do not necessarily need the visual display of a mobile device, non-visual on-body interaction (e.g., Imaginary Interfaces) could provide accessible input in a mobile context. Such interaction provides the potential advantages of an always-available input surface, and increased tactile and proprioceptive feedback compared to a smooth touchscreen.

Related Publication(s)

Design of and Subjective Response to on-Body Input for People With Visual Impairments [PDF]

A Performance Comparison of on-Hand Versus on-Phone Nonvisual Input by Blind and Sighted Users [PDF]

Localization of Skin Features on the Hand and Wrist from Small Image Patches [PDF]

Investigating Microinteractions for People with Visual Impairments and the Potential Role of On-Body Interaction [PDF]

READING ASSISTANCE VIA A FINGER-MOUNTED DEVICE

The Motivation

The recent miniaturization of cameras has enabled finger-based reading approaches that provide blind and visually impaired readers with access to printed materials. Compared to handheld text scanners such as mobile phone applications, mounting a tiny camera on the user’s own finger has the potential to mitigate camera framing issues, enable a blind reader to better understand the spatial layout of a document, and provide better control over reading pace.

Related Publication(s)

The Design and Preliminary Evaluation of Finger-Mounted Camera and Feedback System to Enable Reading of Printed Text for the Blind [PDF]

Evaluating Haptic and Auditory Directional Guidance to Assist Blind People in Reading Printed Text Using Finger-Mounted Cameras [PDF]

MOBILE AND WEARABLE DEVICE USE FOR PEOPLE WITH VISUAL IMPAIRMENTS

The Motivation

With the increasing popularity of mainstream wearable devices, it is critical to assess the accessibility implications of such technologies. For people with visual impairments, who do not always need the visual display of a mobile phone, alternative means of eyes-free wearable interaction are particularly appealing.

Related Publication(s)

Current and Future Mobile and Wearable Device Use by People With Visual Impairments [PDF]

TOUCHSCREEN GESTURE SONIFICATION

The Motivation

While sighted users may learn to perform touchscreen gestures through observation (e.g., of other users or video tutorials), such mechanisms are inaccessible for users with visual impairments. As a result, learning to perform gestures without visual feedback can be challenging.
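
One plausible sonification mapping, sketched below under the assumption of a normalized touchscreen coordinate space, ties the x coordinate to stereo pan and the y coordinate to pitch, so a user tracing a gesture hears how the path moves; the feedback designs evaluated in the publications below may differ.

    # An assumed mapping from touch position to sound parameters.
    def sonify_point(x, y, f_low=220.0, f_high=880.0):
        """Map a touch point in [0, 1] x [0, 1] to (frequency_hz, pan).

        y = 0 is the top of the screen, mapped to the highest pitch;
        pan ranges from -1 (full left) to +1 (full right).
        """
        frequency = f_high - (f_high - f_low) * y  # higher on screen, higher pitch
        pan = 2.0 * x - 1.0
        return frequency, pan

    # A straight left-to-right swipe across the middle of the screen:
    for i in range(5):
        print(sonify_point(i / 4.0, 0.5))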

Related Publication(s)

Follow That Sound: Using Sonification and Corrective Verbal Feedback to Teach Touchscreen Gestures [PDF]

Audio-Based Feedback Techniques for Teaching Touchscreen Gestures [PDF]

END-USER TOUCHSCREEN GESTURE CUSTOMIZATION

The Motivation

The vast majority of work on understanding and supporting the gesture creation process has focused on professional designers. In contrast, gesture customization by end users, which may offer better memorability, efficiency, and accessibility than pre-defined gestures, has received little attention.

Related Publication(s)

The Challenges and Potential of End-User Gesture Customization [PDF]

NONVISUAL GUIDANCE FOR ASSISTING SPATIAL TASKS IN 3D SPACE WITH SIX DEGREES OF FREEDOM

The Motivation

With advances in augmented and virtual reality (AR/VR) research, interactions such as navigation and object manipulation are no longer limited to two dimensions. However, assistance for supporting interactions in three-dimensional space for people with visual impairments has not been well explored, especially when too many degrees of freedom must be conveyed to users at once.

The Goal and Expected Contributions

Designing and implementing a system that conveys spatial information about the surroundings in 3D space through nonvisual feedback with minimal cognitive load. This system could assist with object localization (i.e., helping a blind person reach a specific object) and with photography for people with visual impairments.
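
One way such feedback could be encoded, sketched below purely as an assumption rather than a finished design, maps the target's azimuth to stereo pan, elevation to pitch, and distance to beep rate:

    # An illustrative encoding of a 3D target offset into audio cues.
    import math

    def guidance_cues(dx, dy, dz):
        """dx: right(+)/left(-), dy: up(+)/down(-), dz: forward(+), in meters."""
        azimuth = math.atan2(dx, dz)                    # 0 = straight ahead
        elevation = math.atan2(dy, math.hypot(dx, dz))  # 0 = level
        distance = math.sqrt(dx * dx + dy * dy + dz * dz)
        pan = max(-1.0, min(1.0, azimuth / (math.pi / 2)))
        pitch_hz = 440.0 * 2 ** (elevation / (math.pi / 2))   # +/- one octave
        beeps_per_sec = min(10.0, 2.0 / max(distance, 0.05))  # faster when closer
        return pan, pitch_hz, beeps_per_sec

    # Target slightly right, above, and half a meter ahead of the hand:
    print(guidance_cues(0.1, 0.2, 0.5))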

SUPPORTING INSTANCE NAVIGATION OF PHOTOS FOR PEOPLE WITH VISUAL IMPAIRMENTS

The Motivation

Object classification/localization and scene summarization have been ongoing research topics in computer vision for many years. While these can help people with visual impairments gain better access to the visual content of an image, it is still challenging for them to fully understand a scene, which may prevent many of them from being socially engaged with friends [ref].

The Goal and Expected Contributions

Developing a system that enables users with visual impairments to better understand a complex scene containing multiple instances by letting them spatially explore each instance in the scene by touch. The system can improve the accessibility of images by providing their visual information in detail.
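
A minimal sketch of the touch-exploration idea, assuming an upstream instance-segmentation model has already produced labeled boolean masks; the labels and the speak() callback are placeholders for a real segmentation pipeline and TTS engine.

    # Announce whichever segmented instance lies under the user's finger.
    import numpy as np

    def on_touch(x, y, instances, speak):
        """instances: list of (label, mask), each mask a 2D boolean array
        aligned with the photo; speak() is a stand-in for a TTS engine."""
        for label, mask in instances:
            if mask[y, x]:
                speak(f"{label} under your finger")
                return
        speak("Background")

    # Toy example: a 4x4 photo with one "dog" instance in the upper-left corner.
    dog = np.zeros((4, 4), dtype=bool)
    dog[:2, :2] = True
    on_touch(1, 1, [("dog", dog)], speak=print)  # -> "dog under your finger"
    on_touch(3, 3, [("dog", dog)], speak=print)  # -> "Background"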

EYES-FREE TEXT ENTRY WITH WEARABLE SENSORS

The Motivation

Text entry is not always available or efficient, especially in mobile contexts where users must constantly monitor their surroundings for safety (e.g., to avoid bumping into people or obstacles).

The Goal and Expected Contributions

Designing and implementing a wearable device with finger- or wrist-mounted sensors such as an inertial measurement unit (IMU) or electromyography (EMG) sensors. This device should enable users to enter text without visual feedback.
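
As a sketch of one plausible recognition pipeline (an assumption, not the project's actual design), suppose each entered character produces a short window of IMU samples; a nearest-centroid classifier over simple statistical features stands in for a real gesture recognizer:

    # A toy per-character classifier over windows of IMU samples.
    import numpy as np

    def features(window):
        """window: (n_samples, 6) array of accel xyz + gyro xyz."""
        return np.concatenate([window.mean(axis=0), window.std(axis=0)])

    class CharRecognizer:
        def fit(self, labeled_windows):
            """labeled_windows: list of (char, window); one centroid per char."""
            feats = {}
            for char, w in labeled_windows:
                feats.setdefault(char, []).append(features(w))
            self.centroids = {c: np.mean(fs, axis=0) for c, fs in feats.items()}

        def predict(self, window):
            f = features(window)
            return min(self.centroids,
                       key=lambda c: np.linalg.norm(f - self.centroids[c]))

    rec = CharRecognizer()
    rec.fit([("a", np.random.randn(50, 6)), ("b", np.random.randn(50, 6) + 1.0)])
    print(rec.predict(np.random.randn(50, 6)))  # prints the nearest centroid's char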

ADAPTIVE APP LAUNCHER FOR SMARTWATCHES

The Motivation

Selecting a target from a collection of items on a smartwatch is a frequent yet challenging task. The constrained screen real estate can accommodate only a small number of items large enough for fingertips. As a result, users often need to search through a long list of items or navigate UI hierarchies to find and select a target.

The Goal and Expected Contributions

Developing a smartwatch app launcher that adaptively changes the layout of apps or the size of app icons based on their launch likelihoods. This would enable users to find and open a desired app efficiently.
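
A minimal sketch of launch-likelihood scoring, assuming the launcher logs timestamps of past launches; a recency-weighted frequency ("frecency") score with an assumed half-life decides which icons to enlarge or surface first:

    # Rank apps by a recency-weighted launch frequency.
    import time

    HALF_LIFE_S = 3 * 24 * 3600  # assumed 3-day half-life for launch relevance

    def launch_score(launch_times, now=None):
        """launch_times: list of UNIX timestamps; recent launches count more."""
        now = now or time.time()
        return sum(0.5 ** ((now - t) / HALF_LIFE_S) for t in launch_times)

    def rank_apps(history):
        """history: {app_name: [timestamps]} -> apps ordered by likelihood."""
        return sorted(history, key=lambda a: launch_score(history[a]), reverse=True)

    now = time.time()
    history = {"Maps": [now - 3600, now - 7200], "Timer": [now - 9 * 24 * 3600]}
    print(rank_apps(history))  # -> ['Maps', 'Timer']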