Dissertations and Theses

Date of Award


Document Type



Computer Science

First Advisor

Zhigang Zhu


Keywords

Indoor navigation, blind and visually impaired, hybrid modeling, route planning algorithm, task scheduling algorithm, ARKit


Abstract

We propose an integrated solution to indoor navigation using a smartphone, aimed especially at assisting people with special needs, such as blind and visually impaired (BVI) individuals. The system consists of three components: hybrid modeling, real-time navigation, and a client-server architecture.

In the hybrid modeling component, the hybrid model of a building is created region by region and organized in a graph structure, with nodes as destinations and landmarks and edges as traversal paths between nodes. A Wi-Fi/cellular-data connectivity map, a beacon signal strength map, and a 3D visual model (with destinations and landmarks annotated) are collected while a modeler walks through the building, and are then registered with the building's floorplan. The client-server architecture allows the system to scale up to a large area, such as a college campus with multiple buildings: the hybrid models are saved in the cloud and downloaded only when needed.

In the real-time navigation component, a mobile app on the user's smartphone first downloads the beacon strength map and the data connectivity map, then uses the beacon information to place the user in a region of a building. After the visual model of that region is downloaded to the user's phone, the visual matching module localizes the user accurately within the region. A path planning algorithm takes the visual, connectivity, and user preference information into account when planning a path from the user's current location to the selected destination, and a scheduling algorithm is activated to download the visual models of neighboring regions based on the connectivity information. Our current implementation uses ARKit on an iPhone to create local visual models and perform visual matching. User interfaces for both modeling and navigation are developed using visual, audio, and haptic displays for our targeted users. Experimental results in real-time navigation are provided to validate the proposed approach.
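The path planning idea described above — a graph of destinations and landmarks whose route choice weighs connectivity and user preferences alongside distance — can be sketched as a shortest-path search. The following Python sketch is illustrative only: the graph, the connectivity scores, the cost weighting, and the `avoid_stairs` preference are all hypothetical stand-ins, not the dissertation's actual algorithm or data.

```python
import heapq

def plan_route(graph, start, goal, connectivity, avoid_stairs=False):
    """Dijkstra search over the building graph.

    graph: node -> list of (neighbor, meters, uses_stairs) edges.
    connectivity: node -> score in [0, 1]; weak connectivity inflates
    edge cost so routes favor regions where model downloads can proceed.
    avoid_stairs: a user preference that prunes stair edges entirely.
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            # Reconstruct the path by walking predecessors back to start.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path)), d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, meters, uses_stairs in graph.get(node, []):
            if avoid_stairs and uses_stairs:
                continue  # honor the user preference
            # Poor connectivity (score near 0) nearly doubles the cost.
            cost = meters * (2.0 - connectivity.get(nbr, 1.0))
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    return None, float("inf")
```

For example, on a toy graph where the stair route is shorter but a stair-averse user is routed via the elevator, the same function serves both preferences; the scheduling step could then prefetch visual models for the regions along the returned path.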

