Dissertations, Theses, and Capstone Projects

Date of Degree

2-2025

Document Type

Dissertation

Degree Name

Ph.D.

Program

Computer Science

Advisor

Sos S. Agaian

Committee Members

Liang Zhao

Hui Chen

Johanna Devaney

Dongfang Zhao

Subject Categories

Computer Engineering | Electrical and Computer Engineering

Keywords

Lightweight model, Artificial Intelligence, Computer Vision, Model Quantization

Abstract

The proliferation of deep learning (DL) has significantly advanced intelligent systems across many domains. However, deploying DL models on resource-constrained devices such as embedded systems and Internet-of-Things (IoT) units presents critical challenges due to high computational demands, limited memory, and tight energy budgets. Tiny Artificial Intelligence (Tiny AI) addresses these challenges by enabling efficient model execution on low-power devices. This dissertation investigates two primary research problems: accelerating models without sacrificing performance and compressing models to fit devices with limited resources. The research goals include developing hybrid neural network architectures that jointly optimize computational efficiency and accuracy, applying Binary Neural Networks (BNNs) for model quantization, and benchmarking these approaches in real-world applications. The contributions of this work include: (1) the development of novel hybrid models (UCM-Net, UCM-Netv2, and MUCM-Net) tailored for medical image segmentation, which leverage skip connections, depthwise separable convolutions, and Mamba layers to balance performance and efficiency; (2) the proposal of BiThermalNet and a BNN-optimized PSPNet, demonstrating BNN-based quantization for thermal object detection and semantic segmentation; and (3) comprehensive performance evaluations showing significant improvements in computational efficiency and inference speed over state-of-the-art methods, validated on datasets such as PH2, ISIC2017, and ISIC2018 and on constrained hardware platforms. These findings advance the state of the art in Tiny AI, making DL feasible on resource-limited devices and unlocking new possibilities in fields such as telemedicine, IoT, and autonomous systems. Future research will focus on further optimizing these methods and extending their applicability to a broader range of resource-constrained environments.
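
To make two of the techniques named in the abstract concrete, the following minimal PyTorch sketch is illustrative only and is not the dissertation's released code; the module names DepthwiseSeparableConv and BinaryConv2d are placeholders introduced here. It shows a depthwise separable convolution block and sign-based binary weight quantization with a simplified straight-through estimator, the kinds of building blocks behind the efficiency gains attributed to the UCM-Net family and to BNN-based models, respectively.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    # Per-channel (depthwise) k x k convolution followed by a 1x1 pointwise
    # convolution; replaces a dense k x k convolution to cut parameters and FLOPs.
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


class BinarizeSTE(torch.autograd.Function):
    # Binarize weights to {-1, +1} on the forward pass and pass gradients
    # straight through on the backward pass (a simplified straight-through
    # estimator; practical BNNs typically also clip gradients and scale weights).
    @staticmethod
    def forward(ctx, w):
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output


class BinaryConv2d(nn.Conv2d):
    # Convolution whose weights are binarized on every forward pass, the basic
    # layer type used in BNN-style model quantization.
    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return F.conv2d(x, w_bin, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)


if __name__ == "__main__":
    x = torch.randn(1, 16, 64, 64)
    print(DepthwiseSeparableConv(16, 32)(x).shape)      # (1, 32, 64, 64)
    print(BinaryConv2d(16, 32, 3, padding=1)(x).shape)  # (1, 32, 64, 64)

Both modules keep the same input/output shapes as a standard convolution, which is what lets such layers be swapped into existing architectures to trade a small amount of accuracy for large reductions in compute and memory.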

This work is embargoed and will be available for download on Monday, February 01, 2027
