Dissertations and Theses
Date of Award
2025
Document Type
Dissertation
Department
Engineering
First Advisor
Samah M Saeed
Keywords
Noisy Intermediate-Scale Quantum (NISQ), Quantum circuit partitioning, Reinforcement Learning, Systolic array, AI accelerators
Abstract
Modern computing systems continue to advance rapidly, enabling unprecedented computational capabilities across diverse domains. Among these architectures, AI accelerators such as Tensor Processing Units (TPUs) and quantum computing systems built on Quantum Processing Units (QPUs) stand out as transformative technologies. Despite their potential, these systems share significant challenges related to hardware infidelity, manifested as error-induced degradation in computational accuracy and reliability. This thesis addresses cost-effective methods to screen out defective AI accelerators across next-generation computing architectures and to boost quantum circuit output fidelity through hierarchical circuit optimization using advanced machine learning methodologies.
This thesis first proposes a cost-effective testing methodology for AI accelerator chips to minimize yield loss. This methodology utilizes detailed circuit-level fault models, exposing how hardware defects propagate to application-level outputs. The approach establishes a structured fault characterization that improves application-level robustness without excessive hardware redundancy. Building on insights from classical accelerator fault modeling, the thesis next addresses the analogous issue of fidelity in quantum computing. This work tackles the practical realization of quantum algorithms on Noisy Intermediate-Scale Quantum (NISQ) devices, which are substantially impeded by quantum hardware noise. The computational cost of compiling and re-synthesizing such large circuits grows exponentially with qubit count (often exceeding 100 qubits) and circuit depth, rendering brute-force optimization infeasible and severely limiting achievable circuit fidelity.
To mitigate the impact of hardware noise on output fidelity, we propose a novel quantum circuit partitioning framework based on reinforcement learning (RL) combined with Graph Neural Networks (GNNs), designed to incorporate both circuit structural properties and device-specific error rates. The GNNs enrich the RL agent’s capabilities by providing sophisticated representations of quantum circuits as dependency graphs, allowing detailed exploration of inter-gate relationships and of noise impacts on fidelity. Crucially, the framework exposes a tunable knob that balances the runtime overhead of RL-driven optimization against the quality of the synthesized circuits, enabling practitioners to trade compilation latency for fidelity gains according to application demands. We show that our approach improves circuit output fidelity and reduces overall gate count compared to state-of-the-art optimization techniques across a diverse suite of benchmarks. The proposed methodologies are rigorously evaluated through extensive simulations and practical experiments conducted on IBM’s superconducting quantum computers. In summary, this thesis delivers a unified, cross-domain perspective on hardware infidelity: it extends accelerator-style yield-improvement tools to quantum devices, and it applies machine learning strategies to enhance computational practicality, robustness, and scalability across diverse hardware architectures, improving AI accelerator yield and advancing quantum circuit compilation techniques. By drawing conceptual parallels between classical accelerator fault modeling and quantum circuit optimization, it establishes an error-aware design methodology crucial for next-generation computing systems.
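The dependency-graph representation described above can be illustrated with a minimal sketch. This is a hypothetical, simplified construction (the gate list, function names, and the edge rule linking consecutive gates that share a qubit are illustrative assumptions, not the thesis implementation); such a graph is the kind of structure a GNN-based partitioner would consume.

```python
# Hypothetical sketch: building a gate-dependency graph for a quantum
# circuit. Two gates are connected by a directed edge when the later
# gate is the next to act on a qubit touched by the earlier one, so
# edges trace qubit data flow. This is an illustrative construction,
# not the partitioning framework from the thesis.

def dependency_graph(gates):
    """gates: list of (name, qubits) tuples in program order.
    Returns directed edges (i, j) meaning gate j depends on gate i."""
    last_on_qubit = {}  # qubit index -> index of last gate acting on it
    edges = set()
    for j, (_, qubits) in enumerate(gates):
        for q in qubits:
            if q in last_on_qubit:
                edges.add((last_on_qubit[q], j))
            last_on_qubit[q] = j
    return sorted(edges)

# A small example circuit: gate names follow common convention
# (h = Hadamard, cx = CNOT, rz = Z-rotation).
circuit = [
    ("h",  (0,)),
    ("cx", (0, 1)),
    ("cx", (1, 2)),
    ("rz", (2,)),
]
print(dependency_graph(circuit))  # → [(0, 1), (1, 2), (2, 3)]
```

A partitioner could then annotate each node with device-specific error rates and let an RL agent choose cut points along these edges, which is the kind of structure-plus-noise information the abstract describes.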
Recommended Citation
Charrwi, Mohammad Walid M., "From TPU to QPU: Bridging Fidelity Gaps Across Next-Generation Computing Systems" (2025). CUNY Academic Works.
https://academicworks.cuny.edu/cc_etds_theses/1254
