Dissertations and Theses
Date of Award
2025
Document Type
Dissertation
Department
Engineering
First Advisor
Myung Jong Lee
Keywords
Machine Learning, Mobile Edge Cloud, Resource Allocation, 5G/6G, In-network Computing
Abstract
Mobile Edge Computing (MEC) is recognized as a pivotal technology supporting cloud computing and innovative services at the network edge, offering significant reductions in system delay and mitigating network traffic congestion. It supports latency-sensitive applications such as Augmented Reality (AR), Virtual Reality (VR), caching, and surveillance through efficient cloud-based processing. The rapid advancement of smart cities, empowered by the convergence of Artificial Intelligence (AI) and the Internet of Things (IoT), is transforming urban infrastructures toward greater intelligence, efficiency, and responsiveness. The integration of AI within IoT—termed AIoT—has unlocked significant capabilities for data-driven decision-making, real-time responsiveness, and resource optimization. With advancements in 6G technologies, expectations for quality of service (QoS) have risen, highlighting the benefits of 6G's ultra-reliability, low latency, high bandwidth, and massive data-handling capacity. Consequently, the MEC model has evolved to bring cloud capabilities closer to users, ensuring the ultra-low latency and reliable resource allocation that are vital for MEC's efficacy.
This dissertation addresses the pressing challenge of efficient resource management (RM) in MEC environments, particularly under the constraints of dynamic workloads, limited resources, and stringent QoS demands. Motivated by the need for intelligent, adaptive, and scalable solutions, we propose AI/ML-based frameworks tailored for representative MEC application scenarios. Specifically, we develop a Graph Neural Network (GNN) model to enhance clustering in the Internet of Vehicles (IoV) by leveraging both mobility and topological features, improving cluster stability, bandwidth efficiency, and communication delay. For wireless networks, we model the resource allocation problem as a Markov Decision Process (MDP) and develop a model-free Deep Reinforcement Learning (DRL) framework to ensure reliable, low-latency, and energy-efficient communication in 6G networks. To address security concerns, a security-aware resource scheduling method is introduced, in which user requests are classified by security level and resources are allocated accordingly, factoring in security processing overhead. To further reduce communication latency by offloading time-sensitive computing tasks from servers into the network itself, we extend reinforcement learning (RL) to programmable network devices and propose an in-network learning approach based on programmable switches. This approach supports fine-grained, flow-level bandwidth allocation directly within the data plane, enhancing responsiveness and reducing reliance on central controllers.
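To make the MDP-based resource allocation idea concrete, the following is a minimal, illustrative sketch, not the dissertation's actual framework: a tabular Q-learning agent allocating bandwidth shares to a flow based on its queue occupancy. The toy environment, state/action discretization, and reward shape (penalizing both queueing delay and over-allocation) are all assumptions chosen for the example; the dissertation's deep RL and in-network variants replace the table with learned function approximators and data-plane registers.

```python
import random

# Hypothetical example: tabular Q-learning for flow-level bandwidth allocation.
# State  = discretized queue occupancy of a flow (0 = empty .. 4 = near full).
# Action = bandwidth share granted (0 = low, 1 = medium, 2 = high).
# Reward penalizes queueing delay (next-state occupancy) plus an allocation cost,
# mirroring the latency/efficiency trade-off described in the abstract.

N_STATES = 5
N_ACTIONS = 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: higher bandwidth drains the queue faster."""
    drain = action + 1                           # packets served this slot
    arrivals = random.randint(0, 2)              # bursty packet arrivals
    next_state = min(max(state + arrivals - drain, 0), N_STATES - 1)
    reward = -next_state - 0.5 * action          # delay penalty + allocation cost
    return next_state, reward

def choose(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPS:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

random.seed(0)
state = 0
for _ in range(5000):
    action = choose(state)
    nxt, r = step(state, action)
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state][action] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[state][action])
    state = nxt

# Greedy policy after training: one bandwidth action per queue-occupancy level.
policy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)
```

Because the update rule uses only observed transitions, the same table-lookup structure can in principle run in a programmable switch's data plane, which is the motivation for the in-network learning approach summarized above.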
Through comprehensive evaluations and real-world testbed experiments, our proposed methods demonstrate superior performance in terms of efficiency, adaptability, and scalability compared to traditional and heuristic approaches. Collectively, this research contributes novel, practical solutions to the field of AI-empowered mobile edge computing and lays the groundwork for resilient and intelligent networked systems in the 6G era.
Recommended Citation
Hu, Hang, "Machine Learning Approach for Resource Allocation in Mobile Edge Computing" (2025). CUNY Academic Works.
https://academicworks.cuny.edu/cc_etds_theses/1275
