Dissertations and Theses

Date of Award

2025

Document Type

Thesis

Department

Mechanical Engineering

First Advisor

Bo Wang

Keywords

Reinforcement Learning, Deep Deterministic Policy Gradient (DDPG), Autonomous Parking, Control Barrier Functions (CBF), Mobile Robot Control, Safe Reinforcement Learning

Abstract

This thesis presents a hybrid framework that combines reinforcement learning (RL) with control barrier function (CBF)-based methods to achieve safe autonomous vehicle control, focusing on parking with obstacle avoidance. We use the Deep Deterministic Policy Gradient (DDPG) algorithm for continuous control and evaluate policies across three Simulink environments of increasing fidelity: a kinematic model, a dynamic model, and a dynamic model with actuator disturbance. In parking tasks, DDPG learns smooth, stable trajectories and maintains performance under modeling uncertainty and input noise.

To address hard safety requirements in obstacle-rich settings, we augment the RL policy with a CBF safety filter that enforces forward invariance of a state-based safe set in real time. Experiments show that (i) reward shaping alone yields “soft safety” (avoidance behavior without guarantees), (ii) post-hoc CBF filtering prevents collisions but can cause abrupt corrections if the policy was not trained with the filter in the loop, and (iii) retraining the agent with the CBF filter active achieves smooth, collision-free navigation with formal constraint satisfaction.

Overall, the RL-CBF approach preserves the adaptability of learning while providing control-theoretic safety guarantees, pointing to a practical path for reliable autonomous control in uncertain and nonlinear environments.
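
The CBF safety filter described above can be illustrated with a minimal sketch. The example below is not the thesis implementation: it assumes a toy 2D single-integrator robot (state derivative equals the commanded velocity) and a single circular obstacle, for which the filter's quadratic program reduces to a closed-form projection of the RL action onto the safe half-space. The function name `cbf_filter` and all parameters are hypothetical.

```python
def cbf_filter(u_rl, x, x_obs, r, alpha=1.0):
    """Project an RL action onto the CBF-safe set (toy sketch).

    Assumes single-integrator dynamics x_dot = u in 2D and the barrier
    h(x) = ||x - x_obs||^2 - r^2, which is positive outside a circular
    obstacle of radius r centered at x_obs. Forward invariance requires
    h_dot + alpha*h >= 0, i.e. 2(x - x_obs)·u + alpha*h >= 0.
    """
    dx, dy = x[0] - x_obs[0], x[1] - x_obs[1]
    h = dx * dx + dy * dy - r * r        # barrier value (>0 means safe)
    ax, ay = 2.0 * dx, 2.0 * dy          # gradient of h (here equal to L_g h)
    margin = ax * u_rl[0] + ay * u_rl[1] + alpha * h
    if margin >= 0.0:
        return u_rl                      # RL action already satisfies the CBF
    # Minimal-norm correction: closest action on the constraint boundary
    s = margin / (ax * ax + ay * ay)
    return (u_rl[0] - s * ax, u_rl[1] - s * ay)
```

For example, a robot at (2, 0) facing a unit-radius obstacle at the origin keeps a safe action unchanged, while an action driving straight at the obstacle is clipped to the boundary of the constraint. Retraining with this filter in the loop, as the abstract notes, lets the policy anticipate such corrections instead of experiencing them as abrupt post-hoc overrides.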

Available for download on Tuesday, December 01, 2026
