Date of Degree

9-2021

Document Type

Dissertation

Degree Name

Ph.D.

Program

Computer Science

Advisor

Feng Gu

Committee Members

Liang Zhao

Shuqun Zhang

Wei Zhong

Subject Categories

Computer Sciences

Keywords

Particle Filters, Sequential Monte Carlo Methods, Convergence Analysis, High Performance Computing, Parallel Computing, Hybrid Resampling.

Abstract

Particle filters, also known as sequential Monte Carlo (SMC) methods, use Bayesian inference and stochastic sampling techniques to estimate the states of dynamic systems from given observations. Parallel/distributed particle filters were introduced to improve the performance of sequential particle filters by using multiple processing units (PUs). The classical resampling algorithm used in parallel/distributed particle filters is a centralized scheme, called centralized resampling, which requires a central unit (CU) to serve as a hub for data transfers. As a result, centralized resampling incurs extra communication costs, which lowers the speedup factors achievable in parallel computing. Even though efficient particle routing policies have been introduced, centralized resampling still suffers from high communication costs. Decentralized resampling was introduced to decrease the communication cost of parallel/distributed particle filters: each PU independently handles its own particles and transfers a portion of them to its neighboring PUs after the resampling step. Because it lacks global information, decentralized resampling yields relatively low estimation accuracy compared to centralized resampling. Hybrid resampling algorithms were proposed to improve performance by alternating between centralized and decentralized resampling, which reduces communication costs without sacrificing estimation accuracy. In this study, we propose novel hybrid resampling algorithms that adjust the intervals between centralized and decentralized resampling steps based on the measured system convergence. We analyze the computation time, communication time, and speedup factors of parallel/distributed particle filters with various resampling algorithms, state sizes, system complexities, numbers of processing units, and model dimensions.
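To make the predict/weight/resample cycle described above concrete, here is a minimal sketch of a bootstrap particle filter for an assumed one-dimensional random-walk model with Gaussian noise. The model, the parameter values, and the use of multinomial resampling are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500,
                              process_std=1.0, obs_std=1.0, seed=0):
    """Estimate a 1-D state from noisy observations.

    Assumed illustrative model: x_t = x_{t-1} + N(0, process_std^2),
    y_t = x_t + N(0, obs_std^2).
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)  # initial particle cloud
    estimates = []
    for y in observations:
        # Predict: propagate each particle through the transition model.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Update: weight particles by the observation likelihood.
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Estimate: weighted mean of the particle cloud.
        estimates.append(float(np.dot(weights, particles)))
        # Resample: draw particles in proportion to their weights
        # (multinomial resampling, chosen here for brevity).
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return estimates

# Track a constant true state of 5.0 from noisy observations.
obs = 5.0 + np.random.default_rng(1).normal(0.0, 1.0, 50)
est = bootstrap_particle_filter(obs)
```

The resampling step is the part that parallel/distributed variants redistribute across PUs: centralized resampling performs the weighted draw globally through a CU, while decentralized resampling performs it locally on each PU with only neighbor-to-neighbor exchanges.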
The experimental results indicate that decentralized resampling achieves the highest speedup factors because particles are transferred only locally, centralized resampling always has the lowest speedup factors because of the global transfer of particles, and hybrid resampling attains speedup factors in between. Moreover, we define the complexity-state ratio as the ratio between the system complexity and the system state size, and study how it affects the speedup factor. The experiments show that a high complexity-state ratio leads to higher speedup factors. This is one of the earliest attempts to analyze and compare the performance of parallel/distributed particle filters with different resampling algorithms. The analysis can suggest potential solutions for further performance improvements and guide the selection of an appropriate resampling algorithm for parallel/distributed particle filters. In addition, we formalize the various hybrid resampling algorithms as a single generic resampling algorithm and prove that it is uniformly convergent. The proof provides a solid theoretical foundation for their wide adoption in parallel/distributed particle filters.
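One way a convergence-driven hybrid schedule of the kind discussed above could be organized is sketched below. This is not the dissertation's algorithm: the use of the effective sample size (ESS) as the measured convergence indicator, the threshold, and the `max_interval` cap are all assumptions made for illustration.

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalized weights; equals N when
    weights are uniform (maximal particle diversity)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

class HybridResamplingScheduler:
    """Hypothetical scheduler choosing a resampling mode per time step.

    Prefers decentralized resampling (cheap, neighbor-only communication)
    and falls back to centralized resampling when measured convergence
    degrades below a threshold, or after `max_interval` consecutive
    decentralized steps without a global particle exchange.
    """
    def __init__(self, n_particles, ess_threshold=0.5, max_interval=8):
        self.n = n_particles
        self.threshold = ess_threshold
        self.max_interval = max_interval
        self.since_centralized = 0

    def next_mode(self, weights):
        ess_ratio = effective_sample_size(weights) / self.n
        if ess_ratio < self.threshold or self.since_centralized >= self.max_interval:
            self.since_centralized = 0  # global exchange resets the interval
            return "centralized"
        self.since_centralized += 1
        return "decentralized"

sched = HybridResamplingScheduler(n_particles=4)
mode_uniform = sched.next_mode([0.25, 0.25, 0.25, 0.25])      # healthy weights
mode_degenerate = sched.next_mode([0.97, 0.01, 0.01, 0.01])   # degenerate weights
```

Under this sketch, well-spread weights keep the filter on the cheap decentralized path, while weight degeneracy (low ESS) triggers a global centralized step, which matches the trade-off the abstract describes between communication cost and estimation accuracy.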
