In these notes, we introduce particle filtering as a recursive importance sampling method that approximates the minimum-mean-square-error (MMSE) estimate of a sequence of hidden state vectors. The method targets scenarios where the joint probability distribution of the states and the observations is non-Gaussian and, therefore, closed-form analytical expressions for the MMSE estimate are generally unavailable.

We begin the notes with a review of Bayesian approaches to static (i.e., time-invariant) parameter estimation. In the sequel, we describe the solution to the problem of sequential state estimation in linear, Gaussian dynamic models, which corresponds to the well-known Kalman (or Kalman-Bucy) filter. Finally, we move to the general nonlinear, non-Gaussian stochastic filtering problem and present particle filtering as a sequential Monte Carlo approach to solve that problem in a statistically optimal way.
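In the linear, Gaussian case mentioned above, the Kalman filter computes the exact MMSE state estimate recursively through a predict step and an update step. The sketch below illustrates this for a scalar model; the particular dynamics x_k = a·x_{k-1} + w_k, y_k = c·x_k + v_k and the numerical parameters are illustrative assumptions, not an example taken from the notes.

```python
import numpy as np

def kalman_filter(ys, a=0.9, c=1.0, q=0.1, r=0.5, m0=0.0, p0=1.0):
    """Scalar Kalman filter: returns the filtered posterior means.

    Model (assumed for illustration):
        x_k = a * x_{k-1} + w_k,  w_k ~ N(0, q)
        y_k = c * x_k     + v_k,  v_k ~ N(0, r)
    """
    m, p = m0, p0          # posterior mean and variance
    means = []
    for y in ys:
        # Predict: propagate mean and variance through the state dynamics.
        m_pred = a * m
        p_pred = a * p * a + q
        # Update: correct the prediction with the new observation.
        k = p_pred * c / (c * p_pred * c + r)   # Kalman gain
        m = m_pred + k * (y - c * m_pred)
        p = (1.0 - k * c) * p_pred
        means.append(m)
    return np.array(means)
```

Because the model is linear and all noises are Gaussian, these two steps propagate the exact posterior; the particle filters discussed next replace them with Monte Carlo approximations when those assumptions fail.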

We review several techniques to improve the performance of particle filters, including importance function optimization, particle resampling, Markov Chain Monte Carlo move steps, auxiliary particle filtering, and regularized particle filtering. We also discuss Rao-Blackwellized particle filtering as a technique that is particularly well-suited for many relevant applications such as fault detection and inertial navigation. Finally, we conclude the notes with a discussion on the emerging topic of distributed particle filtering using multiple processors located at remote nodes in a sensor network.
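To make the basic recursion concrete, the following is a minimal sketch of a sampling/importance resampling (SIR) particle filter: particles are propagated through the state dynamics (using the transition prior as importance function), weighted by the observation likelihood, and then resampled to combat weight degeneracy. The scalar nonlinear model and noise levels used here are illustrative assumptions, not a model from the notes.

```python
import numpy as np

def sir_filter(ys, n_particles=500, q=1.0, r=1.0, seed=0):
    """SIR particle filter for an assumed scalar nonlinear model:
        x_k = 0.5*x_{k-1} + 25*x_{k-1}/(1 + x_{k-1}^2) + w_k, w_k ~ N(0, q)
        y_k = x_k^2 / 20 + v_k,                          v_k ~ N(0, r)
    Returns the approximate MMSE estimates (weighted particle means)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)      # initial particle cloud
    estimates = []
    for y in ys:
        # Propagate particles through the nonlinear state dynamics
        # (importance sampling from the transition prior).
        x = 0.5 * x + 25.0 * x / (1.0 + x**2) \
            + rng.normal(0.0, np.sqrt(q), n_particles)
        # Weight each particle by the Gaussian observation likelihood.
        w = np.exp(-0.5 * (y - x**2 / 20.0) ** 2 / r)
        w /= w.sum()
        # The MMSE estimate is approximated by the weighted particle mean.
        estimates.append(np.sum(w * x))
        # Multinomial resampling: duplicate high-weight particles,
        # discard low-weight ones, resetting all weights to 1/N.
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return np.array(estimates)
```

Resampling at every step, as done here, is the plain SIR choice; the techniques listed above (optimized importance functions, MCMC moves, auxiliary and regularized variants) refine exactly these steps.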

Throughout the notes, we often assume a more general framework than in most introductory textbooks by allowing either the observation model or the hidden state dynamic model to include unknown parameters. In a fully Bayesian fashion, we also treat those unknown parameters as random variables. Using suitable dynamic conjugate priors, this approach can then be applied to perform joint state and parameter estimation.

### Table of Contents

Introduction

Bayesian Estimation of Static Vectors

The Stochastic Filtering Problem

Sequential Monte Carlo Methods

Sampling/Importance Resampling (SIR) Filter

Importance Function Selection

Markov Chain Monte Carlo Move Step

Rao-Blackwellized Particle Filters

Auxiliary Particle Filter

Regularized Particle Filters

Cooperative Filtering with Multiple Observers

Application Examples

Summary

### About the Author

**Marcelo G. S. Bruno**, Instituto Tecnológico de Aeronáutica (ITA), São José dos Campos, Brazil

Marcelo G. S. Bruno received the bachelor's and master's degrees in Electrical Engineering from the University of São Paulo, Brazil, and the Ph.D. degree in Electrical and Computer Engineering from Carnegie Mellon University, Pittsburgh, PA, U.S.A. He is currently affiliated with the Electrical Engineering Division at Instituto Tecnológico de Aeronáutica (ITA), São José dos Campos, Brazil, where he is an Assistant Professor. Dr. Bruno's research interests are in statistical signal/image processing, particularly Markov random fields (MRFs), hidden Markov models (HMMs), particle filters/sequential Monte Carlo methods, Markov Chain Monte Carlo (MCMC) methods, Bayesian networks, and non-parametric belief propagation, with applications in target localization/tracking, image processing, machine learning, mobile robotics, and telecommunications. His current research is focused on distributed estimation and filtering in sensor networks, combining Bayesian methods with iterative consensus, diffusion, and random information dissemination strategies. Particular areas of application under investigation include distributed emitter tracking using networks of passive sensors and cooperative equalization of frequency-selective communication channels using multiple receivers. Dr. Bruno served as an Associate Editor for the IEEE Signal Processing Letters and the IEEE Transactions on Signal Processing and is a member of several IEEE societies.