This writeup extends the one I posted last week, *Mathematical Derivation of the Bayes and Kalman Filters*. Both were written while I was studying *Probabilistic Robotics* by Sebastian Thrun, Wolfram Burgard, and Dieter Fox; I think that book is awesome, and I really wanted to understand all the mathematical details. This writeup should be viewed as a supplement to Section 3.3 of that book.

The Extended Kalman Filter (EKF) extends the basic Kalman filter, which requires a linear transition model and a linear measurement model at each step, to the case where those models are nonlinear: it linearizes them around the current mean estimate using first-order Taylor expansions. EKFs aren’t as widely applicable as some other popular Bayes filter methods (*cough* particle filters), because the EKF belief is always a single Gaussian and so can’t represent multimodal distributions. Still, they work well for certain problems and apparently are widely used in practice.
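To make the "linearize, then run the usual Kalman equations" idea concrete, here's a minimal sketch of one EKF predict-plus-update step in NumPy. It follows the notation of *Probabilistic Robotics* (R is the process noise covariance, Q is the measurement noise covariance); the toy system at the bottom, with linear motion and a range-only measurement, is my own illustrative example, not one from the book.

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G, h, H, R, Q):
    """One EKF step. g/h are the nonlinear motion/measurement models;
    G/H return their Jacobians evaluated at the given state."""
    # Predict: push the mean through the nonlinear motion model,
    # and the covariance through its Jacobian (the linearization).
    mu_bar = g(mu, u)
    G_t = G(mu, u)
    Sigma_bar = G_t @ Sigma @ G_t.T + R

    # Update: linearize the measurement model at the predicted mean.
    H_t = H(mu_bar)
    S = H_t @ Sigma_bar @ H_t.T + Q            # innovation covariance
    K = Sigma_bar @ H_t.T @ np.linalg.inv(S)   # Kalman gain
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H_t) @ Sigma_bar
    return mu_new, Sigma_new

# Toy example: a 2D position nudged by the control, observed only
# through its range from the origin (a nonlinear measurement).
g = lambda mu, u: mu + u                     # motion model (linear here)
G = lambda mu, u: np.eye(2)                  # its Jacobian
h = lambda mu: np.array([np.hypot(mu[0], mu[1])])            # range
H = lambda mu: (mu / np.hypot(mu[0], mu[1])).reshape(1, 2)   # its Jacobian

mu = np.array([1.0, 1.0])
Sigma = 0.5 * np.eye(2)
R = 0.01 * np.eye(2)                         # process noise
Q = np.array([[0.1]])                        # measurement noise
mu, Sigma = ekf_step(mu, Sigma, u=np.array([0.1, 0.0]), z=np.array([1.6]),
                     g=g, G=G, h=h, H=H, R=R, Q=Q)
```

Note that the Jacobians are re-evaluated at every step, at the latest mean, which is exactly where the EKF's approximation error comes from: the linearization is only good near that point.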

Next week I’ll post something different, but first I needed to get all the Kalman Filter stuff out of my system. Do you have any requests for writeups on other applied math topics?

This is a “level 3” writeup: for grad students and hardcore practitioners.