Michael C. Burkhart

Hello
… and welcome to my website.
Update
I recently joined the Adaptive Brain Lab at the University of Cambridge as a research associate. I will be working to develop machine learning-based approaches for the early diagnosis of neurodegenerative disease, specifically Alzheimer's, as part of the EDoN initiative.
About Me
I earned my Ph.D. in 2019 from Brown University's Division of Applied Mathematics. For my dissertation, I derived a novel approach to Bayesian filtering, the Discriminative Kalman Filter, motivated by and developed with my advisor M. Harrison and collaborators D. Brandman and L. Hochberg. We validated and successfully implemented this filter as part of the BrainGate clinical trial, enabling participants with quadriplegia to communicate and interact with their environments in real time using mental imagery alone.
Filtering
Suppose there is some underlying process Z1:t = Z1, …, Zt in which we are very interested but that we cannot observe. Instead, we are sequentially presented with observations or measurements X1:t = X1, …, Xt, where each Xi depends only on the corresponding latent state Zi. We visualize this process with the following graph:
[Figure: graphical model for filtering]
Filtering is the process by which we use the observations X1,…,Xt to form our best guess for the current latent state Zt.
Dynamic State Space Models
Under the Bayesian approach to filtering, X1:t, Z1:t are endowed with a joint probability distribution. The graphical model encodes the process that generates X1:t, Z1:t:
[Figure: graphical model annotated with the generating distributions p(z1), p(zt|zt-1), and p(xt|zt)]
This model is variously known as a dynamic state-space model or hidden Markov model. It provides a visual description of how to generate a sample x1:t, z1:t from the random variables X1:t, Z1:t. We start with z1 drawn from its marginal distribution p(z1). We then generate an observation x1 using the distribution p(x1|z1). At each subsequent time step t, we draw zt from the distribution p(zt|zt-1) and xt from the distribution p(xt|zt). These two conditional distributions characterize the generative process up to the specification of the initial distribution p(z1). The first, p(zt|zt-1), relates the state at time t to the state at time t-1 and is often called the state or prediction model. The second, p(xt|zt), relates the current observation to the current state and is called the measurement or observation model.
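To make this concrete, here is a minimal sketch in Python with NumPy of how a sample x1:t, z1:t is generated; the language choice, dimensions, and the particular linear-Gaussian parameters A, Gamma, H, Lam are illustrative assumptions, not part of the model above.

import numpy as np

rng = np.random.default_rng(0)
d, n, T = 2, 3, 100                  # latent dimension, observation dimension, time steps
A = 0.9 * np.eye(d)                  # state model: p(zt|zt-1) = N(A zt-1, Gamma)
Gamma = 0.1 * np.eye(d)
H = rng.standard_normal((n, d))      # measurement model: p(xt|zt) = N(H zt, Lam)
Lam = 0.5 * np.eye(n)

z = np.zeros((T, d))
x = np.zeros((T, n))
z[0] = rng.multivariate_normal(np.zeros(d), Gamma)      # z1 from its marginal p(z1), here N(0, Gamma)
x[0] = rng.multivariate_normal(H @ z[0], Lam)           # x1 from p(x1|z1)
for t in range(1, T):
    z[t] = rng.multivariate_normal(A @ z[t-1], Gamma)   # zt from p(zt|zt-1)
    x[t] = rng.multivariate_normal(H @ z[t], Lam)       # xt from p(xt|zt)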
Bayesian Filtering
The Bayesian solution to the filtering problem returns the conditional distribution of Zt given that X1, …, Xt have been observed to be x1, …, xt. We refer to this distribution p(zt|x1:t) as the posterior distribution, or simply the posterior. The Chapman–Kolmogorov recursion
p(zt|x1:t) ∝ p(xt|zt) ∫ p(zt|zt-1) p(zt-1|x1:t-1) dzt-1
relates the posterior at time t to the one at time t-1. Bayesian filtering solves or approximates the above recursion. Common approaches include Kalman filtering, variational methods, quadrature methods, and Monte Carlo-based particle filtering.
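As one illustration of the Monte Carlo approach, the sketch below implements a single step of a bootstrap particle filter for the linear-Gaussian model sampled above. This is a hedged sketch, not a prescribed method: the multinomial resampling scheme and the parameter names A, Gamma, H, Lam are assumptions carried over from the previous example.

import numpy as np

def bootstrap_step(particles, x_t, A, Gamma, H, Lam, rng):
    """One Monte Carlo approximation of the Chapman-Kolmogorov step."""
    # predict: propagate each particle through the state model p(zt|zt-1)
    proposed = np.array([rng.multivariate_normal(A @ z, Gamma) for z in particles])
    # update: weight each particle by the measurement likelihood p(xt|zt)
    resid = x_t - proposed @ H.T
    w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', resid, np.linalg.inv(Lam), resid))
    w /= w.sum()
    # resample to return an unweighted sample from (approximately) p(zt|x1:t)
    idx = rng.choice(len(proposed), size=len(proposed), p=w)
    return proposed[idx]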
Kalman Filter
The Kalman filter specifies both the state model and the measurement model as linear and Gaussian:
p(zt|zt-1) = ηd(zt; Azt-1, Γ),
p(xt|zt) = ηn(xt; Hzt, Λ).
Here, ηd(·; μ, Σ) denotes the d-dimensional Gaussian density with mean μ and covariance Σ. In this way, the posterior is Gaussian and quickly computable. NASA used a variant of this filter to orient the Apollo Lunar Module and land the first humans on the moon.
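A single predict-and-update step of this filter can be written in closed form; the following sketch (Python/NumPy, with A, Gamma, H, Lam as in the linear-Gaussian model above) propagates the Gaussian posterior N(mu, Sigma):

import numpy as np

def kalman_step(mu, Sigma, x_t, A, Gamma, H, Lam):
    """One predict/update step for the Kalman posterior N(mu, Sigma)."""
    # predict: p(zt|x1:t-1) = N(m, M) under the linear-Gaussian state model
    m = A @ mu
    M = A @ Sigma @ A.T + Gamma
    # update: fold in the new observation x_t via the Kalman gain
    K = M @ H.T @ np.linalg.inv(H @ M @ H.T + Lam)
    mu_new = m + K @ (x_t - H @ m)
    Sigma_new = M - K @ H @ M
    return mu_new, Sigma_new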
Our Approach
We apply Bayes' rule to the measurement model and make the Gaussian approximation p(zt|xt) ≈ ηd(zt; f(xt), Q(xt)), where the functions f and Q can be learned from data.
[Figure: graphical model for the Discriminative Kalman Filter]
This approach allows for a nonlinear relationship between the measurements and latent states and has been found to perform particularly well when the Xt are much higher-dimensional than the Zt, as is often the case in neural decoding.
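When supervised pairs (xi, zi) are available, f and Q can be fit with standard regression tools. The sketch below is one possible choice, not the method prescribed here: it assumes scikit-learn's RandomForestRegressor for f and, as a further simplification, a constant residual covariance for Q.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_measurement_model(X_train, Z_train):
    """Learn f and Q in p(zt|xt) ≈ N(f(xt), Q(xt)) from paired training data."""
    reg = RandomForestRegressor(n_estimators=100, random_state=0)
    reg.fit(X_train, Z_train)                    # nonlinear regression for the mean f
    resid = Z_train - reg.predict(X_train)
    Q_const = np.cov(resid, rowvar=False)        # constant-covariance simplification for Q
    f = lambda x: reg.predict(x[None, :])[0]
    Q = lambda x: Q_const
    return f, Q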
Discriminative Kalman Filter
The Discriminative Kalman Filter adopts the Kalman state model, p(zt|zt-1) = ηd(zt; Azt-1, Γ), with initialization p(z0) = ηd(z0; 0, S), where S satisfies S = ASA' + Γ (so that the latent process is stationary), and uses the measurement model introduced above. Given these specifications, it follows that we may recursively approximate the posterior as Gaussian. Namely, if
p(zt-1|x1:t-1) ≈ ηd(zt-1; μt-1, Σt-1),
then given a new observation Xt = xt, we have
p(zt|x1:t) ≈ ηd(zt; μt, Σt)
where
Mt = AΣt-1A' + Γ,
Σt = (Mt⁻¹ + Q(xt)⁻¹ - S⁻¹)⁻¹,
μt = Σt(Mt⁻¹Aμt-1 + Q(xt)⁻¹f(xt)).
Given the Gaussian approximations, this update is exact whenever Q(xt)⁻¹ - S⁻¹ is positive-definite, which ensures that Σt is a valid covariance matrix; otherwise we let
Σt = (Mt⁻¹ + Q(xt)⁻¹)⁻¹.
In this way, the Discriminative Kalman Filter maintains fast, closed-form updates while allowing for a nonlinear relationship between the latent states and observations. When supervised training data are available, off-the-shelf nonlinear/nonparametric regression tools can readily be used to learn the discriminatively specified observation model. In related work, we demonstrate how this framework can also be leveraged to ameliorate non-stationarities, i.e. changes to the relationship between the latent states and observations, and to increase the robustness of estimates.
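The update above translates directly into code. Here is a minimal sketch (Python/NumPy; the function and argument names are mine, and the positive-definiteness check via eigenvalues is one convenient implementation choice):

import numpy as np

def dkf_step(mu, Sigma, x_t, A, Gamma, S_inv, f, Q):
    """One Discriminative Kalman Filter update for the posterior N(mu, Sigma)."""
    M = A @ Sigma @ A.T + Gamma                       # Mt = A Sigma_{t-1} A' + Gamma
    M_inv = np.linalg.inv(M)
    Q_inv = np.linalg.inv(Q(x_t))
    correction = Q_inv - S_inv
    if np.all(np.linalg.eigvalsh(correction) > 0):
        Sigma_new = np.linalg.inv(M_inv + correction)   # (Mt^-1 + Q^-1 - S^-1)^-1
    else:
        Sigma_new = np.linalg.inv(M_inv + Q_inv)        # drop -S^-1 when not positive-definite
    mu_new = Sigma_new @ (M_inv @ A @ mu + Q_inv @ f(x_t))
    return mu_new, Sigma_new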
Relevant Publications
Find Me Online
LinkedIn · GitHub · Instagram · Twitter · Google Scholar · ORCID
C.V.

© Michael C. Burkhart, 2023
Cambridge, UK