GHFilter
Implements the g-h filter.
Copyright 2015 Roger R Labbe Jr.
FilterPy library. http://github.com/rlabbe/filterpy
Documentation at: https://filterpy.readthedocs.org
Supporting book at: https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
This is licensed under an MIT license. See the readme.MD file for more information.
- class filterpy.gh.GHFilter(x, dx, dt, g, h)
Implements the g-h filter. The topic is too large to cover in this comment. See my book “Kalman and Bayesian Filters in Python” [1] or Eli Brookner’s “Tracking and Kalman Filters Made Easy” [2].
A few basic examples are below, and the tests in ./gh_tests.py may give you more ideas on use.
- Parameters:
- x : 1D np.array or scalar
Initial value for the filter state. Each value can be a scalar or a np.array.
You can use a scalar for x0. If order > 0, then 0.0 is assumed for the higher order terms.
x[0] is the value being tracked, x[1] is the first derivative (for order 1 and 2 filters), and x[2] is the second derivative (for order 2 filters).
- dx : 1D np.array or scalar
Initial value for the derivative of the filter state.
- dt : scalar
time step
- g : float
filter g gain parameter.
- h : float
filter h gain parameter.
- Attributes:
- x : 1D np.array or scalar
filter state
- dx : 1D np.array or scalar
derivative of the filter state.
- x_prediction : 1D np.array or scalar
predicted filter state
- dx_prediction : 1D np.array or scalar
predicted derivative of the filter state.
- dt : scalar
time step
- g : float
filter g gain parameter.
- h : float
filter h gain parameter.
- y : np.array or scalar
residual (difference between measurement and prior)
- z : np.array or scalar
measurement passed into update()
References
[1] Labbe, “Kalman and Bayesian Filters in Python” http://rlabbe.github.io/Kalman-and-Bayesian-Filters-in-Python
[2] Brookner, “Tracking and Kalman Filters Made Easy”. John Wiley and Sons, 1998.
Examples
Create a basic filter for a scalar value with g=.8, h=.2. Initialize to 0, with a derivative (velocity) of 0.
>>> from filterpy.gh import GHFilter
>>> f = GHFilter(x=0., dx=0., dt=1., g=.8, h=.2)
Incorporate the measurement of 1
>>> f.update(z=1)
(0.8, 0.2)
Incorporate a measurement of 2 with g=1 and h=0.01
>>> f.update(z=2, g=1, h=0.01)
(2.0, 0.21000000000000002)
Create a filter with two independent variables.
>>> from numpy import array
>>> f = GHFilter(x=array([0, 1]), dx=array([0, 0]), dt=1, g=.8, h=.02)
and update with the measurements (2,4)
>>> f.update(array([2, 4]))
(array([ 1.6,  3.4]), array([ 0.04,  0.06]))
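The numbers above can be reproduced by hand from the classic g-h recurrence. The sketch below is illustrative pure Python (the helper name gh_update is an assumption, not part of filterpy's API); it reproduces the scalar example.

```python
def gh_update(x, dx, dt, g, h, z):
    """One predict/update cycle of a g-h filter (illustrative sketch)."""
    # predict: extrapolate the state with a constant-velocity model
    x_pred = x + dx * dt
    dx_pred = dx
    # update: blend the prediction with the measurement via gains g and h
    y = z - x_pred              # residual
    x = x_pred + g * y
    dx = dx_pred + h * y / dt
    return x, dx

x, dx = gh_update(x=0.0, dx=0.0, dt=1.0, g=0.8, h=0.2, z=1.0)  # (0.8, 0.2)
x, dx = gh_update(x, dx, dt=1.0, g=1.0, h=0.01, z=2.0)         # (2.0, ~0.21)
```

With g=1 the new estimate snaps exactly to the measurement, which is why the second call returns 2.0 for x.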
- update(z, g=None, h=None)
Performs the g-h filter predict and update step on the measurement z. Modifies the member variables listed below, and returns the state of x and dx as a tuple as a convenience.
Modified Members
- x
filtered state variable
- dx
derivative (velocity) of x
- residual
difference between the measurement and the prediction for x
- x_prediction
predicted value of x before incorporating the measurement z.
- dx_prediction
predicted value of the derivative of x before incorporating the measurement z.
- Parameters:
- zany
the measurement
- gscalar (optional)
Override the fixed self.g value for this update
- hscalar (optional)
Override the fixed self.h value for this update
- Returns:
- x filter output for x
- dx filter output for dx (derivative of x)
- batch_filter(data, save_predictions=False, saver=None)
Given a sequence of data, performs the g-h filter with fixed g and h. See update() if you need to vary g and/or h.
Uses self.x and self.dx to initialize the filter, but DOES NOT alter self.x and self.dx during execution, allowing you to use this class multiple times without resetting self.x and self.dx. I'm not sure how often you would need to do that, but the capability is there. More exactly, none of the class member variables are modified by this function, in distinct contrast to update(), which changes most of them.
- Parameters:
- data : list-like
contains the data to be filtered.
- save_predictions : boolean
If True, the predictions are saved and returned.
- saver : filterpy.common.Saver, optional
filterpy.common.Saver object. If provided, saver.save() will be called after every epoch.
- Returns:
- results : np.array shape (n+1, 2), where n=len(data)
contains the results of the filter: results[i, 0] is x and results[i, 1] is dx (derivative of x). The first entry is the initial values of x and dx as set by __init__.
- predictions : np.array shape (n,), optional
the predictions for each step in the filter. Only returned if save_predictions == True.
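A pure-Python sketch may make the result shape concrete. The function name gh_batch and its list-of-tuples return are illustrative assumptions, not filterpy's API (the real method returns an np.array of shape (n+1, 2)):

```python
def gh_batch(x, dx, dt, g, h, data):
    """Run a fixed-gain g-h filter over a sequence of measurements.

    Illustrative sketch: returns a list of (x, dx) pairs whose first
    entry is the initial state, mirroring batch_filter's (n+1, 2) shape.
    """
    results = [(x, dx)]
    for z in data:
        x_pred = x + dx * dt     # predict
        y = z - x_pred           # residual
        x = x_pred + g * y       # update state
        dx = dx + h * y / dt     # update derivative
        results.append((x, dx))
    return results

out = gh_batch(x=0.0, dx=0.0, dt=1.0, g=0.8, h=0.2, data=[1.0, 2.0])
# out[0] is the initial (x, dx); out[1] matches f.update(z=1) above
```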
- VRF_prediction()
Returns the Variance Reduction Factor of the prediction step of the filter. The VRF is the normalized variance for the filter, as given in the equation below.
\[VRF(\hat{x}_{n+1,n}) = \frac{VAR(\hat{x}_{n+1,n})}{\sigma^2_x}\]

References
Asquith, "Weight Selection in First Order Linear Filters", Report No. RG-TR-69-12, U.S. Army Missile Command, Redstone Arsenal, AL. November 24, 1970.
- VRF()
Returns the Variance Reduction Factor (VRF) of the state variable of the filter (x) and its derivatives (dx, ddx). The VRF is the normalized variance for the filter, as given in the equations below.
\[VRF(\hat{x}_{n,n}) = \frac{VAR(\hat{x}_{n,n})}{\sigma^2_x}\]
\[VRF(\hat{\dot{x}}_{n,n}) = \frac{VAR(\hat{\dot{x}}_{n,n})}{\sigma^2_x}\]
\[VRF(\hat{\ddot{x}}_{n,n}) = \frac{VAR(\hat{\ddot{x}}_{n,n})}{\sigma^2_x}\]

- Returns:
- vrf_x : VRF of the x state variable
- vrf_dx : VRF of the dx state variable (derivative of x)
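The definition above can also be checked numerically: filter unit-variance measurement noise around a constant truth and compare the steady-state variance of the filtered estimate to the measurement variance. The sketch below is an illustrative pure-Python experiment, not filterpy's implementation (the helper gh_update is an assumed name):

```python
import random

def gh_update(x, dx, dt, g, h, z):
    # illustrative g-h predict/update cycle
    x_pred = x + dx * dt
    y = z - x_pred
    return x_pred + g * y, dx + h * y / dt

# Track a constant truth (0.0) through unit-variance Gaussian noise and
# estimate VRF(x) = VAR(x_hat) / sigma_x^2 from the steady-state output.
random.seed(1)
x, dx = 0.0, 0.0
samples = []
for i in range(50000):
    x, dx = gh_update(x, dx, dt=1.0, g=0.8, h=0.2, z=random.gauss(0.0, 1.0))
    if i > 1000:                 # discard the initial transient
        samples.append(x)
mean = sum(samples) / len(samples)
vrf_x = sum((s - mean) ** 2 for s in samples) / len(samples)
# vrf_x estimates the variance reduction; it stays below 1 for these gains
```

Smaller g and h suppress more measurement noise (lower VRF) at the cost of slower response to real changes in the tracked value.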