Hi all, here's how to implement a position-and-velocity Kalman filter from scratch, in C++ish pseudocode, from the perspective of a developer who isn't a professional mathematician and doesn't understand the kind of symbology used in many Kalman articles. I wrote another article about a simpler Kalman filter as a primer on the subject; if you read it first, it will hopefully make this article easier to understand.

Firstly, we need to implement some matrix maths as a prerequisite. It might help to skim-read Wikipedia on the topic first.

```
// A 2-row column vector.
struct Vector {
    double a // aka Position or Price
    double b // aka Velocity
}
// A 2x2 matrix.
struct Matrix {
    double tl // Top left, eg row 1 column 1
    double tr // Top right, r1c2
    double bl // Bottom left, r2c1
    double br // Bottom right, r2c2
}
// vectorA + vectorB.
+(Vector lhs, Vector rhs) -> Vector {
    return Vector(
        a: lhs.a + rhs.a,
        b: lhs.b + rhs.b
    )
}
// vectorA - vectorB.
-(Vector lhs, Vector rhs) -> Vector {
    return Vector(
        a: lhs.a - rhs.a,
        b: lhs.b - rhs.b
    )
}
// matrixA + matrixB.
+(Matrix l, Matrix r) -> Matrix {
    return Matrix(
        tl: l.tl + r.tl, tr: l.tr + r.tr,
        bl: l.bl + r.bl, br: l.br + r.br
    )
}
// matrixA - matrixB.
-(Matrix l, Matrix r) -> Matrix {
    return Matrix(
        tl: l.tl - r.tl, tr: l.tr - r.tr,
        bl: l.bl - r.bl, br: l.br - r.br
    )
}
// matrixA * matrixB (rows of A dotted with columns of B).
*(Matrix l, Matrix r) -> Matrix {
    return Matrix(
        tl: l.tl * r.tl + l.tr * r.bl,
        tr: l.tl * r.tr + l.tr * r.br,
        bl: l.bl * r.tl + l.br * r.bl,
        br: l.bl * r.tr + l.br * r.br
    )
}
// vectorOutput = matrixA * vectorB.
*(Matrix m, Vector v) -> Vector {
    return Vector(
        a: m.tl * v.a + m.tr * v.b,
        b: m.bl * v.a + m.br * v.b
    )
}
// Invert a matrix (the matrix version of 1 / m): swap the diagonal,
// negate the off-diagonal, and divide everything by the determinant.
invert(Matrix m) -> Matrix {
    double det = m.tl * m.br - m.tr * m.bl
    return Matrix(
        tl: m.br / det, tr: -m.tr / det,
        bl: -m.bl / det, br: m.tl / det
    )
}
// Transpose a matrix (mirror it diagonally).
transpose(Matrix m) -> Matrix {
    return Matrix(
        tl: m.tl, tr: m.bl,
        bl: m.tr, br: m.br
    )
}
```
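A quick way to sanity-check these helpers is to confirm that a matrix times its inverse comes out as (roughly) the identity; in particular, a 2x2 inverse needs the 1/determinant factor. Here's a compilable C++ sketch (the `Mat2` and `mul` names are mine, not from the pseudocode above):

```cpp
#include <cassert>
#include <cmath>

// 2x2 matrix: tl tr / bl br, matching the pseudocode layout above.
struct Mat2 {
    double tl, tr, bl, br;
};

// Multiply two 2x2 matrices (rows of l dotted with columns of r).
Mat2 mul(const Mat2& l, const Mat2& r) {
    return Mat2{
        l.tl * r.tl + l.tr * r.bl, l.tl * r.tr + l.tr * r.br,
        l.bl * r.tl + l.br * r.bl, l.bl * r.tr + l.br * r.br,
    };
}

// Invert a 2x2 matrix: swap the diagonal, negate the off-diagonal,
// and divide everything by the determinant (tl*br - tr*bl).
Mat2 invert(const Mat2& m) {
    double det = m.tl * m.br - m.tr * m.bl;
    return Mat2{
        m.br / det, -m.tr / det,
        -m.bl / det, m.tl / det,
    };
}
```

So for any invertible `m`, `mul(m, invert(m))` should give tl and br near 1, and tr and bl near 0.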

Next, you'll need the 'external noise covariance matrix'. This is usually referred to as Q in Kalman literature:

```
const double Q_POSITION_VARIANCE = 0.1
const double Q_VELOCITY_VARIANCE = 0.1
const double Q_POSITION_VELOCITY_COVARIANCE = 0.1
const Matrix Q_EXTERNAL_NOISE = {
    tl: Q_POSITION_VARIANCE, tr: Q_POSITION_VELOCITY_COVARIANCE,
    bl: Q_POSITION_VELOCITY_COVARIANCE, br: Q_VELOCITY_VARIANCE,
}
```

Next, you'll need the 'measurement noise covariance matrix', aka R in Kalman articles:

```
const double R_POSITION_VARIANCE = 0.1
const double R_VELOCITY_VARIANCE = 0.1
const double R_POSITION_VELOCITY_COVARIANCE = 0.1
const Matrix R_MEASUREMENT_NOISE = {
    tl: R_POSITION_VARIANCE, tr: R_POSITION_VELOCITY_COVARIANCE,
    bl: R_POSITION_VELOCITY_COVARIANCE, br: R_VELOCITY_VARIANCE,
}
```

How do you select good Q and R values? I'm not an expert, but here are my two tips:

- If you're filtering position and velocity sensors that have a normal / gaussian error, and they really are separate sensors (as opposed to faking velocity via nowPosition - lastPosition), use the variance and covariance of the two sensors for R, and perhaps experiment with Q until it works the way you'd like. Variance = average[residual^2], covariance = average[positionResidual * velocityResidual], where residual = measuredValue - realValue. If you don't have real values, you could try substituting some kind of moving average.
- If you're filtering non-gaussian data (eg stock prices), or your velocity is faked, simply experiment with Q and R until you're happy with the filter performance. I'd write a program to generate a million random Q+R pairs, then choose the pair that predicts the 'next' data points best. That is a (whopping) exercise for the reader, sorry!
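To make the first tip concrete, here's a C++ sketch of the residual statistics it describes. The function and struct names are mine, and it assumes you already have matching arrays of measured and 'real' (or moving-average) values:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// The three numbers that fill in the R matrix.
struct RStats {
    double posVariance, velVariance, covariance;
};

// Compute R's entries from sensor residuals, as in the first tip:
// residual = measured - real, variance = average[residual^2],
// covariance = average[positionResidual * velocityResidual].
RStats measurementNoiseStats(const std::vector<double>& measuredPos,
                             const std::vector<double>& realPos,
                             const std::vector<double>& measuredVel,
                             const std::vector<double>& realVel) {
    size_t n = measuredPos.size();
    RStats s{0, 0, 0};
    for (size_t i = 0; i < n; ++i) {
        double rp = measuredPos[i] - realPos[i]; // Position residual.
        double rv = measuredVel[i] - realVel[i]; // Velocity residual.
        s.posVariance += rp * rp;
        s.velVariance += rv * rv;
        s.covariance += rp * rv;
    }
    s.posVariance /= n;
    s.velVariance /= n;
    s.covariance /= n;
    return s;
}
```

The three results slot straight into R's tl, br, and tr/bl entries respectively.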

Next, we need our 'state transition matrix', aka F in Kalman articles. This is a constant matrix that, when multiplied by the current state vector (position + velocity), gives us our expected next state vector: new position = old position + velocity, velocity unchanged. For a timestep of 1, it looks like:

```
const Matrix F_STATE_TRANSITION = {
    tl: 1, tr: 1,
    bl: 0, br: 1,
}
```

Maybe the above needs more explaining. Say our old state is position 100m, velocity 10m/s. Multiplying this by F works out like so:

```
Normal matrix * vector formula:
a: mat.tl * vec.a + mat.tr * vec.b,
b: mat.bl * vec.a + mat.br * vec.b
```

AKA:

```
new position: f.tl * position + f.tr * velocity
new velocity: f.bl * position + f.br * velocity
```

AKA:

```
new position: 1 * position + 1 * velocity
new velocity: 0 * position + 1 * velocity
```

AKA:

```
new position: position + velocity
new velocity: velocity
```

AKA:

```
new position: 110m
new velocity: 10m/s
```

In other words, for a timestep of 1 second, F simply adds the velocity-per-second to the current position, and assumes no friction hence velocity stays constant.
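If your samples aren't exactly one second apart, the only change is F's top-right entry, which becomes the timestep. Here's a small C++ sketch of the prediction applied directly (the `State`/`predict` names are mine):

```cpp
#include <cassert>
#include <cmath>

// The state vector: position and velocity, aka a and b above.
struct State {
    double position, velocity;
};

// Apply the state transition F = [1 dt; 0 1] for a timestep dt (seconds):
// new position = position + velocity * dt, velocity stays constant.
State predict(const State& x, double dt) {
    return State{
        x.position + x.velocity * dt, // 1 * pos + dt * vel
        x.velocity                    // 0 * pos + 1 * vel
    };
}
```

With dt = 1 this reproduces the 100m + 10m/s = 110m example above.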

Next, we need the 'state error' (error covariance), aka P in Kalman articles. I find it usually works fine to start filtering with every entry as 1. (Perhaps this should be scaled to match your typical values.)

```
Matrix pStateError = {
    tl: 1, tr: 1,
    bl: 1, br: 1,
}
```

Next, we need our 'current state', aka X in Kalman articles. If, like me, you're faking the velocity, start the filter with this set using the first two measurements:

```
Vector xCurrentState = {
    a: data[1],
    b: data[1] - data[0], // Faux velocity.
}
```

Phew! Now for the interesting bit. For each new data point/measurement, we first predict what it'll be, then blend that prediction with the measurement to produce an estimate. This estimate is the filter's output. A 'Kalman gain' (known as K in Kalman articles) is calculated and used to weight the prediction vs the measurement when blending them to form the estimate:

```
// Make predictions.
Vector xPredicted = F_STATE_TRANSITION * xCurrentState
Matrix pPredicted = F_STATE_TRANSITION * pStateError * transpose(F_STATE_TRANSITION) + Q_EXTERNAL_NOISE
// Update it with the measurement.
Vector zMeasurement = Vector(
    a: data[current_index], // Position.
    b: data[current_index] - data[previous_index] // Faux velocity.
)
Matrix kKalmanGain = pPredicted * invert(pPredicted + R_MEASUREMENT_NOISE)
Vector xEstimate = xPredicted + kKalmanGain * (zMeasurement - xPredicted)
Matrix pEstimate = pPredicted - kKalmanGain * pPredicted
// Store for the next iteration.
pStateError = pEstimate
xCurrentState = xEstimate // This is the output!
```
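Putting it all together, here's a compilable C++ sketch of the whole loop above, run over a series of position measurements with faked velocity. The Q, R, and initial-P values are the same placeholder 0.1s and 1s used above, which you'd tune for real data:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { double a, b; };           // Position, velocity.
struct Mat2 { double tl, tr, bl, br; }; // 2x2, row-major corners.

Vec2 operator+(Vec2 l, Vec2 r) { return {l.a + r.a, l.b + r.b}; }
Vec2 operator-(Vec2 l, Vec2 r) { return {l.a - r.a, l.b - r.b}; }
Mat2 operator+(Mat2 l, Mat2 r) { return {l.tl + r.tl, l.tr + r.tr, l.bl + r.bl, l.br + r.br}; }
Mat2 operator-(Mat2 l, Mat2 r) { return {l.tl - r.tl, l.tr - r.tr, l.bl - r.bl, l.br - r.br}; }
Mat2 operator*(Mat2 l, Mat2 r) {
    return {l.tl * r.tl + l.tr * r.bl, l.tl * r.tr + l.tr * r.br,
            l.bl * r.tl + l.br * r.bl, l.bl * r.tr + l.br * r.br};
}
Vec2 operator*(Mat2 m, Vec2 v) { return {m.tl * v.a + m.tr * v.b, m.bl * v.a + m.br * v.b}; }
Mat2 transpose(Mat2 m) { return {m.tl, m.bl, m.tr, m.br}; }
Mat2 invert(Mat2 m) { // Adjugate over determinant.
    double det = m.tl * m.br - m.tr * m.bl;
    return {m.br / det, -m.tr / det, -m.bl / det, m.tl / det};
}

// Filter a series of position measurements, faking velocity from
// successive differences. Q, R, and the initial P are placeholders to tune.
std::vector<double> filterPositions(const std::vector<double>& data) {
    const Mat2 F{1, 1, 0, 1};                  // Timestep of 1 between samples.
    const Mat2 Q{0.1, 0.1, 0.1, 0.1};          // External noise (tune me).
    const Mat2 R{0.1, 0.1, 0.1, 0.1};          // Measurement noise (tune me).
    Mat2 p{1, 1, 1, 1};                        // Initial state error.
    Vec2 x{data[1], data[1] - data[0]};        // Seed from first two samples.
    std::vector<double> out{data[0], data[1]}; // Pass the seeds through as-is.
    for (size_t i = 2; i < data.size(); ++i) {
        // Predict.
        Vec2 xPredicted = F * x;
        Mat2 pPredicted = F * p * transpose(F) + Q;
        // Update with the measurement (position + faux velocity).
        Vec2 z{data[i], data[i] - data[i - 1]};
        Mat2 k = pPredicted * invert(pPredicted + R);
        x = xPredicted + k * (z - xPredicted);
        p = pPredicted - k * pPredicted;
        out.push_back(x.a); // The estimated position is the output.
    }
    return out;
}
```

On perfectly linear data the prediction always matches the measurement, so the output tracks the input exactly; on noisy data it smooths according to Q and R.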

Thanks for reading, I hope this helped, God bless, and have a nice week :)
