Navigation, SLAM, and SfM

[Figure: Balbianello]

In [1]:
# Install pre-release version of gtsam and gtbook to enable bleeding edge code
# %pip install -q --index-url https://test.pypi.org/simple gtsam==4.2a5
# %pip install -q --index-url https://test.pypi.org/simple gtbook==0.0.17
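
The cells that follow rely on a handful of imports and small notebook helpers. Here is a minimal setup sketch: the imports are standard, but ROW, show_discrete, and reverse are stand-ins whose exact definitions in the original notebook are assumptions.

In [ ]:
import gtsam
import numpy as np
import plotly.express as px
from gtbook.discrete import Variables
from gtbook.display import pretty, show

VARIABLES = Variables()  # registry that discrete_series below adds to

def ROW(items):
    """Display a sequence of pretty() results one after another (assumed helper;
    the original notebook likely renders them side by side)."""
    from IPython.display import display
    for item in items:
        display(item)

def show_discrete(bayes_net, hints=None):
    """Render a discrete Bayes net as a graph (assumed wrapper around gtbook's show)."""
    return show(bayes_net, hints=hints or {})

def reverse(bayes_net):
    """Rebuild a DiscreteBayesNet with its conditionals in reverse order (assumed helper).
    GTSAM's ancestral sampler visits conditionals back-to-front, so a chain built
    with the prior first must be reversed before calling sample()."""
    reversed_net = gtsam.DiscreteBayesNet()
    for i in reversed(range(bayes_net.size())):
        reversed_net.push_back(bayes_net.at(i))
    return reversed_net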

Overview

  • Probabilistic Inference
  • Navigation/Localization in 1D
  • Nonlinear Optimization (sensor networks example)
  • Optimization on Manifolds (Pose SLAM)
  • Optimization with Landmarks (SLAM, Visual SLAM, and SfM)
  • Neural Radiance Fields


Probabilistic Inference

  • Markov Chains
  • Dynamic Bayes Nets
  • Factor Graphs
  • Inference Algorithms
  • Applications

Markov Chains

Let's use the GTSAM library; its companion gtbook helpers have built-in support for creating series of variables over time.

In [5]:
# Try varying N (here 5). I tried up to 120, but graphics are less useful for large N.
N = 5
indices = range(1, N+1)
X = VARIABLES.discrete_series(
    character='X', indices=indices, domain=["A", "B", "C"])
In [6]:
VARIABLES
Out[6]:
Variable   Domain
X1         A, B, C
X2         A, B, C
X3         A, B, C
X4         A, B, C
X5         A, B, C

A Markov chain is specified by a prior $P(X_1)$ on the first state $X_1$ and the transition probability $P(X_{k+1}|X_k)$.

In [7]:
# GTSAM specifies discrete distributions as "a/b/c" strings of unnormalized weights.
markov_chain = gtsam.DiscreteBayesNet()
prior_pmf = "3/1/1"  # P(X1) over A, B, C: normalizes to 0.6, 0.2, 0.2
transition_cpt = "3/2/0 0/3/2 0/0/1"  # one row per parent value, each row normalized
markov_chain.add(X[1], prior_pmf)
for k in indices[:-1]:
    markov_chain.add(X[k+1], [X[k]], transition_cpt)
In [8]:
ROW(pretty(markov_chain.at(j)) for j in range(N))
Out[8]:

P(X1):

X1   value
A    0.6
B    0.2
C    0.2

P(X2|X1):

X1   A     B     C
A    0.6   0.4   0
B    0     0.6   0.4
C    0     0     1

P(X3|X2):

X2   A     B     C
A    0.6   0.4   0
B    0     0.6   0.4
C    0     0     1

P(X4|X3):

X3   A     B     C
A    0.6   0.4   0
B    0     0.6   0.4
C    0     0     1

P(X5|X4):

X4   A     B     C
A    0.6   0.4   0
B    0     0.6   0.4
C    0     0     1

We can represent a Markov chain graphically using the language of Bayes nets. The Bayes net is generative, and encodes $$P(\mathcal{X}) = P(X_1) \prod_{k=2}^{N} P(X_k|X_{k-1})$$

In [9]:
show_discrete(markov_chain, hints={'X': 1})
Out[9]:
[Figure: the Markov chain as a Bayes net, X1 → X2 → X3 → X4 → X5]
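
To make the factorization concrete, here is the probability of one particular sequence, with the numbers read directly off the tables above (a plain-arithmetic sketch, not from the original notebook):

In [ ]:
# P(X1=A) * P(X2=A|X1=A) * P(X3=B|X2=A) * P(X4=B|X3=B) * P(X5=C|X4=B)
p_sequence = 0.6 * 0.6 * 0.4 * 0.6 * 0.4
print(f"P(A,A,B,B,C) = {p_sequence:.5f}")  # prints 0.03456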

We can then generate joint samples via ancestral sampling: draw $X_1$ from the prior, then each successive state from $P(X_{k+1}|X_k)$ given the value just sampled. For example, here are three samples:

In [11]:
# GTSAM's sampler traverses conditionals back-to-front, so reverse the chain first
# (reverse is the assumed helper from the setup cell above).
reverse_markov_chain = reverse(markov_chain)
ROW(pretty(reverse_markov_chain.sample()) for j in range(3))
Out[11]:
Variable   value
X1         A
X2         B
X3         C
X4         C
X5         C

Variable   value
X1         A
X2         A
X3         A
X4         B
X5         B

Variable   value
X1         A
X2         B
X3         C
X4         C
X5         C
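
As a sanity check on these samples, we can simulate many chains with plain numpy, using the same prior and transition numbers but independent of GTSAM's sampler, and look at the empirical distribution of $X_5$ (a sketch, not part of the original notebook):

In [ ]:
# Simulate the chain directly: states 0, 1, 2 stand for A, B, C.
rng = np.random.default_rng(42)
prior = np.array([0.6, 0.2, 0.2])
rows = np.array([[0.6, 0.4, 0.0],   # from A
                 [0.0, 0.6, 0.4],   # from B
                 [0.0, 0.0, 1.0]])  # from C (absorbing)
finals = []
for _ in range(10000):
    state = rng.choice(3, p=prior)
    for _ in range(4):  # X1 -> X5 is four transitions
        state = rng.choice(3, p=rows[state])
    finals.append(state)
print(dict(zip("ABC", np.bincount(finals, minlength=3) / len(finals))))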

The stationary distribution of a Markov chain (the quantity behind PageRank) can be computed using the power method: start from any initial distribution $\pi_0$ and repeatedly apply the transition matrix, so that $\pi_{k+1} = T\pi_k$ converges to a fixed point satisfying $\pi = T\pi$.

In [12]:
# Transpose so columns sum to 1; then pi_{k+1} = T @ pi_k maps distributions to distributions.
T = np.array([[0.75, 0.25, 0], [0, 0.75, 0.25], [0, 0, 1]]).transpose()
pi0 = [1, 1, 1]; pi = [np.array(pi0) / sum(pi0)]  # start from the uniform distribution
for _ in range(20):  # power iteration
    pi.append(T @ pi[-1])
px.line(np.array(pi))  # each component over iterations; C is absorbing, so pi -> (0, 0, 1)
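
As a cross-check (a numpy sketch, not part of the original notebook), the stationary distribution is exactly the eigenvector of $T$ with eigenvalue 1, normalized to sum to one:

In [ ]:
# The fixed point satisfies T @ pi = pi, i.e., pi is the eigenvector of T
# for eigenvalue 1, normalized so its entries sum to 1.
eigenvalues, eigenvectors = np.linalg.eig(T)
idx = np.argmin(np.abs(eigenvalues - 1.0))
stationary = np.real(eigenvectors[:, idx])
stationary /= stationary.sum()
print(stationary)  # (0, 0, 1): state C is absorbing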