gtsam 4.0.0
# gtsam::DoglegOptimizerImpl Struct Reference

## Detailed Description

This struct contains the implementation of the Dogleg algorithm.

It is used by DoglegOptimizer and can be used to easily put together custom versions of Dogleg. Each function is well-documented and unit-tested. The notation here matches that in doc/trustregion.pdf; see that file for further explanation of the computations performed by this class.

## Static Public Member Functions

template<class M , class F , class VALUES >
static IterationResult Iterate (double delta, TrustRegionAdaptationMode mode, const VectorValues &dx_u, const VectorValues &dx_n, const M &Rd, const F &f, const VALUES &x0, const double f_error, const bool verbose=false)
Compute the update point for one iteration of the Dogleg algorithm, given an initial trust region radius $$\delta$$.

static VectorValues ComputeDoglegPoint (double delta, const VectorValues &dx_u, const VectorValues &dx_n, const bool verbose=false)
Compute the dogleg point given a trust region radius $$\delta$$.

static VectorValues ComputeBlend (double delta, const VectorValues &x_u, const VectorValues &x_n, const bool verbose=false)
Compute the point on the line between the steepest descent point and the Newton's method point intersecting the trust region boundary.

## Public Types

enum  TrustRegionAdaptationMode { SEARCH_EACH_ITERATION, SEARCH_REDUCE_ONLY, ONE_STEP_PER_ITERATION }
Specifies how the trust region is adapted at each Dogleg iteration.

## Classes

struct  IterationResult

## Member Enumeration Documentation

## ◆ TrustRegionAdaptationMode

Specifies how the trust region is adapted at each Dogleg iteration.

If this is SEARCH_EACH_ITERATION, the trust region radius may be increased several times within a single iteration, until increasing it further no longer decreases the error. This generally makes each iteration slower, but can produce larger steps in early iterations.

If this is ONE_STEP_PER_ITERATION, the step taken in one iteration never exceeds the current trust region radius, but the radius is increased for the next iteration if the error decrease is good. Iterations are generally faster, though several iterations may pass before the radius grows to its optimal value.

Generally ONE_STEP_PER_ITERATION should be used; it corresponds to most published descriptions of the algorithm.
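The radius adaptation described above can be sketched numerically. This is an illustrative sketch, not GTSAM's code: the thresholds 0.75/0.25 and the grow/shrink factors are common trust-region choices and are assumptions here, and `rho` is the standard gain ratio (actual error decrease divided by the decrease predicted by the quadratic model).

```python
def adapt_radius(delta, rho, step_norm):
    """Illustrative trust-region radius update after one Dogleg step.

    rho: gain ratio = (actual error decrease) / (model-predicted decrease).
    The thresholds and factors are common choices, not necessarily
    the exact constants GTSAM uses.
    """
    if rho > 0.75 and abs(step_norm - delta) < 1e-12:
        return delta * 2.0   # good model and the step hit the boundary: grow
    if rho < 0.25:
        return delta * 0.5   # poor model: shrink
    return delta             # otherwise keep the radius unchanged
```

In ONE_STEP_PER_ITERATION mode this update runs once per iteration; in SEARCH_EACH_ITERATION the radius may instead be grown repeatedly within one iteration while the error keeps decreasing.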

## ◆ ComputeBlend()

static VectorValues gtsam::DoglegOptimizerImpl::ComputeBlend(double delta, const VectorValues &x_u, const VectorValues &x_n, const bool verbose = false)

Compute the point on the line between the steepest descent point and the Newton's method point intersecting the trust region boundary.

Mathematically, computes $$\tau$$ such that $$0<\tau<1$$ and $$\| (1-\tau) x_u + \tau x_n \| = \delta$$, where $$\delta$$ is the trust region radius, and returns the blended point $$(1-\tau) x_u + \tau x_n$$.

Parameters
• delta Trust region radius
• x_u Steepest descent minimizer
• x_n Newton's method minimizer
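As a numeric sketch (not the GTSAM API, which operates on VectorValues), the blend reduces to a scalar quadratic in $$\tau$$: with $$d = x_n - x_u$$, the condition $$\|x_u + \tau d\| = \delta$$ expands to $$(d^T d)\tau^2 + 2(x_u^T d)\tau + (x_u^T x_u - \delta^2) = 0$$, whose larger root lies in $$(0,1)$$ when $$\|x_u\| < \delta < \|x_n\|$$:

```python
import numpy as np

def compute_blend(delta, x_u, x_n):
    """Point on the segment from x_u to x_n with norm exactly delta.

    Assumes ||x_u|| < delta < ||x_n||, so the segment crosses the boundary.
    """
    d = x_n - x_u
    a = d @ d
    b = 2.0 * (x_u @ d)
    c = x_u @ x_u - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)  # larger root, in (0, 1)
    return (1.0 - tau) * x_u + tau * x_n
```

For example, with $$x_u = (1,0)$$, $$x_n = (0,3)$$ and $$\delta = 2$$, the returned point has norm exactly 2.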

## ◆ ComputeDoglegPoint()

static VectorValues gtsam::DoglegOptimizerImpl::ComputeDoglegPoint(double delta, const VectorValues &dx_u, const VectorValues &dx_n, const bool verbose = false)

Compute the dogleg point given a trust region radius $$\delta$$.

The dogleg point is the intersection between the dogleg path and the trust region boundary, see doc/trustregion.pdf for more details.

The update is computed using a quadratic approximation $$M(\delta x)$$ of the original nonlinear error function (a NonlinearFactorGraph) $$f(x)$$. The quadratic approximation is represented as a GaussianBayesNet, obtained by eliminating the GaussianFactorGraph that results from linearizing the nonlinear factor graph $$f(x)$$. Thus, $$M(\delta x)$$ is

$M(\delta x) = (R \delta x - d)^T (R \delta x - d)$

where $$R$$ and $$d$$ together form a Bayes' net: $$R$$ is upper-triangular and $$d$$ is a vector, represented in GTSAM as a GaussianBayesNet containing GaussianConditional objects.

Parameters
• delta The trust region radius
• dx_u The steepest descent point, i.e. the Cauchy point
• dx_n The Gauss-Newton point
Returns
The dogleg point $$\delta x_d$$
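A minimal numeric sketch of the standard three-case dogleg selection (assumed from the usual dogleg method as in doc/trustregion.pdf; plain NumPy vectors stand in for VectorValues):

```python
import numpy as np

def compute_dogleg_point(delta, dx_u, dx_n):
    """Intersection of the dogleg path with the trust-region boundary."""
    # Case 1: even the Cauchy point lies outside the region: truncate it.
    norm_u = np.linalg.norm(dx_u)
    if norm_u >= delta:
        return (delta / norm_u) * dx_u
    # Case 2: the Gauss-Newton point is inside the region: take the full step.
    if np.linalg.norm(dx_n) <= delta:
        return dx_n
    # Case 3: blend along the leg from dx_u to dx_n until it meets the boundary.
    d = dx_n - dx_u
    a, b, c = d @ d, 2.0 * (dx_u @ d), dx_u @ dx_u - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return (1.0 - tau) * dx_u + tau * dx_n
```

Only in case 3 does the blend computation of ComputeBlend come into play; cases 1 and 2 are resolved by norm comparisons alone.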

## ◆ Iterate()

template<class M , class F , class VALUES >
static DoglegOptimizerImpl::IterationResult gtsam::DoglegOptimizerImpl::Iterate(double delta, TrustRegionAdaptationMode mode, const VectorValues &dx_u, const VectorValues &dx_n, const M &Rd, const F &f, const VALUES &x0, const double f_error, const bool verbose = false)

Compute the update point for one iteration of the Dogleg algorithm, given an initial trust region radius $$\delta$$.

The trust region radius is adapted based on the error of the NonlinearFactorGraph $$f(x)$$ and on the adaptation mode mode.

The update is computed using a quadratic approximation $$M(\delta x)$$ of the original nonlinear error function (a NonlinearFactorGraph) $$f(x)$$. The quadratic approximation is represented as a GaussianBayesNet, obtained by eliminating the GaussianFactorGraph that results from linearizing the nonlinear factor graph $$f(x)$$. Thus, $$M(\delta x)$$ is

$M(\delta x) = (R \delta x - d)^T (R \delta x - d)$

where $$R$$ and $$d$$ together form a Bayes' net or Bayes' tree: $$R$$ is upper-triangular and $$d$$ is a vector, represented in GTSAM as a BayesNet<GaussianConditional> (GaussianBayesNet) or a BayesTree<GaussianConditional>, containing GaussianConditional objects.

Template Parameters
• M The type of the Bayes' net or tree, currently either GaussianBayesNet or BayesTree.
• F For normal usage this will be NonlinearFactorGraph.
• VALUES The Values or TupleValues type to pass to F::error() to evaluate the error function.
Parameters
• delta The initial trust region radius.
• mode See DoglegOptimizerImpl::TrustRegionAdaptationMode.
• Rd The Bayes' net or tree as described above.
• f The original nonlinear factor graph with which to evaluate the accuracy of $$M(\delta x)$$ in order to adjust $$\delta$$.
• x0 The linearization point about which the Bayes' net was created.
• f_error The result of f.error(x0).
Returns
An IterationResult containing the new delta, the linear update dx_d, and the resulting nonlinear error f_error.
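The flow of one iteration can be sketched as follows. This is a simplified, hypothetical reimplementation in NumPy, not the GTSAM code: `f` and `M` are plain callables standing in for the NonlinearFactorGraph error and the quadratic model $$M(\delta x)$$, and the radius update uses illustrative ONE_STEP_PER_ITERATION-style constants.

```python
import numpy as np

def dogleg_iterate(delta, dx_u, dx_n, f, M, x0, f_error):
    """One sketched Dogleg iteration: step, gain ratio, radius adaptation.

    Returns (new_delta, dx_d, new_f_error). f and M are callables
    standing in for the nonlinear error and its quadratic model.
    """
    # --- choose the dogleg step for the current radius ---
    norm_u = np.linalg.norm(dx_u)
    if norm_u >= delta:
        dx_d = (delta / norm_u) * dx_u           # truncated Cauchy step
    elif np.linalg.norm(dx_n) <= delta:
        dx_d = dx_n                              # full Gauss-Newton step
    else:
        d = dx_n - dx_u                          # blend to the boundary
        a, b, c = d @ d, 2.0 * (dx_u @ d), dx_u @ dx_u - delta**2
        tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
        dx_d = (1.0 - tau) * dx_u + tau * dx_n
    # --- gain ratio: actual vs. model-predicted error decrease ---
    f_new = f(x0 + dx_d)
    predicted = M(np.zeros_like(dx_d)) - M(dx_d)
    rho = (f_error - f_new) / predicted if predicted > 0 else -1.0
    # --- adapt the radius for the next iteration (illustrative constants) ---
    if rho > 0.75:
        delta *= 2.0
    elif rho < 0.25:
        delta *= 0.5
    accepted = rho > 0
    return delta, dx_d, (f_new if accepted else f_error)
```

When the quadratic model predicts the nonlinear error perfectly (rho near 1), the radius grows and the step is accepted; when the model is poor, the radius shrinks and the step may be rejected.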

The documentation for this struct was generated from the following files:
• gtsam/nonlinear/DoglegOptimizerImpl.h
• gtsam/nonlinear/DoglegOptimizerImpl.cpp