feat(3rdparty): add eigen and ceres

John Zhao
2019-01-03 16:25:18 +08:00
parent c6fd9db827
commit 6773d8eb7a
747 changed files with 375754 additions and 1 deletion


@@ -0,0 +1,19 @@
find_package(Sphinx REQUIRED)
# HTML output directory
set(SPHINX_HTML_DIR "${CMAKE_BINARY_DIR}/docs/html")
# Install documentation
install(DIRECTORY ${SPHINX_HTML_DIR}
DESTINATION share/doc/ceres
COMPONENT Doc
PATTERN "${SPHINX_HTML_DIR}/*")
# Building using 'make_docs.py' python script
add_custom_target(ceres_docs ALL
python
"${CMAKE_SOURCE_DIR}/scripts/make_docs.py"
"${CMAKE_SOURCE_DIR}"
"${CMAKE_BINARY_DIR}/docs"
"${SPHINX_EXECUTABLE}"
COMMENT "Building HTML documentation with Sphinx")


@@ -0,0 +1,13 @@
{% extends "!layout.html" %}
{% block footer %}
{{ super() }}
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-49769510-1', 'ceres-solver.org');
ga('send', 'pageview');
</script>
{% endblock %}


@@ -0,0 +1,12 @@
.. _chapter-api:
=============
API Reference
=============
.. toctree::
:maxdepth: 3
nnls_modeling
nnls_solving
gradient_solver


@@ -0,0 +1,128 @@
.. _sec-bibliography:
============
Bibliography
============
.. [Agarwal] S. Agarwal, N. Snavely, S. M. Seitz and R. Szeliski,
**Bundle Adjustment in the Large**, *Proceedings of the European
Conference on Computer Vision*, pp. 29--42, 2010.
.. [Bjorck] A. Bjorck, **Numerical Methods for Least Squares
Problems**, SIAM, 1996.
.. [Brown] D. C. Brown, **A solution to the general problem of
multiple station analytical stereo triangulation**, Technical
Report 43, Patrick Airforce Base, Florida, 1958.
.. [ByrdNocedal] R. H. Byrd, J. Nocedal, R. B. Schnabel,
**Representations of Quasi-Newton Matrices and their use in Limited
Memory Methods**, *Mathematical Programming* 63(4):129-156, 1994.
.. [ByrdSchnabel] R.H. Byrd, R.B. Schnabel, and G.A. Shultz, **Approximate
solution of the trust region problem by minimization over
two dimensional subspaces**, *Mathematical Programming*,
40(1):247-263, 1988.
.. [Chen] Y. Chen, T. A. Davis, W. W. Hager, and
S. Rajamanickam, **Algorithm 887: CHOLMOD, Supernodal Sparse
Cholesky Factorization and Update/Downdate**, *TOMS*, 35(3), 2008.
.. [Conn] A.R. Conn, N.I.M. Gould, and P.L. Toint, **Trust region
methods**, *Society for Industrial Mathematics*, 2000.
.. [GolubPereyra] G.H. Golub and V. Pereyra, **The differentiation of
pseudo-inverses and nonlinear least squares problems whose
variables separate**, *SIAM Journal on Numerical Analysis*,
10(2):413-432, 1973.
.. [HartleyZisserman] R.I. Hartley & A. Zisserman, **Multiple View
Geometry in Computer Vision**, Cambridge University Press, 2004.
.. [KanataniMorris] K. Kanatani and D. D. Morris, **Gauges and gauge
transformations for uncertainty description of geometric structure
with indeterminacy**, *IEEE Transactions on Information Theory*
47(5):2017-2028, 2001.
.. [Keys] R. G. Keys, **Cubic convolution interpolation for digital
image processing**, *IEEE Trans. on Acoustics, Speech, and Signal
Processing*, 29(6), 1981.
.. [KushalAgarwal] A. Kushal and S. Agarwal, **Visibility based
preconditioning for bundle adjustment**, *In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition*, 2012.
.. [Kanzow] C. Kanzow, N. Yamashita and M. Fukushima,
**Levenberg-Marquardt methods with strong local convergence
properties for solving nonlinear equations with convex
constraints**, *Journal of Computational and Applied Mathematics*,
177(2):375-397, 2005.
.. [Levenberg] K. Levenberg, **A method for the solution of certain
nonlinear problems in least squares**, *Quart. Appl. Math*,
2(2):164-168, 1944.
.. [LiSaad] Na Li and Y. Saad, **MIQR: A multilevel incomplete QR
preconditioner for large sparse least squares problems**, *SIAM
Journal on Matrix Analysis and Applications*, 28(2):524-550, 2007.
.. [Madsen] K. Madsen, H.B. Nielsen, and O. Tingleff, **Methods for
nonlinear least squares problems**, 2004.
.. [Mandel] J. Mandel, **On block diagonal and Schur complement
preconditioning**, *Numer. Math.*, 58(1):79-93, 1990.
.. [Marquardt] D.W. Marquardt, **An algorithm for least squares
estimation of nonlinear parameters**, *J. SIAM*, 11(2):431-441,
1963.
.. [Mathew] T.P.A. Mathew, **Domain decomposition methods for the
numerical solution of partial differential equations**, Springer
Verlag, 2008.
.. [NashSofer] S.G. Nash and A. Sofer, **Assessing a search direction
within a truncated Newton method**, *Operations Research Letters*,
9(4):219-221, 1990.
.. [Nocedal] J. Nocedal, **Updating Quasi-Newton Matrices with Limited
Storage**, *Mathematics of Computation*, 35(151): 773--782, 1980.
.. [NocedalWright] J. Nocedal & S. Wright, **Numerical Optimization**,
Springer, 2004.
.. [Oren] S. S. Oren, **Self-scaling Variable Metric (SSVM) Algorithms
Part II: Implementation and Experiments**, Management Science,
20(5), 863-874, 1974.
.. [Ridders] C. J. F. Ridders, **Accurate computation of F'(x) and
F'(x) F"(x)**, Advances in Engineering Software 4(2), 75-76, 1978.
.. [RuheWedin] A. Ruhe and P.Å. Wedin, **Algorithms for separable
nonlinear least squares problems**, *SIAM Review*, 22(3):318-337,
1980.
.. [Saad] Y. Saad, **Iterative methods for sparse linear
systems**, SIAM, 2003.
.. [Stigler] S. M. Stigler, **Gauss and the invention of least
squares**, *The Annals of Statistics*, 9(3):465-474, 1981.
.. [TenenbaumDirector] J. Tenenbaum & B. Director, **How Gauss
Determined the Orbit of Ceres**.
.. [TrefethenBau] L.N. Trefethen and D. Bau, **Numerical Linear
Algebra**, SIAM, 1997.
.. [Triggs] B. Triggs, P. F. Mclauchlan, R. I. Hartley &
A. W. Fitzgibbon, **Bundle Adjustment: A Modern Synthesis**,
Proceedings of the International Workshop on Vision Algorithms:
Theory and Practice, pp. 298-372, 1999.
.. [Wiberg] T. Wiberg, **Computation of principal components when data
are missing**, In Proc. *Second Symp. Computational Statistics*,
pages 229-236, 1976.
.. [WrightHolt] S. J. Wright and J. N. Holt, **An Inexact
Levenberg-Marquardt Method for Large Sparse Nonlinear Least
Squares**, *Journal of the Australian Mathematical Society Series
B*, 26(4):387-403, 1985.


@@ -0,0 +1,937 @@
.. _chapter-building:
=======================
Building & Installation
=======================
Getting the source code
=======================
.. _section-source:
You can start with the `latest stable release
<http://ceres-solver.org/ceres-solver-1.11.0.tar.gz>`_ . Or if you want
the latest version, you can clone the git repository
.. code-block:: bash
git clone https://ceres-solver.googlesource.com/ceres-solver
.. _section-dependencies:
Dependencies
============
Ceres relies on a number of open source libraries, some of which are
optional. For details on customizing the build process, see
:ref:`section-customizing` .
- `Eigen <http://eigen.tuxfamily.org/index.php?title=Main_Page>`_
3.2.2 or later **strongly** recommended, 3.1.0 or later **required**.
.. NOTE ::
Ceres can also use Eigen as a sparse linear algebra
library. Please see the documentation for ``EIGENSPARSE`` for
more details.
- `CMake <http://www.cmake.org>`_ 2.8.0 or later.
**Required on all platforms except for Android.**
- `Google Log <http://code.google.com/p/google-glog>`_ 0.3.1 or
later. **Recommended**
.. NOTE::
Ceres has a minimal replacement of ``glog`` called ``miniglog``
that can be enabled with the ``MINIGLOG`` build
option. ``miniglog`` is needed on Android as ``glog`` currently
does not build using the NDK. It can however be used on other
platforms too.
**We do not advise using** ``miniglog`` **on platforms other than
Android due to the various performance and functionality
compromises in** ``miniglog``.
.. NOTE ::
If you are compiling ``glog`` from source, please note that currently,
the unit tests for ``glog`` (which are enabled by default) do not compile
against a default build of ``gflags`` 2.1 as the gflags namespace changed
from ``google::`` to ``gflags::``. A patch to fix this is available from
`here <https://code.google.com/p/google-glog/issues/detail?id=194>`_.
- `Google Flags <http://code.google.com/p/gflags>`_. Needed to build
examples and tests.
- `SuiteSparse
<http://faculty.cse.tamu.edu/davis/suitesparse.html>`_. Needed for
solving large sparse linear systems. **Optional; strongly recommended
for large scale bundle adjustment**
- `CXSparse <http://faculty.cse.tamu.edu/davis/suitesparse.html>`_.
Similar to ``SuiteSparse`` but simpler and slower. CXSparse has
no dependencies on ``LAPACK`` and ``BLAS``. This makes for a simpler
build process and a smaller binary. **Optional**
- `BLAS <http://www.netlib.org/blas/>`_ and `LAPACK
<http://www.netlib.org/lapack/>`_ routines are needed by
``SuiteSparse``, and optionally used by Ceres directly for some
operations.
On ``UNIX`` OSes other than Mac OS X we recommend `ATLAS
<http://math-atlas.sourceforge.net/>`_, which includes ``BLAS`` and
``LAPACK`` routines. It is also possible to use `OpenBLAS
<https://github.com/xianyi/OpenBLAS>`_ . However, one needs to be
careful to `turn off the threading
<https://github.com/xianyi/OpenBLAS/wiki/faq#wiki-multi-threaded>`_
inside ``OpenBLAS`` as it conflicts with use of threads in Ceres.
Mac OS X ships with an optimized ``LAPACK`` and ``BLAS``
implementation as part of the ``Accelerate`` framework. The Ceres
build system will automatically detect and use it.
For Windows things are much more complicated. `LAPACK For
Windows <http://icl.cs.utk.edu/lapack-for-windows/lapack/>`_
has detailed instructions.
**Optional but required for** ``SuiteSparse``.
.. _section-linux:
Linux
=====
We will use `Ubuntu <http://www.ubuntu.com>`_ as our example linux
distribution.
.. NOTE::
Up to at least Ubuntu 14.04, the SuiteSparse package in the official
package repository (built from SuiteSparse v3.4.0) **cannot** be used
to build Ceres as a *shared* library. Thus if you want to build
Ceres as a shared library using SuiteSparse, you must perform a
source install of SuiteSparse or use an external PPA (see
`bug report here <https://bugs.launchpad.net/ubuntu/+source/suitesparse/+bug/1333214>`_).
It is recommended that you use the current version of SuiteSparse
(4.2.1 at the time of writing).
Start by installing all the dependencies.
.. code-block:: bash
# CMake
sudo apt-get install cmake
# google-glog + gflags
sudo apt-get install libgoogle-glog-dev
# BLAS & LAPACK
sudo apt-get install libatlas-base-dev
# Eigen3
sudo apt-get install libeigen3-dev
# SuiteSparse and CXSparse (optional)
# - If you want to build Ceres as a *static* library (the default)
# you can use the SuiteSparse package in the main Ubuntu package
# repository:
sudo apt-get install libsuitesparse-dev
# - However, if you want to build Ceres as a *shared* library, you must
# add the following PPA:
sudo add-apt-repository ppa:bzindovic/suitesparse-bugfix-1319687
sudo apt-get update
sudo apt-get install libsuitesparse-dev
We are now ready to build, test, and install Ceres.
.. code-block:: bash
tar zxf ceres-solver-1.11.0.tar.gz
mkdir ceres-bin
cd ceres-bin
cmake ../ceres-solver-1.11.0
make -j3
make test
# Optionally install Ceres, it can also be exported using CMake which
# allows Ceres to be used without requiring installation, see the documentation
# for the EXPORT_BUILD_DIR option for more information.
make install
You can also try running the command line bundling application with one of the
included problems, which comes from the University of Washington's BAL
dataset [Agarwal]_.
.. code-block:: bash
bin/simple_bundle_adjuster ../ceres-solver-1.11.0/data/problem-16-22106-pre.txt
This runs Ceres for a maximum of 10 iterations using the
``DENSE_SCHUR`` linear solver. The output should look something like
this.
.. code-block:: bash
iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time
0 4.185660e+06 0.00e+00 1.09e+08 0.00e+00 0.00e+00 1.00e+04 0 7.59e-02 3.37e-01
1 1.062590e+05 4.08e+06 8.99e+06 5.36e+02 9.82e-01 3.00e+04 1 1.65e-01 5.03e-01
2 4.992817e+04 5.63e+04 8.32e+06 3.19e+02 6.52e-01 3.09e+04 1 1.45e-01 6.48e-01
3 1.899774e+04 3.09e+04 1.60e+06 1.24e+02 9.77e-01 9.26e+04 1 1.43e-01 7.92e-01
4 1.808729e+04 9.10e+02 3.97e+05 6.39e+01 9.51e-01 2.78e+05 1 1.45e-01 9.36e-01
5 1.803399e+04 5.33e+01 1.48e+04 1.23e+01 9.99e-01 8.33e+05 1 1.45e-01 1.08e+00
6 1.803390e+04 9.02e-02 6.35e+01 8.00e-01 1.00e+00 2.50e+06 1 1.50e-01 1.23e+00
Ceres Solver v1.11.0 Solve Report
----------------------------------
Original Reduced
Parameter blocks 22122 22122
Parameters 66462 66462
Residual blocks 83718 83718
Residual 167436 167436
Minimizer TRUST_REGION
Dense linear algebra library EIGEN
Trust region strategy LEVENBERG_MARQUARDT
Given Used
Linear solver DENSE_SCHUR DENSE_SCHUR
Threads 1 1
Linear solver threads 1 1
Linear solver ordering AUTOMATIC 22106, 16
Cost:
Initial 4.185660e+06
Final 1.803390e+04
Change 4.167626e+06
Minimizer iterations 6
Successful steps 6
Unsuccessful steps 0
Time (in seconds):
Preprocessor 0.261
Residual evaluation 0.082
Jacobian evaluation 0.412
Linear solver 0.442
Minimizer 1.051
Postprocessor 0.002
Total 1.357
Termination: CONVERGENCE (Function tolerance reached. |cost_change|/cost: 1.769766e-09 <= 1.000000e-06)
.. _section-osx:
Mac OS X
========
.. NOTE::
Ceres will not compile using Xcode 4.5.x (Clang version 4.1) due to a
bug in that version of Clang. If you are running Xcode 4.5.x, please
update to Xcode >= 4.6.x before attempting to build Ceres.
On OS X, you can either use `MacPorts <https://www.macports.org/>`_ or
`Homebrew <http://mxcl.github.com/homebrew/>`_ to install Ceres Solver.
If using `MacPorts <https://www.macports.org/>`_, then
.. code-block:: bash
sudo port install ceres-solver
will install the latest version.
If using `Homebrew <http://mxcl.github.com/homebrew/>`_ and assuming
that you have the ``homebrew/science`` [#f1]_ tap enabled, then
.. code-block:: bash
brew install ceres-solver
will install the latest stable version along with all the required
dependencies and
.. code-block:: bash
brew install ceres-solver --HEAD
will install the latest version in the git repo.
You can also install each of the dependencies by hand using `Homebrew
<http://mxcl.github.com/homebrew/>`_. There is no need to install
``BLAS`` or ``LAPACK`` separately as OS X ships with optimized
``BLAS`` and ``LAPACK`` routines as part of the `vecLib
<https://developer.apple.com/library/mac/#documentation/Performance/Conceptual/vecLib/Reference/reference.html>`_
framework.
.. code-block:: bash
# CMake
brew install cmake
# google-glog and gflags
brew install glog
# Eigen3
brew install eigen
# SuiteSparse and CXSparse
brew install suite-sparse
We are now ready to build, test, and install Ceres.
.. code-block:: bash
tar zxf ceres-solver-1.11.0.tar.gz
mkdir ceres-bin
cd ceres-bin
cmake ../ceres-solver-1.11.0
make -j3
make test
# Optionally install Ceres, it can also be exported using CMake which
# allows Ceres to be used without requiring installation, see the
# documentation for the EXPORT_BUILD_DIR option for more information.
make install
Like the Linux build, you should now be able to run
``bin/simple_bundle_adjuster``.
.. rubric:: Footnotes
.. [#f1] Ceres and many of its dependencies are in `homebrew/science
<https://github.com/Homebrew/homebrew-science>`_ tap. So, if you
don't have this tap enabled, then you will need to enable it as
follows before executing any of the commands in this section.
.. code-block:: bash
brew tap homebrew/science
.. _section-windows:
Windows
=======
.. NOTE::
If you find the following CMake difficult to set up, then you may
be interested in a `Microsoft Visual Studio wrapper
<https://github.com/tbennun/ceres-windows>`_ for Ceres Solver by Tal
Ben-Nun.
On Windows, we support building with Visual Studio 2010 or newer. Note
that the Windows port is less featureful and less tested than the
Linux or Mac OS X versions due to the lack of an officially supported
way of building SuiteSparse and CXSparse. There are however a number
of unofficial ways of building these libraries. Building on Windows
is also a bit more involved since there is no automated way to install
dependencies.
.. NOTE:: Using ``google-glog`` & ``miniglog`` with windows.h.
The windows.h header if used with GDI (Graphics Device Interface)
defines ``ERROR``, which conflicts with the definition of ``ERROR``
as a LogSeverity level in ``google-glog`` and ``miniglog``. There
are at least two possible fixes to this problem:
#. Use ``google-glog`` and define ``GLOG_NO_ABBREVIATED_SEVERITIES``
when building Ceres and your own project, as documented
`here <http://google-glog.googlecode.com/svn/trunk/doc/glog.html>`__.
Note that this fix will not work for ``miniglog``,
but use of ``miniglog`` is strongly discouraged on any platform for which
``google-glog`` is available (which includes Windows).
#. If you do not require GDI, then define ``NOGDI`` **before** including
windows.h. This solution should work for both ``google-glog`` and
``miniglog`` and is documented for ``google-glog``
`here <https://code.google.com/p/google-glog/issues/detail?id=33>`__.
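As a rough sketch of these two fixes combined, one way (an assumption,
not part of the official instructions; adapt it to your generator and
build system) to add the defines when configuring with CMake is:
.. code-block:: bash
# Hypothetical invocation; /D defines a preprocessor symbol for MSVC.
# NOGDI applies fix 2; GLOG_NO_ABBREVIATED_SEVERITIES applies fix 1.
cmake -DCMAKE_CXX_FLAGS="/DNOGDI /DGLOG_NO_ABBREVIATED_SEVERITIES" <PATH_TO_CERES_SOURCE>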
#. Make a toplevel directory for deps & build & src somewhere: ``ceres/``
#. Get dependencies; unpack them as subdirectories in ``ceres/``
(``ceres/eigen``, ``ceres/glog``, etc)
#. ``Eigen`` 3.1 (needed on Windows; 3.0.x will not work). There is
no need to build anything; just unpack the source tarball.
#. ``google-glog`` Open up the Visual Studio solution and build it.
#. ``gflags`` Open up the Visual Studio solution and build it.
#. (Experimental) ``SuiteSparse`` Previously SuiteSparse was not available
on Windows, recently it has become possible to build it on Windows using
the `suitesparse-metis-for-windows <https://github.com/jlblancoc/suitesparse-metis-for-windows>`_
project. If you wish to use ``SuiteSparse``, follow their instructions
for obtaining and building it.
#. (Experimental) ``CXSparse`` Previously CXSparse was not available on
Windows, there are now several ports that enable it to be, including:
`[1] <https://github.com/PetterS/CXSparse>`_ and
`[2] <https://github.com/TheFrenchLeaf/CXSparse>`_. If you wish to use
``CXSparse``, follow their instructions for obtaining and building it.
#. Unpack the Ceres tarball into ``ceres``. For the tarball, you
should get a directory inside ``ceres`` similar to
``ceres-solver-1.3.0``. Alternately, checkout Ceres via ``git`` to
get ``ceres-solver.git`` inside ``ceres``.
#. Install ``CMake``.
#. Make a dir ``ceres/ceres-bin`` (for an out-of-tree build)
#. Run ``CMake``; select the ``ceres-solver-X.Y.Z`` or
``ceres-solver.git`` directory for the CMake file. Then select
``ceres-bin`` as the build directory.
#. Try running ``Configure``. It won't work. It'll show a bunch of options.
You'll need to set:
#. ``EIGEN_INCLUDE_DIR_HINTS``
#. ``GLOG_INCLUDE_DIR_HINTS``
#. ``GLOG_LIBRARY_DIR_HINTS``
#. ``GFLAGS_INCLUDE_DIR_HINTS``
#. ``GFLAGS_LIBRARY_DIR_HINTS``
#. (Optional) ``SUITESPARSE_INCLUDE_DIR_HINTS``
#. (Optional) ``SUITESPARSE_LIBRARY_DIR_HINTS``
#. (Optional) ``CXSPARSE_INCLUDE_DIR_HINTS``
#. (Optional) ``CXSPARSE_LIBRARY_DIR_HINTS``
to the appropriate directories where you unpacked/built them. If any of
the variables are not visible in the ``CMake`` GUI, create a new entry
for them. We recommend using the ``<NAME>_(INCLUDE/LIBRARY)_DIR_HINTS``
variables rather than setting the ``<NAME>_INCLUDE_DIR`` &
``<NAME>_LIBRARY`` variables directly to keep all of the validity
checking, and to avoid having to specify the library files manually.
#. You may have to tweak some more settings to generate a MSVC
project. After each adjustment, try pressing Configure & Generate
until it generates successfully.
#. Open the solution and build it in MSVC
To run the tests, select the ``RUN_TESTS`` target and hit **Build
RUN_TESTS** from the build menu.
Like the Linux build, you should now be able to run
``bin/simple_bundle_adjuster``.
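For reference, a hypothetical command-prompt equivalent of the GUI
steps above might look as follows; the generator name and every path
are placeholders for your own setup, not values mandated by Ceres:
.. code-block:: bash
# Run from inside ceres/ceres-bin; adjust the generator and paths.
cmake -G "Visual Studio 12 2013 Win64" -DEIGEN_INCLUDE_DIR_HINTS=C:/ceres/eigen -DGLOG_INCLUDE_DIR_HINTS=C:/ceres/glog/src/windows -DGLOG_LIBRARY_DIR_HINTS=C:/ceres/glog/Release -DGFLAGS_INCLUDE_DIR_HINTS=C:/ceres/gflags/include -DGFLAGS_LIBRARY_DIR_HINTS=C:/ceres/gflags/Release ../ceres-solver-1.11.0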
Notes:
#. The default build is Debug; consider switching it to release mode.
#. Currently ``system_test`` is not working properly.
#. CMake puts the resulting test binaries in ``ceres-bin/examples/Debug``
by default.
#. The solvers supported on Windows are ``DENSE_QR``, ``DENSE_SCHUR``,
``CGNR``, and ``ITERATIVE_SCHUR``.
#. We're looking for someone to work with upstream ``SuiteSparse`` to
port their build system to something sane like ``CMake``, and get a
fully supported Windows port.
.. _section-android:
Android
=======
Download the ``Android NDK`` version ``r9d`` or later. Run
``ndk-build`` from inside the ``jni`` directory. Use the
``libceres.a`` that gets created.
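A minimal sketch of this flow, assuming the NDK tools are on your
``PATH`` and that the paths below match your checkout (they are
illustrative only):
.. code-block:: bash
cd ceres-solver/jni
# Build the static library with the NDK toolchain.
ndk-build
# ndk-build typically places libceres.a under obj/local/<ABI>/.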
.. _section-ios:
iOS
===
.. NOTE::
You need iOS version 7.0 or higher to build Ceres Solver.
To build Ceres for iOS, we need to force ``CMake`` to find the toolchains from
the iOS SDK instead of using the standard ones. For example:
.. code-block:: bash
cmake \
-DCMAKE_TOOLCHAIN_FILE=../ceres-solver/cmake/iOS.cmake \
-DEIGEN_INCLUDE_DIR=/path/to/eigen/header \
-DIOS_PLATFORM=<PLATFORM> \
<PATH_TO_CERES_SOURCE>
``PLATFORM`` can be: ``OS``, ``SIMULATOR`` or ``SIMULATOR64``. You can
build for ``OS`` (``armv7``, ``armv7s``, ``arm64``), ``SIMULATOR`` (``i386``) or
``SIMULATOR64`` (``x86_64``) separately and use ``lipo`` to merge them into
one static library. See ``cmake/iOS.cmake`` for more options.
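For instance, a hypothetical ``lipo`` merge of separately built device
and simulator libraries might look like this; the build directory
names are placeholders, not output of the Ceres build:
.. code-block:: bash
# Merge per-platform static libraries into one universal library.
lipo -create build-os/libceres.a build-sim64/libceres.a -output libceres-universal.a
# Verify the architectures contained in the merged library.
lipo -info libceres-universal.a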
After building, you will get a ``libceres.a`` library, which you will need to
add to your Xcode project.
The default CMake configuration builds a bare-bones version of Ceres
Solver that only depends on Eigen (``MINIGLOG`` is compiled into Ceres
if it is used). This should be sufficient for solving small to moderate
sized problems (no ``SPARSE_SCHUR`` or ``SPARSE_NORMAL_CHOLESKY`` linear
solvers, and no ``CLUSTER_JACOBI`` or ``CLUSTER_TRIDIAGONAL``
preconditioners).
If you decide to use ``LAPACK`` and ``BLAS``, then you also need to add
``Accelerate.framework`` to your Xcode project's linking dependencies.
.. _section-customizing:
Customizing the build
=====================
It is possible to reduce the libraries needed to build Ceres and
customize the build process by setting the appropriate options in
``CMake``. These options can either be set in the ``CMake`` GUI,
or via ``-D<OPTION>=<ON/OFF>`` when running ``CMake`` from the
command line. In general, you should only modify these options from
their defaults if you know what you are doing.
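For example, a minimal sketch of a command line configure that
disables the sparse backends, reusing the source layout from
:ref:`section-linux`:
.. code-block:: bash
# Run from the ceres-bin build directory.
cmake -DSUITESPARSE=OFF -DCXSPARSE=OFF ../ceres-solver-1.11.0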
.. NOTE::
If you are setting variables via ``-D<VARIABLE>=<VALUE>`` when calling
``CMake``, it is important to understand that this forcibly **overwrites** the
variable ``<VARIABLE>`` in the ``CMake`` cache at the start of *every configure*.
This can lead to confusion if you are invoking the ``CMake``
`curses <http://www.gnu.org/software/ncurses/ncurses.html>`_ terminal GUI
(via ``ccmake``, e.g. ``ccmake -D<VARIABLE>=<VALUE> <PATH_TO_SRC>``).
In this case, even if you change the value of ``<VARIABLE>`` in the ``CMake``
GUI, your changes will be **overwritten** with the value passed via
``-D<VARIABLE>=<VALUE>`` (if one exists) at the start of each configure.
As such, it is generally easier not to pass values to ``CMake`` via ``-D``
and instead interactively experiment with their values in the ``CMake`` GUI.
If they are not present in the *Standard View*, toggle to the *Advanced View*
with ``<t>``.
Options controlling Ceres configuration
---------------------------------------
#. ``LAPACK [Default: ON]``: By default Ceres will use ``LAPACK`` (&
``BLAS``) if they are found. Turn this ``OFF`` to build Ceres
without ``LAPACK``. Turning this ``OFF`` also disables
``SUITESPARSE`` as it depends on ``LAPACK``.
#. ``SUITESPARSE [Default: ON]``: By default, Ceres will link to
``SuiteSparse`` if it and all of its dependencies are present. Turn
this ``OFF`` to build Ceres without ``SuiteSparse``. Note that
``LAPACK`` must be ``ON`` in order to build with ``SuiteSparse``.
#. ``CXSPARSE [Default: ON]``: By default, Ceres will link to
``CXSparse`` if all its dependencies are present. Turn this ``OFF``
to build Ceres without ``CXSparse``.
#. ``EIGENSPARSE [Default: OFF]``: By default, Ceres will not use
Eigen's sparse Cholesky factorization. This is because this part of
the code is licensed under the ``LGPL`` and since ``Eigen`` is a
header-only library, including this code will result in an ``LGPL``
licensed version of Ceres.
.. NOTE::
For good performance, use Eigen version 3.2.2 or later.
#. ``GFLAGS [Default: ON]``: Turn this ``OFF`` to build Ceres without
``gflags``. This will also prevent some of the example code from
building.
#. ``MINIGLOG [Default: OFF]``: Ceres includes a stripped-down,
minimal implementation of ``glog`` which can optionally be used as
a substitute for ``glog``, thus removing ``glog`` as a required
dependency. Turn this ``ON`` to use this minimal ``glog``
implementation.
#. ``SCHUR_SPECIALIZATIONS [Default: ON]``: If you are concerned about
binary size/compilation time over some small (10-20%) performance
gains in the ``SPARSE_SCHUR`` solver, you can disable some of the
template specializations by turning this ``OFF``.
#. ``OPENMP [Default: ON]``: On certain platforms like Android,
multi-threading with ``OpenMP`` is not supported. Turn this ``OFF``
to disable multi-threading.
#. ``CXX11 [Default: OFF]`` *Non-Windows platforms only*.
Although Ceres does not currently use C++11, it does use ``shared_ptr``
(required) and ``unordered_map`` (if available); both of which existed in the
previous iterations of what became the C++11 standard: TR1 & C++0x. As such,
Ceres can compile on pre-C++11 compilers, using the TR1/C++0x versions of
``shared_ptr`` & ``unordered_map``.
Note that on Linux (GCC & Clang), compiling against the TR1/C++0x versions:
``CXX11=OFF`` (the default) *does not* require ``-std=c++11`` when compiling
Ceres, *nor* does it require that any client code using Ceres use
``-std=c++11``. However, this will cause compile errors if any client code
that uses Ceres also uses C++11 (mismatched versions of ``shared_ptr`` &
``unordered_map``).
Enabling this option: ``CXX11=ON`` forces Ceres to use the C++11
versions of ``shared_ptr`` & ``unordered_map`` if they are available, and
thus imposes the requirement that all client code using Ceres also
compile with ``-std=c++11``. This requirement is handled automatically
through CMake target properties on the exported Ceres target for CMake >=
2.8.12 (when it was introduced). Thus, any client code which uses CMake will
automatically be compiled with ``-std=c++11``. **On CMake versions <
2.8.12, you are responsible for ensuring that any code which uses Ceres is
compiled with** ``-std=c++11``.
On OS X 10.9+, Clang will use the C++11 versions of ``shared_ptr`` &
``unordered_map`` without ``-std=c++11`` and so this option does not change
the versions detected, although enabling it *will* require that client code
compile with ``-std=c++11``.
The following table summarises the effects of the ``CXX11`` option:
=================== ===== ================ ===========================================
OS                  CXX11 Detected Version Ceres & client code require ``-std=c++11``
=================== ===== ================ ===========================================
Linux (GCC & Clang) OFF   tr1              **No**
Linux (GCC & Clang) ON    std              **Yes**
OS X 10.9+          OFF   std              **No**
OS X 10.9+          ON    std              **Yes**
=================== ===== ================ ===========================================
The ``CXX11`` option does not exist for Windows, as any new C++
features that are available there are enabled by default, and there is
no analogue of ``-std=c++11``.
#. ``BUILD_SHARED_LIBS [Default: OFF]``: By default Ceres is built as
a static library, turn this ``ON`` to instead build Ceres as a
shared library.
#. ``EXPORT_BUILD_DIR [Default: OFF]``: By default Ceres is configured solely
for installation, and so must be installed in order for clients to use it.
Turn this ``ON`` to export Ceres' build directory location into the
`user's local CMake package registry <http://www.cmake.org/cmake/help/v3.2/manual/cmake-packages.7.html#user-package-registry>`_
where it will be detected **without requiring installation** in a client
project using CMake when `find_package(Ceres) <http://www.cmake.org/cmake/help/v3.2/command/find_package.html>`_
is invoked.
#. ``BUILD_DOCUMENTATION [Default: OFF]``: Use this to enable building
the documentation, requires `Sphinx <http://sphinx-doc.org/>`_ and the
`sphinx_rtd_theme <https://pypi.python.org/pypi/sphinx_rtd_theme>`_
package available from the Python package index. In addition,
``make ceres_docs`` can be used to build only the documentation (see
the example following this list).
#. ``MSVC_USE_STATIC_CRT [Default: OFF]`` *Windows Only*: By default
Ceres will use the Visual Studio default, *shared* C run-time (CRT)
library. Turn this ``ON`` to use the *static* C run-time library
instead.
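As a concrete illustration of the ``BUILD_DOCUMENTATION`` option
above (a sketch; it assumes Sphinx and ``sphinx_rtd_theme`` are
already installed):
.. code-block:: bash
# Enable the documentation and build only the docs target.
cmake -DBUILD_DOCUMENTATION=ON ../ceres-solver-1.11.0
make ceres_docs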
Options controlling Ceres dependency locations
----------------------------------------------
Ceres uses the ``CMake``
`find_package <http://www.cmake.org/cmake/help/v3.2/command/find_package.html>`_
function to find all of its dependencies using
``Find<DEPENDENCY_NAME>.cmake`` scripts which are either included in Ceres
(for most dependencies) or are shipped as standard with ``CMake``
(for ``LAPACK`` & ``BLAS``). These scripts will search all of the "standard"
install locations for various OSs for each dependency. However, particularly
for Windows, they may fail to find the library; in this case you will have to
manually specify its installed location. The ``Find<DEPENDENCY_NAME>.cmake``
scripts shipped with Ceres support two ways for you to do this:
#. Set the *hints* variables specifying the *directories* to search
first, in addition to the default search directories in the
``Find<DEPENDENCY_NAME>.cmake`` script:
- ``<DEPENDENCY_NAME (CAPS)>_INCLUDE_DIR_HINTS``
- ``<DEPENDENCY_NAME (CAPS)>_LIBRARY_DIR_HINTS``
These variables should be set via ``-D<VAR>=<VALUE>`` ``CMake``
arguments as they are not visible in the GUI (see the example
following this list).
#. Set the variables specifying the *explicit* include directory
and library file to use:
- ``<DEPENDENCY_NAME (CAPS)>_INCLUDE_DIR``
- ``<DEPENDENCY_NAME (CAPS)>_LIBRARY``
This bypasses *all* searching in the
``Find<DEPENDENCY_NAME>.cmake`` script, but validation is still
performed.
These variables are available to set in the ``CMake`` GUI. They
are visible in the *Standard View* if the library has not been
found (but the current Ceres configuration requires it), but
are always visible in the *Advanced View*. They can also be
set directly via ``-D<VAR>=<VALUE>`` arguments to ``CMake``.
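For example, a hypothetical invocation pointing Ceres at a ``glog``
install in a non-standard location; the paths are placeholders:
.. code-block:: bash
cmake -DGLOG_INCLUDE_DIR_HINTS=/opt/glog/include -DGLOG_LIBRARY_DIR_HINTS=/opt/glog/lib ../ceres-solver-1.11.0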
Building using custom BLAS & LAPACK installs
----------------------------------------------
If the standard find package scripts for ``BLAS`` & ``LAPACK`` which ship with
``CMake`` fail to find the desired libraries on your system, try setting
``CMAKE_LIBRARY_PATH`` to the path(s) to the directories containing the
``BLAS`` & ``LAPACK`` libraries when invoking ``CMake`` to build Ceres via
``-D<VAR>=<VALUE>``. This should result in the libraries being found for any
common variant of each.
If you are building on an exotic system, or setting ``CMAKE_LIBRARY_PATH``
does not work, or is not appropriate for some other reason, one option would be
to write your own custom versions of ``FindBLAS.cmake`` &
``FindLAPACK.cmake`` specific to your environment. In this case you must set
``CMAKE_MODULE_PATH`` to the directory containing these custom scripts when
invoking ``CMake`` to build Ceres and they will be used in preference to the
default versions. However, in order for this to work, your scripts must provide
the full set of variables provided by the default scripts. Also, if you are
building Ceres with ``SuiteSparse``, the versions of ``BLAS`` & ``LAPACK``
used by ``SuiteSparse`` and Ceres should be the same.
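A minimal sketch of both approaches, with placeholder paths:
.. code-block:: bash
# Point CMake at the directory containing the BLAS & LAPACK libraries.
cmake -DCMAKE_LIBRARY_PATH=/opt/openblas/lib ../ceres-solver-1.11.0
# Or, point CMake at custom FindBLAS.cmake / FindLAPACK.cmake scripts.
cmake -DCMAKE_MODULE_PATH=/path/to/custom/cmake/modules ../ceres-solver-1.11.0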
.. _section-using-ceres:
Using Ceres with CMake
======================
In order to use Ceres in client code with CMake using
`find_package() <http://www.cmake.org/cmake/help/v3.2/command/find_package.html>`_,
either:
#. Ceres must have been installed with ``make install``.
If the install location is non-standard (i.e. is not in CMake's default
search paths) then it will not be detected by default, see:
:ref:`section-local-installations`.
Note that if you are using a non-standard install location you should
consider exporting Ceres instead, as this will not require any extra
information to be provided in client code for Ceres to be detected.
#. Or Ceres' build directory must have been exported
by enabling the ``EXPORT_BUILD_DIR`` option when Ceres was configured.
As an example of how to use Ceres, to compile `examples/helloworld.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld.cc>`_
in a separate standalone project, the following CMakeLists.txt can be used:
.. code-block:: cmake
cmake_minimum_required(VERSION 2.8)
project(helloworld)
find_package(Ceres REQUIRED)
include_directories(${CERES_INCLUDE_DIRS})
# helloworld
add_executable(helloworld helloworld.cc)
target_link_libraries(helloworld ${CERES_LIBRARIES})
Irrespective of whether Ceres was installed or exported, if multiple versions
are detected, set: ``Ceres_DIR`` to control which is used. If Ceres was
installed ``Ceres_DIR`` should be the path to the directory containing the
installed ``CeresConfig.cmake`` file (e.g. ``/usr/local/share/Ceres``). If
Ceres was exported, then ``Ceres_DIR`` should be the path to the exported
Ceres build directory.
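For example, to select the system-wide install mentioned above when
configuring a client project (a sketch; the client source path is a
placeholder):
.. code-block:: bash
cmake -DCeres_DIR=/usr/local/share/Ceres <PATH_TO_CLIENT_SOURCE>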
Specify Ceres version
---------------------
Additionally, when CMake has found Ceres it can optionally check the package
version, if it has been specified in the `find_package()
<http://www.cmake.org/cmake/help/v3.2/command/find_package.html>`_
call. For example:
.. code-block:: cmake
find_package(Ceres 1.2.3 REQUIRED)
.. _section-local-installations:
Local installations
-------------------
If Ceres was installed in a non-standard path by specifying
``-DCMAKE_INSTALL_PREFIX="/some/where/local"``, then the user should add
the **PATHS** option to the ``find_package()`` command, e.g.,
.. code-block:: cmake
find_package(Ceres REQUIRED PATHS "/some/where/local/")
Note that this can be used to have multiple versions of Ceres
installed. However, particularly if you have only a single version of Ceres
which you want to use but do not wish to install to a system location, you
should consider exporting Ceres using the ``EXPORT_BUILD_DIR`` option instead
of a local install, as exported versions of Ceres will be automatically detected
by CMake, irrespective of their location.
Understanding the CMake Package System
----------------------------------------
Although a full tutorial on CMake is outside the scope of this guide, here
we cover some of the most common CMake misunderstandings that crop up
when using Ceres. For more detailed CMake usage, the following references are
very useful:
- The `official CMake tutorial <http://www.cmake.org/cmake-tutorial/>`_
Provides a tour of the core features of CMake.
- `ProjectConfig tutorial <http://www.cmake.org/Wiki/CMake/Tutorials/How_to_create_a_ProjectConfig.cmake_file>`_ and the `cmake-packages documentation <http://www.cmake.org/cmake/help/git-master/manual/cmake-packages.7.html>`_
Cover how to write a ``ProjectConfig.cmake`` file, discussed below, for
your own project when installing or exporting it using CMake. It also covers
how these processes in conjunction with ``find_package()`` are actually
handled by CMake. The
`ProjectConfig tutorial <http://www.cmake.org/Wiki/CMake/Tutorials/How_to_create_a_ProjectConfig.cmake_file>`_
is the older style, currently used by Ceres for compatibility with older
versions of CMake.
.. NOTE :: **Targets in CMake.**
All libraries and executables built using CMake are represented as
*targets* created using
`add_library()
<http://www.cmake.org/cmake/help/v3.2/command/add_library.html>`_
and
`add_executable()
<http://www.cmake.org/cmake/help/v3.2/command/add_executable.html>`_.
Targets encapsulate the rules and dependencies (which can be other targets)
required to build or link against an object. This allows CMake to
implicitly manage dependency chains. Thus it is sufficient to tell CMake
that a library target: ``B`` depends on a previously declared library target
``A``, and CMake will understand that this means that ``B`` also depends on
all of the public dependencies of ``A``.
When a project like Ceres is installed using CMake, or its build directory is
exported into the local CMake package registry
(see :ref:`section-install-vs-export`), in addition to the public
headers and compiled libraries, a set of CMake-specific project configuration
files are also installed to: ``<INSTALL_ROOT>/share/Ceres`` (if Ceres is
installed), or created in the build directory (if Ceres' build directory is
exported). When `find_package
<http://www.cmake.org/cmake/help/v3.2/command/find_package.html>`_
is invoked, CMake checks various standard install locations (including
``/usr/local`` on Linux & UNIX systems), and the local CMake package registry
for CMake configuration files for the project to be found (i.e. Ceres in the
case of ``find_package(Ceres)``). Specifically it looks for:
- ``<PROJECT_NAME>Config.cmake`` (or ``<lower_case_project_name>-config.cmake``)
Which is written by the developers of the project, and is configured with
the selected options and installed locations when the project is built and
defines the CMake variables: ``<PROJECT_NAME>_INCLUDE_DIRS`` &
``<PROJECT_NAME>_LIBRARIES`` which are used by the caller to import
the project.
The ``<PROJECT_NAME>Config.cmake`` typically includes a second file installed to
the same location:
- ``<PROJECT_NAME>Targets.cmake``
Which is autogenerated by CMake as part of the install process and defines
**imported targets** for the project in the caller's CMake scope.
An **imported target** contains the same information about a library as a CMake
target that was declared locally in the current CMake project using
``add_library()``. However, imported targets refer to objects that have already
been built by a different CMake project. Principally, an imported
target contains the location of the compiled object and all of its public
dependencies required to link against it. Any locally declared target can
depend on an imported target, and CMake will manage the dependency chain, just
as if the imported target had been declared locally by the current project.
Crucially, just like any locally declared CMake target, an imported target is
identified by its **name** when adding it as a dependency to another target.
Thus, if in a project using Ceres you had the following in your CMakeLists.txt:
.. code-block:: cmake
find_package(Ceres REQUIRED)
message("CERES_LIBRARIES = ${CERES_LIBRARIES}")
You would see the output: ``CERES_LIBRARIES = ceres``. **However**, here
``ceres`` is an **imported target** created when ``CeresTargets.cmake`` was
read as part of ``find_package(Ceres REQUIRED)``. It does **not** refer
(directly) to the compiled Ceres library: ``libceres.a/so/dylib/lib``. This
distinction is important, as depending on the options selected when it was
built, Ceres can have public link dependencies which are encapsulated in the
imported target and automatically added to the link step when Ceres is added
as a dependency of another target by CMake. In this case, linking only against
``libceres.a/so/dylib/lib`` without these other public dependencies would
result in a linker error.
Note that this description applies both to projects that are **installed**
using CMake, and to those whose **build directory is exported** using
`export() <http://www.cmake.org/cmake/help/v3.2/command/export.html>`_
(instead of
`install() <http://www.cmake.org/cmake/help/v3.2/command/install.html>`_).
Ceres supports both installation and export of its build directory if the
``EXPORT_BUILD_DIR`` option is enabled, see :ref:`section-customizing`.
.. _section-install-vs-export:
Installing a project with CMake vs Exporting its build directory
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When a project is **installed**, the compiled libraries and headers are copied
from the source & build directory to the install location, and it is these
copied files that are used by any client code. When a project's build directory
is **exported**, instead of copying the compiled libraries and headers, CMake
creates an entry for the project in the
`user's local CMake package registry <http://www.cmake.org/cmake/help/v3.2/manual/cmake-packages.7.html#user-package-registry>`_,
``<USER_HOME>/.cmake/packages`` on Linux & OS X, which contains the path to
the project's build directory which will be checked by CMake during a call to
``find_package()``. The effect is that any client code uses the
compiled libraries and headers in the build directory directly, **thus
not requiring the project to be installed in order to be used**.
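A minimal sketch of the export workflow described above, reusing the
source layout from :ref:`section-linux`:
.. code-block:: bash
# Configure with the build directory exported to the package registry.
cmake -DEXPORT_BUILD_DIR=ON ../ceres-solver-1.11.0
make -j3
# No 'make install' is required; find_package(Ceres) in client
# projects will now locate this build directory via ~/.cmake/packages.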
Installing / Exporting a project that uses Ceres
--------------------------------------------------
As described in `Understanding the CMake Package System`_, the content of
the ``CERES_LIBRARIES`` variable is the **name** of an imported target which
represents Ceres. If you are installing / exporting your *own* project which
*uses* Ceres, it is important to understand that:
**imported targets are not (re)exported when a project which imported them is
exported**.
Thus, when a project ``Foo`` which uses Ceres is exported, its list of
dependencies as seen by another project ``Bar`` which imports ``Foo`` via:
``find_package(Foo REQUIRED)`` will contain: ``ceres``. However, the
definition of ``ceres`` as an imported target is **not (re)exported** when Foo
is exported. Hence, without any additional steps, when processing ``Bar``,
``ceres`` will not be defined as an imported target. Thus, when processing
``Bar``, CMake will assume that ``ceres`` refers only to:
``libceres.a/so/dylib/lib`` (the compiled Ceres library) directly if it is on
the current list of search paths. In which case, no CMake errors will occur,
but ``Bar`` will not link properly, as it does not have the required public link
dependencies of Ceres, which are stored in the imported target definition.
The solution to this is for ``Foo`` (i.e., the project that uses Ceres) to
invoke ``find_package(Ceres)`` in ``FooConfig.cmake``, thus ``ceres`` will be
defined as an imported target when CMake processes ``Bar``. An example of the
required modifications to ``FooConfig.cmake`` is shown below:
.. code-block:: cmake
# Importing Ceres in FooConfig.cmake using CMake 2.8.x style.
#
# When configure_file() is used to generate FooConfig.cmake from
# FooConfig.cmake.in, @Ceres_DIR@ will be replaced with the current
# value of Ceres_DIR being used by Foo. This should be passed as a hint
# when invoking find_package(Ceres) to ensure that the same install of
# Ceres is used as was used to build Foo.
set(CERES_DIR_HINTS @Ceres_DIR@)
# Forward the QUIET / REQUIRED options.
if (Foo_FIND_QUIETLY)
find_package(Ceres QUIET HINTS ${CERES_DIR_HINTS})
elseif (Foo_FIND_REQUIRED)
find_package(Ceres REQUIRED HINTS ${CERES_DIR_HINTS})
else ()
find_package(Ceres HINTS ${CERES_DIR_HINTS})
endif()
.. code-block:: cmake
# Importing Ceres in FooConfig.cmake using CMake 3.x style.
#
# In CMake v3.x, the find_dependency() macro exists to forward the REQUIRED
# / QUIET parameters to find_package() when searching for dependencies.
#
# Note that find_dependency() does not take a path hint, so if Ceres was
# installed in a non-standard location, that location must be added to
# CMake's search list before this call.
include(CMakeFindDependencyMacro)
find_dependency(Ceres)


@@ -0,0 +1,243 @@
# -*- coding: utf-8 -*-
#
# Ceres Solver documentation build configuration file, created by
# sphinx-quickstart on Sun Jan 20 20:34:07 2013.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.todo', 'sphinx.ext.mathjax', 'sphinx.ext.ifconfig']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Ceres Solver'
copyright = u'2014 Google Inc'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.11'
# The full version, including alpha/beta/rc tags.
release = '1.11.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ["_themes",]
import sphinx_rtd_theme
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
html_title = "Ceres Solver"
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
html_domain_indices = True
# If false, no index is generated.
html_use_index = True
# If true, the index is split into individual pages for each letter.
html_split_index = False
# If true, links to the reST sources are added to the pages.
html_show_sourcelink = False
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
html_show_sphinx = False
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'CeresSolverdoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'CeresSolver.tex', u'Ceres Solver',
u'Sameer Agarwal \\& Keir Mierle', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'ceressolver', u'Ceres Solver',
[u'Sameer Agarwal & Keir Mierle'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'CeresSolver', u'Ceres Solver',
u'Sameer Agarwal & Keir Mierle', 'CeresSolver', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'


@@ -0,0 +1,142 @@
.. _chapter-contributing:
============
Contributing
============
We welcome contributions to Ceres, whether they are new features, bug
fixes or tests. The Ceres `mailing list
<http://groups.google.com/group/ceres-solver>`_ is the best place
for all development related discussions. Please consider joining
it. If you have ideas on how you would like to contribute to Ceres, it
is a good idea to let us know on the mailing list before you start
development. We may have suggestions that will save effort when trying
to merge your work into the main branch. If you are looking for ideas,
please let us know about your interest and skills and we will be happy
to make a suggestion or three.
We follow Google's `C++ Style Guide
<http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml>`_ and
use `git <http://git-scm.com/>`_ for version control. We use
`Gerrit <https://ceres-solver-review.googlesource.com/>`_ to collaborate and
review changes to Ceres. Gerrit enables pre-commit reviews so that
Ceres can maintain a linear history with clean, reviewed commits, and
no merges.
We now describe how to set up your development environment and submit
a change list for review via Gerrit.
Setting up your Environment
===========================
1. Download and configure ``git``.
* Mac ``brew install git``.
* Linux ``sudo apt-get install git``.
* Windows. Download `msysgit
<https://code.google.com/p/msysgit/>`_, which includes a minimal
`Cygwin <http://www.cygwin.com/>`_ install.
2. Sign up for `Gerrit
<https://ceres-solver-review.googlesource.com/>`_. You will also
need to sign the Contributor License Agreement (CLA) with Google,
which gives Google a royalty-free unlimited license to use your
contributions. You retain copyright.
3. Clone the Ceres Solver ``git`` repository from Gerrit.
.. code-block:: bash
git clone https://ceres-solver.googlesource.com/ceres-solver
4. Build Ceres, following the instructions in
:ref:`chapter-building`.
On Mac and Linux, the ``CMake`` build will download and enable
the Gerrit pre-commit hook automatically. This pre-commit hook
creates `Change-Id: ...` lines in your commits.
If this does not work OR you are on Windows, execute the
following in the root directory of the local ``git`` repository:
.. code-block:: bash
curl -o .git/hooks/commit-msg https://ceres-solver-review.googlesource.com/tools/hooks/commit-msg
chmod +x .git/hooks/commit-msg
5. Configure your Gerrit password with a ``.netrc`` (Mac and Linux)
or ``_netrc`` (Windows) which allows pushing to Gerrit without
having to enter a very long random password every time:
* Sign into `http://ceres-solver-review.googlesource.com
<http://ceres-solver-review.googlesource.com>`_.
* Click ``Settings -> HTTP Password -> Obtain Password``.
* (maybe) Select an account for multi-login. This should be the
same as your Gerrit login.
* Click ``Allow access`` when the page requests access to your
``git`` repositories.
* Copy the contents of the ``netrc`` into the clipboard.
- On Mac and Linux, paste the contents into ``~/.netrc``.
- On Windows, by default users do not have a ``%HOME%``
setting.
Executing ``setx HOME %USERPROFILE%`` in a terminal will set up
the ``%HOME%`` environment variable persistently, and is used
by ``git`` to find ``%HOME%\_netrc``.
Then, create a new text file named ``_netrc`` and put it in
e.g. ``C:\Users\username`` where ``username`` is your user
name.
Submitting a change
===================
1. Make your changes against master or whatever branch you
like. Commit your changes as one patch. When you commit, the Gerrit
hook will add a `Change-Id:` line as the last line of the commit.
Make sure that your commit message is formatted in the `50/72 style
<http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html>`_.
2. Push your changes to the Ceres Gerrit instance:
.. code-block:: bash
git push origin HEAD:refs/for/master
When the push succeeds, the console will display a URL showing the
address of the review. Go to the URL and add at least one of the
maintainers (Sameer Agarwal, Keir Mierle, or Alex Stewart) as reviewers.
3. Wait for a review.
4. Once review comments come in, address them. Please reply to each
comment in Gerrit, which makes the re-review process easier. After
modifying the code in your ``git`` instance, *don't make a new
commit*. Instead, update the last commit using a command like the
following:
.. code-block:: bash
git commit --amend -a
This will update the last commit, so that it has both the original
patch and your updates as a single commit. You will have a chance
to edit the commit message as well. Push the new commit to Gerrit
as before.
Gerrit will use the ``Change-Id:`` to match the previous commit
with the new one. The review interface retains your original patch,
but also shows the new patch.
Publish your responses to the comments, and wait for a new round
of reviews.


@@ -0,0 +1,292 @@
.. _chapter-tricks:
===================
FAQS, Tips & Tricks
===================
Answers to frequently asked questions, tricks of the trade and general
wisdom.
Building
========
#. Use `google-glog <http://code.google.com/p/google-glog>`_.
Ceres has extensive support for logging detailed information about
memory allocations and time consumed in various parts of the solve,
internal error conditions, etc. This logging is done using the
`google-glog <http://code.google.com/p/google-glog>`_ library. We
use it extensively to observe and analyze Ceres's
performance. `google-glog <http://code.google.com/p/google-glog>`_
lets you control its behaviour from the command line via `flags
<http://google-glog.googlecode.com/svn/trunk/doc/glog.html>`_. Starting
with ``-logtostderr`` you can add ``-v=N`` for increasing values
of ``N`` to get more and more verbose and detailed information
about Ceres internals.
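For example, to run the ``bundle_adjuster`` example with increasingly
detailed logging (the verbosity level here is arbitrary):

.. code-block:: bash

   ./bin/bundle_adjuster --input ../data/problem-16-22106-pre.txt -logtostderr -v=3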
In an attempt to reduce dependencies, it is tempting to use
`miniglog` - a minimal implementation of the ``glog`` interface
that ships with Ceres. This is a bad idea. ``miniglog`` was written
primarily for building and using Ceres on Android because the
current version of `google-glog
<http://code.google.com/p/google-glog>`_ does not build using the
NDK. It has worse performance than the full-fledged glog library
and is much harder to control and use.
Modeling
========
#. Use analytical/automatic derivatives.
This is the single most important piece of advice we can give to
you. It is tempting to take the easy way out and use numeric
differentiation. This is a bad idea. Numeric differentiation is
slow, ill-behaved, hard to get right, and results in poor
convergence behaviour.
Ceres allows the user to define templated functors which will
be automatically differentiated. For most situations this is enough
and we recommend using this facility. In some cases the derivatives
are simple enough or the performance considerations are such that
the overhead of automatic differentiation is too much. In such
cases, analytic derivatives are recommended.
The use of numerical derivatives should be a measure of last
resort, where it is simply not possible to write a templated
implementation of the cost function.
In many cases it is not possible to do analytic or automatic
differentiation of the entire cost function, but it is generally
the case that it is possible to decompose the cost function into
parts that need to be numerically differentiated and parts that can
be automatically or analytically differentiated.
To this end, Ceres has extensive support for mixing analytic,
automatic and numeric differentiation. See
:class:`CostFunctionToFunctor`.
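As a sketch of what this mixing looks like, suppose
``ComputeDistortionValue`` is a hypothetical black-box function that
can only be differentiated numerically. It can be wrapped with
:class:`CostFunctionToFunctor` and then called from inside an
automatically differentiated functor:

.. code-block:: c++

   // A functor that can only be differentiated numerically, because it
   // calls a black-box routine.
   struct Distortion {
     bool operator()(const double* const r2, double* value) const {
       *value = ComputeDistortionValue(*r2);  // Hypothetical library call.
       return true;
     }
   };

   // Wraps the numerically differentiated cost function so that it can
   // be called from inside an automatically differentiated functor.
   class DistortionFunctor {
    public:
     DistortionFunctor()
         : distortion_(new ceres::NumericDiffCostFunction<
               Distortion, ceres::CENTRAL, 1, 1>(new Distortion)) {}

     template <typename T>
     bool operator()(const T* const r2, T* value) const {
       return distortion_(r2, value);
     }

    private:
     ceres::CostFunctionToFunctor<1, 1> distortion_;
   };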
#. Putting `Inverse Function Theorem
<http://en.wikipedia.org/wiki/Inverse_function_theorem>`_ to use.
Every now and then we have to deal with functions which cannot be
evaluated analytically. Computing the Jacobian in such cases is
tricky. A particularly interesting case is where the inverse of the
function is easy to compute analytically. An example of such a
function is the Coordinate transformation between the `ECEF
<http://en.wikipedia.org/wiki/ECEF>`_ and the `WGS84
<http://en.wikipedia.org/wiki/World_Geodetic_System>`_ where the
conversion from WGS84 to ECEF is analytic, but the conversion
back to WGS84 uses an iterative algorithm. So how do you compute the
derivative of the ECEF to WGS84 transformation?
One obvious approach would be to numerically
differentiate the conversion function. This is not a good idea. For
one, it will be slow, but it will also be numerically quite
bad.
Turns out you can use the `Inverse Function Theorem
<http://en.wikipedia.org/wiki/Inverse_function_theorem>`_ in this
case to compute the derivatives more or less analytically.
The key result here is: if :math:`x = f^{-1}(y)` and :math:`Df(x)`
is the invertible Jacobian of :math:`f` at :math:`x`, then the
Jacobian :math:`Df^{-1}(y) = [Df(x)]^{-1}`, i.e., the Jacobian of
:math:`f^{-1}` is the inverse of the Jacobian of :math:`f`.
Algorithmically this means that given :math:`y`, compute :math:`x =
f^{-1}(y)` by whatever means you can. Evaluate the Jacobian of
:math:`f` at :math:`x`. If the Jacobian matrix is invertible, then
its inverse is the Jacobian of :math:`f^{-1}(y)` at :math:`y`.
One can put this into practice with the following code fragment.
.. code-block:: c++
Eigen::Vector3d ecef;  // Fill some values in.
// Iterative computation; ECEFToLLA is assumed to be supplied by the user.
Eigen::Vector3d lla = ECEFToLLA(ecef);
// Analytic Jacobian of the (easy) forward map, LLA -> ECEF.
Eigen::Matrix3d lla_to_ecef_jacobian = LLAToECEFJacobian(lla);
bool invertible;
Eigen::Matrix3d ecef_to_lla_jacobian;
// Its inverse is the Jacobian of the ECEF -> LLA map.
lla_to_ecef_jacobian.computeInverseWithCheck(ecef_to_lla_jacobian, invertible);
#. When using Quaternions, use :class:`QuaternionParameterization`.
TBD
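Until then, the essential step is to associate the parameterization
with the quaternion parameter block. A minimal sketch, assuming
``problem`` is an existing :class:`Problem` and the quaternion is
stored in the order ``w, x, y, z``:

.. code-block:: c++

   double q[4] = {1.0, 0.0, 0.0, 0.0};  // Identity quaternion, w first.
   problem.AddParameterBlock(q, 4, new ceres::QuaternionParameterization());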
#. How to choose a parameter block size?
TBD
Solving
=======
#. Choosing a linear solver.
When using the ``TRUST_REGION`` minimizer, the choice of linear
solver is an important decision. It affects solution quality and
runtime. Here is a simple way to reason about it.
1. For small (a few hundred parameters) and dense problems use
``DENSE_QR``.
2. For general sparse problems (i.e., the Jacobian matrix has a
substantial number of zeros) use
``SPARSE_NORMAL_CHOLESKY``. This requires that you have
``SuiteSparse`` or ``CXSparse`` installed.
3. For bundle adjustment problems with up to a hundred or so
cameras, use ``DENSE_SCHUR``.
4. For larger bundle adjustment problems with sparse Schur
Complement/Reduced camera matrices use ``SPARSE_SCHUR``. This
requires that you build Ceres with support for ``SuiteSparse``,
``CXSparse`` or Eigen's sparse linear algebra libraries.
If you do not have access to these libraries for whatever
reason, ``ITERATIVE_SCHUR`` with ``SCHUR_JACOBI`` is an
excellent alternative.
5. For large bundle adjustment problems (a few thousand cameras or
more) use the ``ITERATIVE_SCHUR`` solver. There are a number of
preconditioner choices here. ``SCHUR_JACOBI`` offers an
excellent balance of speed and accuracy. This is also the
recommended option if you are solving medium sized problems for
which ``DENSE_SCHUR`` is too slow but ``SuiteSparse`` is not
available.
.. NOTE::
If you are solving small to medium sized problems, consider
setting ``Solver::Options::use_explicit_schur_complement`` to
``true``; it can result in a substantial performance boost.
If you are not satisfied with ``SCHUR_JACOBI``'s performance try
``CLUSTER_JACOBI`` and ``CLUSTER_TRIDIAGONAL`` in that
order. They require that you have ``SuiteSparse``
installed. Both of these preconditioners use a clustering
algorithm. Use ``SINGLE_LINKAGE`` before ``CANONICAL_VIEWS``.
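For example, translating this advice into code for a large bundle
adjustment problem might look like the following sketch; the right
choices ultimately depend on your problem and on which libraries
Ceres was built with:

.. code-block:: c++

   ceres::Solver::Options options;
   options.linear_solver_type = ceres::ITERATIVE_SCHUR;
   options.preconditioner_type = ceres::SCHUR_JACOBI;
   // For small to medium sized problems, forming the Schur complement
   // explicitly can be a substantial win.
   options.use_explicit_schur_complement = true;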
#. Use :func:`Solver::Summary::FullReport` to diagnose performance problems.
When diagnosing Ceres performance issues - runtime and convergence,
the first place to start is by looking at the output of
``Solver::Summary::FullReport``. Here is an example
.. code-block:: bash
./bin/bundle_adjuster --input ../data/problem-16-22106-pre.txt
iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time
0 4.185660e+06 0.00e+00 2.16e+07 0.00e+00 0.00e+00 1.00e+04 0 7.50e-02 3.58e-01
1 1.980525e+05 3.99e+06 5.34e+06 2.40e+03 9.60e-01 3.00e+04 1 1.84e-01 5.42e-01
2 5.086543e+04 1.47e+05 2.11e+06 1.01e+03 8.22e-01 4.09e+04 1 1.53e-01 6.95e-01
3 1.859667e+04 3.23e+04 2.87e+05 2.64e+02 9.85e-01 1.23e+05 1 1.71e-01 8.66e-01
4 1.803857e+04 5.58e+02 2.69e+04 8.66e+01 9.93e-01 3.69e+05 1 1.61e-01 1.03e+00
5 1.803391e+04 4.66e+00 3.11e+02 1.02e+01 1.00e+00 1.11e+06 1 1.49e-01 1.18e+00
Ceres Solver v1.11.0 Solve Report
----------------------------------
Original Reduced
Parameter blocks 22122 22122
Parameters 66462 66462
Residual blocks 83718 83718
Residual 167436 167436
Minimizer TRUST_REGION
Sparse linear algebra library SUITE_SPARSE
Trust region strategy LEVENBERG_MARQUARDT
Given Used
Linear solver SPARSE_SCHUR SPARSE_SCHUR
Threads 1 1
Linear solver threads 1 1
Linear solver ordering AUTOMATIC 22106, 16
Cost:
Initial 4.185660e+06
Final 1.803391e+04
Change 4.167626e+06
Minimizer iterations 5
Successful steps 5
Unsuccessful steps 0
Time (in seconds):
Preprocessor 0.283
Residual evaluation 0.061
Jacobian evaluation 0.361
Linear solver 0.382
Minimizer 0.895
Postprocessor 0.002
Total 1.220
Termination: NO_CONVERGENCE (Maximum number of iterations reached.)
Let us focus on run-time performance. The relevant lines to look at
are
.. code-block:: bash
Time (in seconds):
Preprocessor 0.283
Residual evaluation 0.061
Jacobian evaluation 0.361
Linear solver 0.382
Minimizer 0.895
Postprocessor 0.002
Total 1.220
This tells us that of the total 1.2 seconds, about 0.4 seconds was
spent in the linear solver and most of the rest was spent in
preprocessing and Jacobian evaluation.
The preprocessing seems particularly expensive. Looking back at the
report, we observe
.. code-block:: bash
Linear solver ordering AUTOMATIC 22106, 16
This indicates that we are using automatic ordering for the
``SPARSE_SCHUR`` solver, which can be expensive at times. A
straightforward way to deal with this is to supply the ordering
manually. For ``bundle_adjuster`` this can be done by passing the flag
``-ordering=user``. Doing so and looking at the timing block of the
full report gives us
.. code-block:: bash
Time (in seconds):
Preprocessor 0.051
Residual evaluation 0.053
Jacobian evaluation 0.344
Linear solver 0.372
Minimizer 0.854
Postprocessor 0.002
Total 0.935
The preprocessor time has gone down by more than 5.5x!
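In your own code, the equivalent of ``-ordering=user`` is to populate
``Solver::Options::linear_solver_ordering`` yourself. A sketch for a
bundle adjustment problem, assuming ``points`` and ``cameras`` are
``std::vector<double*>`` containing your parameter blocks:

.. code-block:: c++

   ceres::ParameterBlockOrdering* ordering =
       new ceres::ParameterBlockOrdering;
   // Eliminate the points (group 0) before the cameras (group 1).
   for (double* point : points) {
     ordering->AddElementToGroup(point, 0);
   }
   for (double* camera : cameras) {
     ordering->AddElementToGroup(camera, 1);
   }
   options.linear_solver_ordering.reset(ordering);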
Further Reading
===============
For a short but informative introduction to the subject we recommend
the booklet by [Madsen]_ . For a general introduction to non-linear
optimization we recommend [NocedalWright]_. [Bjorck]_ remains the
seminal reference on least squares problems. [TrefethenBau]_ is
our favorite text on introductory numerical linear algebra. [Triggs]_
provides a thorough coverage of the bundle adjustment problem.

@@ -0,0 +1,86 @@
.. _chapter-features:

========
Features
========
* **Code Quality** - Ceres Solver has been used in production at
Google for more than four years now. It is clean, extensively tested
and well documented code that is actively developed and supported.
* **Modeling API** - It is rarely the case that one starts with the
exact and complete formulation of the problem that one is trying to
solve. Ceres's modeling API has been designed so that the user can
easily build and modify the objective function, one term at a
time. And to do so without worrying about how the solver is going to
deal with the resulting changes in the sparsity/structure of the
underlying problem.
- **Derivatives** Supplying derivatives is perhaps the most tedious
and error prone part of using an optimization library. Ceres
ships with `automatic`_ and `numeric`_ differentiation. So you
never have to compute derivatives by hand (unless you really want
to). Not only this, Ceres allows you to mix automatic, numeric and
analytical derivatives in any combination that you want.
- **Robust Loss Functions** Most non-linear least squares problems
involve data. If there is data, there will be outliers. Ceres
allows the user to *shape* their residuals using a
:class:`LossFunction` to reduce the influence of outliers.
- **Local Parameterization** In many cases, some parameters lie on a
manifold other than Euclidean space, e.g., rotation matrices. In
such cases, the user can specify the geometry of the local tangent
space by specifying a :class:`LocalParameterization` object.
* **Solver Choice** Depending on the size, sparsity structure, time &
memory budgets, and solution quality requirements, different
optimization algorithms will suit different needs. To this end,
Ceres Solver comes with a variety of optimization algorithms:
- **Trust Region Solvers** - Ceres supports Levenberg-Marquardt,
Powell's Dogleg, and Subspace dogleg methods. The key
computational cost in all of these methods is the solution of a
linear system. To this end Ceres ships with a variety of linear
solvers - dense QR and dense Cholesky factorization (using
`Eigen`_ or `LAPACK`_) for dense problems, sparse Cholesky
factorization (`SuiteSparse`_, `CXSparse`_ or `Eigen`_) for large
sparse problems, and custom Schur-complement-based dense, sparse, and
iterative linear solvers for `bundle adjustment`_ problems.
- **Line Search Solvers** - When the problem size is so large that
storing and factoring the Jacobian is not feasible or a low
accuracy solution is required cheaply, Ceres offers a number of
line search based algorithms. This includes a number of variants
of Non-linear Conjugate Gradients, BFGS and LBFGS.
* **Speed** - Ceres Solver has been extensively optimized, with C++
templating, hand written linear algebra routines and OpenMP based
multithreading of the Jacobian evaluation and the linear solvers.
* **Solution Quality** Ceres is the `best performing`_ solver on the NIST
problem set used by Mondragon and Borchers for benchmarking
non-linear least squares solvers.
* **Covariance estimation** - Evaluate the sensitivity/uncertainty of
the solution by evaluating all or part of the covariance
matrix. Ceres is one of the few solvers that allows you to do
this analysis at scale.
* **Community** Since its release as open source software, Ceres
has developed an active developer community that contributes new
features, bug fixes and support.
* **Portability** - Runs on *Linux*, *Windows*, *Mac OS X*, *Android*
and *iOS*.
* **BSD Licensed** The BSD license offers the flexibility to ship your
application, whether commercial or open source.
.. _best performing: https://groups.google.com/forum/#!topic/ceres-solver/UcicgMPgbXw
.. _bundle adjustment: http://en.wikipedia.org/wiki/Bundle_adjustment
.. _SuiteSparse: http://www.cise.ufl.edu/research/sparse/SuiteSparse/
.. _Eigen: http://eigen.tuxfamily.org/
.. _LAPACK: http://www.netlib.org/lapack/
.. _CXSparse: https://www.cise.ufl.edu/research/sparse/CXSparse/
.. _automatic: http://en.wikipedia.org/wiki/Automatic_differentiation
.. _numeric: http://en.wikipedia.org/wiki/Numerical_differentiation

@@ -0,0 +1,494 @@
.. highlight:: c++
.. default-domain:: cpp
.. _chapter-gradient_problem_solver:
==================================
General Unconstrained Minimization
==================================
Modeling
========
:class:`FirstOrderFunction`
---------------------------
.. class:: FirstOrderFunction
Instances of :class:`FirstOrderFunction` implement the evaluation of
a function and its gradient.
.. code-block:: c++
class FirstOrderFunction {
public:
virtual ~FirstOrderFunction() {}
virtual bool Evaluate(const double* const parameters,
double* cost,
double* gradient) const = 0;
virtual int NumParameters() const = 0;
};
.. function:: bool FirstOrderFunction::Evaluate(const double* const parameters, double* cost, double* gradient) const
Evaluate the cost/value of the function. If ``gradient`` is not
``NULL`` then evaluate the gradient too. If evaluation is
successful, return ``true``; else return ``false``.
``cost`` is guaranteed to never be ``NULL``; ``gradient`` can be ``NULL``.
.. function:: int FirstOrderFunction::NumParameters() const
Number of parameters in the domain of the function.
:class:`GradientProblem`
------------------------
.. class:: GradientProblem
.. code-block:: c++
class GradientProblem {
public:
explicit GradientProblem(FirstOrderFunction* function);
GradientProblem(FirstOrderFunction* function,
LocalParameterization* parameterization);
int NumParameters() const;
int NumLocalParameters() const;
bool Evaluate(const double* parameters, double* cost, double* gradient) const;
bool Plus(const double* x, const double* delta, double* x_plus_delta) const;
};
Instances of :class:`GradientProblem` represent general non-linear
optimization problems that must be solved using just the value of the
objective function and its gradient. Unlike the :class:`Problem`
class, which can only be used to model non-linear least squares
problems, instances of :class:`GradientProblem` are not restricted in
the form of the objective function.
Structurally :class:`GradientProblem` is a composition of a
:class:`FirstOrderFunction` and optionally a
:class:`LocalParameterization`.
The :class:`FirstOrderFunction` is responsible for evaluating the cost
and gradient of the objective function.
The :class:`LocalParameterization` is responsible for going back and
forth between the ambient space and the local tangent space. When a
:class:`LocalParameterization` is not provided, then the tangent space
is assumed to coincide with the ambient Euclidean space that the
gradient vector lives in.
The constructor takes ownership of the :class:`FirstOrderFunction` and
:class:`LocalParameterization` objects passed to it.
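For example, a problem over unit quaternions could be constructed as
follows, where ``QuaternionCost`` is a hypothetical
:class:`FirstOrderFunction` with four ambient parameters:

.. code-block:: c++

   ceres::GradientProblem problem(
       new QuaternionCost(),
       new ceres::QuaternionParameterization());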
.. function:: void Solve(const GradientProblemSolver::Options& options, const GradientProblem& problem, double* parameters, GradientProblemSolver::Summary* summary)
Solve the given :class:`GradientProblem` using the values in
``parameters`` as the initial guess of the solution.
Solving
=======
:class:`GradientProblemSolver::Options`
---------------------------------------
.. class:: GradientProblemSolver::Options
:class:`GradientProblemSolver::Options` controls the overall
behavior of the solver. We list the various settings and their
default values below.
.. function:: bool GradientProblemSolver::Options::IsValid(string* error) const
Validate the values in the options struct and return true on
success. If there is a problem, the method returns false with
``error`` containing a textual description of the cause.
.. member:: LineSearchDirectionType GradientProblemSolver::Options::line_search_direction_type
Default: ``LBFGS``
Choices are ``STEEPEST_DESCENT``, ``NONLINEAR_CONJUGATE_GRADIENT``,
``BFGS`` and ``LBFGS``.
.. member:: LineSearchType GradientProblemSolver::Options::line_search_type
Default: ``WOLFE``
Choices are ``ARMIJO`` and ``WOLFE`` (strong Wolfe conditions).
Note that in order for the assumptions underlying the ``BFGS`` and
``LBFGS`` line search direction algorithms to be guaranteed to be
satisfied, the ``WOLFE`` line search should be used.
.. member:: NonlinearConjugateGradientType GradientProblemSolver::Options::nonlinear_conjugate_gradient_type
Default: ``FLETCHER_REEVES``
Choices are ``FLETCHER_REEVES``, ``POLAK_RIBIERE`` and
``HESTENES_STIEFEL``.
.. member:: int GradientProblemSolver::Options::max_lbfgs_rank
Default: 20
The L-BFGS Hessian approximation is a low rank approximation to the
inverse of the Hessian matrix. The rank of the approximation
determines (linearly) the space and time complexity of using the
approximation. The higher the rank, the better the quality of the
approximation. The increase in quality is however bounded for a
number of reasons.
1. The method only uses secant information and not actual
derivatives.
2. The Hessian approximation is constrained to be positive
definite.
So increasing this rank to a large number will cost time and space
complexity without the corresponding increase in solution
quality. There are no hard and fast rules for choosing the maximum
rank. The best choice usually requires some problem specific
experimentation.
.. member:: bool GradientProblemSolver::Options::use_approximate_eigenvalue_bfgs_scaling
Default: ``false``
As part of the ``BFGS`` update step / ``LBFGS`` right-multiply
step, the initial inverse Hessian approximation is taken to be the
Identity. However, [Oren]_ showed that using instead :math:`I *
\gamma`, where :math:`\gamma` is a scalar chosen to approximate an
eigenvalue of the true inverse Hessian can result in improved
convergence in a wide variety of cases. Setting
``use_approximate_eigenvalue_bfgs_scaling`` to true enables this
scaling in ``BFGS`` (before first iteration) and ``LBFGS`` (at each
iteration).
Precisely, approximate eigenvalue scaling equates to
.. math:: \gamma = \frac{y_k' s_k}{y_k' y_k}
With:
.. math:: y_k = \nabla f_{k+1} - \nabla f_k
.. math:: s_k = x_{k+1} - x_k
Where :math:`f()` is the line search objective and :math:`x` the
vector of parameter values [NocedalWright]_.
It is important to note that approximate eigenvalue scaling does
**not** *always* improve convergence, and that it can in fact
*significantly* degrade performance for certain classes of problem,
which is why it is disabled by default. In particular it can
degrade performance when the sensitivity of the problem to different
parameters varies significantly, as in this case a single scalar
factor fails to capture this variation and detrimentally downscales
parts of the Jacobian approximation which correspond to
low-sensitivity parameters. It can also reduce the robustness of the
solution to errors in the Jacobians.
.. member:: LineSearchInterpolationType GradientProblemSolver::Options::line_search_interpolation_type
Default: ``CUBIC``
Degree of the polynomial used to approximate the objective
function. Valid values are ``BISECTION``, ``QUADRATIC`` and
``CUBIC``.
.. member:: double GradientProblemSolver::Options::min_line_search_step_size
The line search terminates if:
.. math:: \|\Delta x_k\|_\infty < \text{min_line_search_step_size}
where :math:`\|\cdot\|_\infty` refers to the max norm, and
:math:`\Delta x_k` is the step change in the parameter values at
the :math:`k`-th iteration.
.. member:: double GradientProblemSolver::Options::line_search_sufficient_function_decrease
Default: ``1e-4``
Solving the line search problem exactly is computationally
prohibitive. Fortunately, line search based optimization algorithms
can still guarantee convergence if instead of an exact solution,
the line search algorithm returns a solution which decreases the
value of the objective function sufficiently. More precisely, we
are looking for a step size s.t.
.. math:: f(\text{step_size}) \le f(0) + \text{sufficient_decrease} * [f'(0) * \text{step_size}]
This condition is known as the Armijo condition.
.. member:: double GradientProblemSolver::Options::max_line_search_step_contraction
Default: ``1e-3``
In each iteration of the line search,
.. math:: \text{new_step_size} \geq \text{max_line_search_step_contraction} * \text{step_size}
Note that by definition, for contraction:
.. math:: 0 < \text{max_step_contraction} < \text{min_step_contraction} < 1
.. member:: double GradientProblemSolver::Options::min_line_search_step_contraction
Default: ``0.6``
In each iteration of the line search,
.. math:: \text{new_step_size} \leq \text{min_line_search_step_contraction} * \text{step_size}
Note that by definition, for contraction:
.. math:: 0 < \text{max_step_contraction} < \text{min_step_contraction} < 1
.. member:: int GradientProblemSolver::Options::max_num_line_search_step_size_iterations
Default: ``20``
Maximum number of trial step size iterations during each line
search, if a step size satisfying the search conditions cannot be
found within this number of trials, the line search will stop.
As this is an 'artificial' constraint (one imposed by the user, not
the underlying math), if ``WOLFE`` line search is being used, *and*
points satisfying the Armijo sufficient (function) decrease
condition have been found during the current search (in :math:`\leq`
``max_num_line_search_step_size_iterations``), then the step size
with the lowest function value which satisfies the Armijo condition
will be returned as the new valid step, even though it does *not*
satisfy the strong Wolfe conditions. This behaviour protects
against early termination of the optimizer at a sub-optimal point.
.. member:: int GradientProblemSolver::Options::max_num_line_search_direction_restarts
Default: ``5``
Maximum number of restarts of the line search direction algorithm
before terminating the optimization. Restarts of the line search
direction algorithm occur when the current algorithm fails to
produce a new descent direction. This typically indicates a
numerical failure, or a breakdown in the validity of the
approximations used.
.. member:: double GradientProblemSolver::Options::line_search_sufficient_curvature_decrease
Default: ``0.9``
The strong Wolfe conditions consist of the Armijo sufficient
decrease condition, and an additional requirement that the
step size be chosen s.t. the *magnitude* ('strong' Wolfe
conditions) of the gradient along the search direction
decreases sufficiently. Precisely, this second condition
is that we seek a step size s.t.
.. math:: \|f'(\text{step_size})\| \leq \text{sufficient_curvature_decrease} * \|f'(0)\|
Where :math:`f()` is the line search objective and :math:`f'()` is the derivative
of :math:`f` with respect to the step size: :math:`\frac{d f}{d~\text{step size}}`.
.. member:: double GradientProblemSolver::Options::max_line_search_step_expansion
Default: ``10.0``
During the bracketing phase of a Wolfe line search, the step size
is increased until either a point satisfying the Wolfe conditions
is found, or an upper bound for a bracket containing a point
satisfying the conditions is found. Precisely, at each iteration
of the expansion:
.. math:: \text{new_step_size} \leq \text{max_step_expansion} * \text{step_size}
By definition for expansion
.. math:: \text{max_step_expansion} > 1.0
.. member:: int GradientProblemSolver::Options::max_num_iterations
Default: ``50``
Maximum number of iterations for which the solver should run.
.. member:: double GradientProblemSolver::Options::max_solver_time_in_seconds
Default: ``1e6``
Maximum amount of time for which the solver should run.
.. member:: double GradientProblemSolver::Options::function_tolerance
Default: ``1e-6``
Solver terminates if
.. math:: \frac{|\Delta \text{cost}|}{\text{cost}} \leq \text{function_tolerance}
where, :math:`\Delta \text{cost}` is the change in objective
function value (up or down) in the current iteration of the line search.
.. member:: double GradientProblemSolver::Options::gradient_tolerance
Default: ``1e-10``
Solver terminates if
.. math:: \|x - \Pi \boxplus(x, -g(x))\|_\infty \leq \text{gradient_tolerance}
where :math:`\|\cdot\|_\infty` refers to the max norm, :math:`\Pi`
is projection onto the bounds constraints and :math:`\boxplus` is
Plus operation for the overall local parameterization associated
with the parameter vector.
.. member:: double GradientProblemSolver::Options::parameter_tolerance
Default: ``1e-8``
Solver terminates if
.. math:: \|\Delta x\| \leq (\|x\| + \text{parameter_tolerance}) * \text{parameter_tolerance}
where :math:`\Delta x` is the step computed by the linear solver in
the current iteration of the line search.
.. member:: LoggingType GradientProblemSolver::Options::logging_type
Default: ``PER_MINIMIZER_ITERATION``
.. member:: bool GradientProblemSolver::Options::minimizer_progress_to_stdout
Default: ``false``
By default the :class:`Minimizer` progress is logged to ``STDERR``
depending on the ``vlog`` level. If this flag is set to true, and
:member:`GradientProblemSolver::Options::logging_type` is not
``SILENT``, the logging output is sent to ``STDOUT``.
The progress display looks like
.. code-block:: bash
0: f: 2.317806e+05 d: 0.00e+00 g: 3.19e-01 h: 0.00e+00 s: 0.00e+00 e: 0 it: 2.98e-02 tt: 8.50e-02
1: f: 2.312019e+05 d: 5.79e+02 g: 3.18e-01 h: 2.41e+01 s: 1.00e+00 e: 1 it: 4.54e-02 tt: 1.31e-01
2: f: 2.300462e+05 d: 1.16e+03 g: 3.17e-01 h: 4.90e+01 s: 2.54e-03 e: 1 it: 4.96e-02 tt: 1.81e-01
Here
#. ``f`` is the value of the objective function.
#. ``d`` is the change in the value of the objective function if
the step computed in this iteration is accepted.
#. ``g`` is the max norm of the gradient.
#. ``h`` is the change in the parameter vector.
#. ``s`` is the optimal step length computed by the line search.
#. ``e`` is the number of function evaluations used to compute the optimal step length.
#. ``it`` is the time taken by the current iteration.
#. ``tt`` is the total time taken by the minimizer.
.. member:: vector<IterationCallback> GradientProblemSolver::Options::callbacks
Callbacks that are executed at the end of each iteration of the
:class:`Minimizer`. They are executed in the order that they are
specified in this vector. See the documentation for
:class:`IterationCallback` for more details.
The solver does NOT take ownership of these pointers.
:class:`GradientProblemSolver::Summary`
---------------------------------------
.. class:: GradientProblemSolver::Summary
Summary of the various stages of the solver after termination.
.. function:: string GradientProblemSolver::Summary::BriefReport() const
A brief one line description of the state of the solver after
termination.
.. function:: string GradientProblemSolver::Summary::FullReport() const
A full multiline description of the state of the solver after
termination.
.. function:: bool GradientProblemSolver::Summary::IsSolutionUsable() const
Whether the solution returned by the optimization algorithm can be
relied on to be numerically sane. This will be the case if
`GradientProblemSolver::Summary::termination_type` is set to `CONVERGENCE`,
`USER_SUCCESS` or `NO_CONVERGENCE`, i.e., either the solver
converged by meeting one of the convergence tolerances or because
the user indicated that it had converged or it ran to the maximum
number of iterations or time.
.. member:: TerminationType GradientProblemSolver::Summary::termination_type
The cause of the minimizer terminating.
.. member:: string GradientProblemSolver::Summary::message
Reason why the solver terminated.
.. member:: double GradientProblemSolver::Summary::initial_cost
Cost of the problem (value of the objective function) before the
optimization.
.. member:: double GradientProblemSolver::Summary::final_cost
Cost of the problem (value of the objective function) after the
optimization.
.. member:: vector<IterationSummary> GradientProblemSolver::Summary::iterations
:class:`IterationSummary` for each minimizer iteration in order.
.. member:: double GradientProblemSolver::Summary::total_time_in_seconds
Time (in seconds) spent in the solver.
.. member:: double GradientProblemSolver::Summary::cost_evaluation_time_in_seconds
Time (in seconds) spent evaluating the cost.
.. member:: double GradientProblemSolver::Summary::gradient_evaluation_time_in_seconds
Time (in seconds) spent evaluating the gradient vector.
.. member:: int GradientProblemSolver::Summary::num_parameters
Number of parameters in the problem.
.. member:: int GradientProblemSolver::Summary::num_local_parameters
Dimension of the tangent space of the problem. This is different
from :member:`GradientProblemSolver::Summary::num_parameters` if a
:class:`LocalParameterization` object is used.
.. member:: LineSearchDirectionType GradientProblemSolver::Summary::line_search_direction_type
Type of line search direction used.
.. member:: LineSearchType GradientProblemSolver::Summary::line_search_type
Type of the line search algorithm used.
.. member:: LineSearchInterpolationType GradientProblemSolver::Summary::line_search_interpolation_type
When performing line search, the degree of the polynomial used to
approximate the objective function.
.. member:: NonlinearConjugateGradientType GradientProblemSolver::Summary::nonlinear_conjugate_gradient_type
If the line search direction is `NONLINEAR_CONJUGATE_GRADIENT`,
then this indicates the particular variant of non-linear conjugate
gradient used.
.. member:: int GradientProblemSolver::Summary::max_lbfgs_rank
If the type of the line search direction is `LBFGS`, then this
indicates the rank of the Hessian approximation.

@@ -0,0 +1,138 @@
.. highlight:: c++
.. default-domain:: cpp
.. _chapter-gradient_tutorial:
==================================
General Unconstrained Minimization
==================================
While much of Ceres Solver is devoted to solving non-linear least
squares problems, internally it contains a solver that can solve
general unconstrained optimization problems using just their objective
function value and gradients. The ``GradientProblem`` and
``GradientProblemSolver`` objects give the user access to this solver.
So without much further ado, let us look at how one goes about using
them.
Rosenbrock's Function
=====================
We consider the minimization of the famous `Rosenbrock's function
<http://en.wikipedia.org/wiki/Rosenbrock_function>`_ [#f1]_.
We begin by defining an instance of the ``FirstOrderFunction``
interface. This is the object that is responsible for computing the
objective function value and the gradient (if required). This is the
analog of the :class:`CostFunction` when defining non-linear least
squares problems in Ceres.
.. code-block:: c++
class Rosenbrock : public ceres::FirstOrderFunction {
public:
virtual bool Evaluate(const double* parameters,
double* cost,
double* gradient) const {
const double x = parameters[0];
const double y = parameters[1];
cost[0] = (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
if (gradient != NULL) {
gradient[0] = -2.0 * (1.0 - x) - 200.0 * (y - x * x) * 2.0 * x;
gradient[1] = 200.0 * (y - x * x);
}
return true;
}
virtual int NumParameters() const { return 2; }
};
Minimizing it then is a straightforward matter of constructing a
:class:`GradientProblem` object and calling :func:`Solve` on it.
.. code-block:: c++
double parameters[2] = {-1.2, 1.0};
ceres::GradientProblem problem(new Rosenbrock());
ceres::GradientProblemSolver::Options options;
options.minimizer_progress_to_stdout = true;
ceres::GradientProblemSolver::Summary summary;
ceres::Solve(options, problem, parameters, &summary);
std::cout << summary.FullReport() << "\n";
Executing this code solves the problem using the limited memory
`BFGS
<http://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm>`_
algorithm, producing the following output.
.. code-block:: bash
0: f: 2.420000e+01 d: 0.00e+00 g: 2.16e+02 h: 0.00e+00 s: 0.00e+00 e: 0 it: 2.00e-05 tt: 2.00e-05
1: f: 4.280493e+00 d: 1.99e+01 g: 1.52e+01 h: 2.01e-01 s: 8.62e-04 e: 2 it: 7.32e-05 tt: 2.19e-04
2: f: 3.571154e+00 d: 7.09e-01 g: 1.35e+01 h: 3.78e-01 s: 1.34e-01 e: 3 it: 2.50e-05 tt: 2.68e-04
3: f: 3.440869e+00 d: 1.30e-01 g: 1.73e+01 h: 1.36e-01 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 2.92e-04
4: f: 3.213597e+00 d: 2.27e-01 g: 1.55e+01 h: 1.06e-01 s: 4.59e-01 e: 1 it: 2.86e-06 tt: 3.14e-04
5: f: 2.839723e+00 d: 3.74e-01 g: 1.05e+01 h: 1.34e-01 s: 5.24e-01 e: 1 it: 2.86e-06 tt: 3.36e-04
6: f: 2.448490e+00 d: 3.91e-01 g: 1.29e+01 h: 3.04e-01 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 3.58e-04
7: f: 1.943019e+00 d: 5.05e-01 g: 4.00e+00 h: 8.81e-02 s: 7.43e-01 e: 1 it: 4.05e-06 tt: 3.79e-04
8: f: 1.731469e+00 d: 2.12e-01 g: 7.36e+00 h: 1.71e-01 s: 4.60e-01 e: 2 it: 9.06e-06 tt: 4.06e-04
9: f: 1.503267e+00 d: 2.28e-01 g: 6.47e+00 h: 8.66e-02 s: 1.00e+00 e: 1 it: 3.81e-06 tt: 4.33e-04
10: f: 1.228331e+00 d: 2.75e-01 g: 2.00e+00 h: 7.70e-02 s: 7.90e-01 e: 1 it: 3.81e-06 tt: 4.54e-04
11: f: 1.016523e+00 d: 2.12e-01 g: 5.15e+00 h: 1.39e-01 s: 3.76e-01 e: 2 it: 1.00e-05 tt: 4.82e-04
12: f: 9.145773e-01 d: 1.02e-01 g: 6.74e+00 h: 7.98e-02 s: 1.00e+00 e: 1 it: 3.10e-06 tt: 5.03e-04
13: f: 7.508302e-01 d: 1.64e-01 g: 3.88e+00 h: 5.76e-02 s: 4.93e-01 e: 1 it: 2.86e-06 tt: 5.25e-04
14: f: 5.832378e-01 d: 1.68e-01 g: 5.56e+00 h: 1.42e-01 s: 1.00e+00 e: 1 it: 3.81e-06 tt: 5.47e-04
15: f: 3.969581e-01 d: 1.86e-01 g: 1.64e+00 h: 1.17e-01 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 5.68e-04
16: f: 3.171557e-01 d: 7.98e-02 g: 3.84e+00 h: 1.18e-01 s: 3.97e-01 e: 2 it: 9.06e-06 tt: 5.94e-04
17: f: 2.641257e-01 d: 5.30e-02 g: 3.27e+00 h: 6.14e-02 s: 1.00e+00 e: 1 it: 3.10e-06 tt: 6.16e-04
18: f: 1.909730e-01 d: 7.32e-02 g: 5.29e-01 h: 8.55e-02 s: 6.82e-01 e: 1 it: 4.05e-06 tt: 6.42e-04
19: f: 1.472012e-01 d: 4.38e-02 g: 3.11e+00 h: 1.20e-01 s: 3.47e-01 e: 2 it: 1.00e-05 tt: 6.69e-04
20: f: 1.093558e-01 d: 3.78e-02 g: 2.97e+00 h: 8.43e-02 s: 1.00e+00 e: 1 it: 3.81e-06 tt: 6.91e-04
21: f: 6.710346e-02 d: 4.23e-02 g: 1.42e+00 h: 9.64e-02 s: 8.85e-01 e: 1 it: 3.81e-06 tt: 7.12e-04
22: f: 3.993377e-02 d: 2.72e-02 g: 2.30e+00 h: 1.29e-01 s: 4.63e-01 e: 2 it: 9.06e-06 tt: 7.39e-04
23: f: 2.911794e-02 d: 1.08e-02 g: 2.55e+00 h: 6.55e-02 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 7.62e-04
24: f: 1.457683e-02 d: 1.45e-02 g: 2.77e-01 h: 6.37e-02 s: 6.14e-01 e: 1 it: 3.81e-06 tt: 7.84e-04
25: f: 8.577515e-03 d: 6.00e-03 g: 2.86e+00 h: 1.40e-01 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 8.05e-04
26: f: 3.486574e-03 d: 5.09e-03 g: 1.76e-01 h: 1.23e-02 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 8.27e-04
27: f: 1.257570e-03 d: 2.23e-03 g: 1.39e-01 h: 5.08e-02 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 8.48e-04
28: f: 2.783568e-04 d: 9.79e-04 g: 6.20e-01 h: 6.47e-02 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 8.69e-04
29: f: 2.533399e-05 d: 2.53e-04 g: 1.68e-02 h: 1.98e-03 s: 1.00e+00 e: 1 it: 3.81e-06 tt: 8.91e-04
30: f: 7.591572e-07 d: 2.46e-05 g: 5.40e-03 h: 9.27e-03 s: 1.00e+00 e: 1 it: 3.81e-06 tt: 9.12e-04
31: f: 1.902460e-09 d: 7.57e-07 g: 1.62e-03 h: 1.89e-03 s: 1.00e+00 e: 1 it: 2.86e-06 tt: 9.33e-04
32: f: 1.003030e-12 d: 1.90e-09 g: 3.50e-05 h: 3.52e-05 s: 1.00e+00 e: 1 it: 3.10e-06 tt: 9.54e-04
33: f: 4.835994e-17 d: 1.00e-12 g: 1.05e-07 h: 1.13e-06 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 9.81e-04
34: f: 1.885250e-22 d: 4.84e-17 g: 2.69e-10 h: 1.45e-08 s: 1.00e+00 e: 1 it: 4.05e-06 tt: 1.00e-03
Solver Summary (v 1.11.0-lapack-suitesparse-cxsparse-no_openmp)
Parameters 2
Line search direction LBFGS (20)
Line search type CUBIC WOLFE
Cost:
Initial 2.420000e+01
Final 1.885250e-22
Change 2.420000e+01
Minimizer iterations 35
Time (in seconds):
Cost evaluation 0.000
Gradient evaluation 0.000
Total 0.003
Termination: CONVERGENCE (Gradient tolerance reached. Gradient max norm: 9.032775e-13 <= 1.000000e-10)
.. rubric:: Footnotes
.. [#f1] `examples/rosenbrock.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/rosenbrock.cc>`_

@@ -0,0 +1,79 @@
.. Ceres Solver documentation master file, created by
sphinx-quickstart on Sat Jan 19 00:07:33 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
============
Ceres Solver
============
.. toctree::
:maxdepth: 3
:hidden:
features
building
tutorial
api
faqs
users
contributing
version_history
bibliography
license
Ceres Solver [#f1]_ is an open source C++ library for modeling and
solving large, complicated optimization problems. It is a feature
rich, mature and performant library which has been used in production
at Google since 2010. Ceres Solver can solve two kinds of problems.
1. `Non-linear Least Squares`_ problems with bounds constraints.
2. General unconstrained optimization problems.
.. _Non-linear Least Squares: http://en.wikipedia.org/wiki/Non-linear_least_squares
Getting started
===============
* Download the `latest stable release
<http://ceres-solver.org/ceres-solver-1.11.0.tar.gz>`_ or clone the
Git repository for the latest development version.
.. code-block:: bash
git clone https://ceres-solver.googlesource.com/ceres-solver
* Read the :ref:`chapter-tutorial` and browse the :ref:`chapter-api`.
* Join the `mailing list
<https://groups.google.com/forum/?fromgroups#!forum/ceres-solver>`_
and ask questions.
* File bugs, feature requests on `GitHub
<https://github.com/ceres-solver/ceres-solver/issues>`_.
Cite Us
=======
If you use Ceres Solver for a publication, please cite it as::
@misc{ceres-solver,
author = "Sameer Agarwal and Keir Mierle and Others",
title = "Ceres Solver",
howpublished = "\url{http://ceres-solver.org}",
}
.. rubric:: Footnotes
.. [#f1] While there is some debate as to who invented the method of
Least Squares [Stigler]_, there is no questioning the fact
that it was `Carl Friedrich Gauss
<http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Gauss.html>`_
who brought it to the attention of the world. Using just 22
observations of the newly discovered asteroid `Ceres
<http://en.wikipedia.org/wiki/Ceres_(dwarf_planet)>`_, Gauss
used the method of least squares to correctly predict when
and where the asteroid would emerge from behind the Sun
[TenenbaumDirector]_. We named our solver after Ceres to
celebrate this seminal event in the history of astronomy,
statistics and optimization.


@@ -0,0 +1,30 @@
=======
License
=======
Ceres Solver is licensed under the New BSD license, whose terms are as follows.
Copyright 2015 Google Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of Google Inc., nor the names of its contributors may
be used to endorse or promote products derived from this software without
specific prior written permission.
This software is provided by the copyright holders and contributors "AS IS" and
any express or implied warranties, including, but not limited to, the implied
warranties of merchantability and fitness for a particular purpose are
disclaimed. In no event shall Google Inc. be liable for any direct, indirect,
incidental, special, exemplary, or consequential damages (including, but not
limited to, procurement of substitute goods or services; loss of use, data, or
profits; or business interruption) however caused and on any theory of
liability, whether in contract, strict liability, or tort (including negligence
or otherwise) arising in any way out of the use of this software, even if
advised of the possibility of such damage.

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -0,0 +1,823 @@
.. highlight:: c++
.. default-domain:: cpp
.. _chapter-nnls_tutorial:
========================
Non-linear Least Squares
========================
Introduction
============
Ceres can solve bounds constrained robustified non-linear least
squares problems of the form
.. math::
:label: ceresproblem
\min_{\mathbf{x}} &\quad \frac{1}{2}\sum_{i} \rho_i\left(\left\|f_i\left(x_{i_1}, ... ,x_{i_k}\right)\right\|^2\right) \\
\text{s.t.} &\quad l_j \le x_j \le u_j
Problems of this form come up in a broad range of areas across
science and engineering - from `fitting curves`_ in statistics, to
constructing `3D models from photographs`_ in computer vision.
.. _fitting curves: http://en.wikipedia.org/wiki/Nonlinear_regression
.. _3D models from photographs: http://en.wikipedia.org/wiki/Bundle_adjustment
In this chapter we will learn how to solve :eq:`ceresproblem` using
Ceres Solver. Full working code for all the examples described in this
chapter and more can be found in the `examples
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/>`_
directory.
The expression
:math:`\rho_i\left(\left\|f_i\left(x_{i_1},...,x_{i_k}\right)\right\|^2\right)`
is known as a ``ResidualBlock``, where :math:`f_i(\cdot)` is a
:class:`CostFunction` that depends on the parameter blocks
:math:`\left[x_{i_1},... , x_{i_k}\right]`. In most optimization
problems small groups of scalars occur together. For example the three
components of a translation vector and the four components of the
quaternion that define the pose of a camera. We refer to such a group
of small scalars as a ``ParameterBlock``. Of course a
``ParameterBlock`` can just be a single parameter. :math:`l_j` and
:math:`u_j` are bounds on the parameter block :math:`x_j`.
:math:`\rho_i` is a :class:`LossFunction`. A :class:`LossFunction` is
a scalar function that is used to reduce the influence of outliers on
the solution of non-linear least squares problems.
As a special case, when :math:`\rho_i(x) = x`, i.e., the identity
function, and :math:`l_j = -\infty` and :math:`u_j = \infty` we get
the more familiar `non-linear least squares problem
<http://en.wikipedia.org/wiki/Non-linear_least_squares>`_.
.. math:: \frac{1}{2}\sum_{i} \left\|f_i\left(x_{i_1}, ... ,x_{i_k}\right)\right\|^2.
:label: ceresproblem2
.. _section-hello-world:
Hello World!
============
To get started, consider the problem of finding the minimum of the
function
.. math:: \frac{1}{2}(10 -x)^2.
This is a trivial problem, whose minimum is located at :math:`x = 10`,
but it is a good place to start to illustrate the basics of solving a
problem with Ceres [#f1]_.
The first step is to write a functor that will evaluate the
function :math:`f(x) = 10 - x`:
.. code-block:: c++
struct CostFunctor {
template <typename T>
bool operator()(const T* const x, T* residual) const {
residual[0] = T(10.0) - x[0];
return true;
}
};
The important thing to note here is that ``operator()`` is a templated
method, which assumes that all its inputs and outputs are of some type
``T``. The use of templating here allows Ceres to call
``CostFunctor::operator<T>()``, with ``T=double`` when just the value
of the residual is needed, and with a special type ``T=Jet`` when the
Jacobians are needed. In :ref:`section-derivatives` we will discuss the
various ways of supplying derivatives to Ceres in more detail.
Once we have a way of computing the residual function, it is now time
to construct a non-linear least squares problem using it and have
Ceres solve it.
.. code-block:: c++
int main(int argc, char** argv) {
google::InitGoogleLogging(argv[0]);
// The variable to solve for with its initial value.
double initial_x = 5.0;
double x = initial_x;
// Build the problem.
Problem problem;
// Set up the only cost function (also known as residual). This uses
// auto-differentiation to obtain the derivative (jacobian).
CostFunction* cost_function =
new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
problem.AddResidualBlock(cost_function, NULL, &x);
// Run the solver!
Solver::Options options;
options.linear_solver_type = ceres::DENSE_QR;
options.minimizer_progress_to_stdout = true;
Solver::Summary summary;
Solve(options, &problem, &summary);
std::cout << summary.BriefReport() << "\n";
std::cout << "x : " << initial_x
<< " -> " << x << "\n";
return 0;
}
:class:`AutoDiffCostFunction` takes a ``CostFunctor`` as input,
automatically differentiates it and gives it a :class:`CostFunction`
interface.
Compiling and running `examples/helloworld.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld.cc>`_
gives us
.. code-block:: bash
iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time
0 4.512500e+01 0.00e+00 9.50e+00 0.00e+00 0.00e+00 1.00e+04 0 5.33e-04 3.46e-03
1 4.511598e-07 4.51e+01 9.50e-04 9.50e+00 1.00e+00 3.00e+04 1 5.00e-04 4.05e-03
2 5.012552e-16 4.51e-07 3.17e-08 9.50e-04 1.00e+00 9.00e+04 1 1.60e-05 4.09e-03
Ceres Solver Report: Iterations: 2, Initial cost: 4.512500e+01, Final cost: 5.012552e-16, Termination: CONVERGENCE
x : 5 -> 10
Starting from :math:`x=5`, the solver in two iterations goes to 10
[#f2]_. The careful reader will note that this is a linear problem and
one linear solve should be enough to get the optimal value. The
default configuration of the solver is aimed at non-linear problems,
and for reasons of simplicity we did not change it in this example. It
is indeed possible to obtain the solution to this problem using Ceres
in one iteration. Also note that the solver did get very close to the
optimal function value of 0 in the very first iteration. We will
discuss these issues in greater detail when we talk about convergence
and parameter settings for Ceres.
.. rubric:: Footnotes
.. [#f1] `examples/helloworld.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld.cc>`_
.. [#f2] Actually the solver ran for three iterations, and it was
by looking at the value returned by the linear solver in the third
iteration that it observed that the update to the parameter block was too
small and declared convergence. Ceres only prints out the display at
the end of an iteration, and terminates as soon as it detects
convergence, which is why you only see two iterations here and not
three.
.. _section-derivatives:
Derivatives
===========
Ceres Solver, like most optimization packages, depends on being able to
evaluate the value and the derivatives of each term in the objective
function at arbitrary parameter values. Doing so correctly and
efficiently is essential to getting good results. Ceres Solver
provides a number of ways of doing so. You have already seen one of
them in action --
Automatic Differentiation in `examples/helloworld.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld.cc>`_
We now consider the other two possibilities: analytic and numeric
derivatives.
Numeric Derivatives
-------------------
In some cases, it's not possible to define a templated cost functor,
for example when the evaluation of the residual involves a call to a
library function that you do not have control over. In such a
situation, numerical differentiation can be used. The user defines a
functor which computes the residual value and constructs a
:class:`NumericDiffCostFunction` using it, e.g., for :math:`f(x) = 10 - x`
the corresponding functor would be
.. code-block:: c++
struct NumericDiffCostFunctor {
bool operator()(const double* const x, double* residual) const {
residual[0] = 10.0 - x[0];
return true;
}
};
Which is added to the :class:`Problem` as:
.. code-block:: c++
CostFunction* cost_function =
new NumericDiffCostFunction<NumericDiffCostFunctor, ceres::CENTRAL, 1, 1, 1>(
new NumericDiffCostFunctor);
problem.AddResidualBlock(cost_function, NULL, &x);
Notice the parallel with the version using automatic differentiation:
.. code-block:: c++
CostFunction* cost_function =
new AutoDiffCostFunction<CostFunctor, 1, 1>(new CostFunctor);
problem.AddResidualBlock(cost_function, NULL, &x);
The construction looks almost identical to the one used for automatic
differentiation, except for an extra template parameter that indicates
the kind of finite differencing scheme to be used for computing the
numerical derivatives [#f3]_. For more details see the documentation
for :class:`NumericDiffCostFunction`.
**Generally speaking we recommend automatic differentiation instead of
numeric differentiation. The use of C++ templates makes automatic
differentiation efficient, whereas numeric differentiation is
expensive, prone to numeric errors, and leads to slower convergence.**
Analytic Derivatives
--------------------
In some cases, using automatic differentiation is not possible. For
example, it may be the case that it is more efficient to compute the
derivatives in closed form instead of relying on the chain rule used
by the automatic differentiation code.
In such cases, it is possible to supply your own residual and jacobian
computation code. To do this, define a subclass of
:class:`CostFunction` or :class:`SizedCostFunction` if you know the
sizes of the parameters and residuals at compile time. Here for
example is ``QuadraticCostFunction``, which implements :math:`f(x) = 10 -
x`.
.. code-block:: c++
class QuadraticCostFunction : public ceres::SizedCostFunction<1, 1> {
public:
virtual ~QuadraticCostFunction() {}
virtual bool Evaluate(double const* const* parameters,
double* residuals,
double** jacobians) const {
const double x = parameters[0][0];
residuals[0] = 10 - x;
// Compute the Jacobian if asked for.
if (jacobians != NULL && jacobians[0] != NULL) {
jacobians[0][0] = -1;
}
return true;
}
};
``QuadraticCostFunction::Evaluate`` is provided with an input array of
``parameters``, an output array ``residuals`` for residuals and an
output array ``jacobians`` for Jacobians. The ``jacobians`` array is
optional; ``Evaluate`` is expected to check whether it is non-null,
and if so, fill it with the values of the derivative of the residual
function. In this case, since the residual function is linear, the
Jacobian is constant [#f4]_.
As can be seen from the above code fragments, implementing
:class:`CostFunction` objects is a bit tedious. We recommend that
unless you have a good reason to manage the jacobian computation
yourself, you use :class:`AutoDiffCostFunction` or
:class:`NumericDiffCostFunction` to construct your residual blocks.
More About Derivatives
----------------------
Computing derivatives is by far the most complicated part of using
Ceres, and depending on the circumstance the user may need more
sophisticated ways of computing derivatives. This section just
scratches the surface of how derivatives can be supplied to
Ceres. Once you are comfortable with using
:class:`NumericDiffCostFunction` and :class:`AutoDiffCostFunction` we
recommend taking a look at :class:`DynamicAutoDiffCostFunction`,
:class:`CostFunctionToFunctor`, :class:`NumericDiffFunctor` and
:class:`ConditionedCostFunction` for more advanced ways of
constructing and computing cost functions.
.. rubric:: Footnotes
.. [#f3] `examples/helloworld_numeric_diff.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld_numeric_diff.cc>`_.
.. [#f4] `examples/helloworld_analytic_diff.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/helloworld_analytic_diff.cc>`_.
.. _section-powell:
Powell's Function
=================
Consider now a slightly more complicated example -- the minimization
of Powell's function. Let :math:`x = \left[x_1, x_2, x_3, x_4 \right]`
and
.. math::
\begin{align}
f_1(x) &= x_1 + 10x_2 \\
f_2(x) &= \sqrt{5} (x_3 - x_4)\\
f_3(x) &= (x_2 - 2x_3)^2\\
f_4(x) &= \sqrt{10} (x_1 - x_4)^2\\
F(x) &= \left[f_1(x),\ f_2(x),\ f_3(x),\ f_4(x) \right]
\end{align}
:math:`F(x)` is a function of four parameters, has four residuals
and we wish to find :math:`x` such that :math:`\frac{1}{2}\|F(x)\|^2`
is minimized.
Again, the first step is to define functors that evaluate the terms
in the objective function. Here is the code for evaluating
:math:`f_4(x_1, x_4)`:
.. code-block:: c++
struct F4 {
template <typename T>
bool operator()(const T* const x1, const T* const x4, T* residual) const {
residual[0] = T(sqrt(10.0)) * (x1[0] - x4[0]) * (x1[0] - x4[0]);
return true;
}
};
Similarly, we can define classes ``F1``, ``F2`` and ``F3`` to evaluate
:math:`f_1(x_1, x_2)`, :math:`f_2(x_3, x_4)` and :math:`f_3(x_2, x_3)`
respectively. Using these, the problem can be constructed as follows:
.. code-block:: c++
double x1 = 3.0; double x2 = -1.0; double x3 = 0.0; double x4 = 1.0;
Problem problem;
// Add residual terms to the problem using the autodiff
// wrapper to get the derivatives automatically.
problem.AddResidualBlock(
new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), NULL, &x1, &x2);
problem.AddResidualBlock(
new AutoDiffCostFunction<F2, 1, 1, 1>(new F2), NULL, &x3, &x4);
problem.AddResidualBlock(
new AutoDiffCostFunction<F3, 1, 1, 1>(new F3), NULL, &x2, &x3);
problem.AddResidualBlock(
new AutoDiffCostFunction<F4, 1, 1, 1>(new F4), NULL, &x1, &x4);
Note that each ``ResidualBlock`` only depends on the two parameters
that the corresponding residual object depends on and not on all four
parameters. Compiling and running `examples/powell.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/powell.cc>`_
gives us:
.. code-block:: bash
Initial x1 = 3, x2 = -1, x3 = 0, x4 = 1
iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time
0 1.075000e+02 0.00e+00 1.55e+02 0.00e+00 0.00e+00 1.00e+04 0 4.95e-04 2.30e-03
1 5.036190e+00 1.02e+02 2.00e+01 2.16e+00 9.53e-01 3.00e+04 1 4.39e-05 2.40e-03
2 3.148168e-01 4.72e+00 2.50e+00 6.23e-01 9.37e-01 9.00e+04 1 9.06e-06 2.43e-03
3 1.967760e-02 2.95e-01 3.13e-01 3.08e-01 9.37e-01 2.70e+05 1 8.11e-06 2.45e-03
4 1.229900e-03 1.84e-02 3.91e-02 1.54e-01 9.37e-01 8.10e+05 1 6.91e-06 2.48e-03
5 7.687123e-05 1.15e-03 4.89e-03 7.69e-02 9.37e-01 2.43e+06 1 7.87e-06 2.50e-03
6 4.804625e-06 7.21e-05 6.11e-04 3.85e-02 9.37e-01 7.29e+06 1 5.96e-06 2.52e-03
7 3.003028e-07 4.50e-06 7.64e-05 1.92e-02 9.37e-01 2.19e+07 1 5.96e-06 2.55e-03
8 1.877006e-08 2.82e-07 9.54e-06 9.62e-03 9.37e-01 6.56e+07 1 5.96e-06 2.57e-03
9 1.173223e-09 1.76e-08 1.19e-06 4.81e-03 9.37e-01 1.97e+08 1 7.87e-06 2.60e-03
10 7.333425e-11 1.10e-09 1.49e-07 2.40e-03 9.37e-01 5.90e+08 1 6.20e-06 2.63e-03
11 4.584044e-12 6.88e-11 1.86e-08 1.20e-03 9.37e-01 1.77e+09 1 6.91e-06 2.65e-03
12 2.865573e-13 4.30e-12 2.33e-09 6.02e-04 9.37e-01 5.31e+09 1 5.96e-06 2.67e-03
13 1.791438e-14 2.69e-13 2.91e-10 3.01e-04 9.37e-01 1.59e+10 1 7.15e-06 2.69e-03
Ceres Solver v1.11.0 Solve Report
----------------------------------
Original Reduced
Parameter blocks 4 4
Parameters 4 4
Residual blocks 4 4
Residual 4 4
Minimizer TRUST_REGION
Dense linear algebra library EIGEN
Trust region strategy LEVENBERG_MARQUARDT
Given Used
Linear solver DENSE_QR DENSE_QR
Threads 1 1
Linear solver threads 1 1
Cost:
Initial 1.075000e+02
Final 1.791438e-14
Change 1.075000e+02
Minimizer iterations 14
Successful steps 14
Unsuccessful steps 0
Time (in seconds):
Preprocessor 0.002
Residual evaluation 0.000
Jacobian evaluation 0.000
Linear solver 0.000
Minimizer 0.001
Postprocessor 0.000
Total 0.005
Termination: CONVERGENCE (Gradient tolerance reached. Gradient max norm: 3.642190e-11 <= 1.000000e-10)
Final x1 = 0.000292189, x2 = -2.92189e-05, x3 = 4.79511e-05, x4 = 4.79511e-05
It is easy to see that the optimal solution to this problem is at
:math:`x_1=0, x_2=0, x_3=0, x_4=0` with an objective function value of
:math:`0`. In 14 iterations, Ceres finds a solution with an objective
function value of about :math:`1.8\times 10^{-14}`.
.. rubric:: Footnotes
.. [#f5] `examples/powell.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/powell.cc>`_.
.. _section-fitting:
Curve Fitting
=============
The examples we have seen until now are simple optimization problems
with no data. The original purpose of least squares and non-linear
least squares analysis was fitting curves to data. It is only
appropriate that we now consider an example of such a problem
[#f6]_. The data was generated by sampling the curve :math:`y =
e^{0.3x + 0.1}` and adding Gaussian noise with standard deviation
:math:`\sigma = 0.2`. Let us fit this data with the curve
.. math:: y = e^{mx + c}.
We begin by defining a templated object to evaluate the
residual. There will be a residual for each observation.
.. code-block:: c++
struct ExponentialResidual {
ExponentialResidual(double x, double y)
: x_(x), y_(y) {}
template <typename T>
bool operator()(const T* const m, const T* const c, T* residual) const {
    // residual = y - e^(m * x + c)
    residual[0] = T(y_) - exp(m[0] * T(x_) + c[0]);
return true;
}
private:
// Observations for a sample.
const double x_;
const double y_;
};
Assuming the observations are in a :math:`2n` sized array called
``data``, constructing the problem is a simple matter of creating a
:class:`CostFunction` for every observation.
.. code-block:: c++
double m = 0.0;
double c = 0.0;
Problem problem;
for (int i = 0; i < kNumObservations; ++i) {
CostFunction* cost_function =
new AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
new ExponentialResidual(data[2 * i], data[2 * i + 1]));
problem.AddResidualBlock(cost_function, NULL, &m, &c);
}
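As before, a solver has to be configured and invoked. A sketch,
assuming the defaults are acceptable (``DENSE_QR`` is well suited to a
problem with only two parameters; the option values are illustrative):

.. code-block:: c++

   ceres::Solver::Options options;
   options.max_num_iterations = 25;
   options.linear_solver_type = ceres::DENSE_QR;
   options.minimizer_progress_to_stdout = true;
   ceres::Solver::Summary summary;
   ceres::Solve(options, &problem, &summary);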
Compiling and running `examples/curve_fitting.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/curve_fitting.cc>`_
gives us:
.. code-block:: bash
iter cost cost_change |gradient| |step| tr_ratio tr_radius ls_iter iter_time total_time
0 1.211734e+02 0.00e+00 3.61e+02 0.00e+00 0.00e+00 1.00e+04 0 5.34e-04 2.56e-03
1 1.211734e+02 -2.21e+03 0.00e+00 7.52e-01 -1.87e+01 5.00e+03 1 4.29e-05 3.25e-03
2 1.211734e+02 -2.21e+03 0.00e+00 7.51e-01 -1.86e+01 1.25e+03 1 1.10e-05 3.28e-03
3 1.211734e+02 -2.19e+03 0.00e+00 7.48e-01 -1.85e+01 1.56e+02 1 1.41e-05 3.31e-03
4 1.211734e+02 -2.02e+03 0.00e+00 7.22e-01 -1.70e+01 9.77e+00 1 1.00e-05 3.34e-03
5 1.211734e+02 -7.34e+02 0.00e+00 5.78e-01 -6.32e+00 3.05e-01 1 1.00e-05 3.36e-03
6 3.306595e+01 8.81e+01 4.10e+02 3.18e-01 1.37e+00 9.16e-01 1 2.79e-05 3.41e-03
7 6.426770e+00 2.66e+01 1.81e+02 1.29e-01 1.10e+00 2.75e+00 1 2.10e-05 3.45e-03
8 3.344546e+00 3.08e+00 5.51e+01 3.05e-02 1.03e+00 8.24e+00 1 2.10e-05 3.48e-03
9 1.987485e+00 1.36e+00 2.33e+01 8.87e-02 9.94e-01 2.47e+01 1 2.10e-05 3.52e-03
10 1.211585e+00 7.76e-01 8.22e+00 1.05e-01 9.89e-01 7.42e+01 1 2.10e-05 3.56e-03
11 1.063265e+00 1.48e-01 1.44e+00 6.06e-02 9.97e-01 2.22e+02 1 2.60e-05 3.61e-03
12 1.056795e+00 6.47e-03 1.18e-01 1.47e-02 1.00e+00 6.67e+02 1 2.10e-05 3.64e-03
13 1.056751e+00 4.39e-05 3.79e-03 1.28e-03 1.00e+00 2.00e+03 1 2.10e-05 3.68e-03
Ceres Solver Report: Iterations: 13, Initial cost: 1.211734e+02, Final cost: 1.056751e+00, Termination: CONVERGENCE
Initial m: 0 c: 0
Final m: 0.291861 c: 0.131439
Starting from parameter values :math:`m = 0, c = 0` with an initial
objective function value of :math:`121.173`, Ceres finds a solution
:math:`m = 0.291861, c = 0.131439` with an objective function value of
:math:`1.05675`. These values are a bit different from the parameters
of the original model :math:`m = 0.3, c = 0.1`, but this is
expected. When reconstructing a curve from noisy data, we expect to
see such deviations. Indeed, evaluating the objective function at
:math:`m = 0.3, c = 0.1` gives a worse value of
:math:`1.082425`. The figure below illustrates the fit.
.. figure:: least_squares_fit.png
:figwidth: 500px
:height: 400px
:align: center
Least squares curve fitting.
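To reproduce the :math:`1.082425` value quoted above, one can hold the
parameters at the ground-truth values and query the problem for its
cost. A sketch, reusing ``m``, ``c`` and ``problem`` from the
construction snippet (:func:`Problem::Evaluate` is the relevant API):

.. code-block:: c++

   m = 0.3;
   c = 0.1;
   double cost = 0.0;
   // Evaluate the objective at the current parameter values; the
   // residuals, gradient and Jacobian outputs are passed as NULL
   // since only the cost is needed.
   problem.Evaluate(ceres::Problem::EvaluateOptions(), &cost, NULL, NULL, NULL);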
.. rubric:: Footnotes
.. [#f6] `examples/curve_fitting.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/curve_fitting.cc>`_
Robust Curve Fitting
====================
Now suppose the data we are given contains some outliers, i.e., points
that do not obey the noise model. If we were to use the code above to
fit such data, we would get a fit like the one shown below. Notice how
the fitted curve deviates from the ground truth.
.. figure:: non_robust_least_squares_fit.png
:figwidth: 500px
:height: 400px
:align: center
To deal with outliers, a standard technique is to use a
:class:`LossFunction`. Loss functions reduce the influence of
residual blocks with high residuals, usually the ones corresponding to
outliers. To associate a loss function with a residual block, we change
.. code-block:: c++
problem.AddResidualBlock(cost_function, NULL, &m, &c);
to
.. code-block:: c++
problem.AddResidualBlock(cost_function, new CauchyLoss(0.5), &m, &c);
:class:`CauchyLoss` is one of the loss functions that ships with Ceres
Solver. The argument :math:`0.5` specifies the scale of the loss
function. As a result, we get the fit below [#f7]_. Notice how the
fitted curve moves back closer to the ground truth curve.
.. figure:: robust_least_squares_fit.png
:figwidth: 500px
:height: 400px
:align: center
Using :class:`LossFunction` to reduce the effect of outliers on a
least squares fit.
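``CauchyLoss`` is not the only choice; Ceres also ships ``HuberLoss``,
``SoftLOneLoss``, ``ArctanLoss`` and others, each taking a scale
parameter, so experimenting with different losses is a one-line
change. A sketch:

.. code-block:: c++

   // Huber loss with scale 1.0 instead of the Cauchy loss above.
   problem.AddResidualBlock(cost_function, new ceres::HuberLoss(1.0), &m, &c);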
.. rubric:: Footnotes
.. [#f7] `examples/robust_curve_fitting.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/robust_curve_fitting.cc>`_
Bundle Adjustment
=================
One of the main reasons for writing Ceres was our need to solve large
scale bundle adjustment problems [HartleyZisserman]_, [Triggs]_.
Given a set of measured image feature locations and correspondences,
the goal of bundle adjustment is to find 3D point positions and camera
parameters that minimize the reprojection error. This optimization
problem is usually formulated as a non-linear least squares problem,
where the error is the squared :math:`L_2` norm of the difference between
the observed feature location and the projection of the corresponding
3D point on the image plane of the camera. Ceres has extensive support
for solving bundle adjustment problems.
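Schematically, writing :math:`x_{ij}` for the observed projection of
point :math:`j` in image :math:`i`, :math:`X_j` for the 3D point,
:math:`C_i` for the parameters of camera :math:`i` and :math:`P` for
the projection function (notation introduced here only for
illustration), the objective is

.. math:: \frac{1}{2}\sum_{(i,j)} \left\| x_{ij} - P(C_i, X_j) \right\|^2,

where the sum ranges over the observed (camera, point) pairs.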
Let us solve a problem from the `BAL
<http://grail.cs.washington.edu/projects/bal/>`_ dataset [#f8]_.
The first step as usual is to define a templated functor that computes
the reprojection error/residual. The structure of the functor is
similar to the ``ExponentialResidual``, in that there is an
instance of this object responsible for each image observation.
Each residual in a BAL problem depends on a three dimensional point
and a nine parameter camera. The nine parameters defining the camera
are: three for rotation as a Rodrigues' axis-angle vector, three
for translation, one for focal length and two for radial distortion.
The details of this camera model can be found on the `Bundler homepage
<http://phototour.cs.washington.edu/bundler/>`_ and the `BAL homepage
<http://grail.cs.washington.edu/projects/bal/>`_.
.. code-block:: c++
struct SnavelyReprojectionError {
SnavelyReprojectionError(double observed_x, double observed_y)
: observed_x(observed_x), observed_y(observed_y) {}
template <typename T>
bool operator()(const T* const camera,
const T* const point,
T* residuals) const {
// camera[0,1,2] are the angle-axis rotation.
T p[3];
ceres::AngleAxisRotatePoint(camera, point, p);
// camera[3,4,5] are the translation.
p[0] += camera[3]; p[1] += camera[4]; p[2] += camera[5];
// Compute the center of distortion. The sign change comes from
// the camera model that Noah Snavely's Bundler assumes, whereby
// the camera coordinate system has a negative z axis.
T xp = - p[0] / p[2];
T yp = - p[1] / p[2];
// Apply second and fourth order radial distortion.
const T& l1 = camera[7];
const T& l2 = camera[8];
T r2 = xp*xp + yp*yp;
T distortion = T(1.0) + r2 * (l1 + l2 * r2);
// Compute final projected point position.
const T& focal = camera[6];
T predicted_x = focal * distortion * xp;
T predicted_y = focal * distortion * yp;
// The error is the difference between the predicted and observed position.
residuals[0] = predicted_x - T(observed_x);
residuals[1] = predicted_y - T(observed_y);
return true;
}
// Factory to hide the construction of the CostFunction object from
// the client code.
static ceres::CostFunction* Create(const double observed_x,
const double observed_y) {
return (new ceres::AutoDiffCostFunction<SnavelyReprojectionError, 2, 9, 3>(
new SnavelyReprojectionError(observed_x, observed_y)));
}
double observed_x;
double observed_y;
};
Note that unlike the examples before, this is a non-trivial function
and computing its analytic Jacobian is a bit of a pain. Automatic
differentiation makes life much simpler. The function
:func:`AngleAxisRotatePoint` and other functions for manipulating
rotations can be found in ``include/ceres/rotation.h``.
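As a quick illustration of the rotation API (the values below are
chosen just for this sketch), :func:`AngleAxisRotatePoint` rotates a
point by an angle-axis vector whose norm is the rotation angle in
radians:

.. code-block:: c++

   #include <cmath>
   #include "ceres/rotation.h"

   double angle_axis[3] = {0.0, 0.0, M_PI / 2};  // 90 degrees about z.
   double point[3] = {1.0, 0.0, 0.0};
   double rotated[3];
   ceres::AngleAxisRotatePoint(angle_axis, point, rotated);
   // rotated is now approximately {0, 1, 0}.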
Given this functor, the bundle adjustment problem can be constructed
as follows:
.. code-block:: c++
ceres::Problem problem;
for (int i = 0; i < bal_problem.num_observations(); ++i) {
ceres::CostFunction* cost_function =
SnavelyReprojectionError::Create(
bal_problem.observations()[2 * i + 0],
bal_problem.observations()[2 * i + 1]);
problem.AddResidualBlock(cost_function,
NULL /* squared loss */,
bal_problem.mutable_camera_for_observation(i),
bal_problem.mutable_point_for_observation(i));
}
Notice that the problem construction for bundle adjustment is very
similar to the curve fitting example -- one term is added to the
objective function per observation.
Since this is a large sparse problem (well, large for ``DENSE_QR``
anyway), one way to solve it is to set
:member:`Solver::Options::linear_solver_type` to
``SPARSE_NORMAL_CHOLESKY`` and call :func:`Solve`. While this is a
reasonable thing to do, bundle adjustment problems have a special
sparsity structure that can be exploited to solve them much more
efficiently. Ceres provides three specialized solvers (collectively
known as Schur-based solvers) for this task. The example code uses the
simplest of them: ``DENSE_SCHUR``.
.. code-block:: c++
ceres::Solver::Options options;
options.linear_solver_type = ceres::DENSE_SCHUR;
options.minimizer_progress_to_stdout = true;
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);
std::cout << summary.FullReport() << "\n";
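``DENSE_SCHUR`` factorizes the reduced camera block densely, which is
fine at this scale; for larger problems the other two Schur variants,
``SPARSE_SCHUR`` and ``ITERATIVE_SCHUR``, are usually the better
choice (assuming a sparse linear algebra backend was enabled at build
time). The switch is a single option, e.g.:

.. code-block:: c++

   options.linear_solver_type = ceres::SPARSE_SCHUR;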
For a more sophisticated bundle adjustment example which demonstrates
the use of Ceres' more advanced features, including its various linear
solvers, robust loss functions and local parameterizations, see
`examples/bundle_adjuster.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/bundle_adjuster.cc>`_.
.. rubric:: Footnotes
.. [#f8] `examples/simple_bundle_adjuster.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/simple_bundle_adjuster.cc>`_
Other Examples
==============
Besides the examples in this chapter, the `examples
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/>`_
directory contains a number of other examples:
#. `bundle_adjuster.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/bundle_adjuster.cc>`_
shows how to use the various features of Ceres to solve bundle
adjustment problems.
#. `circle_fit.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/circle_fit.cc>`_
shows how to fit data to a circle.
#. `ellipse_approximation.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/ellipse_approximation.cc>`_
fits points randomly distributed on an ellipse with an approximate
line segment contour. This is done by jointly optimizing the
control points of the line segment contour along with the preimage
positions for the data points. The purpose of this example is to
show an example use case for ``Solver::Options::dynamic_sparsity``,
and how it can benefit problems which are numerically dense but
dynamically sparse.
#. `denoising.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/denoising.cc>`_
implements image denoising using the `Fields of Experts
<http://www.gris.informatik.tu-darmstadt.de/~sroth/research/foe/index.html>`_
model.
#. `nist.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/nist.cc>`_
implements and attempts to solve the `NIST
<http://www.itl.nist.gov/div898/strd/nls/nls_main.shtm>`_
non-linear regression problems.
#. `more_garbow_hillstrom.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/more_garbow_hillstrom.cc>`_
implements a subset of the test problems from the paper "Testing
Unconstrained Optimization Software", Jorge J. More, Burton S. Garbow
and Kenneth E. Hillstrom, ACM Transactions on Mathematical Software,
7(1), pp. 17-41, 1981, which were augmented with bounds and used for
testing bounds constrained optimization algorithms in "A Trust Region
Approach to Linearly Constrained Optimization", David M. Gay, Numerical
Analysis (Griffiths, D.F., ed.), pp. 72-105, Lecture Notes in
Mathematics 1066, Springer Verlag, 1984.
#. `libmv_bundle_adjuster.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/libmv_bundle_adjuster.cc>`_
is the bundle adjustment algorithm used by `Blender <http://www.blender.org>`_/libmv.
#. `libmv_homography.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/libmv_homography.cc>`_
This file demonstrates solving for a homography between two sets of
points and using a custom exit criterion by having a callback check
for image-space error.
#. `robot_pose_mle.cc
<https://ceres-solver.googlesource.com/ceres-solver/+/master/examples/robot_pose_mle.cc>`_
demonstrates how to use the ``DynamicAutoDiffCostFunction`` variant of
``CostFunction``, which is meant for cases where the number of
parameter blocks or their sizes are not known at compile time. The
example simulates a robot traversing a 1-dimensional hallway with
noisy odometry readings and noisy range readings of the end of the
hallway. By fusing the noisy odometry and sensor readings, it shows
how to compute the maximum likelihood estimate (MLE) of the robot's
pose at each timestep. A minimal sketch of the
``DynamicAutoDiffCostFunction`` pattern follows this list.
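As a rough sketch of that pattern (the functor name, its body and the
block sizes here are hypothetical, chosen only to show the mechanics):

.. code-block:: c++

   struct OdometryFunctor {
     template <typename T>
     bool operator()(T const* const* parameters, T* residuals) const {
       // parameters[i] is the i-th parameter block; a real functor
       // would compute residuals from all of the blocks here.
       residuals[0] = parameters[0][0];
       return true;
     }
   };

   // A stride of 4 is the library default for DynamicAutoDiffCostFunction.
   auto* cost_function =
       new ceres::DynamicAutoDiffCostFunction<OdometryFunctor, 4>(
           new OdometryFunctor);
   cost_function->AddParameterBlock(1);  // Declare each block's size...
   cost_function->SetNumResiduals(1);    // ...and the number of residuals.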

Binary file not shown.
After
Width: | Height: | Size: 54 KiB

Binary file not shown.
After
Width: | Height: | Size: 53 KiB

View File

@@ -0,0 +1,11 @@
.. _chapter-tutorial:
========
Tutorial
========
.. toctree::
:maxdepth: 3
nnls_tutorial
gradient_tutorial

View File

@@ -0,0 +1,67 @@
.. _chapter-users:
=====
Users
=====
* At `Google <http://www.google.com>`_, Ceres is used to:
* Estimate the pose of `Street View`_ cars, aircraft, and satellites.
* Build 3D models for `PhotoTours`_.
* Estimate satellite image sensor characteristics.
* Stitch `panoramas`_ on Android and iOS.
* Apply `Lens Blur`_ on Android.
* Solve `bundle adjustment`_ and `SLAM`_ problems in `Project
Tango`_.
* `Willow Garage`_ uses Ceres to solve `SLAM`_ problems.
* `Southwest Research Institute <http://www.swri.org/>`_ uses Ceres for
`calibrating robot-camera systems`_.
* `Blender <http://www.blender.org>`_ uses Ceres for `planar
tracking`_ and `bundle adjustment`_.
* `OpenMVG <http://imagine.enpc.fr/~moulonp/openMVG/>`_, an open source
multi-view geometry library, uses Ceres for `bundle adjustment`_.
* `Microsoft Research <http://research.microsoft.com/en-us/>`_ uses
Ceres for nonlinear optimization of objectives involving subdivision
surfaces under `skinned control meshes`_.
* `Matterport <http://www.matterport.com>`_ uses Ceres for global
alignment of 3D point clouds and for pose graph optimization.
* `Obvious Engineering <http://obviousengine.com/>`_ uses Ceres for
bundle adjustment for their 3D photography app `Seene
<http://seene.co/>`_.
* The `Autonomous Systems Lab <http://www.asl.ethz.ch/>`_ at ETH
Zurich uses Ceres for
* Camera and Camera/IMU Calibration.
* Large scale optimization of visual, inertial, GPS and
wheel-odometry data for long term autonomy.
* `OpenPTrack <http://openptrack.org/>`_ uses Ceres for camera
calibration.
* The `Intelligent Autonomous System Lab <http://robotics.dei.unipd.it/>`_
at University of Padova, Italy, uses Ceres for
* Camera/depth sensors network calibration.
* Depth sensor distortion map estimation.
* `Theia <http://cs.ucsb.edu/~cmsweeney/theia>`_ is an open source
Structure from Motion library that uses Ceres for `bundle adjustment`_
and camera pose estimation.
* The `Applied Research Laboratory <https://www.arl.psu.edu/>`_ at
Pennsylvania State University uses Ceres in their synthetic aperture
sonar beamforming engine, called ASASIN, for estimating platform
kinematics.
.. _bundle adjustment: http://en.wikipedia.org/wiki/Structure_from_motion
.. _Street View: http://youtu.be/z00ORu4bU-A
.. _PhotoTours: http://google-latlong.blogspot.com/2012/04/visit-global-landmarks-with-photo-tours.html
.. _panoramas: http://www.google.com/maps/about/contribute/photosphere/
.. _Project Tango: https://www.google.com/atap/projecttango/
.. _planar tracking: http://mango.blender.org/development/planar-tracking-preview/
.. _Willow Garage: https://www.willowgarage.com/blog/2013/08/09/enabling-robots-see-better-through-improved-camera-calibration
.. _Lens Blur: http://googleresearch.blogspot.com/2014/04/lens-blur-in-new-google-camera-app.html
.. _SLAM: http://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping
.. _calibrating robot-camera systems:
http://rosindustrial.org/news/2014/9/24/industrial-calibration-library-update-and-presentation
.. _skinned control meshes: http://research.microsoft.com/en-us/projects/handmodelingfrommonoculardepth/

File diff suppressed because it is too large.