updating README for release 0.2.0
jwdinius committed Jul 28, 2020
1 parent 1ee75d1 commit 589a6d7
Showing 7 changed files with 82 additions and 103 deletions.
1 change: 1 addition & 0 deletions .gitignore
build
docker/deps/Dockerfile
output.json
54 changes: 41 additions & 13 deletions README.md
## Non-Minimal Sample Consensus _`nmsac`_

This repo builds on the ideas presented in the paper [SDRSAC](https://arxiv.org/abs/1904.03483) from CVPR2019. Most of the framework comes from the original author's [matlab implementation](https://github.com/intellhave/SDRSAC), translated into C++ and using [armadillo](http://arma.sourceforge.net/) for working with matrices and vectors. At its core, SDRSAC is about employing a sample-and-consensus strategy, like that of [RANSAC](https://en.wikipedia.org/wiki/Random_sample_consensus). However, with non-minimal subsampling, higher-quality motion hypotheses are obtained much faster than those obtained from RANSAC.

The basic workflow to achieve non-minimal sample consensus between two point clouds, `src` and `tgt`, is:

```
Algorithm 1: NMSAC
In: src, tgt, config
Out: H, the homogeneous transformation that best maps src onto tgt; number of inliers; number of iterations
Initialize
loop:
    sample a set of config.N points from src (and mark the points that have been chosen)
    loop:
        sample a set of config.N points from tgt (and mark the points that have been chosen)
        identify correspondences between subsampled src and subsampled tgt point sets (Algorithm 2)
        identify the best-fit transformation that maps subsampled src points onto subsampled tgt points using the correspondences found (Algorithm 3a)
        (optional) perform iterative alignment of the original src and tgt point sets using the best-fit transformation as the starting point (Algorithm 3b)
        count inliers and update the best H if the count is higher than in all previous iterations
        check for convergence; exit both loops if converged
```
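
To make the control flow concrete, here is a compilable C++/armadillo sketch of the Algorithm 1 skeleton. It is *not* the actual `nmsac` API: Algorithms 2, 3a, and 3b are left as stubs (so `H` stays the identity), the two nested sampling loops are collapsed into one for brevity, and the clouds, names, and thresholds are illustrative only.

```cpp
// Compilable sketch of the Algorithm 1 skeleton -- NOT the actual nmsac API.
// The correspondence/transform steps (Algorithms 2/3a/3b) are stubs, so H
// stays the identity; only the sampling-and-scoring structure is shown.
#include <algorithm>
#include <armadillo>
#include <cstddef>

struct Config {
  std::size_t N = 16;          // points per subsample
  std::size_t max_iter = 100;  // sampling budget
  double eps = 0.01;           // inlier distance threshold
};

int main() {
  Config cfg;
  arma::arma_rng::set_seed_random();
  arma::mat src(3, 200, arma::fill::randn);           // toy 3xM source cloud
  arma::mat tgt = src + 0.001 * arma::randn(3, 200);  // toy 3xN target cloud

  arma::mat best_H = arma::eye(4, 4);
  std::size_t best_inliers = 0;

  for (std::size_t it = 0; it < cfg.max_iter; ++it) {
    // sample cfg.N distinct points (columns) from each cloud
    arma::mat src_sub = src.cols(arma::randperm(src.n_cols, cfg.N));
    arma::mat tgt_sub = tgt.cols(arma::randperm(tgt.n_cols, cfg.N));

    // Algorithm 2 (stub): correspondences between src_sub and tgt_sub
    // Algorithm 3a (stub): best-fit transform from those correspondences
    // Algorithm 3b (stub, optional): iterative refinement on the full clouds
    arma::mat H = arma::eye(4, 4);

    // score the hypothesis: transform src by H and count points that land
    // within cfg.eps of some target point
    arma::mat moved = H.submat(0, 0, 2, 2) * src +
                      arma::repmat(H.submat(0, 3, 2, 3), 1, src.n_cols);
    std::size_t inliers = 0;
    for (arma::uword j = 0; j < moved.n_cols; ++j) {
      double dmin = arma::datum::inf;
      for (arma::uword k = 0; k < tgt.n_cols; ++k) {
        dmin = std::min(dmin, arma::norm(moved.col(j) - tgt.col(k)));
      }
      if (dmin < cfg.eps) { ++inliers; }
    }
    if (inliers > best_inliers) { best_inliers = inliers; best_H = H; }
    if (best_inliers >= static_cast<std::size_t>(0.9 * src.n_cols)) { break; }
  }
  best_H.print("best H:");
  return 0;
}
```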

This project is built with the following goal in mind:

> Enable rapid experimentation and research by allowing users to easily integrate drop-in replacements for Algorithms 2, 3a, and 3b.
This project is composed of four subprojects:

* [`nmsac` (Algorithm 1)](./nmsac)
* [`correspondences` (Algorithm 2)](./correspondences)
* [`transforms` (Algorithms 3a and 3b)](./transforms)
* [`bindings`](./bindings) - *to call C++ algorithms from other languages/frameworks*

Within each subproject, you will find a separate `README` describing that subproject's particular details.

The code in each subproject follows a common standard and is well-commented. [Unit tests](./tests) are provided for key functionality but some things may remain unclear. If this is the case, please create an [issue](https://github.com/jwdinius/nmsac/issues).

For desired formatting, please see the script [linter.sh](scripts/linter.sh) and the [`cpplint` docs](https://github.com/cpplint/cpplint). I will eventually add Travis integration to check each PR, but until then I use the linter script.

## Quick Start

The recommended approach is to use [`docker`](https://docs.docker.com/install/linux/docker-ce/ubuntu/); however, I realize that not everyone is familiar with it. For those users, check out the `RUN` steps in this [file](docker/deps/Dockerfile.std) to properly configure your workspace, including all steps needed to build dependencies from source.

### With `docker` (recommended)
#### Setup images
##### Grab pre-built images from `dockerhub` (recommended)
```shell
docker pull jdinius/nmsac:{desired-version} # or jdinius/nmsac-nvidia, if the nvidia runtime is available
```

##### Build images
If you want to build the images yourself, you will first need to build the `jdinius/qap-dependencies` container. To do this, execute the following:

```shell
$ cd {repo-root-dir}/docker/deps
$ #ln -s Dockerfile.std Dockerfile ## NOTE: do this if you do not have nvidia-docker2 installed
$ ln -s Dockerfile.nvidia Dockerfile ## NOTE: do this if you do have nvidia-docker2 installed AND you want to use the nvidia runtime
$ ./build-docker.sh {--no-cache} ## this will take a while to build if you pass the `--no-cache` argument
```

Now that you have `qap-dependencies{-nvidia}` built locally, you can build the `nmsac{-nvidia}` image. This image is incremental, and basically just sets up a user environment for working with this repo. To build the `nmsac{-nvidia}` image, execute the following:

```shell
cd {repo-root-dir}/docker
$ ./build-docker.sh
```
6 changes: 6 additions & 0 deletions bindings/README.md
# `nmsac::bindings`
Build bindings to call `nmsac` functions from different languages. Currently supported languages:

* [`python3`](./python)

_Note: currently, only the `nmsac::main` function (and its type definition dependencies) has been wrapped._
103 changes: 13 additions & 90 deletions correspondences/README.md
# `nmsac::correspondences`
Implementation for Algorithm 2 from the [top-level README](../README.md). For a short discussion on correspondence identification, see this [blog post](https://jwdinius.github.io/blog/2019/point-match/).

This subproject is organized in the following way:

* [`common`](./common) - common utility code and type definitions (including the `CorrespondencesBase` definition, which is used to wrap all algorithm implementations for computing correspondences)
* [`qap`](./qap) - implements an optimization-based solution to the correspondence problem
* _insert new algorithm here! Submit a PR, if you dare!_

Each new algorithm implemented should follow the organization of the [qap](./qap) subdirectory (a minimal hypothetical skeleton is sketched after the list below):

* identify a short, descriptive name for the implementation, e.g. `{alg}`
* create `include/{alg}` and `src` directories to hold the implementation
* create a `CMakeLists.txt` file to generate the build environment for the algorithm implementation
* add appropriate unit tests in the folder rooted [here](../tests)
* modify the parent `CMakeLists.txt` files to make sure your implementation and tests get built
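
As a rough illustration of the pattern, a hypothetical skeleton for a new algorithm named `myalg` might look like the sketch below. The actual base class and its virtual interface live in [`common`](./common); the class name, method signature, and return type here are assumptions for illustration, not the real API.

```cpp
// Hypothetical file: correspondences/myalg/include/myalg/myalg.hpp
// The real base class to derive from is defined in correspondences/common;
// the names, signature, and return type below are assumptions.
#pragma once

#include <algorithm>
#include <armadillo>

namespace nmsac {
namespace correspondences {

class MyAlg /* : public CorrespondencesBase -- see common for the real base */ {
 public:
  // Given subsampled source and target clouds (3xN each), return index
  // pairs: row 0 holds source indices, row 1 the matched target indices.
  arma::umat compute(arma::mat const & src_sub, arma::mat const & tgt_sub) const {
    arma::umat pairs(2, std::min(src_sub.n_cols, tgt_sub.n_cols),
                     arma::fill::zeros);
    // ... matching logic goes here ...
    return pairs;
  }
};

}  // namespace correspondences
}  // namespace nmsac
```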

When in doubt, just follow the `qap` pattern!
7 changes: 7 additions & 0 deletions correspondences/qap/README.md
# `nmsac::correspondences::qap`
This subproject implements a convex relaxation of the binary optimization problem discussed in this [paper](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.140.910&rep=rep1&type=pdf), Section 5.4. For more details, check out some of these resources:

* Mathematical details: [Part 1](https://jwdinius.github.io/blog/2019/point-match/) and [Part 2](https://jwdinius.github.io/blog/2019/point-match-cont/)
* [Code design notes](https://jwdinius.github.io/blog/2019/point-match-sol/)

_Note: this algorithm is pretty slow, but is provided as a reference implementation for benchmarking and to demonstrate desired project structure._
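
Very roughly, the problem being relaxed has the following shape; the notation here is schematic, not the paper's exact formulation (see the blog posts above for the precise statement):

$$
\begin{aligned}
\min_{X} \quad & \sum_{i,j,k,l} w_{ijkl}\, x_{ij}\, x_{kl} && \text{(pairwise geometric-consistency cost)} \\
\text{s.t.} \quad & x_{ij} \in \{0, 1\}, \qquad \sum_{j} x_{ij} \le 1, \qquad \sum_{i} x_{ij} \le 1,
\end{aligned}
$$

where $$x_{ij} = 1$$ means source point $$i$$ is matched to target point $$j$$, and the constraints enforce that each point is matched at most once. The convex relaxation replaces $$x_{ij} \in \{0, 1\}$$ with $$x_{ij} \in [0, 1]$$; since the relaxed optimum need not be a valid binary matching, it is projected back onto the set of valid solutions afterward.
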
5 changes: 5 additions & 0 deletions nmsac/README.md
# `nmsac::nmsac`

Implementation for Algorithm 1 (i.e. the main executive process) from the [top-level README](../README.md). This subproject implements and extends the ideas presented in the paper [SDRSAC](https://arxiv.org/abs/1904.03483) from CVPR2019. Most of the framework comes from the original author's [matlab implementation](https://github.com/intellhave/SDRSAC), translated into C++ and using [armadillo](http://arma.sourceforge.net/) for working with matrices and vectors.

There is a nice, end-to-end test of the `nmsac::main` algorithm [here](../tests/nmsac/main_test.cpp).
9 changes: 9 additions & 0 deletions transforms/README.md
# `nmsac::transforms`

This subproject implements helper utilities for aligning point clouds after correspondences have been identified.

* [`common`](./common) - common utilities and definitions for the subproject
* [`icp` (Algorithm 3b)](./icp) - an implementation of the [Iterative Closest Point](https://en.wikipedia.org/wiki/Iterative_closest_point) algorithm that allows the user the flexibility to remove a configurable ratio of outliers
* [`svd` (Algorithm 3a)](./svd) - an implementation of [Kabsch's algorithm](https://en.wikipedia.org/wiki/Kabsch_algorithm) for finding the best rigid transformation between same-sized point sets with known correspondences

_See [top-level README](../README.md) for definitions of Algorithms 3a and 3b._
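
For reference, here is a minimal C++/armadillo sketch of the SVD step at the heart of Kabsch's algorithm, assuming equal-sized 3xN clouds whose columns are already in correspondence; the [`svd`](./svd) subproject is the authoritative implementation.

```cpp
// Minimal Kabsch sketch (not the svd subproject's actual code): find the
// rigid transform (R, t) that best maps src onto tgt in the least-squares
// sense, given 3xN clouds with column i of src matched to column i of tgt.
#include <armadillo>

void kabsch(arma::mat const & src, arma::mat const & tgt,
            arma::mat & R, arma::vec & t) {
  arma::vec src_c = arma::mean(src, 1);      // per-axis centroids
  arma::vec tgt_c = arma::mean(tgt, 1);
  arma::mat src_0 = src.each_col() - src_c;  // centered clouds
  arma::mat tgt_0 = tgt.each_col() - tgt_c;
  arma::mat C = tgt_0 * src_0.t();           // cross-covariance
  arma::mat U, V;
  arma::vec s;
  arma::svd(U, s, V, C);
  // the det() correction guards against returning a reflection
  arma::mat D = arma::eye(3, 3);
  D(2, 2) = arma::det(U * V.t());
  R = U * D * V.t();       // optimal rotation
  t = tgt_c - R * src_c;   // optimal translation
}
```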
