Organizes documentation and profiles. Includes dynamic reconfigure into RQt for convenience. (#53)

Signed-off-by: Agustin Alba Chicar <[email protected]>
agalbachicar authored Sep 20, 2024
1 parent 01f2ab5 commit ab249b3
Showing 3 changed files with 154 additions and 36 deletions.
117 changes: 90 additions & 27 deletions README.md
@@ -1,5 +1,3 @@
TODO: not setting the DATASET_PATH environment variable when composing the training profile prevents it from succeeding; we need to decide how to solve this issue.

# Fruit detection

# Requisites
@@ -16,7 +14,7 @@ We recommend reading this [article](https://docs.omniverse.nvidia.com/isaacsim/l

> **NOTE:** this project is disk-intensive; make sure you have tens of GB (~50 GB) of free disk space.
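
You can quickly confirm the available space on the partition holding this repository before building:

```bash
# Show free disk space for the current directory's filesystem
df -h .
```
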
## Contributing
## Pre-commit configuration - contributors only

This project uses pre-commit hooks for linting. To install them and make sure they run when committing:
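
The exact install commands are collapsed in this view; a minimal sketch, assuming the standard pre-commit workflow:

```bash
# Install the pre-commit tool and register this repo's git hooks
pip install pre-commit
pre-commit install
```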

@@ -31,7 +29,7 @@ If you want to run the linters but are not yet ready to commit, you can run:
pre-commit run --all-files
```

# Using the different docker components
# Documentation

## Architecture

@@ -45,17 +43,38 @@ The available profiles are:
- `training`: trains a fasterrcnn_resnet50_fpn model based on a synthetic dataset.
- `detection`: loads the detection stack.
- `visualization`: loads RQt to visualize the input and output image processing.
- `test_camera`: loads the usb_cam driver so that a connected webcam publishes images. Useful when the Olive Camera is not available.
- `webcam`: loads the usb_cam driver so that a connected webcam publishes images. Useful when the Olive Camera is not available.
- `simulation`: loads the NVidia Isaac Omniverse simulation.
- `dataset_gen`: generates a training dataset using NVidia Isaac Omniverse.
> TBD

Compound profiles are:

- `test_real_pipeline`: loads `test_camera`, `visualization`, and `detection`.
- `olive_pipeline`: loads `visualization` and `detection`; expects the Olive Camera to be connected.

```mermaid
graph TD
A[Olive Camera] --> B[Fruit Detection Node]
B[Fruit Detection Node] --> C[RQt Visualization]
A[Olive Camera] --> C[RQt Visualization]
```

- `webcam_pipeline`: loads `webcam`, `visualization`, and `detection`.

```mermaid
graph TD
A[Webcam] --> B[Fruit Detection Node]
B[Fruit Detection Node] --> C[RQt Visualization]
A[Webcam] --> C[RQt Visualization]
```

- `simulated_pipeline`: loads `simulation`, `visualization`, and `detection`.

> TBD
```mermaid
graph TD
A[NVidia Omniverse] --> B[Fruit Detection Node]
B[Fruit Detection Node] --> C[RQt Visualization]
A[NVidia Omniverse] --> C[RQt Visualization]
```

Testing profiles are:

@@ -64,17 +83,35 @@ Testing profiles are:

## Build the images

> TODO (#16): run `export DATASET_PATH=/tmp` and then override it after generating a dataset to bypass this issue.
To build all the docker images:

```bash
docker compose -f docker/docker-compose.yml --profile "*" build
```
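
You can also build the images for a single profile, for example the detection stack:

```bash
docker compose -f docker/docker-compose.yml --profile detection build
```
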
## Dataset generation

## Training
It generates a dataset of 300 annotated pictures in which many scene conditions, such as lighting and object pose, are randomized.

To train a model you need an NVidia Omniverse synthetic dataset. You first need to set the following environment variable:
To generate a new dataset:

```bash
docker compose -f docker/docker-compose.yml --profile dataset_gen up
```
export DATASET_PATH=PATH/TO/TRAINING/DATA

The following .gif shows pictures in which the ground plane color is randomized, yielding a better dataset from the simulation.

![Dataset gen](./doc/dataset_gen.gif)

Once it finishes (note that the scene no longer evolves), check the generated folder under `isaac_ws/datasets/YYYYMMDDHHMMSS_out_fruit_sdg`, where `YYYYMMDDHHMMSS` is the timestamp of the dataset creation.
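
For example, to list the generated datasets:

```bash
# Each generation run produces a timestamped folder
ls isaac_ws/datasets/
```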

## Training the model

To train a model you need an NVidia Omniverse synthetic dataset, built in the previous step. You first need to set the following environment variable:

```bash
export DATASET_PATH=$(pwd)/isaac_ws/datasets/YYYYMMDDHHMMSS_out_fruit_sdg
```

Then you can run the training using the training profile:
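
The exact command is collapsed in this view; following the pattern of the other profiles in this README, it is presumably:

```bash
# Assumed invocation, mirroring the other profiles; requires DATASET_PATH to be set
docker compose -f docker/docker-compose.yml --profile training up
```
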
@@ -95,45 +132,71 @@ This will evaluate every image in the `DATASET_PATH` and generate annotated imag

To run the system you need to define which profile(s) to run. You can stack profiles by adding them one after the other to get a custom bring-up of the system (e.g. `--profile detection --profile visualization`), as shown below.
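
For example, to bring up the detection stack together with RQt visualization:

```bash
docker compose -f docker/docker-compose.yml --profile detection --profile visualization up
```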

To load the real test (camera) system, you can:
### Running olive_pipeline

To load the system with the Olive Camera, the detection stack, and RQt visualization, do the following:

1. Connect the camera to the USB port of your computer.

2. Assuming you have already built a detection model, run the following command:

```bash
docker compose -f docker/docker-compose.yml --profile test_real_pipeline up
docker compose -f docker/docker-compose.yml --profile olive_pipeline up
```

To stop the system, press Ctrl-C or call from another terminal:
3. Verify you can see the camera input and the processed images in RQt.

4. To stop the system, press Ctrl-C or call from another terminal:

```bash
docker compose -f docker/docker-compose.yml --profile test_real_pipeline down
docker compose -f docker/docker-compose.yml --profile olive_pipeline down
```
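
To double-check that images are flowing, you can list topics from inside the detection container. A sketch, assuming the `detection` container name from the compose file and that `ros2` is on the container shell's PATH:

```bash
# List ROS 2 topics inside the running detection container
docker exec -it detection ros2 topic list
```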

### Running test_real_pipeline
### Running webcam_pipeline

To run this pipeline you need a trained model (a .pth file) in the `model` folder. By default, the detection service will try to load a file called `model.pth`, but this can be overridden by changing the `model_path` parameter in `detection_ws/src/detection/launch/detection.launch.py`.
To load the system with a webcam, the detection stack, and RQt visualization, do the following:

## Test
1. Connect the camera to your computer, or make sure your laptop's integrated webcam is working (see the device check after this list).

### Detection stack
2. Assuming you have already built a detection model, run the following command:

```bash
docker compose -f docker/docker-compose.yml --profile detection_test build
docker compose -f docker/docker-compose.yml --profile webcam_pipeline up
```

## Dataset generation
3. Verify you can see the camera input and the processed images in RQt.

It generates a dataset with 100 annotated pictures where the lighting conditions and the fruit pose are randomized.
4. To stop the system, press Ctrl-C or call from another terminal:

To generate a new dataset:
```bash
docker compose -f docker/docker-compose.yml --profile webcam_pipeline down
```
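
A quick way to confirm the webcam is visible to the host before bringing the profile up:

```bash
# List video devices exposed by the kernel
ls /dev/video*

# Optionally show device names (requires the v4l-utils package)
v4l2-ctl --list-devices
```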

### Running simulated_pipeline

To load the system with the simulation, the detection stack, and RQt visualization, do the following:

1. Assuming you have already built a detection model, run the following command:

```bash
docker compose -f docker/docker-compose.yml --profile dataset_gen up
docker compose -f docker/docker-compose.yml --profile simulated_pipeline up
```
The following .gif shows pictures in which the ground plane color is randomized, yielding a better dataset from the simulation.

![Dataset gen](./doc/dataset_gen.gif)
2. Verify you can see the camera input and the processed images in RQt, and that the simulator window is up.

Once it finishes (note that the scene no longer evolves), check the generated folder under `isaac_ws/datasets/YYYYMMDDHHMMSS_out_fruit_sdg`, where `YYYYMMDDHHMMSS` is the timestamp of the dataset creation.
3. To stop the system, press Ctrl-C or call from another terminal:

```bash
docker compose -f docker/docker-compose.yml --profile simulated_pipeline down
```

## Test

### Detection stack

```bash
docker compose -f docker/docker-compose.yml --profile detection_test build
```
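
Presumably, following the up/down pattern of the other profiles, the tests can then be executed with:

```bash
# Assumed invocation; the README only shows the build step
docker compose -f docker/docker-compose.yml --profile detection_test up
```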

# FAQs

16 changes: 11 additions & 5 deletions docker/docker-compose.yml
@@ -58,7 +58,10 @@ services:
dockerfile: docker/detection.dockerfile
target: detection_prod
container_name: detection
profiles: ["detection", "test_real_pipeline", "simulated_pipeline"]
profiles: ["detection",
"webcam_pipeline",
"simulated_pipeline",
"olive_pipeline"]
ipc: host
network_mode: host
stdin_open: true
@@ -95,7 +98,10 @@
context: ..
dockerfile: docker/visualization.dockerfile
container_name: visualization
profiles: ["visualization", "test_real_pipeline", "simulated_pipeline"]
profiles: ["visualization",
"webcam_pipeline",
"simulated_pipeline",
"olive_pipeline"]
ipc: host
network_mode: host
entrypoint: ["/root/visualization_ws/entrypoint.sh"]
@@ -118,12 +124,12 @@
count: 1
capabilities:
- gpu
test_camera:
webcam:
build:
context: ..
dockerfile: docker/camera.dockerfile
container_name: test_camera
profiles: ["test_camera", "test_real_pipeline"]
container_name: webcam
profiles: ["webcam", "webcam_pipeline"]
stdin_open: true
stop_grace_period: 1s
privileged: true
57 changes: 53 additions & 4 deletions visualization_ws/config/image_view.perspective
@@ -9,9 +9,9 @@
"pretty-print": " 7 % 7 % 7"
},
"state": {
"repr(QByteArray.hex)": "QtCore.QByteArray(b'000000ff00000000fd000000010000000300000780000003e9fc0100000002fc00000000000003b5000000c800fffffffc0200000002fb0000005a007200710074005f0069006d006100670065005f0076006900650077005f005f0049006d0061006700650056006900650077005f005f0031005f005f0049006d00610067006500560069006500770057006900640067006500740100000014000001d20000005000fffffffb0000004c007200710074005f0074006f007000690063005f005f0054006f0070006900630050006c007500670069006e005f005f0031005f005f0054006f00700069006300570069006400670065007401000001ec000002110000006c00fffffffb0000005a007200710074005f0069006d006100670065005f0076006900650077005f005f0049006d0061006700650056006900650077005f005f0032005f005f0049006d006100670065005600690065007700570069006400670065007401000003bb000003c5000000d300ffffff000007800000000000000004000000040000000800000008fc00000001000000030000000100000036004d0069006e0069006d0069007a006500640044006f0063006b00570069006400670065007400730054006f006f006c0062006100720000000000ffffffff0000000000000000')",
"repr(QByteArray.hex)": "QtCore.QByteArray(b'000000ff00000000fd000000010000000300000780000003e9fc0100000002fc00000000000003b50000019600fffffffc0200000002fb0000005a007200710074005f0069006d006100670065005f0076006900650077005f005f0049006d0061006700650056006900650077005f005f0031005f005f0049006d0061006700650056006900650077005700690064006700650074010000001400000252000000a500fffffffb0000004c007200710074005f0074006f007000690063005f005f0054006f0070006900630050006c007500670069006e005f005f0031005f005f0054006f007000690063005700690064006700650074010000026c000001910000006c00fffffffc000003bb000003c50000019600fffffffc0200000002fb0000005a007200710074005f0069006d006100670065005f0076006900650077005f005f0049006d0061006700650056006900650077005f005f0032005f005f0049006d0061006700650056006900650077005700690064006700650074010000001400000253000000a500fffffffb0000006c007200710074005f007200650063006f006e006600690067007500720065005f005f0050006100720061006d005f005f0031005f005f005f0070006c007500670069006e0063006f006e007400610069006e00650072005f0074006f0070005f007700690064006700650074010000026d000001900000010e00ffffff000007800000000000000004000000040000000800000008fc00000001000000030000000100000036004d0069006e0069006d0069007a006500640044006f0063006b00570069006400670065007400730054006f006f006c0062006100720000000000ffffffff0000000000000000')",
"type": "repr(QByteArray.hex)",
"pretty-print": " P l "
"pretty-print": " R l l Zrqt_image_view__ImageView__2__ImageViewWidget lrqt_reconfigure__Param__1___plugincontainer_top_widget 6MinimizedDockWidgetsToolbar "
}
},
"groups": {
@@ -29,7 +29,7 @@
"pluginmanager": {
"keys": {
"running-plugins": {
"repr": "{'rqt_image_view/ImageView': [1, 2], 'rqt_topic/TopicPlugin': [1]}",
"repr": "{'rqt_image_view/ImageView': [1, 2], 'rqt_reconfigure/Param': [1], 'rqt_topic/TopicPlugin': [1]}",
"type": "repr"
}
},
@@ -176,6 +176,55 @@
}
}
},
"plugin__rqt_reconfigure__Param__1": {
"keys": {},
"groups": {
"dock_widget___plugincontainer_top_widget": {
"keys": {
"dock_widget_title": {
"repr": "'Parameter Reconfigure'",
"type": "repr"
},
"dockable": {
"repr": "True",
"type": "repr"
},
"parent": {
"repr": "None",
"type": "repr"
}
},
"groups": {}
},
"plugin": {
"keys": {
"_splitter": {
"repr(QByteArray.hex)": "QtCore.QByteArray(b'000000ff00000001000000020000012c000000640100000009010000000200')",
"type": "repr(QByteArray.hex)",
"pretty-print": " , d "
},
"expanded_nodes": {
"repr": "[]",
"type": "repr"
},
"selected_nodes": {
"repr": "[]",
"type": "repr"
},
"splitter": {
"repr(QByteArray.hex)": "QtCore.QByteArray(b'000000ff0000000100000002000000ae0000006401ffffffff010000000100')",
"type": "repr(QByteArray.hex)",
"pretty-print": " d "
},
"text": {
"repr": "''",
"type": "repr"
}
},
"groups": {}
}
}
},
"plugin__rqt_topic__TopicPlugin__1": {
"keys": {},
"groups": {
@@ -199,7 +248,7 @@
"plugin": {
"keys": {
"tree_widget_header_state": {
"repr(QByteArray.hex)": "QtCore.QByteArray(b'000000ff000000000000000100000000000000050100000000000000000000000620000000010000000500000064000003a1000000060101000100000000000000000600000064ffffffff000000810000000300000006000000e500000001000000030000014200000001000000030000006200000001000000030000002b0000000100000003000000ed0000000100000003000000000000000100000003000003e80000000064')",
"repr(QByteArray.hex)": "QtCore.QByteArray(b'000000ff000000000000000100000000000000050100000000000000000000000620000000010000000500000064000003a1000000060101000100000000000000000600000064ffffffff000000810000000300000006000000a00000000100000003000000d900000001000000030000006200000001000000030000002b00000001000000030000019b0000000100000003000000000000000100000003000003e80000000064')",
"type": "repr(QByteArray.hex)",
"pretty-print": " d d"
}
