- bag_formatter: Select parts of the data in 3D space and label them
- extract_rosbags: Take in formatted bags and split them into training/testing/validation sets (a rough sketch of such a split follows the steps below)
- Select bags of data in bag_formatter.h
- Run bag_formatter.cpp
- Select formatted bags in extract_rosbags.py
- Run rosbag.py
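The split itself is simple; below is a rough sketch of how formatted bags could be divided into training/testing/validation sets. The bag names and the 70/15/15 ratios are illustrative assumptions, not values taken from extract_rosbags.py.

```python
import random

# Hypothetical bag list and split ratios -- extract_rosbags.py defines the real ones.
formatted_bags = ["formatted_run1.bag", "formatted_run2.bag", "formatted_run3.bag",
                  "formatted_run4.bag", "formatted_run5.bag"]

random.seed(0)
random.shuffle(formatted_bags)

n = len(formatted_bags)
train = formatted_bags[:int(0.7 * n)]             # 70% training
val = formatted_bags[int(0.7 * n):int(0.85 * n)]  # 15% validation
test = formatted_bags[int(0.85 * n):]             # 15% testing
```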
```bash
$ rosparam set use_sim_time true
$ roslaunch smoke_detection transforms.launch
$ rosrun smoke_detection scan_formatter
$ rosbag play <whatever bag you want to predict>
```
- LidarDatasetHC.py
- train_model.py
- topic_prediction.py
- Prepare data
- Run train_model.py
- Run topic_prediction.py (a minimal node skeleton is sketched below)
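For reference, a stripped-down skeleton of a live prediction node is sketched below. The topic names come from this README; `classify` is a placeholder, and the real logic lives in topic_prediction.py.

```python
#!/usr/bin/env python
import numpy as np
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2, PointField

def classify(points):
    """Placeholder classifier: returns one label per (x, y, z) point."""
    return np.zeros(len(points), dtype=np.float32)

def callback(msg):
    # Unpack the incoming cloud, classify every point, republish with a label field.
    points = np.array(list(pc2.read_points(msg, field_names=("x", "y", "z"),
                                           skip_nans=True)), dtype=np.float32)
    labels = classify(points)
    fields = [PointField("x", 0, PointField.FLOAT32, 1),
              PointField("y", 4, PointField.FLOAT32, 1),
              PointField("z", 8, PointField.FLOAT32, 1),
              PointField("label", 12, PointField.FLOAT32, 1)]
    pub.publish(pc2.create_cloud(msg.header, fields, np.column_stack([points, labels])))

rospy.init_node("topic_prediction")
pub = rospy.Publisher("/velodyne_points_labeled", PointCloud2, queue_size=1)
rospy.Subscriber("/velodyne_points_dual", PointCloud2, callback)
rospy.spin()
```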
- LidarDataset.py
- train_smokenet.py
- smokenet_topic_prediction.py
- Choose a bag file with a /velodyne_points_dual topic (e.g. smoke.bag)
- Run the convert_rosbag_to_numpy.ipynb script on smoke.bag; it generates smoke.npy with [pointcloud, images]
- Run Julian's 1_labeling_pipeline_dual.py on smoke.npy; it generates smoke_labeled.npy, which adds a label column to each point of the lidar point cloud
- Run 2_labeling_pipeline_spaces.py on smoke_labeled.npy with the right planes to refine the labeling and remove any mislabeled points (most of the time from moving objects); it generates smoke_labeled_spaces.npy
- You can visualise the labeled points in RViz using visualise_labeled_lidar_pcl.ipynb on smoke_labeled_spaces.npy to make sure the labeling is right
- Run convert_julian_pcl.ipynb on smoke_labeled_spaces.npy to convert his label formatting to mine (a single column with the label number instead of one boolean column per label); it generates smoke_labeled_spaces_converted.npy and smoke_imgs.npy (a rough sketch of this conversion follows this list)
- Run generate_lidar_voxels.py on smoke_labeled_spaces_converted.npy to transform the raw lidar scans into a set of voxels already arranged for smokenet tensors (smoke voxels, non-smoke voxels, and scans_voxels); these are used by my custom dataset to train smokenet (see the voxelisation sketch after this list)
- Run train_smokenet_eth.py with whatever prepared dataset
- Run evaluate_models.py to compare different trained models
- Run visualise_prediction.py to visualise the prediction in RViz (topics /smokenet_prediction_pcl, /smokenet_prediction_vox, and /velodyne_points_labeled)
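The label-format conversion in convert_julian_pcl.ipynb essentially collapses the per-label boolean columns into a single integer column. A minimal sketch is below; the exact column layout of smoke_labeled_spaces.npy is an assumption made for illustration.

```python
import numpy as np

# Assumed layout: [x, y, z, intensity, <one boolean column per label>, ...]
pcl = np.load("smoke_labeled_spaces.npy")
xyz = pcl[:, :4]
bool_labels = pcl[:, 4:].astype(bool)

# Single label column: 0 = background, 1..K = index of the boolean column that is set
label = np.zeros(len(pcl), dtype=np.int64)
has_label = bool_labels.any(axis=1)
label[has_label] = bool_labels[has_label].argmax(axis=1) + 1

np.save("smoke_labeled_spaces_converted.npy", np.column_stack([xyz, label]))
```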
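Similarly, the voxelisation performed by generate_lidar_voxels.py can be sketched as snapping labelled points onto a regular grid and keeping a majority label per voxel. The voxel size and output file names below are assumptions; the actual grouping into smoke / non-smoke / scan voxel tensors is handled by the script.

```python
import numpy as np

VOXEL_SIZE = 0.2  # assumed voxel edge length in metres

def voxelize(points, labels, voxel_size=VOXEL_SIZE):
    """Group labelled lidar points into voxels (integer grid coords + majority label)."""
    coords = np.floor(points / voxel_size).astype(np.int32)
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)
    voxel_labels = np.empty(len(uniq), dtype=np.int64)
    for v in range(len(uniq)):
        voxel_labels[v] = np.bincount(labels[inverse == v]).argmax()  # majority vote
    return uniq, voxel_labels

scan = np.load("smoke_labeled_spaces_converted.npy")
coords, vox_labels = voxelize(scan[:, :3], scan[:, -1].astype(np.int64))
np.save("coords_0.npy", coords)
np.save("labels_0.npy", vox_labels)
```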
- Update requirements.txt
- Make the data preparation process a bit more user-friendly
- Integrate FCN classifier
- Run convert_rosbag_to_numpy.py
- Either run Julian's classification on pcl.npy to obtain pcl_labeled_spaces.npy, or copy the files from the FSR paper
- Run convert_julian_pcl_fsr.py to obtain pcl_labeled_spaces_converted.npy
- (Optional) You can visualise these in RViz using visualise_eth_labeled_dataset.py
- Run remove_empty_img_dataset.py to remove all black images and the corresponding velodyne scans (these come from velodyne scans that don't have a corresponding multisense image in the rosbags)
- Run project_lidar_pts_in_images.py to obtain pcl_labeled_spaces_converted_projected.npy and pcl_labeled_spaces_converted_cropped.npy (a projection sketch follows this list)
- (Optional) You can now run visualise_pcl_projected_imgs.py to visualise lidar points in images
- Run generate_image_labels.py to obtain images/ and image_labels/ as well as image_labels.npy
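As a rough illustration of what project_lidar_pts_in_images.py does, the sketch below projects lidar points into the camera image with a pinhole model. The intrinsics K, the extrinsics (R, t) and the image size are placeholder values; the real ones come from the multisense calibration.

```python
import numpy as np

# Placeholder calibration -- replace with the real lidar-to-camera calibration
K = np.array([[600.0, 0.0, 512.0],
              [0.0, 600.0, 272.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)

def project_points(xyz, img_w=1024, img_h=544):
    """Return pixel coordinates of the lidar points that fall inside the image."""
    cam = xyz @ R.T + t           # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0.1]    # drop points behind the camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]   # pinhole projection
    inside = ((uv[:, 0] >= 0) & (uv[:, 0] < img_w) &
              (uv[:, 1] >= 0) & (uv[:, 1] < img_h))
    return uv[inside]
```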
- dataset_name/
  - scan_pcls/
    - 0.npy
    - 1.npy
    - ...
    - M.npy
  - scan_voxels/
    - coords_0.npy
    - coords_1.npy
    - ...
    - coords_M.npy
    - labels_0.npy
    - labels_1.npy
    - ...
    - labels_M.npy
  - config.yaml
  - scaler.pkl
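Assuming the layout above, a prepared dataset can be walked with something like the loader below; this is illustrative only and is not the custom dataset class used by train_smokenet_eth.py.

```python
import glob
import os
import numpy as np

def iter_scan_voxels(dataset_dir):
    """Yield (coords, labels) pairs for every scan in dataset_dir/scan_voxels/."""
    voxel_dir = os.path.join(dataset_dir, "scan_voxels")
    n_scans = len(glob.glob(os.path.join(voxel_dir, "coords_*.npy")))
    for i in range(n_scans):
        coords = np.load(os.path.join(voxel_dir, "coords_%d.npy" % i))
        labels = np.load(os.path.join(voxel_dir, "labels_%d.npy" % i))
        yield coords, labels

for coords, labels in iter_scan_voxels("data/dataset_name"):
    print(coords.shape, labels.shape)
```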
- rosbag -> convert_rosbag_to_numpy.py -> data/dataset_name/multi_pcl.npy
- multi_pcl.npy -> sample_multisense_pcl.py -> multi_pcl_sampled.npy (see the sampling sketch below)
- multi_pcl_sampled.npy -> labeling -> multi_pcl_sampled_labeled.npy
- multi_pcl_sampled_labeled.npy -> convert_julian_pcl_fsr.py -> multi_pcl_sampled_labeled_converted.npy
- multi_pcl_sampled_labeled_converted.npy -> project_lidar_pts_in_images.py -> (multi_pcl_sampled_labeled_converted_cropped.npy,multi_pcl_sampled_labeled_converted_projected.npy)
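The sample_multisense_pcl.py step above amounts to capping the number of points kept per scan. A possible sketch, assuming multi_pcl.npy stores one point array per scan and using an arbitrary cap of 20000 points:

```python
import numpy as np

MAX_POINTS = 20000  # assumed cap per scan

def sample_scan(pcl, max_points=MAX_POINTS):
    """Randomly subsample one (N, C) point cloud to at most max_points rows."""
    if len(pcl) <= max_points:
        return pcl
    idx = np.random.choice(len(pcl), max_points, replace=False)
    return pcl[idx]

scans = np.load("data/dataset_name/multi_pcl.npy", allow_pickle=True)
sampled = np.empty(len(scans), dtype=object)
for i, pcl in enumerate(scans):
    sampled[i] = sample_scan(np.asarray(pcl))
np.save("data/dataset_name/multi_pcl_sampled.npy", sampled)
```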