Machine learning environment for Siebren, Yitong & Fengyan.

Each run needs to build objects from point cloud files containing more than one million points, and then train and test on the resulting object dataset, which is very time-consuming. Therefore, the 500 generated objects with normalized features are stored in a dataset.txt file, and the corresponding ground-truth labels in a label.txt file. These two files can then be read directly by the SVM and Random Forest algorithms.

All the .py files are in the src folder.
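The read-and-classify step described above could look roughly like the sketch below. It assumes dataset.txt holds one object per row as whitespace-separated normalized features and label.txt holds the matching labels; the split ratio, SVM kernel, and forest size here are illustrative assumptions, not the project's actual settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def classify(X, y, test_size=0.4, seed=0):
    """Train SVM and Random Forest on (X, y); return test accuracy per model."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, random_state=seed)
    scores = {}
    for clf in (SVC(kernel="rbf"),
                RandomForestClassifier(n_estimators=100, random_state=seed)):
        clf.fit(X_tr, y_tr)
        scores[type(clf).__name__] = clf.score(X_te, y_te)
    return scores

# To run on the stored files (paths as in this repository):
#   X = np.loadtxt("Dataset/dataset.txt")   # (500, 6) feature matrix
#   y = np.loadtxt("Dataset/label.txt")     # (500,) ground-truth labels
#   print(classify(X, y))
```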
- In the Dataset folder, dataset.txt and label.txt should already exist.
- If not, run src\Pre_main.py to build dataset.txt and label.txt.
- If the files already exist, run src\ML_main.py to perform SVM and Random Forest classification.
- Need to change the features? Change them in src\Pre_main.py and run it; the .txt files will be updated. Then use src\ML_main.py to perform the classifications.
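The README says the features are normalized but not how; as a hypothetical illustration of what the feature step in Pre_main.py could do, here is a per-column min-max normalization sketch (the actual scheme may differ).

```python
import numpy as np

def minmax_normalize(features):
    """Scale each feature column to [0, 1]; constant columns map to 0."""
    features = np.asarray(features, dtype=float)
    fmin = features.min(axis=0)
    span = features.max(axis=0) - fmin
    span[span == 0] = 1.0   # avoid division by zero for constant columns
    return (features - fmin) / span
```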
dataset.txt -- stores the 500 objects (with 6 normalized features each)
label.txt -- stores the ground-truth labels of the 500 objects
Stores the pictures/screenshots that may be used in the report.
Relevant Python file names start with "ML".
ML_dataset.py -- functions to store the 500 objects and labels as .txt files.
ML_main.py -- set this as the startup file and run it.
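The save step in ML_dataset.py could be sketched as below, assuming the objects and labels arrive as NumPy arrays; the default paths mirror the repository layout, and the text formatting (`%.6f`, `%d`) is an assumption.

```python
import numpy as np

def save_dataset(features, labels, dataset_path="Dataset/dataset.txt",
                 label_path="Dataset/label.txt"):
    """Write the (n_objects, n_features) matrix and its labels as text files."""
    np.savetxt(dataset_path, features, fmt="%.6f")
    np.savetxt(label_path, labels, fmt="%d")
```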
Relative paths are used, so this project can be cloned and run directly without any modifications.
The required packages are listed in requirements.txt. Use pip install -r requirements.txt to install appropriate versions of all dependencies if you don't already have them.