The Parking Slot Detection software is an advanced tool that helps users detect whether a parking slot is empty or occupied. With this software, users can easily draw parking slots and detect their status in real time.
The software comes with an intuitive user interface that allows users to easily draw parking slots with just a few clicks.
Once the parking slots are drawn, the software uses a custom CNN model to detect the status of each slot, whether it is vacant or occupied.
With this software, users can monitor parking areas such as parking lots, garages, or on-street parking spaces in real time. It provides an accurate and efficient way to manage parking spaces, helping to reduce congestion and improve traffic flow.
The application has three modes of running:
- Using MIPI Camera
- Using USB Camera
- Using Video as input
Average FPS: 200 per slot
Total FPS: 200 / (number of slots)
Example: with 10 slots available, FPS = 200 / 10 = 20
Note: The FPS may change based on the FPS of the input video.
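The FPS relation above can be expressed as a small sketch (`total_fps` is a hypothetical helper, not part of the application; it simply encodes the stated 200 FPS per-slot throughput):

```python
# Hypothetical helper illustrating the stated FPS relation:
# per-slot throughput is fixed at 200 FPS, so the total FPS
# drops as more slots are monitored.
PER_SLOT_FPS = 200

def total_fps(num_slots: int) -> float:
    """Total FPS when `num_slots` parking slots are monitored."""
    if num_slots <= 0:
        raise ValueError("at least one slot is required")
    return PER_SLOT_FPS / num_slots

print(total_fps(10))  # 10 slots -> 20.0 FPS, matching the example above
```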
The following hardware is required:
- RZ/V2L Evaluation Board Kit
- USB Camera
- MIPI Camera
- USB Mouse
- USB Keyboard (Only required when removing added slots)
- USB Hub
- HDMI Monitor & Cable
Note: All external devices are attached directly to the board and do not require any driver installation (plug-and-play).
The following software is required:
- Ubuntu 20.04
- OpenCV 4.x
- C++11 or higher
Note: Users can skip to the deploy stage if they don't want to build the application; all pre-built binaries are provided.
Note: This project expects the user to have completed the Getting Started Guide provided by Renesas.
After completing the guide, the user is expected to have the following:
- The Board Set Up and booted.
- SD Card Prepared
- The docker image and container for `rzv2l_ai_sdk_image` running on the host machine.

Note: A docker container is required for building the sample application. By default, Renesas provides the container named `rzv2l_ai_sdk_container`. Please use the docker container name as assigned by the user when building the container.
- Copy the repository from GitHub to the desired location.
  - It is recommended to copy/clone the repository into the `data` folder, which is mounted on the `rzv2l_ai_sdk_container` docker container.
cd <path_to_data_folder_on_host>
git clone https://github.com/renesas-rz/rzv_ai_sdk.git
Note 1: Please verify the git repository URL if an error occurs.
Note 2: This command downloads the whole repository, which includes all other applications. If you have already downloaded the repository of the same version, you may skip this command.
- Run the docker container and open a bash terminal in the container.
Note: All the build steps/commands listed below are executed in the docker container terminal.
- Assign the path to the `data` directory mounted on the `rzv2l_ai_sdk_container` docker container:
export PROJECT_PATH=/drp-ai_tvm/data/
- Go to the `src` directory of the application:
cd ${PROJECT_PATH}/rzv_ai_sdk/Q03_smart_parking/src/
Note: `rzv_ai_sdk` is the repository name corresponding to the cloned repository. Please verify the repository name if an error occurs.
- Build the application in the docker environment by following the steps below:
mkdir -p build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=./toolchain/runtime.cmake ..
make -j$(nproc)
The following application file will be generated in the `src/build` directory:
- parkinglot_detection
For ease of deployment, all the files required for deployment are provided in the `exe` folder.
| File | Details |
|---|---|
| parking_model | Model object files for deployment. |
| parking_bg.jpg | Front image for the application. |
| sample.mp4 | User sample input video. |
| parkinglot_detection | Application file. |
Follow the steps mentioned below to deploy the project on the RZ/V2L Evaluation Board Kit.
- In the `/home/root/tvm` directory of the rootfs (SD card) for the RZ/V2L evaluation board:
  - Copy the files present in the `exe` directory, which are listed in the table above.
  - Copy the generated `parkinglot_detection` application file, if the application was built during the build stage.
- Check that `libtvm_runtime.so` is present in the `/usr/lib64/` directory of the rootfs (SD card) of the RZ/V2L board.
Note: To run inference from a video file, ensure that the video file is present inside the `/home/root/tvm` directory of the rootfs of the board.
├── usr/
│ └── lib64/
│ └── libtvm_runtime.so
└── home/
└── root/
└── tvm/
├── parking_model/
│ ├── deploy.json
│ ├── deploy.params
│ └── deploy.so
├── sample.mp4
├── parkinglot_detection
└── parking_bg.jpg
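The layout above can be sanity-checked before running the application with a small script (a hypothetical helper, not part of the provided binaries; the paths follow the directory tree shown above):

```python
import os

# Files expected on the rootfs, per the directory tree above.
REQUIRED = [
    "usr/lib64/libtvm_runtime.so",
    "home/root/tvm/parking_model/deploy.json",
    "home/root/tvm/parking_model/deploy.params",
    "home/root/tvm/parking_model/deploy.so",
    "home/root/tvm/parkinglot_detection",
    "home/root/tvm/parking_bg.jpg",
]

def missing_files(rootfs: str) -> list:
    """Return the required files that are absent under `rootfs`."""
    return [p for p in REQUIRED if not os.path.exists(os.path.join(rootfs, p))]

if __name__ == "__main__":
    # Run on the board itself, or point at the mounted SD card on the host.
    for path in missing_files("/"):
        print("missing:", path)
```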
- To run the application, execute the commands shown below on the RZ/V2L Evaluation Board Kit console.
- Go to the `/home/root/tvm` directory of the rootfs:
cd /home/root/tvm
- To run inference from video
./parkinglot_detection VIDEO <videofile_name.mp4>
- To run inference from the MIPI camera feed
./parkinglot_detection MIPI
- To run inference from the USB camera feed
./parkinglot_detection USB
Note: The application GUI is the same whether the input is the sample video or a camera feed.
- When the application is run, the window shows two buttons.
- First, click the `Edit Slots` button to add slots on the parking spaces.
- To add a slot, press the `Add Slot` button.
  - When the camera (or sample video) screen appears, draw a bounding box where you would like to detect occupancy.
  - Press and hold the left mouse button on the screen to start drawing a bounding box.
  - Release the button to finish drawing the box.
  - Multiple bounding boxes can be drawn.
- After you have drawn the slots, press the `Back` button on the window to return to the home screen.
- Click the `Start inference` button.
- To close the running application, double-click on the window.
- To remove added slots:
  - Click the `Remove Slot` button.
  - Type the parking slot IDs that need to be removed.
  - Click the `Back` button to return to the home screen.
The runtime application will look something like this:
- Each bounding box (BB) is a parking slot drawn by the user.
- Green BBs are empty slots.
- Red BBs are occupied slots.
- Each slot identified by the user is assigned an ID. IDs are assigned in the sequence the bounding boxes are drawn.
- `DRP-AI Processing Time (ms)` is also shown in the bottom right corner.
- The application can be terminated by double-clicking on the window.
- Alternatively, the user can force-close the application using `CTRL+C` on the board console.
The model used is a custom CNN model.
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 32, 26, 26] 896
MaxPool2d-2 [-1, 32, 13, 13] 0
Conv2d-3 [-1, 64, 11, 11] 18,496
MaxPool2d-4 [-1, 64, 5, 5] 0
Conv2d-5 [-1, 128, 3, 3] 73,856
Flatten-6 [-1, 1152] 0
Linear-7 [-1, 2] 2,306
================================================================
Total params: 95,554
Trainable params: 95,554
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 0.30
Params size (MB): 0.36
Estimated Total Size (MB): 0.67
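Based on the summary above, the architecture can be sketched in PyTorch as follows. This is a reconstruction from the printed shapes, assuming a 3-channel 28x28 input (which reproduces every output shape and the 95,554 parameter total); activation functions are omitted because none appear in the summary, and the actual training code is not part of this repository:

```python
import torch
import torch.nn as nn

class ParkingCNN(nn.Module):
    """Hypothetical reconstruction of the custom CNN from the summary above."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3),    # -> [32, 26, 26], 896 params
            nn.MaxPool2d(2),                    # -> [32, 13, 13]
            nn.Conv2d(32, 64, kernel_size=3),   # -> [64, 11, 11], 18,496 params
            nn.MaxPool2d(2),                    # -> [64, 5, 5]
            nn.Conv2d(64, 128, kernel_size=3),  # -> [128, 3, 3], 73,856 params
            nn.Flatten(),                       # -> [1152]
            nn.Linear(1152, 2),                 # -> [2] vacant / occupied, 2,306 params
        )

    def forward(self, x):
        return self.features(x)

model = ParkingCNN()
total = sum(p.numel() for p in model.parameters())
print(total)  # 95554, matching "Total params" above
```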
The network diagram will be as follows:
The dataset used is a custom dataset. Please contact the following email to access the dataset:
The AI inference time is 4-7 msec per slot.