Cannot run inference video #3
I changed the type of aligned_norm from uint8 to float32 and it works.
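For reference, a minimal sketch of that fix, assuming the model path and the 112x112x3 input shape (neither is confirmed from the repo):

```python
import numpy as np
import tensorflow as tf

# Hypothetical model path; substitute the quantized model from the repo.
interpreter = tf.lite.Interpreter(model_path="pretrained_model/inference_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()

# aligned_norm would be the preprocessed face crop; a random stand-in here.
# The fix: cast to float32 so the dtype matches what the input tensor expects.
aligned_norm = np.random.rand(1, 112, 112, 3).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], aligned_norm)
interpreter.invoke()
embedding = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```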
I found that invoke() takes up most of the time.
Hi HoangTienDuc, this is expected: the quantized model is optimized for mobile devices and runs slowly on a desktop CPU. If you have a GPU, you can accelerate execution with the TFLite GPU delegate. Otherwise, you have to use the Keras model for CPU inference.
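A hedged sketch of attaching the GPU delegate; the delegate library name is platform-dependent and an assumption here, as is the model path:

```python
import tensorflow as tf

# The delegate shared library must be built or installed separately;
# "libtensorflowlite_gpu_delegate.so" is the usual Linux name but may differ.
gpu_delegate = tf.lite.experimental.load_delegate("libtensorflowlite_gpu_delegate.so")
interpreter = tf.lite.Interpreter(
    model_path="pretrained_model/inference_model.tflite",  # hypothetical path
    experimental_delegates=[gpu_delegate],
)
interpreter.allocate_tensors()
```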
Hi @zye1996, thanks for your response.
The code is:
I am sure of it: the same code gives different results on my PC and my Jetson Nano, and the PC runs fine.
It looks like it might be a TensorFlow problem. I do not have a Jetson Nano, so I cannot reproduce it. Let me confirm it for you when I borrow one next week.
By the way, please use the v1 models from the pre-trained models if you are using a CPU.
What about the Jetson Nano? The v1 release only contains a model for the TPU, so I cannot run it with just a Jetson Nano.
Can you fix the issue where the feature always has the same value?
Hi, I made some modifications; please just run it again.
It seems you are adding Deep SORT tracking with a trick; I think that is very interesting.
I tested on a PC and a Raspberry Pi, and they produced the same results. I will add requirement.txt later.
I tried it on my Jetson Nano: different inputs always produce the same feature value, which is baffling.
Thank you for your support.
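To narrow this down, here is a sketch of a standalone check (model path assumed) that runs two different random inputs through the interpreter and compares the raw outputs; if they come out identical, the problem is in the TFLite runtime on that platform rather than in the preprocessing:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="pretrained_model/inference_model.tflite")  # hypothetical
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

feats = []
for _ in range(2):
    # Two different random inputs; identical outputs reproduce the bug.
    x = (np.random.rand(*inp["shape"]) * 255).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    feats.append(interpreter.get_tensor(out["index"]))

print("outputs identical:", np.array_equal(feats[0], feats[1]))
```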
Sorry, I cannot help, as I do not have a Jetson Nano. If it works fine on the desktop, then maybe it is related to TensorFlow itself.
I am asking on the TensorFlow GitHub. I think this bug comes from the TF kernel.
Hi @zye1996. Could you share your original model and explain how to convert it to the quantized model?
Hi, I put the original model at pretrained_model/training_model/inference_model.h5. You can use the quantization code in the utils folder.
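For anyone landing here, a sketch of standard TF2 post-training full-integer quantization; the input shape and representative data are assumptions (the repo's utils folder has the actual script), and loading the .h5 may require custom_objects if the model uses custom layers:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("pretrained_model/training_model/inference_model.h5")

def representative_data_gen():
    # Placeholder data; real aligned face crops should be used so the
    # quantization ranges are calibrated on realistic inputs.
    for _ in range(100):
        yield [np.random.rand(1, 112, 112, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

with open("inference_model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```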
Hello @zye1996, thank you for your awesome work.
I tried to run your inference/inference_video.py but got two errors.
The first is:
I decided to comment out
Mobilefacenet-TF2-coral_tpu/inference/FaceRecognizer.py
Line 90 in 53303e9
and to change the index of self.rec_output_index from 1 to 0 in
Mobilefacenet-TF2-coral_tpu/inference/FaceRecognizer.py
Line 89 in 53303e9
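Rather than hard-coding the index, a sketch like this (model path assumed) prints the output tensor details so the correct position for self.rec_output_index can be verified per platform:

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="pretrained_model/inference_model.tflite")  # hypothetical
interpreter.allocate_tensors()
for i, detail in enumerate(interpreter.get_output_details()):
    print(i, detail["name"], detail["shape"], detail["dtype"])
# Pick the position whose shape matches the embedding (e.g. (1, 128))
# for self.rec_output_index instead of assuming it is 1.
```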
That works, but then I got a second error.
The second is:
How can I fix these errors?