[Question] How does MobileNet.mlmodel compare to VGG16.mlmodel #5
Comments
The accuracy for MobileNet is pretty much the same as VGG16. But it's 30 times smaller and about 10 times faster. So I'd definitely use MobileNet over VGG16.
Amazing, I definitely noticed the performance increase! I just found your blog article about real-time object detection using YOLO, and I have to say it's brilliant and exactly what I was trying to achieve. (I'm currently able to do 1 classification per frame at 60fps using Metal and MobileNet, but I want to detect multiple objects and also know where they were detected in the frame, so I think I NEED to use YOLO.)

I'd like to know what the limitations of using YOLO are, for example how the TinyYOLO model compares to MobileNet, and how I would go about expanding your TinyYOLO model to classify more categories, or maybe use another model.

EDIT: Which TinyYOLO model did you use to convert to .mlmodel? The VOC2007+2012 one or the COCO one? I'm very interested in the COCO one since it seems to be able to classify more categories (80 vs 20 for VOC) and it is a more recent challenge. I guess I will try to follow your conversion steps, but it might be too much for my level of understanding (at the moment ;) Thanks in advance!

Oh and BTW, I just noticed you're also from the Netherlands ;) I live in Utrecht myself.
I converted the VOC one. Note that in the TensorFlow models repo there is a version of SSD that runs on MobileNets. This is roughly as accurate as Tiny YOLO but runs much faster. (It's a fair bit of work to do the conversion, which I did for a client and therefore cannot share, but definitely worth it.)
@hollance I understand you can't share the converted model, but could you please post some guidelines we can follow to convert MobileNet SSD for Core ML? That would be a great help to the community. At the very least, could you post some links you have come across for solving this? BTW, I am trying to convert TF SSD MobileNet to Core ML myself, but I am having a bit of trouble finding the right tools to use.
I haven't converted MobileNet+SSD to Core ML, so there may be issues I don't know about, but one issue is that the model is in TensorFlow format, so you have to write your own converter. Also, you have to replace the relu6 activations with something else, as Core ML does not support them.
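For anyone wondering what "replace relu6 with something else" means in practice: relu6 is just `min(max(x, 0), 6)`, and the same result can be expressed using two plain ReLUs, which Core ML does support. This is a minimal NumPy sketch of that identity (not actual converter code, and the function names are my own):

```python
import numpy as np

def relu6(x):
    # relu6(x) = min(max(x, 0), 6): ReLU with activations clipped at 6
    return np.minimum(np.maximum(x, 0.0), 6.0)

def relu6_from_plain_relus(x):
    # Equivalent built only from plain ReLU ops:
    #   relu6(x) = relu(x) - relu(relu(x) - 6)
    # The second ReLU subtracts off exactly the amount above 6.
    r = np.maximum(x, 0.0)
    return r - np.maximum(r - 6.0, 0.0)

x = np.array([-3.0, 0.5, 6.0, 10.0])
print(relu6(x))                  # [0.  0.5 6.  6. ]
print(relu6_from_plain_relus(x)) # [0.  0.5 6.  6. ]
```

So in a hand-written converter you could emit a ReLU layer, a bias/linear layer subtracting 6, a second ReLU, and a subtraction, instead of the unsupported relu6 op.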
Hi! Nice work!
I'm interested in how this MobileNet.mlmodel compares to the ones provided by Apple on their download page. Specifically how it compares to the VGG16 model which I've been using.
Generally I'm on a quest to find the best (biggest) object classification model to use in my app.
Maybe you have some useful suggestions?
Anyway, thanks for providing this sample project!