Build succeeds but crashes #2
Nice! Are you running this on an actual device? I don't have an iOS 11 compatible device just yet, so I only ran it in the simulator, but I was actually wondering if this would happen on a device.

You see, MobileNets uses a "depthwise convolution" and Metal does not currently support this. The original model is trained in Caffe, which also doesn't support depthwise convolution, so they set the "groups" property of a regular convolution layer to the number of output channels, and then it works. However... in Metal the number of input channels in each group must be a multiple of 4 (as the error message says). But when you set groups equal to the number of output channels, each group ends up with just one input channel, which is not a multiple of 4.

So here we have a model that works OK on the simulator (where it uses Accelerate instead of Metal Performance Shaders) but not on the GPU. I will submit this as a bug report to Apple. Thanks for pointing this out!
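To make the constraint concrete, here is a minimal sketch of how such a grouped convolution would be described with Metal Performance Shaders. The layer dimensions (32 in, 32 out, 3x3 kernel) are hypothetical, chosen to match a typical MobileNet depthwise block:

```swift
import MetalPerformanceShaders

// Hypothetical dimensions: a 3x3 "depthwise" layer with 32 input
// and 32 output channels, expressed as a grouped convolution.
let desc = MPSCNNConvolutionDescriptor(
    kernelWidth: 3,
    kernelHeight: 3,
    inputFeatureChannels: 32,
    outputFeatureChannels: 32,
    neuronFilter: nil)

// The Caffe trick: one group per output channel.
desc.groups = 32

// Metal requires inputFeatureChannels / groups to be a multiple of 4.
// Here each group sees 32 / 32 = 1 input channel, which is why
// MPSCNNConvolution asserts at runtime on the GPU path.
let channelsPerGroup = desc.inputFeatureChannels / desc.groups
print("channels per group: \(channelsPerGroup)")  // 1 -- not a multiple of 4
```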
Crashed at the same place. Running on an iPhone 7 Plus, iOS 11.0.
Crashed at the same place. iPad Pro 9.7, iOS 11.0.
@austingg There is no way to convert TensorFlow models to Core ML at the moment (only Keras models). In addition, the mlmodel format does not support depthwise convolution, so even if it were possible to convert TensorFlow models, Core ML wouldn't know what to do with these layers.
@hollance Thanks, that's a pity. mlmodel is still limited to basic CNN classification applications; we will still have to implement some unsupported layers ourselves in Metal.
I think Core ML could expose an interface or callback that lets developers implement their own layers and integrate them into the Core ML pipeline. As long as Apple provides an interface spec and users supply kernels based on that spec, it should be doable. Otherwise, Core ML won't be very useful, given how fast the DL field evolves, with new layers/networks coming out almost monthly.

Model size is also a real problem: of the 4 models provided on Apple's website, none is smaller than 10 MB. How could an app developer ship an app with a model of about 50~100 MB?
@gwangsc I agree. Please file a feature request at https://bugreport.apple.com -- that's the only way Apple will listen...
Any updates here? Could you please post a link to the bug report? I want to track it. Thanks!
It works on the device now with beta 2, but I'm not sure yet whether that's because the model now runs on the CPU instead of the GPU, or because Metal uses a workaround for this issue.
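One quick way to probe this is to time a default prediction against one forced onto the CPU with MLPredictionOptions.usesCPUOnly (available since iOS 11). A sketch, assuming `model` and `input` are a loaded MLModel and a matching MLFeatureProvider (both hypothetical names here):

```swift
import CoreML
import QuartzCore  // for CACurrentMediaTime()

// Times one prediction, optionally forcing the CPU-only code path.
func timePrediction(model: MLModel, input: MLFeatureProvider,
                    cpuOnly: Bool) throws -> TimeInterval {
    let options = MLPredictionOptions()
    options.usesCPUOnly = cpuOnly
    let start = CACurrentMediaTime()
    _ = try model.prediction(from: input, options: options)
    return CACurrentMediaTime() - start
}

// If the default (GPU-eligible) run is no faster than the CPU-only run,
// the model is probably falling back to the CPU anyway:
// let gpuTime = try timePrediction(model: model, input: input, cpuOnly: false)
// let cpuTime = try timePrediction(model: model, input: input, cpuOnly: true)
```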
@hollance I think it's easy to check in Instruments.
@XBeg9 I haven't had much luck with Instruments and compute shaders, but with GPU Frame Capture in Xcode it should be possible to check. I just haven't had the time for it yet.
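For what it's worth, a frame capture can also be triggered programmatically around the inference call instead of via the Xcode UI. A sketch using MTLCaptureManager (available since iOS 11; it only records when the app is launched from Xcode with GPU frame capture enabled):

```swift
import Metal

// Wraps the Metal work you want to inspect. If any MPS kernels actually
// ran on the GPU, they show up in Xcode's GPU debugger for this capture.
func withGPUCapture(device: MTLDevice, _ body: () -> Void) {
    let manager = MTLCaptureManager.shared()
    manager.startCapture(device: device)  // begin recording GPU commands
    body()
    manager.stopCapture()
}
```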
This seems to be resolved in the iOS 11 public beta too. I still have to check whether it's running on the CPU or the GPU; I shall do that in a few hours.
Is there any update on this problem? I still hit this bug.