🐞Describing the bug
Make sure to only create an issue here for bugs in the coremltools Python package. If this is a bug with the Core ML Framework or Xcode, please submit your bug here: https://developer.apple.com/bug-reporting/
Provide a clear and concise description of the bug.
Stack Trace
ERROR - converting 'callmethod' op (located at: '0'):
Converting PyTorch Frontend ==> MIL Ops: 2%|█▌ | 857/50346 [00:00<00:15, 3185.51 ops/s]
Traceback (most recent call last):
File "/Users/daryl/tangia/E2TTS/lucidrain-fork/export_model.py", line 217, in <module>
model = coremltools.convert(
File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/_converters_entry.py", line 635, in convert
mlmodel = mil_convert(
File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 188, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 212, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 288, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/converter.py", line 108, in __call__
return load(*args, **kwargs)
File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 87, in load
return _perform_torch_convert(converter, debug)
File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 131, in _perform_torch_convert
raise e
File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 123, in _perform_torch_convert
prog = converter.convert()
File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 1293, in convert
convert_nodes(self.context, self.graph, early_exit=not has_states)
File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 92, in convert_nodes
raise e # re-raise exception
File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 87, in convert_nodes
convert_single_node(context, node)
File "/Users/daryl/opt/anaconda3/envs/tangia/lib/python3.10/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 117, in convert_single_node
raise RuntimeError(
RuntimeError: PyTorch convert function for op 'callmethod' not implemented.
(tangia) Daryl-MBP:lucidrain-fork daryl$
To Reproduce
Please add a minimal code example that can reproduce the error when running it.
# model code:
import torch
import torch.nn as nn

x = nn.Linear(16, 16)   # nn.Linear needs in/out feature sizes; the values here are placeholders
x(torch.zeros(1, 16))   # calling the module (rather than x.forward) is where conversion fails
This seems to happen any time we call a module as a function. My real issue is that I cannot tell from the error message which line of Python code the error corresponds to. I think it goes away in this case if I replace x(...) with x.forward(...).
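For reference, a complete sketch of the kind of script that seems to trigger this, assuming a module that calls a submodule and is converted after torch.jit.script (the class, shapes, and use of scripting are illustrative assumptions, not my actual export code):

import torch
import torch.nn as nn
import coremltools as ct

class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(16, 16)   # hypothetical submodule

    def forward(self, x):
        # Calling the submodule as a function; the report above suggests
        # self.proj.forward(x) avoids the unimplemented 'callmethod' op.
        return self.proj(x)

scripted = torch.jit.script(Wrapper().eval())  # scripting keeps submodule calls as graph ops
mlmodel = ct.convert(
    scripted,
    inputs=[ct.TensorType(name="x", shape=(1, 16))],
)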
If the model conversion succeeds, but there is a numerical mismatch in predictions, please include the code used for comparisons.
System environment (please complete the following information):
coremltools version:
OS (e.g. MacOS version or Linux type):
Any other relevant version information (e.g. PyTorch or TensorFlow version):
Additional context
Add anything else about the problem here that you want to share.
Why wouldn't forward calls be supported? Am I expected to have my entire graph be flat, without any classes?
Would there be any issue with having this exception in ops.py print str(node)? I might make a PR, as node.get_scope_info() is not very useful here:
(Pdb) node.get_scope_info()[0]
['0', '3934']
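A rough sketch of what that could look like (the existing message and surrounding code in convert_single_node are approximated here, not copied from the library source):

# Hypothetical tweak inside coremltools/converters/mil/frontend/torch/ops.py, convert_single_node():
raise RuntimeError(
    f"PyTorch convert function for op '{node.kind}' not implemented.\n"
    f"Offending node: {node}"  # printing the node makes it easier to map the failure back to source
)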
I'm not understanding the issue here. Please add a complete example to reproduce the issue, including all necessary import statements and the call to coremltools.convert.