As of now, our torch converter:

1. Assumes the given torch model is in fp32 compute precision (i.e. weights and activations are all in fp32).
2. Converts the torch model as is (i.e. no treatments such as promoting types).
3. Sandwiches ops with `cast(fp16) -> op -> cast(fp32)`, then eliminates the cancelling casts to obtain fp16 compute precision.

This works in most cases: users can usually call `torch_model.to(torch.float32)` and then invoke `coremltools.convert`. However, there are cases where developers request conversion support for fp16 or mixed fp16-fp32 torch models.
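For concreteness, here is a minimal sketch of the workaround described above, under the assumption that the model started out in fp16: promote the torch model to fp32 before conversion, then ask coremltools to re-introduce fp16 compute precision via `compute_precision`. `SimpleNet` is a hypothetical stand-in model, not from the original report.

```python
import torch
import coremltools as ct


class SimpleNet(torch.nn.Module):
    """Hypothetical toy model standing in for a real fp16 torch model."""

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))


# Suppose the model was trained or stored in fp16
model = SimpleNet().half().eval()

# Workaround: promote everything to fp32, since the converter assumes
# fp32 compute precision for the incoming torch model
model = model.to(torch.float32)

example_input = torch.rand(1, 4, dtype=torch.float32)
traced = torch.jit.trace(model, example_input)

# compute_precision=ct.precision.FLOAT16 tells the converter to sandwich
# ops with fp16/fp32 casts and then eliminate the cancelling casts,
# yielding an fp16-compute ML Program
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=example_input.shape)],
    compute_precision=ct.precision.FLOAT16,
    convert_to="mlprogram",
)
```

The feature request is about removing the need for the `model.to(torch.float32)` step, so that fp16 or mixed fp16-fp32 torch models convert directly.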