MPS Device for spaCy Transformers #12713
When I run Hugging Face transformer models independently of spaCy, I can leverage the M1 GPU to speed up inference by roughly 30x. Is there a way to do this within spaCy? In plain PyTorch I use `mps_device = torch.device("mps")` together with the corresponding `model.to(mps_device)` and device arguments in the code. This produces quite impressive speed-ups in both training and inference, and it would be useful to have them available within spaCy.
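For reference, a minimal sketch of the plain-PyTorch pattern described above, with a CPU fallback for machines where the MPS backend isn't available (the `Linear` model is just a stand-in for any `nn.Module`):

```python
import torch

# Select the Metal Performance Shaders backend when this build of PyTorch
# supports it and an Apple GPU is present; otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)   # any nn.Module works the same way
x = torch.randn(1, 4, device=device)       # inputs must live on the same device
y = model(x)
```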
Replies: 1 comment 2 replies
Hey awindsor,
If you have thinc-apple-ops installed, the PyTorch models should use MPS if you run your code with the `--gpu-id` flag or add `spacy.require_gpu()` to your code. For more information, we have written a blog post about hardware acceleration on Apple: https://explosion.ai/blog/metal-performance-shaders. You can skip to the last section for usage instructions. Hope this helps!
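A minimal sketch of the in-code route, assuming spaCy is installed with thinc-apple-ops (`pip install 'spacy[apple]'`) on Apple silicon. Note that `spacy.require_gpu()`, as named in the reply, raises when no GPU backend is found; `spacy.prefer_gpu()` falls back to CPU instead, which makes it safer for portable scripts:

```python
import spacy

# With thinc-apple-ops installed on Apple silicon, this routes PyTorch-backed
# pipeline components (e.g. transformer layers) to MPS. Returns True if a GPU
# backend was activated, False if it fell back to CPU.
used_gpu = spacy.prefer_gpu()
print("GPU enabled:", used_gpu)
```

After this call, loading any transformer-based pipeline (e.g. `spacy.load("en_core_web_trf")`) runs its Torch model on the selected device; the same effect is available from the CLI via the `--gpu-id` flag.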