spaCy NER during inference #9372
-
Suppose I have trained four different NER models, each trained on a different set of entities. At inference time, I need to use all four models together to make predictions for the different entity types. Running the models sequentially takes too long, and I have a constraint of not being able to use a GPU during inference. I have tried parallelizing with Python's multiprocessing, but it is not working. Is there any way I can parallelize running the models during inference? Thanks
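For reference, here is a minimal sketch of one way such a setup can be parallelized on CPU, with one process per model. The model paths below are placeholders for wherever your trained pipelines are saved, and loading each pipeline inside its worker (rather than in the parent process) avoids having to ship the whole pipeline to each worker:

```python
# A minimal sketch, not spaCy's recommended recipe: run four pipelines in
# parallel on CPU with one process per model. The model paths are
# placeholders for wherever your trained pipelines are saved.
from concurrent.futures import ProcessPoolExecutor

import spacy

MODEL_PATHS = ["ner_model_a", "ner_model_b", "ner_model_c", "ner_model_d"]

def run_model(args):
    model_path, texts = args
    # Load inside the worker so the parent never has to serialize a
    # loaded pipeline to each process.
    nlp = spacy.load(model_path)
    # Return plain character offsets instead of Doc objects so the
    # results are cheap to send back to the parent process.
    return [
        [(ent.start_char, ent.end_char, ent.label_) for ent in doc.ents]
        for doc in nlp.pipe(texts)
    ]

if __name__ == "__main__":
    texts = ["Apple is looking at buying a U.K. startup for $1 billion."]
    with ProcessPoolExecutor(max_workers=len(MODEL_PATHS)) as pool:
        results = list(pool.map(run_model, [(path, texts) for path in MODEL_PATHS]))
    # results[i][j] holds the spans model i predicted for text j
    for path, spans in zip(MODEL_PATHS, results):
        print(path, spans)
```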
-
Hi @baivadash! First off, does the order matter in your models, or are there any inter-model dependencies in your inference pipeline?
Later on, you can combine your annotations using a SpanGroup (maybe using SpanGroup.append or .extend) or take advantage of span categorization using the SpanCategorizer. Here's a good intro on how span categorization relates to NER.
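To make that concrete, here is a minimal sketch of merging per-model predictions into a single span group with extend. The group name "combined", the example text, and the offset lists are made up for illustration, standing in for whatever the individual models return:

```python
# A minimal sketch: collect entity spans from several models into one
# SpanGroup on a shared Doc. The group name "combined" and the offset
# lists below are illustrative, not real model output.
import spacy

nlp = spacy.blank("en")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# (start_char, end_char, label) triples, one list per model
all_offsets = [
    [(0, 5, "ORG")],    # e.g. predictions from the first model
    [(29, 33, "GPE")],  # e.g. predictions from the second model
]

doc.spans["combined"] = []  # assigning a list creates an empty SpanGroup
for offsets in all_offsets:
    spans = [doc.char_span(start, end, label=label) for start, end, label in offsets]
    # char_span returns None when offsets don't line up with token boundaries
    doc.spans["combined"].extend([span for span in spans if span is not None])

for span in doc.spans["combined"]:
    print(span.text, span.label_)
```

Unlike doc.ents, a span group allows overlapping spans, so this also works when two models label the same stretch of text; if you then want a model to arbitrate between overlapping candidates, that is exactly the use case the SpanCategorizer covers.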