apply TTA during inference using sahi #633
-
Is there any example code for applying TTA? I only found an example for YOLOv5 using torch.hub.load; I could not find one for the other frameworks (Detectron2, MMDetection, etc.).
Replies: 6 comments
-
Hello @ramdhan1989, can you provide a reproducible code example so that we can see which functions and modules are being called?
-
I apologize, my previous question does not seem related to this repo. I have changed my question, if that is OK. Thanks.
-
You can find TTA examples for Detectron2, MMDetection, HuggingFace, and Torchvision models here: https://github.com/obss/sahi/tree/main/demo
-
I have checked the notebooks. However, I couldn't find the syntax used to activate or deactivate TTA during inference. Would you mind elaborating on the TTA option in the code? Thanks.
-
What do you mean by TTA? By default, the SAHI `predict` function combines standard prediction and sliced prediction results. Set `no_standard_prediction: bool = True` if you want to disable standard prediction (sahi/sahi/predict.py, line 302 at 287accf), and set `no_sliced_prediction: bool = True` if you want to disable sliced prediction (line 303 at 287accf). Similarly, set `perform_standard_pred: bool = False` if you want to disable standard prediction in the `get_sliced_prediction` function (line 130 at 287accf).
-
Ok noted |
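As a rough illustration of the flag described above, here is a minimal sketch of sliced inference with the full-image (standard) prediction pass disabled via `perform_standard_pred=False`. It assumes `sahi` is installed; the model type, weights path, and image path below are placeholders, not files shipped with this discussion:

```python
import os

# Guard the import so the sketch degrades gracefully when sahi is absent.
try:
    from sahi import AutoDetectionModel
    from sahi.predict import get_sliced_prediction
    SAHI_AVAILABLE = True
except ImportError:
    SAHI_AVAILABLE = False

MODEL_PATH = "yolov5s.pt"                      # placeholder weights path
IMAGE_PATH = "demo_data/small-vehicles1.jpeg"  # placeholder image path

if SAHI_AVAILABLE and os.path.exists(MODEL_PATH) and os.path.exists(IMAGE_PATH):
    # Load a detection model through SAHI's unified wrapper.
    detection_model = AutoDetectionModel.from_pretrained(
        model_type="yolov5",
        model_path=MODEL_PATH,
        confidence_threshold=0.4,
        device="cpu",
    )
    # Run sliced prediction only; perform_standard_pred=False skips the
    # extra full-image prediction pass that is merged in by default.
    result = get_sliced_prediction(
        IMAGE_PATH,
        detection_model,
        slice_height=512,
        slice_width=512,
        overlap_height_ratio=0.2,
        overlap_width_ratio=0.2,
        perform_standard_pred=False,
    )
    print(f"{len(result.object_prediction_list)} detections")
else:
    print("sahi or the placeholder files are unavailable; install with `pip install sahi`")
```

The same pair of behaviors maps onto `no_standard_prediction` / `no_sliced_prediction` in the higher-level `predict` function, as noted in the reply above.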