Prediction time increases exponentially when I predict an 8k × 8k image #616
Replies: 15 comments 3 replies
-
It should scale linearly. Are you able to provide a sample model or image so that we can recreate the issue?
-
This is the link for the 8k × 8k image.
-
I cannot provide the model that we used for prediction, so you can test the images using any sample model you provide in the demo.
-
@aymanaboghonim what is your model confidence threshold?
-
@aymanaboghonim this was not a rigorous benchmark (I only called the function once), but it should still give some insight. This is from Colab, using the resources you provided (4k × 4k and 8k × 8k; img0_0.jpeg is the 4k × 4k image). It is clear that the runtime increases almost linearly. My thought is: did you have any other CPU-intensive tasks running in parallel, so that the CPU was shared between tasks for long stretches? Perhaps your resource usage was not the same when you ran these prediction tasks? Can you confirm that?
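As a rough sanity check of that linear-scaling claim: the number of slices, and hence the runtime if per-slice cost is constant, should grow linearly with pixel area. A minimal sketch, assuming 512-pixel slices with 0.2 overlap and a simple fixed-stride model (an approximation, not SAHI's exact slicing logic):

```python
import math

def slice_count(size: int, slice_size: int = 512, overlap_ratio: float = 0.2) -> int:
    """Approximate slices along one dimension with a fixed stride
    of slice_size minus the overlap in pixels."""
    overlap = int(slice_size * overlap_ratio)
    stride = slice_size - overlap
    return math.ceil((size - overlap) / stride)

for dim in (4096, 8192):
    total = slice_count(dim) ** 2
    print(f"{dim}x{dim}: ~{total} slices")
# Doubling the side length gives ~4x the slices, i.e. runtime should
# grow roughly linearly in pixel area when per-slice cost is constant.
```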
-
No, the resource usage was the same. Could you mention which detection model you used? We used Mask R-CNN for instance segmentation.
-
We recently merged a PR that reduces instance segmentation model memory by 80% when pycocotools is installed. Have you updated your SAHI to the latest version?
-
Since the model and task were not mentioned, I tried YOLOv5 and object detection, so the results I provided are subject to change for segmentation.
-
I have not updated it yet, but what do you mean by reducing model memory?
-
Yes; that is why I mentioned Mask R-CNN and asked which model you used.
-
Mask predictions from instance segmentation models (such as Mask R-CNN) take 80% less memory when pycocotools is installed in the environment. This may increase prediction speed with instance segmentation models.
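For intuition on why this helps: pycocotools stores masks run-length encoded (RLE) rather than as dense per-pixel arrays. A toy illustration of the idea, using a simplified encoding rather than pycocotools' actual format:

```python
import numpy as np

def rle_encode(mask: np.ndarray) -> list:
    """Toy run-length encoding of a flattened binary mask:
    alternating run lengths of 0s and 1s, starting with a zero-run."""
    flat = mask.ravel()
    change = np.flatnonzero(np.diff(flat)) + 1
    runs = np.diff(np.concatenate(([0], change, [flat.size])))
    if flat[0] == 1:  # by convention the encoding starts with a zero-run
        runs = np.concatenate(([0], runs))
    return runs.tolist()

# A 1000x1000 mask containing one solid square region
mask = np.zeros((1000, 1000), dtype=np.uint8)
mask[200:800, 200:800] = 1

dense_bytes = mask.size  # 1 byte per pixel as uint8
rle = rle_encode(mask)
print(f"dense: {dense_bytes} values, RLE: {len(rle)} run lengths")
# The RLE representation is orders of magnitude smaller for masks
# with large contiguous regions, which typical instance masks have.
```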
-
It should require much less RAM with the latest SAHI version.
-
Great! I hope this really increases the prediction speed and decreases the required cost.
-
We have updated SAHI to the latest version (0.10.07), and RAM consumption has not improved.

```
2048x2048 - 25 slices  - 41.24 seconds
4096x4096 - 100 slices - 495.07 seconds
```

The 4096 image takes 12x the time with only 4x the number of slices. Keep in mind that we used a custom Detectron2 (Mask R-CNN) model and a custom config file.
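One back-of-envelope reading of those numbers (an interpretation of the reported timings, not a profile of the actual run):

```python
# Reported measurements from this thread
t_2048, slices_2048 = 41.24, 25
t_4096, slices_4096 = 495.07, 100

per_slice_2048 = t_2048 / slices_2048  # ~1.65 s per slice
per_slice_4096 = t_4096 / slices_4096  # ~4.95 s per slice

growth = per_slice_4096 / per_slice_2048
print(f"per-slice time grew {growth:.1f}x")
# If slicing alone explained the runtime, per-slice time would stay
# roughly constant; a ~3x growth suggests a cost that scales with the
# total number of detections, such as the global postprocess step.
```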
-
I used code similar to this snippet:

```python
result = get_sliced_prediction(
    "large_image/330112-330144-224882-224914-19.jpeg",
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.1,
    overlap_width_ratio=0.1,
    postprocess_type="NMS",
)
```

When I run prediction on the 8k × 8k image, it takes about 26 minutes, but it takes only 2.3 minutes when I use a 4k × 4k part of the large (8k × 8k) image.
Does this suggest that the time increases exponentially (non-linearly) with image size?
If the answer is yes, is there any remedy for this?
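One possible explanation for the superlinear growth (a sketch with hypothetical detection counts, not a measurement): the NMS postprocess merges detections across the whole image, and a naive NMS is worst-case quadratic in the number of boxes, which itself grows with image area:

```python
def pairwise_comparisons(n: int) -> int:
    """Worst-case number of IoU comparisons in a naive NMS pass."""
    return n * (n - 1) // 2

# Hypothetical counts: if a 4k image yields ~2,000 detections,
# an 8k image with 4x the area might yield ~8,000.
n_4k, n_8k = 2000, 8000
ratio = pairwise_comparisons(n_8k) / pairwise_comparisons(n_4k)
print(f"NMS work grows ~{ratio:.0f}x for 4x the detections")
# So even with a linear number of slices, a dense scene can make the
# postprocess stage dominate and the total time grow superlinearly.
```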