
nms is not compiled with GPU support #173

Open
MichaelTiemannOSC opened this issue Jul 10, 2022 · 1 comment
@MichaelTiemannOSC (Contributor)

Describe the bug
Running pdf_table_extraction.ipynb leads to the following error:


RuntimeError Traceback (most recent call last)
Input In [14], in <cell line: 2>()
1 # get bounding box coordinates for tables
----> 2 temp = inference_detector(table_extractor.model, np.array(images[image_num]))
3 # temp = inference_detector(table_extractor.model, './demo.png')
4 print("Coordinates and probabilities of bordered tables\n", (temp[0][0]))

File /opt/app-root/lib64/python3.8/site-packages/mmdet/apis/inference.py:150, in inference_detector(model, imgs)
148 # forward the model
149 with torch.no_grad():
--> 150 results = model(return_loss=False, rescale=True, **data)
152 if not is_batch:
153 return results[0]

File /opt/app-root/lib64/python3.8/site-packages/torch/nn/modules/module.py:722, in Module._call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
725 self._forward_hooks.values()):
726 hook_result = hook(self, input, result)

File /opt/app-root/lib64/python3.8/site-packages/mmcv/runner/fp16_utils.py:98, in auto_fp16.<locals>.auto_fp16_wrapper.<locals>.new_func(*args, **kwargs)
95 raise TypeError('@auto_fp16 can only be used to decorate the '
96 'method of nn.Module')
97 if not (hasattr(args[0], 'fp16_enabled') and args[0].fp16_enabled):
---> 98 return old_func(*args, **kwargs)
100 # get the arg spec of the decorated method
101 args_info = getfullargspec(old_func)

File /opt/app-root/lib64/python3.8/site-packages/mmdet/models/detectors/base.py:174, in BaseDetector.forward(self, img, img_metas, return_loss, **kwargs)
172 return self.forward_train(img, img_metas, **kwargs)
173 else:
--> 174 return self.forward_test(img, img_metas, **kwargs)

File /opt/app-root/lib64/python3.8/site-packages/mmdet/models/detectors/base.py:147, in BaseDetector.forward_test(self, imgs, img_metas, **kwargs)
145 if 'proposals' in kwargs:
146 kwargs['proposals'] = kwargs['proposals'][0]
--> 147 return self.simple_test(imgs[0], img_metas[0], **kwargs)
148 else:
149 assert imgs[0].size(0) == 1, 'aug test does not support '
150 'inference with batch size '
151 f'{imgs[0].size(0)}'

File /opt/app-root/lib64/python3.8/site-packages/mmdet/models/detectors/two_stage.py:179, in TwoStageDetector.simple_test(self, img, img_metas, proposals, rescale)
177 x = self.extract_feat(img)
178 if proposals is None:
--> 179 proposal_list = self.rpn_head.simple_test_rpn(x, img_metas)
180 else:
181 proposal_list = proposals

File /opt/app-root/lib64/python3.8/site-packages/mmdet/models/dense_heads/dense_test_mixins.py:130, in BBoxTestMixin.simple_test_rpn(self, x, img_metas)
117 """Test without augmentation, only for RPNHead and its variants,
118 e.g., GARPNHead, etc.
119
(...)
127 where 5 represent (tl_x, tl_y, br_x, br_y, score).
128 """
129 rpn_outs = self(x)
--> 130 proposal_list = self.get_bboxes(*rpn_outs, img_metas=img_metas)
131 return proposal_list

File /opt/app-root/lib64/python3.8/site-packages/mmcv/runner/fp16_utils.py:186, in force_fp32.<locals>.force_fp32_wrapper.<locals>.new_func(*args, **kwargs)
183 raise TypeError('@force_fp32 can only be used to decorate the '
184 'method of nn.Module')
185 if not (hasattr(args[0], 'fp16_enabled') and args[0].fp16_enabled):
--> 186 return old_func(*args, **kwargs)
187 # get the arg spec of the decorated method
188 args_info = getfullargspec(old_func)

File /opt/app-root/lib64/python3.8/site-packages/mmdet/models/dense_heads/base_dense_head.py:102, in BaseDenseHead.get_bboxes(self, cls_scores, bbox_preds, score_factors, img_metas, cfg, rescale, with_nms, **kwargs)
99 else:
100 score_factor_list = [None for _ in range(num_levels)]
--> 102 results = self._get_bboxes_single(cls_score_list, bbox_pred_list,
103 score_factor_list, mlvl_priors,
104 img_meta, cfg, rescale, with_nms,
105 **kwargs)
106 result_list.append(results)
107 return result_list

File /opt/app-root/lib64/python3.8/site-packages/mmdet/models/dense_heads/rpn_head.py:185, in RPNHead._get_bboxes_single(self, cls_score_list, bbox_pred_list, score_factor_list, mlvl_anchors, img_meta, cfg, rescale, with_nms, **kwargs)
179 mlvl_valid_anchors.append(anchors)
180 level_ids.append(
181 scores.new_full((scores.size(0), ),
182 level_idx,
183 dtype=torch.long))
--> 185 return self._bbox_post_process(mlvl_scores, mlvl_bbox_preds,
186 mlvl_valid_anchors, level_ids, cfg,
187 img_shape)

File /opt/app-root/lib64/python3.8/site-packages/mmdet/models/dense_heads/rpn_head.py:231, in RPNHead._bbox_post_process(self, mlvl_scores, mlvl_bboxes, mlvl_valid_anchors, level_ids, cfg, img_shape, **kwargs)
228 ids = ids[valid_mask]
230 if proposals.numel() > 0:
--> 231 dets, _ = batched_nms(proposals, scores, ids, cfg.nms)
232 else:
233 return proposals.new_zeros(0, 5)

File /opt/app-root/lib64/python3.8/site-packages/mmcv/ops/nms.py:307, in batched_nms(boxes, scores, idxs, nms_cfg, class_agnostic)
305 # Won't split to multiple nms nodes when exporting to onnx
306 if boxes_for_nms.shape[0] < split_thr or torch.onnx.is_in_onnx_export():
--> 307 dets, keep = nms_op(boxes_for_nms, scores, **nms_cfg_)
308 boxes = boxes[keep]
309 # -1 indexing works abnormal in TensorRT
310 # This assumes dets has 5 dimensions where
311 # the last dimension is score.
312 # TODO: more elegant way to handle the dimension issue.
313 # Some type of nms would reweight the score, such as SoftNMS

File /opt/app-root/lib64/python3.8/site-packages/mmcv/utils/misc.py:340, in deprecated_api_warning.<locals>.api_warning_wrapper.<locals>.new_func(*args, **kwargs)
337 kwargs[dst_arg_name] = kwargs.pop(src_arg_name)
339 # apply converted arguments to the decorated method
--> 340 output = old_func(*args, **kwargs)
341 return output

File /opt/app-root/lib64/python3.8/site-packages/mmcv/ops/nms.py:171, in nms(boxes, scores, iou_threshold, offset, score_threshold, max_num)
169 inds = ext_module.nms(*indata_list, **indata_dict)
170 else:
--> 171 inds = NMSop.apply(boxes, scores, iou_threshold, offset,
172 score_threshold, max_num)
173 dets = torch.cat((boxes[inds], scores[inds].reshape(-1, 1)), dim=1)
174 if is_numpy:

File /opt/app-root/lib64/python3.8/site-packages/mmcv/ops/nms.py:26, in NMSop.forward(ctx, bboxes, scores, iou_threshold, offset, score_threshold, max_num)
22 bboxes, scores = bboxes[valid_mask], scores[valid_mask]
23 valid_inds = torch.nonzero(
24 valid_mask, as_tuple=False).squeeze(dim=1)
---> 26 inds = ext_module.nms(
27 bboxes, scores, iou_threshold=float(iou_threshold), offset=offset)
29 if max_num > 0:
30 inds = inds[:max_num]

RuntimeError: nms is not compiled with GPU support
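
For what it's worth, the failing call is mmcv's compiled nms extension (ext_module.nms in mmcv/ops/nms.py), not a PyTorch op. Here is a minimal sketch, not taken from the notebook, that should trigger the same error in isolation, assuming mmcv.ops.nms behaves as the signature in the traceback suggests:

import torch
from mmcv.ops import nms

# Two overlapping boxes on the GPU; with an mmcv build that lacks CUDA ops,
# this call should raise "RuntimeError: nms is not compiled with GPU support".
boxes = torch.tensor([[0., 0., 10., 10.],
                      [1., 1., 11., 11.]], device='cuda')
scores = torch.tensor([0.9, 0.8], device='cuda')
dets, keep = nms(boxes, scores, iou_threshold=0.5)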

To Reproduce
Steps to reproduce the behavior:

  1. I am working from this fork of the aicoe-osc-demo: https://github.com/MichaelTiemannOSC/aicoe-osc-demo/tree/cdp-experiments
  2. After setting up the PDFs and annotations for EXPERIMENT_NAME=test_cdp2, I run pdf_text_extract.ipynb, which works fine. My attempt to run pdf_table_extract.ipynb works until cell eleven:

# get bounding box coordinates for tables
temp = inference_detector(table_extractor.model, np.array(images[image_num]))
# temp = inference_detector(table_extractor.model, './demo.png')
print("Coordinates and probabilities of bordered tables\n", (temp[0][0]))
print("Coordinates and probabilities of borderless tables\n", (temp[0][2]))

Expected behavior
I expect the function to complete without error.

Additional context
I have a GPU, and it works (according to confirm_gpu_available.ipynb). I added the following check to the notebook, and it confirms that PyTorch can see the GPU:

import torch
torch.cuda.is_available()  # returns True

I would therefore expect nms to go ahead and use the GPU.
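
As an additional sanity check, mmcv ships helpers that report what its compiled ops were built against; the sketch below assumes the get_compiler_version / get_compiling_cuda_version helpers documented for mmcv-full are available in this build:

import mmcv
from mmcv.ops import get_compiler_version, get_compiling_cuda_version

# If mmcv-full was installed as a CPU-only wheel, the compiling CUDA version is
# expected to come back empty / "not available" even though torch sees the GPU.
print(mmcv.__version__)
print(get_compiler_version())
print(get_compiling_cuda_version())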

MichaelTiemannOSC added the bug label Jul 10, 2022
@Shreyanand (Member)

pdf_table_extraction is not currently part of the demo. In our initial conversation with the IDS folks, we found that the table extraction and its models do not give good results, so we decided to focus on the text extraction notebooks.
The table extraction and curation notebooks are still part of the demo2 directory, but we should move them to the deprecated directory to avoid confusion. Created #175 for that task.

Shreyanand added the nlp-internal label Jul 14, 2022