
Does rtdetrv2 have code to visualize a PR curve? #523

Open
xiaocongxin opened this issue Dec 20, 2024 · 8 comments
@xiaocongxin

Star RTDETR
Please star RTDETR on its project page first to support this project.
Star RTDETR to help more people discover this project.


Does rtdetrv2 have code for visualizing a PR curve?

@lyuwenyu
Owner

There is no such code at the moment.

@xiaocongxin
Author

There is no such code at the moment.

OK, thank you. I'd like to draw a confusion matrix; could you point me to which part of the code I should modify?

@lyuwenyu
Owner

lyuwenyu commented Dec 20, 2024

You would need to override accumulate; inside that function there is a _summarize helper, where you can see how self.eval['precision'] and self.eval['recall'] are used. That said, I'd still recommend using a third-party library that can plot this directly.


https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py#L411-L418
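For reference, the `eval['precision']` array that pycocotools' `accumulate` produces has shape `[T, R, K, A, M]`: IoU thresholds, 101 recall points, categories, area ranges, maxDets settings. A minimal sketch of the slicing used throughout this thread, with a dummy array standing in for a real `COCOeval` result (the shapes match the COCO defaults):

```python
import numpy as np

# COCO defaults: 10 IoU thresholds (0.50:0.95), 101 recall points,
# K categories, 4 area ranges (all/small/medium/large),
# 3 maxDets settings (1/10/100). Dummy array in place of a real
# COCOeval.eval['precision'] -- same layout.
T, R, K, A, M = 10, 101, 3, 4, 3
precisions = np.random.rand(T, R, K, A, M)

recall_points = np.arange(0.0, 1.01, 0.01)      # x-axis of the PR curve
cat_idx = 0
# PR curve at IoU=0.50, all areas, maxDets=100:
pr_at_iou50 = precisions[0, :, cat_idx, 0, -1]

# AP@0.50 is the mean precision over the 101 recall points;
# entries of -1 mark absent categories and must be masked out first
valid = pr_at_iou50[pr_at_iou50 > -1]
ap_50 = valid.mean() if valid.size else float('nan')
print(pr_at_iou50.shape, recall_points.shape)
```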

@philippschw

Hi, I am struggling with the same issue, did you find a solution already?

@xiaocongxin
Author

Hi, I am struggling with the same issue, did you find a solution already?

I didn't plot it directly; instead I saved the arrays to an .npz file, so you may need to adapt this yourself. Below is the part I modified:
summarize_classwise in rtdetrv2_pytorch/src/data/dataset/coco_eval.py
def summarize_classwise(self, coco_eval, iou_type):
    # needs: import itertools, numpy as np, and AsciiTable from terminaltables
    precisions = coco_eval.eval['precision']
    recalls = coco_eval.eval['recall']
    assert len(self.coco_gt.getCatIds()) == precisions.shape[2]

    results_per_category = []
    for idx, cat_id in enumerate(self.coco_gt.getCatIds()):
        # dump the IoU=0.50 PR curve for this class (all areas, maxDets=100)
        np.savez(f'/D/rtdetrv2_pytorch/runs/valnpz/rtdetr_testmetrics_class_{idx}.npz',
                 precisions=precisions[0, :, idx, 0, 2],
                 recalls=np.arange(0.0, 1.01, 0.01))
        t = []
        nm = self.coco_gt.loadCats(cat_id)[0]
        precision = precisions[:, :, idx, 0, -1]
        precision = precision[precision > -1]
        if precision.size:
            ap = np.mean(precision)
        else:
            ap = float('nan')
        t.append(f'{nm["name"]}')
        t.append(f'{round(ap, 3)}')  # mean AP over all IoU thresholds

        # AP at IoU=0.50 (first IoU threshold index)
        precision_50 = precisions[0, :, idx, 0, -1]
        precision_50 = precision_50[precision_50 > -1]
        if precision_50.size:
            ap_50 = np.mean(precision_50)
        else:
            ap_50 = float('nan')
        t.append(f'{round(ap_50, 3)}')

        # AP averaged over IoU=0.50:0.95
        precision_50_95 = precisions[:, :, idx, 0, -1]
        precision_50_95 = precision_50_95[precision_50_95 > -1]
        if precision_50_95.size:
            ap_50_95 = np.mean(precision_50_95)
        else:
            ap_50_95 = float('nan')
        t.append(f'{round(ap_50_95, 3)}')

        # area ranges: small, medium, large
        for area in [1, 2, 3]:
            precision = precisions[:, :, idx, area, -1]
            precision = precision[precision > -1]
            if precision.size:
                ap = np.mean(precision)
            else:
                ap = float('nan')
            t.append(f'{round(ap, 3)}')
        results_per_category.append(tuple(t))

    num_columns = len(results_per_category[0])
    results_flatten = list(itertools.chain(*results_per_category))
    headers = [
        'category', 'mAP', 'mAP_50', 'mAP50-95', 'mAP_s',
        'mAP_m', 'mAP_l'
    ]
    results_2d = itertools.zip_longest(*[
        results_flatten[i::num_columns]
        for i in range(num_columns)
    ])
    table_data = [headers]
    table_data += [result for result in results_2d]
    table = AsciiTable(table_data)
    print(f"\n{iou_type} evaluation per category:\n" + table.table)
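The .npz files saved above can then be loaded and plotted offline. A minimal sketch with matplotlib; the commented-out `np.load` line shows the intended use with a file produced by the snippet above, and the dummy curve is a stand-in so the sketch runs without one:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend, so this runs without a display
import matplotlib.pyplot as plt

# With a real file from summarize_classwise you would instead do:
#   data = np.load('rtdetr_testmetrics_class_0.npz')
#   precisions, recalls = data['precisions'], data['recalls']
# Dummy stand-in curve so the sketch is self-contained:
recalls = np.arange(0.0, 1.01, 0.01)
precisions = np.clip(1.0 - recalls, 0.0, 1.0)

fig, ax = plt.subplots(figsize=(8, 6), tight_layout=True)
ax.plot(recalls, precisions, linewidth=2, label='class 0')
ax.set_xlabel('Recall')
ax.set_ylabel('Precision')
ax.set_title('Precision-Recall Curve (IoU=0.50)')
ax.legend()
fig.savefig('pr_curve_class_0.png', dpi=200)
plt.close(fig)
```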

@xiaocongxin
Author

Hi, I am struggling with the same issue, did you find a solution already?

To actually plot it, it should just be (note that ax.plot takes x and y positionally, not as keyword arguments):

fig, ax = plt.subplots(1, 1, figsize=(10, 8), tight_layout=True)
ax.plot(np.arange(0.0, 1.01, 0.01), precisions[0, :, idx, 0, 2],
        linewidth=3, color="red",
        label=f"rtdetrv2 class {idx} {ap_50} [email protected]")

@philippschw

Thanks for your answer @xiaocongxin .
Because of the different timezones, I had already come up with my own solution; maybe it is still helpful for someone. As I did not understand how to call it from inside coco_eval, I load the eval result, compute and draw the PR curve from there, and save the [email protected] for each class in a dict so it can be logged to the model tracker.

import numpy as np
import torch
import matplotlib.pyplot as plt
# mscoco_category2name: dict mapping COCO category id -> name; import it
# from wherever it lives in your rtdetrv2_pytorch checkout

def draw_pr_curve(all_precision, save_dir="pr_curve.png"):
    """
    Draws the Precision-Recall curve for each class and calculates [email protected].

    Args:
        all_precision: Array containing precision values for all classes, IoU thresholds, areas, and detection thresholds.
        save_dir: Path to save the plot.

    Returns:
        A dictionary where keys are class names and values are corresponding [email protected] values.

    """
    num_classes = all_precision.shape[2]
    iou_thres_index = 0  # IoU threshold index for [email protected]
    area_range_index = 0  # All Area ranges
    max_detect_thres_index = 2  # Maximal Detection Threshold 100

    fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True)
    colors = plt.colormaps['tab20']

    # Initialize a dictionary to store [email protected] values
    map_05_dict = {}

    for class_id in range(num_classes):
        pr_curve = all_precision[iou_thres_index, :, class_id, area_range_index, max_detect_thres_index]
        x = np.arange(0, 1.01, 0.01)

        # Calculate [email protected]
        map_05 = np.mean(pr_curve)  # Assuming precision is calculated at 101 recall points
        map_05_dict[mscoco_category2name.get(class_id + 1, f'{class_id} not specified')] = map_05

        # Add [email protected] to the legend label
        label = f"{mscoco_category2name.get(class_id + 1, f'{class_id} not specified')} {map_05:.3f}" 
        ax.plot(x, pr_curve, label=label, color=colors(class_id / num_classes))

    # Calculate and store all classes [email protected]
    all_classes_map_05 = np.mean(list(map_05_dict.values()))
    map_05_dict['all_classes'] = all_classes_map_05

    # Add legend
    ax.legend(title='Classes [email protected]', bbox_to_anchor=(1.05, 1), loc='upper left')

    # Add axis descriptions
    ax.set_xlabel('Recall')
    ax.set_ylabel('Precision')
    ax.set_title('Precision-Recall Curve')

    fig.savefig(save_dir, dpi=250)
    plt.close(fig)

    return map_05_dict

coco_eval = torch.load('output/rtdetrv2_r18vd_120e_coco/eval.pth')
map05 = draw_pr_curve(coco_eval['precision'])

@xiaocongxin
Author


Impressive! I adapted mine from the code Godk02 posted under
#302
and I don't quite understand why it can be called that way.
