Based on the README, the performance of ViT-B/16+ICS is 75.1%, but I only got 71.2% mAP with the MSMT17_V2 dataset. Was ViT-B/16+ICS evaluated on MSMT17_V2?
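As far as I can tell, the two public MSMT17 releases contain the same splits and mainly differ in packaging, so the dataset statistics in the log below look identical either way. A quick, rough way to check which variant is actually on disk, assuming the directory names handled by common re-ID loaders such as torchreid (the path is just the `ROOT_DIR` from my config plus the dataset name, so adjust as needed):

```python
from pathlib import Path

# ROOT_DIR from the config below + dataset name; adjust if your layout differs.
root = Path('/home/hpds/Repositories/ml-models/dataset/MSMT17_V2')

# Assumed layouts (following common re-ID loaders such as torchreid):
# V1 ships train/ + test/, V2 ships mask_train_v2/ + mask_test_v2/.
if (root / 'mask_train_v2').is_dir() and (root / 'mask_test_v2').is_dir():
    print('looks like MSMT17_V2')
elif (root / 'train').is_dir() and (root / 'test').is_dir():
    print('looks like MSMT17_V1')
else:
    print('unrecognized layout - check what the dataset loader expects')
```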
```
2023-07-05 13:54:03 transreid INFO: Namespace(config_file='configs/msmt17/vit_base_ics_384.yml', opts=['MODEL.DEVICE_ID', "('0')"])
2023-07-05 13:54:03 transreid INFO: Loaded configuration file configs/msmt17/vit_base_ics_384.yml
2023-07-05 13:54:03 transreid INFO: MODEL:
PRETRAIN_PATH: '/home/hpds/Repositories/ml-models/proto/TransReID-SSL/checkpoint/vit_base_ics_cfs_lup.pth'
PRETRAIN_HW_RATIO: 2
METRIC_LOSS_TYPE: 'triplet'
IF_LABELSMOOTH: 'off'
IF_WITH_CENTER: 'no'
NAME: 'transformer'
NO_MARGIN: True
DEVICE_ID: ('2')
TRANSFORMER_TYPE: 'vit_base_patch16_224_TransReID'
STRIDE_SIZE: [16, 16]
STEM_CONV: True # False for vanilla ViT-S
# DIST_TRAIN: True
INPUT:
SIZE_TRAIN: [384, 128]
SIZE_TEST: [384, 128]
PROB: 0.5 # random horizontal flip
RE_PROB: 0.5 # random erasing
PADDING: 10
PIXEL_MEAN: [0.5, 0.5, 0.5]
PIXEL_STD: [0.5, 0.5, 0.5]
DATASETS:
NAMES: ('MSMT17_V2')
ROOT_DIR: ('/home/hpds/Repositories/ml-models/dataset')
DATALOADER:
SAMPLER: 'softmax_triplet'
NUM_INSTANCE: 4
NUM_WORKERS: 8
SOLVER:
OPTIMIZER_NAME: 'SGD'
MAX_EPOCHS: 120
BASE_LR: 0.0004
WARMUP_EPOCHS: 20
IMS_PER_BATCH: 64
WARMUP_METHOD: 'cosine'
LARGE_FC_LR: False
CHECKPOINT_PERIOD: 120
LOG_PERIOD: 20
EVAL_PERIOD: 120
WEIGHT_DECAY: 1e-4
WEIGHT_DECAY_BIAS: 1e-4
BIAS_LR_FACTOR: 2
TEST:
EVAL: True
IMS_PER_BATCH: 256
RE_RANKING: False
WEIGHT: '/home/hpds/Repositories/ml-models/proto/TransReID-SSL/checkpoint/transformer_120.pth'
NECK_FEAT: 'before'
FEAT_NORM: 'yes'
OUTPUT_DIR: '../../log/transreid/msmt17/vit_base_ics_cfs_lup_384'
2023-07-05 13:54:03 transreid INFO: Running with config:
DATALOADER:
NUM_INSTANCE: 4
NUM_WORKERS: 8
REMOVE_TAIL: 0
SAMPLER: softmax_triplet
DATASETS:
NAMES: MSMT17_V2
ROOT_DIR: /home/hpds/Repositories/ml-models/dataset
ROOT_TRAIN_DIR: ../data
ROOT_VAL_DIR: ../data
INPUT:
PADDING: 10
PIXEL_MEAN: [0.5, 0.5, 0.5]
PIXEL_STD: [0.5, 0.5, 0.5]
PROB: 0.5
RE_PROB: 0.5
SIZE_TEST: [384, 128]
SIZE_TRAIN: [384, 128]
MODEL:
ATT_DROP_RATE: 0.0
COS_LAYER: False
DEVICE: cuda
DEVICE_ID: 0
DEVIDE_LENGTH: 4
DIST_TRAIN: False
DROPOUT_RATE: 0.0
DROP_OUT: 0.0
DROP_PATH: 0.1
FEAT_DIM: 512
GEM_POOLING: False
ID_LOSS_TYPE: softmax
ID_LOSS_WEIGHT: 1.0
IF_LABELSMOOTH: off
IF_WITH_CENTER: no
JPM: False
LAST_STRIDE: 1
METRIC_LOSS_TYPE: triplet
NAME: transformer
NECK: bnneck
NO_MARGIN: True
PRETRAIN_CHOICE: imagenet
PRETRAIN_HW_RATIO: 2
PRETRAIN_PATH: /home/hpds/Repositories/ml-models/proto/TransReID-SSL/checkpoint/vit_base_ics_cfs_lup.pth
REDUCE_FEAT_DIM: False
RE_ARRANGE: True
SHIFT_NUM: 5
SHUFFLE_GROUP: 2
SIE_CAMERA: False
SIE_COE: 3.0
SIE_VIEW: False
STEM_CONV: True
STRIDE_SIZE: [16, 16]
TRANSFORMER_TYPE: vit_base_patch16_224_TransReID
TRIPLET_LOSS_WEIGHT: 1.0
OUTPUT_DIR: ../../log/transreid/msmt17/vit_base_ics_cfs_lup_384
SOLVER:
BASE_LR: 0.0004
BIAS_LR_FACTOR: 2
CENTER_LOSS_WEIGHT: 0.0005
CENTER_LR: 0.5
CHECKPOINT_PERIOD: 120
COSINE_MARGIN: 0.5
COSINE_SCALE: 30
EVAL_PERIOD: 120
GAMMA: 0.1
IMS_PER_BATCH: 64
LARGE_FC_LR: False
LOG_PERIOD: 20
MARGIN: 0.3
MAX_EPOCHS: 120
MOMENTUM: 0.9
OPTIMIZER_NAME: SGD
SEED: 1234
STEPS: (40, 70)
TRP_L2: False
WARMUP_EPOCHS: 20
WARMUP_FACTOR: 0.01
WARMUP_METHOD: cosine
WEIGHT_DECAY: 0.0001
WEIGHT_DECAY_BIAS: 0.0001
TEST:
DIST_MAT: dist_mat.npy
EVAL: True
FEAT_NORM: yes
IMS_PER_BATCH: 256
NECK_FEAT: before
RE_RANKING: False
WEIGHT: /home/hpds/Repositories/ml-models/proto/TransReID-SSL/checkpoint/transformer_120.pth
MSMT17_V2 /home/hpds/Repositories/ml-models/dataset
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15} cam_container
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15} cam_container
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15} cam_container
{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15} cam_container
=> MSMT17 loaded
2023-07-05 13:54:03 transreid.check INFO: Dataset statistics:
2023-07-05 13:54:03 transreid.check INFO: ----------------------------------------
2023-07-05 13:54:03 transreid.check INFO: subset | # ids | # images | # cameras
2023-07-05 13:54:03 transreid.check INFO: ----------------------------------------
2023-07-05 13:54:03 transreid.check INFO: train | 1041 | 32621 | 15
2023-07-05 13:54:03 transreid.check INFO: query | 3060 | 11659 | 15
2023-07-05 13:54:03 transreid.check INFO: gallery | 3060 | 82161 | 15
2023-07-05 13:54:03 transreid.check INFO: ----------------------------------------
using img_triplet sampler
using Transformer_type: vit_base_patch16_224_TransReID as a backbone
using stride: [16, 16], and patch number is num_y24 * num_x8
Resized position embedding from size:torch.Size([1, 129, 768]) to size: torch.Size([1, 193, 768]) with height:24 width: 8
Load 172 / 174 layers.
Loading pretrained ImageNet model......from /home/hpds/Repositories/ml-models/proto/TransReID-SSL/checkpoint/vit_base_ics_cfs_lup.pth
===========building transformer===========
Loading pretrained model from /home/hpds/Repositories/ml-models/proto/TransReID-SSL/checkpoint/transformer_120.pth
2023-07-05 13:54:05 transreid.test INFO: Enter inferencing
True
torch.cuda.device_count() 1
The test feature is normalized
=> Computing DistMat with euclidean_distance
/home/hpds/Repositories/ml-models/proto/TransReID-SSL/transreid_pytorch/utils/metrics.py:12: UserWarning: This overload of addmm_ is deprecated:
addmm_(Number beta, Number alpha, Tensor mat1, Tensor mat2)
Consider using one of the following signatures instead:
addmm_(Tensor mat1, Tensor mat2, *, Number beta, Number alpha) (Triggered internally at ../torch/csrc/utils/python_arg_parser.cpp:1485.)
dist_mat.addmm_(1, -2, qf, gf.t())
distmat (11659, 82161) <class 'numpy.ndarray'>
2023-07-05 14:04:03 transreid.test INFO: Validation Results
2023-07-05 14:04:03 transreid.test INFO: mAP: 71.2%
2023-07-05 14:04:03 transreid.test INFO: CMC curve, Rank-1 :87.9%
2023-07-05 14:04:03 transreid.test INFO: CMC curve, Rank-5 :93.6%
2023-07-05 14:04:03 transreid.test INFO: CMC curve, Rank-10 :95.1%
```
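As a side note, the `UserWarning` near the end of the log comes from the deprecated positional `addmm_` overload in `utils/metrics.py`; it should not affect the reported numbers. For reference, here is a minimal sketch of the squared-Euclidean distance step using the current keyword form (`beta`/`alpha`), assuming the usual query/gallery feature matrices; the actual TransReID-SSL implementation may differ in details:

```python
import torch

def euclidean_distance(qf: torch.Tensor, gf: torch.Tensor) -> torch.Tensor:
    """Squared Euclidean distances between query (m x d) and gallery (n x d) features."""
    m, n = qf.size(0), gf.size(0)
    # ||q||^2 + ||g||^2, broadcast to an (m, n) matrix
    dist_mat = (
        torch.pow(qf, 2).sum(dim=1, keepdim=True).expand(m, n)
        + torch.pow(gf, 2).sum(dim=1, keepdim=True).expand(n, m).t()
    )
    # Subtract 2 * qf @ gf.t(); the keyword beta/alpha form replaces the
    # deprecated dist_mat.addmm_(1, -2, qf, gf.t()) overload flagged in the log.
    dist_mat.addmm_(qf, gf.t(), beta=1, alpha=-2)
    return dist_mat.clamp_(min=0)
```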
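Similarly, the "Resized position embedding from size: torch.Size([1, 129, 768]) to size: torch.Size([1, 193, 768])" line corresponds to the usual ViT trick of keeping the class token and bilinearly interpolating the patch-grid embeddings (here from the 16x8 grid of the pretrained model to the 24x8 grid implied by 384x128 input with stride 16). A rough sketch of that step, not the repository's exact code:

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(posemb: torch.Tensor, new_h: int, new_w: int,
                     old_h: int, old_w: int) -> torch.Tensor:
    """Interpolate ViT position embeddings [1, 1 + old_h*old_w, dim] to a new grid."""
    cls_tok, grid = posemb[:, :1], posemb[:, 1:]
    dim = grid.shape[-1]
    grid = grid.reshape(1, old_h, old_w, dim).permute(0, 3, 1, 2)   # [1, dim, H, W]
    grid = F.interpolate(grid, size=(new_h, new_w), mode='bilinear',
                         align_corners=False)
    grid = grid.permute(0, 2, 3, 1).reshape(1, new_h * new_w, dim)
    return torch.cat([cls_tok, grid], dim=1)

# e.g. 129 = 1 cls + 16*8 patches  ->  193 = 1 cls + 24*8 patches
# resized = resize_pos_embed(pretrained_posemb, 24, 8, 16, 8)
```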