
Testing on single image #7

Open
adithyagaurav opened this issue Nov 17, 2020 · 1 comment

Comments

@adithyagaurav

Hi, I am trying to use your model with the ImageNet pre-trained weights provided in this repository, and I'm hoping to run inference on a single image. The problem I'm facing is that every time I run inference, the model outputs tensor(600), i.e. it predicts class 600 for every image. I have tried images from several different classes, and the model consistently labels all of them as 600.

I'd like to know why this is happening. Am I doing something wrong? Here is my code:

```python
import torch
from PIL import Image
from torchvision import transforms

# darknet53 is the model constructor from this repository
model = darknet53(1000)
checkpoint = torch.load(checkpoint_path, map_location=torch.device('cpu'))
model.load_state_dict(checkpoint['state_dict'])

# Standard ImageNet preprocessing
test_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

pil_image = test_transform(Image.open(image_path))  # tensor of shape (3, 224, 224)
print(pil_image.shape)
torch_image = pil_image.unsqueeze(0)                # add batch dim: (1, 3, 224, 224)
print(torch_image.shape)

out = model(torch_image)
label = torch.argmax(out)
print(label)
```

Can you help me?


ITBoy-China commented Aug 19, 2022


You forgot to call `model.eval()`. Without it, layers such as batch normalization keep using batch statistics instead of their learned running statistics, which breaks inference, especially with a batch size of 1.
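To illustrate the effect, here is a minimal sketch using a toy network (not the repository's darknet53): BatchNorm produces different outputs in train mode versus eval mode, so the same input gives different predictions until `model.eval()` is called.

```python
import torch
import torch.nn as nn

# Toy network with a BatchNorm layer, standing in for the real model
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8))

x = torch.randn(1, 3, 224, 224)

net.train()                  # default mode: BatchNorm uses batch statistics
out_train = net(x)

net.eval()                   # inference mode: BatchNorm uses running statistics
with torch.no_grad():        # also skip autograd bookkeeping during inference
    out_eval = net(x)

# With batch size 1, train-mode batch statistics are degenerate, so the two
# modes can produce very different outputs -- hence the constant predictions.
print(net.training)          # False after calling eval()
```

In short, load the weights, call `model.eval()`, and then run inference (ideally inside `torch.no_grad()`).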
