juhha1/UNet-and-BESNet

UNet and BESNet for semantic segmentation

(Example segmentation results on the Oxford-IIIT Pet and LGG MRI datasets.)

A customized implementation of U-Net and BESNet in PyTorch and Keras (Keras version not yet updated) for the Oxford-IIIT Pet Dataset and the LGG Brain MRI Segmentation Dataset.

The Oxford-IIIT Pet Dataset contains 7,393 images of 37 different breeds of pets (dogs and cats). The annotations include the breed, a head ROI, and a pixel-level trimap segmentation (background, edge, mask). This project uses only the images and the trimap segmentations.
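Turning the trimap into a training target amounts to remapping its raw values to contiguous class indices. A minimal sketch, assuming the common Oxford-IIIT encoding (1 = foreground, 2 = background, 3 = boundary); the repository's prepare_dataset.py may map these differently:

```python
import numpy as np

# Assumed raw trimap encoding (hypothetical here, not taken from this repo):
# 1 = pet foreground, 2 = background, 3 = boundary/edge.
TRIMAP_TO_CLASS = {1: 1, 2: 0, 3: 2}  # -> background=0, mask=1, edge=2

def trimap_to_target(trimap: np.ndarray) -> np.ndarray:
    """Map raw trimap values to contiguous class indices for training."""
    target = np.zeros_like(trimap, dtype=np.int64)
    for raw, cls in TRIMAP_TO_CLASS.items():
        target[trimap == raw] = cls
    return target
```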

The LGG Brain Segmentation Dataset contains 3,923 brain MR (FLAIR) images paired with segmentation masks. The dataset does not include edge annotations, so edges were generated with Canny edge detection (cv2.Canny) in data/lgg-mri-segmentation/prepare_data.py.

Downloading and Preparing Dataset

Oxford-IIIT Pet Dataset:

  1. Download: go to data/oxford-iiit-pet and run: sh download_data.sh
  2. Prepare: in the same directory, run: python prepare_dataset.py

LGG Brain Segmentation Dataset:

  1. Download: go to data/lgg-mri-segmentation and run: sh download_data.sh (the Kaggle API is used here; if it is not set up, download the dataset from the Kaggle website instead)
  2. Prepare: in the same directory, run: python prepare_dataset.py

Training

> python train.py -h
usage: train.py [-h] [-d DATA] [-s SAVE_NET] [-e NUM_EPOCH] [-b BATCH_SIZE]
                [-l LR] [-n NET] [--height HEIGHT] [--width WIDTH]
                [--alpha ALPHA] [--beta BETA] [--bece-loss BECE_LOSS]

optional arguments:
  -h, --help            show this help message and exit
  -d DATA, --data DATA  Which dataset? (pet OR mri) (default: pet)
  -s SAVE_NET, --save SAVE_NET
                        Save checkpoints (default: True)
  -e NUM_EPOCH, --epoch NUM_EPOCH
                        Number of epochs (default: 50)
  -b BATCH_SIZE, --batch BATCH_SIZE
                        Batch size (default: 8)
  -l LR, --learning-rate LR
                        Learning Rate (default: 0.001)
  -n NET, --net NET     Type of network to train (unet OR besnet) (default:
                        besnet)
  --height HEIGHT       Height of input image (default: 128)
  --width WIDTH         Width of input image (default: 128)
  --alpha ALPHA         Alpha value for BECE loss (for BESNet) (default: 0.5)
  --beta BETA           Beta value for BECE loss (for BESNet) (default: 1)
  --bece-loss BECE_LOSS
                        Loss for MDP in BESNet (BECE loss or BCE) (default:
                        True)

The -n/--net argument accepts unet or besnet: unet trains U-Net and besnet trains BESNet.
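For example (illustrative invocations; all flags come from the help text above):

```shell
# Train U-Net on the pet dataset for 50 epochs
python train.py -d pet -n unet -e 50

# Train BESNet on the MRI dataset with BECE loss enabled
python train.py -d mri -n besnet --bece-loss True --alpha 0.5 --beta 1
```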

BESNet takes an additional --bece-loss argument that controls whether the Boundary-Enhanced Cross-Entropy (BECE) loss is used for the main decoding path. When BECE loss is enabled, --alpha (default 0.5) and --beta (default 1) should be set as well.
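One plausible form of such a loss weights the per-pixel cross-entropy of the main decoding path by the boundary path's predicted edge probability. A minimal sketch, assuming that weighting scheme; the function name and the exact weighting used in this repository are not taken from the source:

```python
import torch
import torch.nn.functional as F

def bece_loss(main_logits: torch.Tensor,
              edge_probs: torch.Tensor,
              target: torch.Tensor,
              alpha: float = 0.5,
              beta: float = 1.0) -> torch.Tensor:
    """Boundary-weighted cross-entropy (illustrative form).

    main_logits: (N, C, H, W) logits from the main decoding path.
    edge_probs:  (N, H, W) edge probabilities from the boundary path.
    target:      (N, H, W) integer class labels.
    """
    # Per-pixel cross-entropy, kept unreduced so it can be reweighted.
    ce = F.cross_entropy(main_logits, target, reduction="none")
    # Up-weight pixels the boundary path believes lie on an edge.
    # (Assumed weighting alpha + beta * P(edge); the repo may differ.)
    weights = alpha + beta * edge_probs
    return (weights * ce).mean()
```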

Example Images

Pet Dataset

UNet

(Example prediction: pet dataset, U-Net.)

BESNet without BECE loss

(Example prediction: pet dataset, BESNet without BECE loss.)

BESNet with BECE loss

(Example prediction: pet dataset, BESNet with BECE loss.)

MRI Dataset

UNet

(Example prediction: MRI dataset, U-Net.)

BESNet without BECE loss

(Example prediction: MRI dataset, BESNet without BECE loss.)

BESNet with BECE loss

(Example prediction: MRI dataset, BESNet with BECE loss.)
