This code is the official implementation of the following paper:
β-Multivariational Autoencoder (βMVAE) for Entangled Representation Learning in Video Frames
These video sequences are from the DAVIS16 dataset (a. Image, b. Annotation, c. βMVUnet).
To create the environment, run these commands:
conda create -n bmvae python=3.8
conda activate bmvae
pip install -r requirements.txt
This code was developed and tested on Ubuntu 18.04.5.
- First, download the network weights from the Google Drive link and put them in the ckpts folder.
- Run the following command to test the βMVAE network:
python test_bmvae.py
- Run this command to test the βMVUnet network:
python test_bmvUnet.py
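For reference, the β-VAE family of models (which βMVAE belongs to) trains with a reconstruction term plus a β-weighted KL divergence to a standard normal prior. Below is a minimal numeric sketch of that objective in plain Python; the function names and the β value are illustrative, not taken from this repository's code:

```python
import math

def kl_diag_gaussian(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )."""
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, logvar))

def beta_vae_loss(recon_err, mu, logvar, beta=4.0):
    """Reconstruction error plus beta-weighted KL divergence."""
    return recon_err + beta * kl_diag_gaussian(mu, logvar)

# At the prior (mu = 0, logvar = 0) the KL term vanishes,
# so the loss reduces to the reconstruction error alone.
print(beta_vae_loss(0.25, [0.0, 0.0], [0.0, 0.0], beta=4.0))  # 0.25
```

Raising β penalizes latent codes that stray from the prior more strongly, which is what encourages the structured latent representation the paper builds on.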
- U-Net: Semantic segmentation with PyTorch
- Samples are from the DAVIS2016 dataset
- Training data are preprocessed sequences from the YouTube-VOS dataset
Please cite our work as: