
InvVis

This is the model code for InvVis: Large-Scale Data Embedding for Invertible Visualization.

(Teaser figure)

Pretrained Model

You can download our pretrained model here.

Testing

You can use test.py to test your model.

The model checkpoint should be placed in pretrained/. The default checkpoint name is DHN_4channel.pth; you can change this by modifying the value of pretrainedModelDir in config.yml.
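For reference, the corresponding entry in config.yml might look like the following. Only the pretrainedModelDir key comes from this README; any other keys in your config.yml are unaffected:

```yaml
# Path to the pretrained checkpoint (key name from this README;
# adjust the value if you rename or move the file).
pretrainedModelDir: pretrained/DHN_4channel.pth
```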

The test images should be placed in data/test/. Three images are expected for testing:

  • cover.png : The cover image for data embedding, usually a visualization image.
  • data_image.png : A 3-channel image, each channel of which is a data image generated with our Data-to-Image (DTOI) algorithm.
  • qr_image.png : A QR Code image containing one or more QR Codes encoded with chart information.

More details are presented in our paper.

We have prepared some images in data/test/ for a quick start.

Once the above-mentioned data is prepared, you can test the model with:

python test.py

The result images can be found in result/.

Training

You can also train the model with your own data.

The training data should be organized as follows:

data
|-- train
|   |-- MASSVIS          # or replace with your own cover image dataset
|   |-- QR_Image_Dir     # the directory of your QR Image dataset
|   |-- Data_Image_Dir1
|   |-- Data_Image_Dir2
|   |-- Data_Image_Dir3
|   ...

You can use more kinds of data images for training by modifying dataloader.py and config.yml.

Once the data is prepared, you can train your model with:

python train.py

The model checkpoints will be saved in checkpoints/.

Citation

@article{ye2023invvis,
  title={InvVis: Large-scale data embedding for invertible visualization},
  author={Ye, Huayuan and Li, Chenhui and Li, Yang and Wang, Changbo},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2023},
  publisher={IEEE}
}