Hello! Thanks for your work!
Could you please suggest the best way to use this model to interpolate video? Should I just take two neighboring frames of the video, run inference on them, and then stitch the new frames back in?
Should the model be retrained for each video, or can it be used to interpolate any video with good quality?
Thank you in advance for your reply!
Our pretrained models can already achieve relatively good frame interpolation quality on common videos.
To get the best visual quality on your specific videos, you can load the provided checkpoint and then fine-tune IFRNet on your collected video dataset, which should contain a sufficient quantity of frame sequences with diverse motion and texture.
The model does not need to be retrained for each video; it only needs to be retrained once on a dataset containing all of these videos. After that, you can get good frame interpolation quality on any video in the same domain as the training dataset. For training and inference, you can refer to train_vimeo90k.py and demo_2x.py for 2x interpolation, and train_gopro.py and demo_8x.py for 8x interpolation.
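To make the "take two neighboring frames, infer, and stitch back" workflow concrete, here is a minimal 2x sketch. It assumes the Model class in models/IFRNet.py and its inference(img0, img1, embt) call as used in demo_2x.py, plus OpenCV for video I/O; the checkpoint path and the input/output filenames are placeholders, and frames at arbitrary resolutions may need padding to a size the network accepts (see demo_2x.py for the exact interface).

```python
import cv2
import torch
from models.IFRNet import Model  # assumed import path, as in demo_2x.py

def to_tensor(frame, device):
    # HWC uint8 frame -> 1x3xHxW float tensor in [0, 1]
    return torch.from_numpy(frame.transpose(2, 0, 1)).float().div(255.0).unsqueeze(0).to(device)

def to_frame(tensor):
    # 1x3xHxW float tensor in [0, 1] -> HWC uint8 frame
    return (tensor.clamp(0, 1)[0].permute(1, 2, 0).cpu().numpy() * 255.0).astype('uint8')

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = Model().to(device).eval()
# Placeholder checkpoint path; use the provided pretrained weights.
model.load_state_dict(torch.load('checkpoints/IFRNet/IFRNet_Vimeo90K.pth', map_location=device))

cap = cv2.VideoCapture('input.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter('output_2x.mp4', cv2.VideoWriter_fourcc(*'mp4v'), fps * 2, (w, h))

# Time embedding for the midpoint between the two input frames (t = 0.5).
embt = torch.tensor(0.5).view(1, 1, 1, 1).float().to(device)

ok, prev = cap.read()
while ok:
    ok, curr = cap.read()
    if not ok:
        out.write(prev)  # the last frame has no successor to interpolate with
        break
    with torch.no_grad():
        # Note: frame sizes may need padding/cropping to what the network expects.
        mid = model.inference(to_tensor(prev, device), to_tensor(curr, device), embt)
    out.write(prev)            # original frame
    out.write(to_frame(mid))   # interpolated frame between prev and curr
    prev = curr

cap.release()
out.release()
```

Writing the original and interpolated frames alternately at double the input frame rate keeps the output duration identical while doubling temporal resolution; for 8x interpolation, the same loop would query several intermediate time steps per frame pair, as demo_8x.py does.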