< Implementations of Models from Deep Learning Papers in PyTorch >
You can read my reviews of these models on my blog: https://mole-starseeker.tistory.com
- Standard (ToyNet, a simple starter model for practicing deep learning implementation) [Code]
- VGG16 (Very Deep Convolutional Networks for Large-Scale Image Recognition) [Code] [Paper]
- Inception V2, V3 (Rethinking the Inception Architecture for Computer Vision) [Code] [Paper]
- ResNet (Deep Residual Learning for Image Recognition) [Code] [Paper] (see the residual-block sketch after this list)
- Inception V4 (Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning) [Code] [Paper]
- Inception-ResNet V1, V2 (Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning) [Code1] [Code2] [Paper]
- DenseNet (Densely Connected Convolutional Networks) [Code] [Paper]
- SqueezeNet (SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size) [Code] [Paper] (see the Fire-module sketch after this list)
- Xception (Xception: Deep Learning with Depthwise Separable Convolutions) [Code] [Paper]
- MobileNet (MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications) [Code] [Paper] (see the depthwise separable convolution sketch after this list)
- ResNeXt (Aggregated Residual Transformations for Deep Neural Networks) [Code] [Paper]
- SRCNN (Image Super-Resolution Using Deep Convolutional Networks) [Code] [Paper] (see the SRCNN sketch after this list)
- VDSR (Accurate Image Super-Resolution Using Very Deep Convolutional Networks) [Code] [Paper]
- SESR (SESR: Single Image Super Resolution with Recursive Squeeze and Excitation Networks) [Code] [Paper]
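For quick reference, here is a minimal sketch of the basic residual block behind ResNet (the same add-the-shortcut idea also appears in the Inception-ResNet variants): the output is ReLU(F(x) + x), where F is two 3x3 convolutions. The class name, fixed channel count, and BatchNorm placement are illustrative assumptions, not necessarily the exact layers used in the repository code.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Basic 3x3-3x3 residual block: out = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                             # identity shortcut
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)         # add shortcut, then activate

# Example: the block keeps the input shape unchanged.
x = torch.randn(1, 64, 56, 56)
print(BasicResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```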
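The depthwise separable convolution used by MobileNet (and, in a slightly different ordering, by Xception) factors a standard convolution into a per-channel 3x3 depthwise step and a 1x1 pointwise step. The sketch below follows the MobileNet ordering with BatchNorm and ReLU after each step; the class name and parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv (one filter per channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1,
                                   groups=in_channels, bias=False)
        self.bn1 = nn.BatchNorm2d(in_channels)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))    # spatial filtering, per channel
        return self.relu(self.bn2(self.pointwise(x))) # channel mixing via 1x1 conv

# Example: 32 -> 64 channels at the same spatial resolution.
x = torch.randn(1, 32, 112, 112)
print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 112, 112])
```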
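SqueezeNet's Fire module squeezes the channel count with 1x1 convolutions and then expands with parallel 1x1 and 3x3 branches whose outputs are concatenated. This is a minimal sketch of that idea; the argument names and default sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FireModule(nn.Module):
    """Fire module: 1x1 squeeze, then parallel 1x1 and 3x3 expand branches."""
    def __init__(self, in_channels, squeeze, expand1x1, expand3x3):
        super().__init__()
        self.squeeze = nn.Conv2d(in_channels, squeeze, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze, expand1x1, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze, expand3x3, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)  # concat along channels

# Example: 96 -> 64 + 64 = 128 output channels.
x = torch.randn(1, 96, 55, 55)
print(FireModule(96, 16, 64, 64)(x).shape)  # torch.Size([1, 128, 55, 55])
```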
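SRCNN is a three-layer network (9-1-5 kernels) applied to a bicubic-upscaled low-resolution image: patch extraction, non-linear mapping, and reconstruction. The sketch below pads each convolution so the output matches the input size, which is a common implementation choice rather than the exact setup of the paper or the repository code.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-layer SRCNN (9-1-5): patch extraction, non-linear mapping, reconstruction."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                  nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # x is the bicubic-upscaled low-resolution image (e.g. the Y channel)
        return self.net(x)

# Example: single-channel input and output of the same size.
x = torch.randn(1, 1, 128, 128)
print(SRCNN()(x).shape)  # torch.Size([1, 1, 128, 128])
```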