TensorFlow Models

This repository contains machine learning models implemented in TensorFlow. The models are maintained by their respective authors.

To propose a model for inclusion, please submit a pull request.

Models

  • autoencoder: various autoencoders.
  • compression: compressing and decompressing images using a pre-trained Residual GRU network.
  • differential_privacy: privacy-preserving student models from multiple teachers.
  • im2txt: image-to-text neural network for image captioning.
  • inception: deep convolutional networks for computer vision.
  • learning_to_remember_rare_events: a large-scale life-long memory module for use in deep learning.
  • lm_1b: language modeling on the one billion word benchmark.
  • namignizer: recognize and generate names.
  • neural_gpu: highly parallel neural computer.
  • neural_programmer: neural network augmented with logical and mathematical operations.
  • next_frame_prediction: probabilistic future frame synthesis via cross convolutional networks.
  • real_nvp: density estimation using real-valued non-volume preserving (real NVP) transformations.
  • resnet: deep and wide residual networks.
  • skip_thoughts: recurrent neural network sentence-to-vector encoder.
  • slim: image classification models in TF-Slim.
  • street: identify the name of a street (in France) from an image using a Deep RNN.
  • swivel: the Swivel algorithm for generating word embeddings.
  • syntaxnet: neural models of natural language syntax.
  • textsum: sequence-to-sequence with attention model for text summarization.
  • transformer: spatial transformer network, which allows the spatial manipulation of data within the network.
  • tutorials: models described in the TensorFlow tutorials.
  • video_prediction: predicting future video frames with neural advection.