Good resources to learn TensorFlow
Here are some good resources to learn TensorFlow.
- TensorFlow Tutorial 1 – From the basics to slightly more interesting applications of TensorFlow
- TensorFlow Tutorial 2 – Introduction to deep learning based on Google’s TensorFlow framework. These tutorials are direct ports of Newmu’s Theano Tutorials
- TensorFlow Examples – TensorFlow tutorials and code examples for beginners
- Sungjoon’s TensorFlow-101 – TensorFlow tutorials written in Python with Jupyter Notebook
- Terry Um’s TensorFlow Exercises – Re-creates the code from other TensorFlow examples
- Installing TensorFlow on Raspberry Pi 3 – TensorFlow compiled and running properly on the Raspberry Pi
- Classification on time series – Recurrent Neural Network classification in TensorFlow with LSTM on cellphone sensor data
- Show, Attend and Tell – Attention Based Image Caption Generator
- Neural Style – Implementation of Neural Style
- Pretty Tensor – Provides a high-level builder API
- Neural Style – An implementation of neural style
- TensorFlow White Paper Notes – Annotated notes and summaries of the TensorFlow white paper, along with SVG figures and links to documentation
- NeuralArt – Implementation of A Neural Algorithm of Artistic Style
- Deep-Q learning Pong with TensorFlow and PyGame
- Generative Handwriting Demo using TensorFlow – An attempt to implement the random handwriting generation portion of Alex Graves’ paper
- Neural Turing Machine in TensorFlow – Implementation of a Neural Turing Machine
- GoogleNet Convolutional Neural Network Groups Movie Scenes By Setting – Search, filter, and describe videos based on objects, places, and other things that appear in them
- Neural machine translation between the writings of Shakespeare and modern English using TensorFlow – This performs a monolingual translation, going from modern English to Shakespeare and vice versa.
- Chatbot – Implementation of “A neural conversational model”
- Colornet – Neural network to colorize grayscale images
- Neural Caption Generator – Implementation of “Show and Tell”
- Neural Caption Generator with Attention – Implementation of “Show, Attend and Tell”
- Weakly_detector – Implementation of “Learning Deep Features for Discriminative Localization”
- Dynamic Capacity Networks – Implementation of “Dynamic Capacity Networks”
- HMM in TensorFlow – Implementation of the Viterbi and forward/backward algorithms for HMMs
- DeepOSM – Train TensorFlow neural nets with OpenStreetMap features and satellite imagery.
- DQN-tensorflow – TensorFlow implementation of DeepMind’s ‘Human-Level Control through Deep Reinforcement Learning’ with OpenAI Gym by Devsisters.com
- Highway Network – TensorFlow implementation of “Training Very Deep Networks” with a blog post
- Sentence Classification with CNN – TensorFlow implementation of “Convolutional Neural Networks for Sentence Classification” with a blog post
- End-To-End Memory Networks – Implementation of End-To-End Memory Networks
- Character-Aware Neural Language Models – TensorFlow implementation of Character-Aware Neural Language Models
- YOLO TensorFlow ++ – TensorFlow implementation of ‘YOLO: Real-Time Object Detection’, with training and actual support for real-time execution on mobile devices.
- Wavenet – A TensorFlow implementation of the WaveNet generative neural network architecture for audio generation.
- Mnemonic Descent Method – TensorFlow implementation of “Mnemonic Descent Method: A recurrent process applied for end-to-end face alignment”
- YOLO TensorFlow – Implementation of ‘YOLO : Real-Time Object Detection’
- Magenta – Research project to advance the state of the art in machine intelligence for music and art generation
- Scikit Flow (TF Learn) – Simplified interface for Deep/Machine Learning (now part of TensorFlow)
- tensorflow.rb – TensorFlow native interface for Ruby using SWIG
- tflearn – Deep learning library featuring a higher-level API
- TensorFlow-Slim – High-level library for defining models
- TensorFrames – TensorFlow binding for Apache Spark
- caffe-tensorflow – Convert Caffe models to TensorFlow format
- keras – Minimal, modular deep learning library for TensorFlow and Theano
- SyntaxNet: Neural Models of Syntax – A TensorFlow implementation of the models described in Globally Normalized Transition-Based Neural Networks, Andor et al. (2016)
- TensorFlow Guide 1 – A guide to installation and use
- TensorFlow Guide 2 – Continuation of first video
- TensorFlow Basic Usage – A guide going over basic usage
- TensorFlow Deep MNIST for Experts – Goes over Deep MNIST
- TensorFlow Udacity Deep Learning – Basic steps to install TensorFlow for free on the Cloud 9 online service with 1Gb of data
- Why Google wants everyone to have access to TensorFlow
- Videos from TensorFlow Silicon Valley Meet Up 1/19/2016
- Videos from TensorFlow Silicon Valley Meet Up 1/21/2016
- Stanford CS224d Lecture 7 – Introduction to TensorFlow, 19th Apr 2016 – CS224d Deep Learning for Natural Language Processing by Richard Socher
- Diving into Machine Learning through TensorFlow – Pycon 2016 Portland Oregon, Slide & Code by Julia Ferraioli, Amy Unruh, Eli Bixby
- Large Scale Deep Learning with TensorFlow – Spark Summit 2016 Keynote by Jeff Dean
- TensorFlow and deep learning – without a PhD – by Martin Görner
- TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems – This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google
- TF.Learn: TensorFlow’s High-level Module for Distributed Machine Learning
- Comparative Study of Deep Learning Software Frameworks – Evaluates the performance of several frameworks across several types of deep learning architectures on a single machine, in both (multi-threaded) CPU and GPU (Nvidia Titan X) settings
- Distributed TensorFlow with MPI – Extends Google’s recently proposed TensorFlow for execution on large-scale clusters using the Message Passing Interface (MPI)
- Globally Normalized Transition-Based Neural Networks – This paper describes the models behind SyntaxNet.
- TensorFlow: A system for large-scale machine learning – This paper describes the TensorFlow dataflow model in contrast to existing systems and demonstrates its compelling performance
- TensorFlow: smarter machine learning, for everyone – An introduction to TensorFlow
- Announcing SyntaxNet: The World’s Most Accurate Parser Goes Open Source – Release of SyntaxNet, “an open-source neural network framework implemented in TensorFlow that provides a foundation for Natural Language Understanding systems.”
- Why TensorFlow will change the Game for AI
- TensorFlow for Poets – Goes over the implementation of TensorFlow
- Introduction to Scikit Flow – Simplified Interface to TensorFlow – Key Features Illustrated
- Building Machine Learning Estimator in TensorFlow – Understanding the Internals of TensorFlow Learn Estimators
- TensorFlow – Not Just For Deep Learning
- The indico Machine Learning Team’s take on TensorFlow
- The Good, Bad, & Ugly of TensorFlow – A survey of six months rapid evolution (+ tips/hacks and code to fix the ugly stuff), Dan Kuster at Indico, May 9, 2016
- Fizz Buzz in TensorFlow – A joke by Joel Grus
- RNNs In TensorFlow, A Practical Guide And Undocumented Features – Step-by-step guide with full code examples on GitHub.
- Using TensorBoard to Visualize Image Classification Retraining in TensorFlow
- First Contact with TensorFlow by Jordi Torres, professor at UPC Barcelona Tech and a research manager and senior advisor at Barcelona Supercomputing Center
- Deep Learning with Python – Develop Deep Learning Models on Theano and TensorFlow Using Keras by Jason Brownlee
- TensorFlow for Machine Intelligence – Complete guide to using TensorFlow, from the basics of graph computation to deep learning models to production environments – Bleeding Edge Press
- Getting Started with TensorFlow – Get up and running with the latest numerical computing library by Google and dive deeper into your data, by Giancarlo Zaccone
- Hands-On Machine Learning with Scikit-Learn and TensorFlow – by Aurélien Geron, former lead of the YouTube video classification team. Covers ML fundamentals, training and deploying deep nets across multiple servers and GPUs using TensorFlow, the latest CNN, RNN and Autoencoder architectures, and Reinforcement Learning (Deep Q).
- Building Machine Learning Projects with TensorFlow – by Rodolfo Bonnin. This book covers various TensorFlow projects that show what the framework can do in different scenarios: training models, machine learning, deep learning, and working with various neural networks. Each project is an engaging and insightful exercise that teaches you how to use TensorFlow and shows how layers of data can be explored by working with tensors.
You can find the Python source code under the python directory, along with associated notebooks.
| # | File | Description |
|---|------|-------------|
| 1 | basics.py | Setup with TensorFlow and graph computation. |
| 2 | linear_regression.py | Performing regression with a single factor and bias. |
| 3 | polynomial_regression.py | Performing regression using polynomial factors. |
| 4 | logistic_regression.py | Performing logistic regression using a single layer neural network. |
| 5 | basic_convnet.py | Building a deep convolutional neural network. |
| 6 | modern_convnet.py | Building a deep convolutional neural network with batch normalization and leaky rectifiers. |
| 7 | autoencoder.py | Building a deep autoencoder with tied weights. |
| 8 | denoising_autoencoder.py | Building a deep denoising autoencoder which corrupts the input. |
| 9 | convolutional_autoencoder.py | Building a deep convolutional autoencoder. |
| 10 | residual_network.py | Building a deep residual network. |
| 11 | variational_autoencoder.py | Building an autoencoder with a variational encoding. |
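As a framework-free illustration of what the linear_regression.py example covers, here is single-factor regression with a bias fit by gradient descent in plain NumPy (toy synthetic data, not the repository's code):

```python
# Fit y = w*x + b by gradient descent on mean squared error.
# Toy data: true w = 2.0, true b = 0.5, plus a little noise.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.05, 100)

w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    err = (w * x + b) - y
    # Gradients of mean squared error with respect to w and b.
    w -= lr * 2.0 * np.mean(err * x)
    b -= lr * 2.0 * np.mean(err)

print(w, b)  # close to 2.0 and 0.5
```

The TensorFlow version does the same thing, except the gradients are computed automatically from the graph rather than written out by hand.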
Introduction to deep learning based on Google’s TensorFlow framework. These tutorials are direct ports of Newmu’s Theano Tutorials.
- Simple Multiplication
- Linear Regression
- Logistic Regression
- Feedforward Neural Network (Multilayer Perceptron)
- Deep Feedforward Neural Network (Multilayer Perceptron with 2 Hidden Layers)
- Convolutional Neural Network
- Denoising Autoencoder
- Recurrent Neural Network (LSTM)
- Save and restore net
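The ports begin with simple multiplication because TensorFlow, in this era, first builds a computation graph and only later executes it in a session. Here is a toy pure-Python sketch of that deferred-execution idea (not TensorFlow's actual API, just the concept):

```python
# Toy "graph-then-run" execution: nothing is computed when the graph
# is built, only when run() is called, much like session.run().
class Placeholder:
    def run(self, feed):
        return feed[self]

class Multiply:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def run(self, feed):
        return self.a.run(feed) * self.b.run(feed)

a = Placeholder()
b = Placeholder()
product = Multiply(a, b)          # builds the graph; no multiplication yet
print(product.run({a: 3, b: 7}))  # 21
```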
A TensorFlow tutorial with implementations of popular machine learning algorithms, designed for easily diving into TensorFlow through examples.
It is suitable for beginners who want clear and concise examples; for readability, the tutorial includes both notebooks and code with explanations.
- Nearest Neighbor (notebook) (code)
- Linear Regression (notebook) (code)
- Logistic Regression (notebook) (code)
- Multilayer Perceptron (notebook) (code)
- Convolutional Neural Network (notebook) (code)
- Recurrent Neural Network (LSTM) (notebook) (code)
- Bidirectional Recurrent Neural Network (LSTM) (notebook) (code)
- Dynamic Recurrent Neural Network (LSTM) (code)
- AutoEncoder (notebook) (code)
- Save and Restore a model (notebook) (code)
- TensorBoard – Graph and loss visualization (notebook) (code)
- TensorBoard – Advanced visualization (code)
Some examples require the MNIST dataset for training and testing; it is downloaded automatically when the examples run (via input_data.py). MNIST is a database of handwritten digits; for a quick description of the dataset, you can check this notebook.
Official Website: http://yann.lecun.com/exdb/mnist/
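The idea behind the first example above can be sketched without TensorFlow: a 1-nearest-neighbor classifier on toy 2-D points (hypothetical data standing in for MNIST images):

```python
# 1-nearest-neighbor classification: predict the label of the single
# closest training point under L2 distance. Toy 2-D data, two classes.
import numpy as np

train_x = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
train_y = np.array([0, 0, 1, 1])

def predict(point):
    dists = np.linalg.norm(train_x - point, axis=1)
    return train_y[np.argmin(dists)]

print(predict(np.array([0.5, 0.5])))  # 0
print(predict(np.array([5.5, 4.5])))  # 1
```

The TensorFlow example expresses the same distance-and-argmin computation as graph ops so it can run on the full MNIST data.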
- TFLearn Quickstart. Learn the basics of TFLearn through a concrete machine learning task. Build and train a deep neural network classifier.
- Linear Regression. Implement a linear regression using TFLearn.
- Logical Operators. Implement logical operators with TFLearn (also includes a usage of ‘merge’).
- Weights Persistence. Save and Restore a model.
- Fine-Tuning. Fine-Tune a pre-trained model on a new task.
- Using HDF5. Use HDF5 to handle large datasets.
- Using DASK. Use DASK to handle large datasets.
- Multi-layer perceptron. A multi-layer perceptron implementation for MNIST classification task.
- Convolutional Network (MNIST). A Convolutional neural network implementation for classifying MNIST dataset.
- Convolutional Network (CIFAR-10). A Convolutional neural network implementation for classifying CIFAR-10 dataset.
- Network in Network. ‘Network in Network’ implementation for classifying CIFAR-10 dataset.
- AlexNet. Apply AlexNet to the Oxford Flowers 17 classification task.
- VGGNet. Apply the VGG network to the Oxford Flowers 17 classification task.
- VGGNet Finetuning (Fast Training). Use a pre-trained VGG Network and retrain it on your own data, for fast training.
- RNN Pixels. Use RNN (over sequence of pixels) to classify images.
- Highway Network. Highway Network implementation for classifying MNIST dataset.
- Highway Convolutional Network. Highway Convolutional Network implementation for classifying MNIST dataset.
- Residual Network (MNIST). A bottleneck residual network applied to MNIST classification task.
- Residual Network (CIFAR-10). A residual network applied to CIFAR-10 classification task.
- Google Inception (v3). Google’s Inception v3 network applied to Oxford Flowers 17 classification task.
- Auto Encoder. An auto encoder applied to MNIST handwritten digits.
- Recurrent Neural Network (LSTM). Apply an LSTM to IMDB sentiment dataset classification task.
- Bi-Directional RNN (LSTM). Apply a bi-directional LSTM to IMDB sentiment dataset classification task.
- Dynamic RNN (LSTM). Apply a dynamic LSTM to classify variable length text from IMDB dataset.
- City Name Generation. Generates new US city names using an LSTM network.
- Shakespeare Scripts Generation. Generates new Shakespeare-style scripts using an LSTM network.
- Seq2seq. Pedagogical example of a seq2seq recurrent network. See this repo for full instructions.
- CNN Seq. Apply a 1-D convolutional network to classify sequence of words from IMDB sentiment dataset.
- Atari Pacman 1-step Q-Learning. Teach a machine to play Atari games (Pacman by default) using 1-step Q-learning.
- Recommender – Wide & Deep Network. Pedagogical example of wide & deep networks for recommender systems.
- Spiral Classification Problem. TFLearn implementation of spiral classification problem from Stanford CS231n.
- Layers. Use TFLearn layers along with TensorFlow.
- Trainer. Use TFLearn trainer class to train any TensorFlow graph.
- Built-in Ops. Use TFLearn built-in operations along with TensorFlow.
- Summaries. Use TFLearn summarizers along with TensorFlow.
- Variables. Use TFLearn variables along with TensorFlow.
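The Atari Pacman example above is built on the 1-step Q-learning update, which can be shown framework-free on a tiny toy MDP (a hypothetical two-state chain, not the Atari environment):

```python
# 1-step Q-learning on a toy two-state chain. From state 0, action 1
# reaches the goal (reward 1); everything else gives reward 0.
import random

random.seed(0)
alpha, gamma = 0.5, 0.9
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

def step(state, action):
    if state == 0 and action == 1:
        return 1, 1.0   # next state, reward
    return 0, 0.0

for _ in range(200):
    s = 0
    a = random.choice((0, 1))      # pure exploration, for brevity
    s2, r = step(s, a)
    best_next = max(Q[(s2, 0)], Q[(s2, 1)])
    # One-step temporal-difference update:
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

print(Q[(0, 1)] > Q[(0, 0)])  # True: the rewarding action scores higher
```

The TFLearn example applies the same update rule, but with a convolutional network approximating Q from raw game frames instead of a lookup table.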