Tensorflow Tutorial
Presented by: Ankur Mali
The Pennsylvania State University
IST 597: Foundations of Deep Learning, Fall 2018

Widely used Deep Learning frameworks
Low-level frameworks: TensorFlow, PyTorch, Torch (Lua), Theano, Caffe, MXNet
High-level frameworks: Keras, TFLearn

What is Tensorflow? An open-source software library for numerical computation using data flow graphs. Good for training and deploying deep learning models. Widely popular due to its flexibility and scalability.

Graphs and Sessions
Data flow graphs: TensorFlow separates the definition of computations from their execution.
Step 1: Build the graph.
Step 2: Use a session to execute operations in the created graph (nodes: operators, variables, and constants; edges: tensors).
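The two steps in a minimal sketch (TF 1.x API; the values are our own):

import tensorflow as tf

# Step 1: build the graph -- nothing is computed yet
a = tf.constant(2.0)
b = tf.add(a, a)

# Step 2: execute the graph inside a session
with tf.Session() as sess:
    print(sess.run(b))  # 4.0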

What is a Tensor?
A tensor can be anything from a scalar to an n-dimensional array:
0-d: scalar (number)
1-d: vector
2-d: matrix
...
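For example (a small sketch; the values are our own):

import tensorflow as tf

scalar = tf.constant(3.0)              # 0-d, shape ()
vector = tf.constant([1.0, 2.0, 3.0])  # 1-d, shape (3,)
matrix = tf.constant([[1.0, 2.0],
                      [3.0, 4.0]])     # 2-d, shape (2, 2)
print(scalar.shape, vector.shape, matrix.shape)  # () (3,) (2, 2)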

Basic Operations

import tensorflow as tf

var1 = tf.constant(2.0, name='var1')
var2 = tf.constant([3.0, 4.5, 6.0], name='var2')
result = tf.multiply(var1, var2)  # broadcasting is similar to numpy.broadcast

Full signature: tf.constant(value, dtype=None, shape=None, name='Const', verify_shape=False)

tf.zeros is similar to np.zeros
tf.zeros_like is similar to np.zeros_like
tf.ones is similar to np.ones
tf.ones_like is similar to np.ones_like
tf.fill is similar to np.full
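Evaluating the broadcasted multiply in a session (a minimal sketch) shows the scalar var1 applied elementwise to var2:

import tensorflow as tf

var1 = tf.constant(2.0, name='var1')
var2 = tf.constant([3.0, 4.5, 6.0], name='var2')
result = tf.multiply(var1, var2)

with tf.Session() as sess:
    print(sess.run(result))  # [ 6.  9. 12.]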

Can we use Constants as Sequences?
Tensor objects are not iterable, so they are not like numpy sequences. If we write:
for val in tf.range(4):  # we get a TypeError
Other ops for randomly generated constants:
tf.random_normal
tf.truncated_normal
tf.random_uniform
tf.random_shuffle
tf.random_crop
tf.multinomial
tf.random_gamma
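A quick sketch of one of these ops (shape, mean, and stddev are our own choices):

import tensorflow as tf

r = tf.random_normal([2, 3], mean=0.0, stddev=1.0)
with tf.Session() as sess:
    print(sess.run(r))  # a fresh 2x3 sample on every run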

Why don't we use numpy datatypes instead of TF's? TensorFlow integrates seamlessly with numpy: tf.float32 ⇒ np.float32
var1 = tf.ones([10, 2], np.float32)  # the requested fetch is a tensor, but the output is an n-dimensional numpy array
What's wrong with getting the output as a numpy ndarray?
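A minimal sketch confirming that fetching a tensor returns a numpy ndarray:

import numpy as np
import tensorflow as tf

var1 = tf.ones([10, 2], np.float32)
with tf.Session() as sess:
    out = sess.run(var1)

print(type(out), out.shape, out.dtype)  # <class 'numpy.ndarray'> (10, 2) float32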

When to use constants? Only use them for primitive types. Why? Constant values are stored in the graph definition itself, so large constants bloat the graph and make it expensive to load (see the sketch below).
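A minimal sketch of the bloat, using a 1000x1000 float32 constant of our own choosing: the roughly 4 MB of values end up inside the serialized graph.

import numpy as np
import tensorflow as tf

big = tf.constant(np.zeros((1000, 1000), dtype=np.float32))
graph_def = tf.get_default_graph().as_graph_def()
print(len(graph_def.SerializeToString()))  # roughly 4 MB: the values live in the graph itself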

How to create variables in tensorflow
With tf.Variable:
var1 = tf.Variable(tf.zeros([784, 10]), name='matrix')  # a class with many Ops
With tf.get_variable:
var2 = tf.get_variable("matrix", shape=(784, 10), initializer=tf.zeros_initializer())
Initialize your variables within the context of the session: use tf.global_variables_initializer() for all variables, or initialize specific variables with tf.variables_initializer([var1, var2, ...]).
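Putting it together in a minimal sketch:

import tensorflow as tf

var1 = tf.Variable(tf.zeros([784, 10]), name='matrix')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # must run before reading var1
    print(sess.run(var1).shape)  # (784, 10)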

First Tensorflow Program
>>> import tensorflow as tf
>>> a = tf.constant(3)
>>> b = tf.constant(5)
>>> c = tf.multiply(a,b)
>>> print(c)
Tensor("Mul:0", shape=(), dtype=int32)
Note that printing c gives the symbolic tensor, not the value 15.

How to get the value of c?
Create a session: sess = tf.Session()
Within the session, evaluate the graph to fetch the value of c: print(sess.run(c))  # 15
Close the session: sess.close()
Alternate approach:
with tf.Session() as sess:
    print(sess.run(c))

How to visualize the Data Flow Graph in Tensorflow? Tensorboard.
In the above example, add a tf.summary.FileWriter:

with tf.Session() as sess:
    writer = tf.summary.FileWriter("output_folder", sess.graph)
    print(sess.run(c))
    writer.close()

Then open another terminal:
tensorboard --logdir=current_dir/output_folder/

Placeholders (user input?)
tf.placeholder(dtype, shape=None, name=None)
var1 = tf.placeholder(tf.float32, shape=[2])
var2 = tf.constant([2.3, 4.7], tf.float32)
result = tf.add(var1, var2)
with tf.Session() as sess:
    print(sess.run(result)) ⇒ Output? (an error: the placeholder var1 was never fed)

Placeholders
tf.placeholder(dtype, shape=None, name=None)
var1 = tf.placeholder(tf.float32, shape=[2])
var2 = tf.constant([2.3, 4.7], tf.float32)
result = tf.add(var1, var2)
with tf.Session() as sess:
    print(sess.run(result, feed_dict={var1: [0.7, 1.3]}))  # [3. 6.]
Avoid lazy loading: do not create ops inside your run loop, or the graph grows on every iteration (see the sketch below).
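A minimal sketch of the lazy-loading trap (the variable values are our own):

import tensorflow as tf

x = tf.Variable(10, name='x')
y = tf.Variable(20, name='y')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(tf.add(x, y))  # lazy loading: each call adds a fresh 'Add' node

print(len(tf.get_default_graph().as_graph_def().node))  # node count grew with the loop

Defining add_op = tf.add(x, y) once before the loop and calling sess.run(add_op) keeps the graph fixed.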

Control Dependencies
Important when you need to execute operations sequentially, or when data depends on a previous input.
tf.Graph.control_dependencies(control_inputs)
g = tf.get_default_graph()
with g.control_dependencies([var1, var2, ...]):
    # ops created here (var4, var5) will run only after var1 and var2 have executed
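A concrete minimal sketch, forcing an assignment to run before a read (variable names and values are our own):

import tensorflow as tf

x = tf.Variable(1.0)
assign_op = tf.assign(x, 5.0)

g = tf.get_default_graph()
with g.control_dependencies([assign_op]):
    y = tf.identity(x)  # computing y now forces assign_op to run first

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y))  # 5.0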

Subgraphs (save computation?)

import tensorflow as tf

x = 5
y = 17
add_op = tf.add(x, y)
multiply_op = tf.multiply(x, y)
kid_programmer = tf.multiply(x, add_op)
naive_programmer = tf.multiply(x, y)
power_op = tf.pow(add_op, multiply_op)

with tf.Session() as sess:
    writer = tf.summary.FileWriter("ist597demo", sess.graph)
    z = sess.run(power_op)  # only the subgraph feeding power_op is executed (note: 22**85 overflows int32)
    writer.close()

Subgraphs (distributed computing?) We can break a graph into several chunks, i.e., several subgraphs. Each subgraph can run in parallel across CPUs, GPUs, or even powerful TPU cores. Examples: VGGNet, AlexNet, Inception.

Simple approach for distributed computing
Steps to follow: create a graph and set log_device_placement to True in your session.

with tf.device('/gpu:0'):
    a = tf.constant([6.0, 60.0], name='a')
    b = tf.constant([7.0, 56.5], name='b')
    c = tf.multiply(a, b)

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
z = sess.run(c)

Big no with tensorflow? If you need more than one graph? (Seriously? That's poor development practice.)
Suppose we set development principles aside and you still need multiple graphs for your logic to work:
Multiple graphs require multiple sessions, and each session will try to use all available resources by default (this sometimes causes the operating system to freeze or crash; Windows users know this all too well :P).
You can't pass data between graphs without going through python/numpy, which doesn't work in a distributed setting.
Alternate approach: it's better to have disconnected subgraphs within one graph.

But if you still need multiple graphs:

# use tf.Graph to build a user graph
graph1 = tf.get_default_graph()
graph2 = tf.Graph()

with graph1.as_default():
    var1 = tf.constant(3)

with graph2.as_default():
    var2 = tf.constant(5)

Important note: don't mix the default graph and user-created graphs. Also, don't do any numpy matrix computation (such as load or save) between them.

Useful Tricks
Warning: "The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations."
Solution:
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
Select GPUs with CUDA_VISIBLE_DEVICES=0,1,2,3 or CUDA_VISIBLE_DEVICES=0
print always helps.
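Both tricks can be combined at the top of a script; note that CUDA_VISIBLE_DEVICES must be set before TensorFlow initializes the GPU context (a minimal sketch):

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # silence the SSE/AVX instruction warnings
os.environ['CUDA_VISIBLE_DEVICES'] = '0'  # expose only GPU 0 to this process
import tensorflow as tf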

Questions? Deep Questions?