TensorFlow Lecture. 박영택, School of Computer Science (컴퓨터학부)


tensorflow start

    # tensorflow start
    import tensorflow as tf

    c1 = tf.constant(1)
    c2 = tf.constant(2)
    c3 = tf.mul(c1, c2)   # tf.mul was renamed tf.multiply in TensorFlow 1.0

    with tf.Session() as sess:
        print "c3 : ", sess.run(c3)

    >> c3 :  2

tensorflow constants

    # tensorflow constants
    import tensorflow as tf

    matrix1 = tf.constant([[3, 3]])     # 1 x 2 row vector
    matrix2 = tf.constant([[2], [2]])   # 2 x 1 column vector
    product = tf.matmul(matrix1, matrix2)

    sess = tf.Session()
    print sess.run(matrix1)
    with sess:  # inside "with sess", sess.run(t) == t.eval()
        print matrix2.eval()
        print product.eval()

    >> [[3 3]]
       [[2]
        [2]]
       [[12]]
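The arithmetic the graph performs can be checked without a session using plain NumPy; this is only a sketch of the same 1x2-times-2x1 product, not TensorFlow API:

```python
import numpy as np

# NumPy check of the matmul above: a 1x2 row times a 2x1 column gives 1x1.
matrix1 = np.array([[3, 3]])
matrix2 = np.array([[2], [2]])
product = matrix1 @ matrix2   # matrix multiplication, like tf.matmul

print(product)        # [[12]]
print(product.shape)  # (1, 1)
```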

tensorflow variables

    # tensorflow variables
    import tensorflow as tf

    state = tf.Variable(0, name="counter")
    print "state : ", state

    init_op = tf.initialize_all_variables()
    with tf.Session() as sess:
        sess.run(init_op)
        print "sess.run(state) : ", sess.run(state)
        state.assign(1)            # builds an assign op but never runs it
        print "state.assign(1) : ", sess.run(state)
        assign_op = state.assign(1)
        sess.run(assign_op)        # the assignment takes effect only when run
        print "sess.run(state) after run(assign_op) : ", sess.run(state)
        print state

    >> state :  <tensorflow.python.ops.variables.Variable object at 0x1078e8390>
       sess.run(state) :  0
       state.assign(1) :  0
       sess.run(state) after run(assign_op) :  1
       <tensorflow.python.ops.variables.
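The key point of this slide, that `state.assign(1)` only builds an operation and nothing changes until that operation is run, can be sketched in plain Python; the names below are illustrative, not TensorFlow API:

```python
# A variable store and a factory that builds, but does not apply, an assignment.
state = {"counter": 0}

def make_assign_op(store, key, value):
    """Return a callable that performs the assignment only when invoked (run)."""
    def run():
        store[key] = value
        return store[key]
    return run

assign_op = make_assign_op(state, "counter", 1)  # like state.assign(1)
print(state["counter"])  # 0 -- building the op changed nothing
assign_op()              # like sess.run(assign_op)
print(state["counter"])  # 1 -- the assignment has now been applied
```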

a simple example

    import tensorflow as tf

    state = tf.Variable(0, name="counter")
    one = tf.constant(1)
    new_value = tf.add(state, one)
    update = tf.assign(state, new_value)

    init_op = tf.initialize_all_variables()
    with tf.Session() as sess:
        sess.run(init_op)
        print "state : ", sess.run(state)
        sess.run(update)
        print "state after update : ", state            # prints the Variable object
        print "state after update : ", sess.run(state)  # prints its current value

    >> state :  0
       state after update :  <tensorflow.python.ops.variables.Variable object at 0x107bc7490>
       state after update :  1

representation of shape

    import tensorflow as tf

    a = tf.zeros([5,])       # list and tuple shapes are equivalent
    a1 = tf.zeros((5,))
    b = tf.zeros([2, 3])
    c = tf.zeros((1, 2, 3))

    with tf.Session() as sess:
        print "a : ", a
        print "sess.run(a) : ", sess.run(a)
        print "sess.run(b) : ", sess.run(b)
        print "sess.run(c) : ", sess.run(c)
        print "shape of a :", a.get_shape()
        print "shape of b :", b.get_shape()
        print "shape of c :", c.get_shape()

    >> a :  Tensor("zeros_18:0", shape=TensorShape([Dimension(5)]), dtype=float32)
       sess.run(a) :  [ 0.  0.  0.  0.  0.]
       sess.run(b) :  [[ 0.  0.  0.]
                       [ 0.  0.  0.]]
       sess.run(c) :  [[[ 0.  0.  0.]
                        [ 0.  0.  0.]]]
       shape of a : TensorShape([Dimension(5)])
       shape of b : TensorShape([Dimension(2), Dimension(3)])
       shape of c : TensorShape([Dimension(1), Dimension(2), Dimension(3)])
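The `[5,]` versus `(5,)` shape notation mirrors NumPy, where lists and tuples are likewise interchangeable as shape arguments; a quick NumPy sketch:

```python
import numpy as np

a = np.zeros([5,])       # list shape
a1 = np.zeros((5,))      # tuple shape -- identical result
b = np.zeros([2, 3])
c = np.zeros((1, 2, 3))

print(a.shape)           # (5,)
print(b.shape)           # (2, 3)
print(c.shape)           # (1, 2, 3)
print((a == a1).all())   # True: list and tuple shapes build the same array
```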

random uniform : shape, min, max

    import tensorflow as tf

    a = tf.random_uniform((2, 3), -1, 1)
    b = tf.random_uniform((2, 3), 90, 100)

    with tf.Session() as sess:
        print "a : ", sess.run(a)
        print "b : ", sess.run(b)
        print "shape of a : ", a.get_shape()

    >> a :  [[-0.99081087  0.64130092 -0.42918253]
             [-0.06819224  0.85883069  0.28596878]]
       b :  [[ 93.20597839  97.2769165   97.05194855]
             [ 91.13065338  95.17055511  97.19480896]]
       shape of a :  TensorShape([Dimension(2), Dimension(3)])
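NumPy has a direct counterpart, `np.random.uniform`, but note the different argument order: it takes (low, high, size) rather than (shape, min, max):

```python
import numpy as np

# NumPy analogue of tf.random_uniform((2, 3), -1, 1); low/high come first here.
a = np.random.uniform(-1, 1, size=(2, 3))
b = np.random.uniform(90, 100, size=(2, 3))

print(a.shape)                              # (2, 3)
print((a >= -1).all() and (a < 1).all())    # True: samples lie in [min, max)
print((b >= 90).all() and (b < 100).all())  # True
```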

place holder

    # place holder
    import tensorflow as tf

    # x = tf.placeholder("float", shape=(42, 4))
    # x = tf.placeholder("float", shape=[42, 4])
    x = tf.placeholder("float", (2, 3))
    y = tf.zeros([2, 3], "float")

    print(x.get_shape())
    print(y.get_shape())

    >> TensorShape([Dimension(2), Dimension(3)])
       TensorShape([Dimension(2), Dimension(3)])

matrix add

    # matrix add
    import tensorflow as tf

    x = tf.constant([[1, 2], [3, 4]])
    y = tf.constant([[10, 20], [30, 40]])
    z = tf.add(x, y)

    with tf.Session() as sess:
        print sess.run(z)

    >> [[11 22]
        [33 44]]

place holder example

    # place holder example
    import tensorflow as tf

    x = tf.placeholder("float", (2, 3))
    y = tf.zeros([2, 3], "float")
    z = tf.add(x, y)

    with tf.Session() as sess:
        print sess.run(z, feed_dict={x: [[1, 2, 3], [4, 5, 6]]})

    >> [[ 1.  2.  3.]
        [ 4.  5.  6.]]

place holder example

    # place holder example
    import tensorflow as tf

    x = tf.placeholder("float", (2, 3))
    y = tf.placeholder("float", [2, 3])
    z = tf.add(x, y)

    with tf.Session() as sess:
        print sess.run(z, feed_dict={x: [[1, 2, 3], [4, 5, 6]],
                                     y: [[10, 20, 30], [40, 50, 60]]})

    >> [[ 11.  22.  33.]
        [ 44.  55.  66.]]
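A placeholder behaves like a function parameter, and feed_dict supplies the arguments at run time; roughly, in a plain-Python sketch (not TF API):

```python
import numpy as np

# z = tf.add(x, y) with placeholders x, y is roughly a function of x and y:
def z(x, y):
    return np.add(x, y)

# sess.run(z, feed_dict={x: ..., y: ...}) then corresponds to calling it:
result = z([[1, 2, 3], [4, 5, 6]], [[10, 20, 30], [40, 50, 60]])
print(result)  # [[11 22 33]
               #  [44 55 66]]
```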

a simple linear regression test

    import numpy as np
    import tensorflow as tf

    b = tf.Variable(tf.zeros((3,)))                    # b shape = (3,), broadcast across rows
    W = tf.Variable(tf.random_uniform((4, 3), -1, 1))  # W shape = 4 x 3
    x = tf.placeholder(tf.float32, (None, 4))          # x shape = None x 4

    h_i = tf.nn.relu(tf.matmul(x, W) + b)
    c = tf.matmul(x, W)

    init_op = tf.initialize_all_variables()
    with tf.Session() as sess:
        sess.run(init_op)
        print "h_i : \n", sess.run(h_i, {x: np.random.rand(2, 4)})
        print "W: \n", sess.run(W)
        print "c : matmul([1,1,1,1],W) \n", sess.run(c, {x: [[1, 1, 1, 1]]})

    >> h_i :
       [[ 0.25116029  0.          1.00402343]
        [ 0.14622839  0.          0.06366962]]
       W:
       [[-0.4283855   0.96445107  0.92722011]
        [-0.01527119 -0.60151052 -0.84161234]
        [-0.37785292 -0.54684615  0.87315464]
        [ 0.91101241  0.03124309  0.81620216]]
       c : matmul([1,1,1,1],W)
       [[ 0.08950281 -0.15266252  1.77496457]]
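Despite the slide title, h_i is a single fully connected layer with a ReLU activation; the computation relu(xW + b) can be reproduced in NumPy (the seed and helper name are illustrative):

```python
import numpy as np

def relu(t):
    """Element-wise max(t, 0), like tf.nn.relu."""
    return np.maximum(t, 0.0)

rng = np.random.default_rng(0)         # seeded only for reproducibility
W = rng.uniform(-1, 1, size=(4, 3))    # like tf.random_uniform((4, 3), -1, 1)
b = np.zeros(3)                        # like tf.zeros((3,))
x = rng.random((2, 4))                 # like np.random.rand(2, 4)

h = relu(x @ W + b)                    # relu(matmul(x, W) + b)
print(h.shape)         # (2, 3): one 3-dim hidden vector per input row
print((h >= 0).all())  # True: ReLU clips negative entries to zero
```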