Social Computing and Big Data Analytics (社群運算與大數據分析), 1042SCBDA10, MIS MBA (M2226) (8628), Wed, periods 8-9 (15:10-17:00), B309: Deep Learning with Google TensorFlow.


Social Computing and Big Data Analytics (社群運算與大數據分析), SCBDA10, MIS MBA (M2226) (8628), Wed, periods 8-9 (15:10-17:00), B309: Deep Learning with Google TensorFlow (Google TensorFlow 深度學習). Min-Yuh Day (戴敏育), Assistant Professor (專任助理教授), Dept. of Information Management, Tamkang University (淡江大學 資訊管理學系), tku.edu.tw/myday/

課程大綱 (Syllabus)
Week 1, 02/17: Course Orientation for Social Computing and Big Data Analytics (社群運算與大數據分析課程介紹)
Week 2, 02/24: Data Science and Big Data Analytics: Discovering, Analyzing, Visualizing and Presenting Data (資料科學與大數據分析: 探索、分析、視覺化與呈現資料)
Week 3, 03/02: Fundamental Big Data: MapReduce Paradigm, Hadoop and Spark Ecosystem (大數據基礎: MapReduce 典範、Hadoop 與 Spark 生態系統)

課程大綱 (Syllabus)
Week 4, 03/09: Big Data Processing Platforms with SMACK: Spark, Mesos, Akka, Cassandra and Kafka (大數據處理平台 SMACK: Spark, Mesos, Akka, Cassandra, Kafka)
Week 5, 03/16: Big Data Analytics with Numpy in Python (Python Numpy 大數據分析)
Week 6, 03/23: Finance Big Data Analytics with Pandas in Python (Python Pandas 財務大數據分析)
Week 7, 03/30: Text Mining Techniques and Natural Language Processing (文字探勘分析技術與自然語言處理)
Week 8, 04/06: Off-campus study (教學行政觀摩日)

課程大綱 (Syllabus)
Week 9, 04/13: Social Media Marketing Analytics (社群媒體行銷分析)
Week 10, 04/20: Midterm Project Report (期中報告)
Week 11, 04/27: Deep Learning with Theano and Keras in Python (Python Theano 和 Keras 深度學習)
Week 12, 05/04: Deep Learning with Google TensorFlow (Google TensorFlow 深度學習)
Week 13, 05/11: Sentiment Analysis on Social Media with Deep Learning (深度學習社群媒體情感分析)

課程大綱 (Syllabus)
Week 14, 05/18: Social Network Analysis (社會網絡分析)
Week 15, 05/25: Measurements of Social Network (社會網絡量測)
Week 16, 06/01: Tools of Social Network Analysis (社會網絡分析工具)
Week 17, 06/08: Final Project Presentation I (期末報告 I)
Week 18, 06/15: Final Project Presentation II (期末報告 II)


Google TensorFlow

TensorFlow is an Open Source Software Library for Machine Intelligence

Numerical computation using data flow graphs

Nodes: mathematical operations. Edges: multidimensional data arrays (tensors) communicated between the nodes.
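A minimal sketch of such a graph in code (assuming the pre-1.0 TensorFlow API used by the examples later in this deck): the two constants and the MatMul are nodes (ops), and the values flowing between them are the tensors (edges).

import tensorflow as tf

a = tf.constant([[1.0, 2.0]])     # a 1x2 tensor flows out of this node
w = tf.constant([[3.0], [4.0]])   # a 2x1 tensor
y = tf.matmul(a, w)               # a MatMul op: consumes two tensors, emits one

sess = tf.Session()
print(sess.run(y))                # [[ 11.]]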

Computation is a Dataflow Graph: a graph of nodes, also called operations or ops. [Figure: weights and examples feed a MatMul node, a bias is added (Add), ReLU is applied, and a cross-entropy node (Xent) compares the result against the labels] Source: Jeff Dean (2016), Large-Scale Deep Learning For Building Intelligent Computer Systems, WSDM 2016

Computation is a Dataflow Graph: edges are N-dimensional arrays (tensors). [Figure: the same graph with inputs and targets flowing along the edges through weights, MatMul, bias, Add, ReLU, and Xent] Source: Jeff Dean (2016), Large-Scale Deep Learning For Building Intelligent Computer Systems, WSDM 2016

Logistic Regression as a Dataflow Graph: nodes are operations (ops); edges are N-dimensional arrays (tensors). [Figure: inputs X and weights W feed a MatMul node, bias b is added (Add), Softmax produces Y, and Xent compares Y against the targets] Source: Jeff Dean (2016), Large-Scale Deep Learning For Building Intelligent Computer Systems, WSDM 2016
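A hedged code sketch of the pictured graph (the 784/10 shapes are borrowed from the MNIST softmax example later in this deck, not from the slide):

import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 784])  # inputs flow in as a tensor
W = tf.Variable(tf.zeros([784, 10]))         # weights (a stateful node)
b = tf.Variable(tf.zeros([10]))              # bias
Y = tf.nn.softmax(tf.matmul(X, W) + b)       # MatMul -> Add -> Softmax nodes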

Computation is a Dataflow Graph (with state). 'Biases' is a variable; some ops compute gradients; a -= op updates the biases in place. [Figure: the biases variable is updated via Add, a learning-rate Mul, and a -= node] Source: Jeff Dean (2016), Large-Scale Deep Learning For Building Intelligent Computer Systems, WSDM 2016

Neural Networks [figure: input layer (X) with X1 and X2, one hidden layer (H), and an output layer (Y)]

Data Flow Graph [figure-only slides]

TensorFlow Playground (playground.tensorflow.org)

TensorBoard

[Figure: example problem — given hours of sleep and hours of study (inputs X), predict the test score (output Y)]

[Figure: the same score-prediction example, with the data split into training and testing sets]

Training a Network = Minimize the Cost Function
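In TensorFlow the cost function and its minimization are themselves nodes in the graph. A minimal sketch, assuming the pre-1.0 API used elsewhere in this deck (the target values and learning rate are made up):

import tensorflow as tf

target = tf.constant([1.0, 2.0])
w = tf.Variable([0.0, 0.0])
cost = tf.reduce_mean(tf.square(w - target))              # mean squared error
train = tf.train.GradientDescentOptimizer(0.5).minimize(cost)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
for _ in range(20):
    sess.run(train)               # each run of the train op lowers the cost
print(sess.run(w))                # approaches [1. 2.]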

Neural Networks [figure: input layer (X) with X1 and X2, one hidden layer (H), and an output layer (Y)]

Deep Neural Networks (Deep Learning) [figure: input layer (X), multiple hidden layers (H), and an output layer (Y)]

Neural Networks [figure: the same X1, X2 -> hidden -> Y network, with nodes labeled as neurons and connections labeled as synapses]

Neuron and Synapse [figure]

Neurons [figure]: (1) unipolar neuron, (2) bipolar neuron, (3) multipolar neuron, (4) pseudounipolar neuron

Neural Networks [figure: hours of sleep and hours of study as the input layer (X), one hidden layer (H), and the score as the output layer (Y)]
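To make the pictured network concrete, here is a minimal NumPy sketch of one forward pass: two inputs (hours of sleep, hours of study), a hidden layer of three sigmoid units, and one output (the predicted, scaled score). The layer width, sample values, and sigmoid activation are illustrative assumptions, not taken from the slide.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# each row is one student: (hours of sleep, hours of study); values made up
X = np.array([[3.0, 5.0], [5.0, 1.0], [10.0, 2.0]])
X = X / X.max(axis=0)            # scale inputs to [0, 1]

W1 = np.random.randn(2, 3)       # input layer (X)  -> hidden layer (H)
W2 = np.random.randn(3, 1)       # hidden layer (H) -> output layer (Y)

H = sigmoid(X.dot(W1))           # hidden layer activations
y_hat = sigmoid(H.dot(W2))       # predicted (scaled) scores
print(y_hat)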


Convolutional Neural Networks (CNNs / ConvNets)

A regular 3-layer Neural Network [figure]

A ConvNet arranges its neurons in three dimensions (width, height, depth) [figure]
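For instance, the input layer itself is such a volume: a 32x32 RGB image (the size used in the CS231n notes these captions appear to come from) is a 32x32x3 block of activations.

import numpy as np

# an input image as a 3D volume of activations: width x height x depth
image = np.zeros((32, 32, 3))    # 32x32 pixels, 3 color channels (RGB)
print(image.shape)               # (32, 32, 3)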

DeepDream [figure]

Try your first TensorFlow

Architecture of TensorFlow [figure: a C++ front end and a Python front end sit on top of the Core TensorFlow Execution System, which runs on CPU, GPU, Android, iOS, …] Source: Jeff Dean (2016), Large-Scale Deep Learning For Building Intelligent Computer Systems, WSDM 2016

Deep Learning
- A powerful class of machine learning model
- Modern reincarnation of artificial neural networks
- Collection of simple, trainable mathematical functions
- Compatible with many variants of machine learning
Source: Jeff Dean (2016), Large-Scale Deep Learning For Building Intelligent Computer Systems, WSDM 2016

What is Deep Learning? [figure] Source: Jeff Dean (2016), Large-Scale Deep Learning For Building Intelligent Computer Systems, WSDM 2016

The Neuron [figure: inputs x1, x2, …, xn enter with weights w1, w2, …, wn and produce a single output y]. The neuron computes y = F(w1*x1 + w2*x2 + … + wn*xn), where F is a nonlinearity such as the ReLU, F(x) = max(0, x). Source: Jeff Dean (2016), Large-Scale Deep Learning For Building Intelligent Computer Systems, WSDM 2016

[Figure: a three-input neuron with inputs x1, x2, x3, a weight on each input, and output y] y = max(0, w1*x1 + w2*x2 + w3*x3). Source: Jeff Dean (2016), Large-Scale Deep Learning For Building Intelligent Computer Systems, WSDM 2016
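A quick plain-Python check of this formula for one ReLU neuron (the input and weight values below are invented for illustration):

def relu_neuron(x, w):
    # y = max(0, w1*x1 + w2*x2 + w3*x3)
    return max(0.0, sum(wi * xi for wi, xi in zip(w, x)))

print(relu_neuron([1.0, -2.0, 0.5], [0.3, 0.2, 0.7]))  # 0.3 - 0.4 + 0.35 = 0.25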


Learning Algorithm
While not done:
- Pick a random training example "(input, label)"
- Run neural network on "input"
- Adjust weights on edges to make output closer to "label"
Source: Jeff Dean (2016), Large-Scale Deep Learning For Building Intelligent Computer Systems, WSDM 2016
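A hedged sketch of this loop for the smallest possible network, a single weight w with output = w * input; the data, learning rate, and squared-error gradient are illustrative assumptions:

import random

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, label) pairs; the best w is 2
w = 0.0
learning_rate = 0.1
for step in range(1000):
    x, label = random.choice(data)                 # pick a random training example
    output = w * x                                 # run the "network" on the input
    w -= learning_rate * 2 * (output - label) * x  # adjust w so output moves toward the label
print(w)                                           # close to 2.0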

[Figure: the same three-input neuron before and after one weight update] Before: y = max(0, w1*x1 + w2*x2 + w3*x3). Next time: y = max(0, w1'*x1 + w2'*x2 + w3'*x3), with each weight adjusted so the output is closer to the label. Source: Jeff Dean (2016), Large-Scale Deep Learning For Building Intelligent Computer Systems, WSDM 2016


Important Property of Neural Networks
Results get better with:
- More data
- Bigger models
- More computation
(Better algorithms, new insights and improved techniques always help, too!)
Source: Jeff Dean (2016), Large-Scale Deep Learning For Building Intelligent Computer Systems, WSDM 2016


Install TensorFlow

conda create -n tensorflow python=2.7

source activate tensorflow

pip install --ignore-installed --upgrade …py2-none-any.whl

conda info --envs
conda --version
python --version

conda info --envs
source deactivate
source activate tensorflow

$ python
...
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
Hello, TensorFlow!

$ python
>>> import tensorflow as tf
>>> hello = tf.constant('Hello TensorFlow')
>>> sess = tf.Session()
>>> sess.run(hello)
'Hello TensorFlow'
>>> exit()
$

Check the environment and install IPython / Jupyter:
conda list
pip install ipython
conda list
pip install ipython[all]
conda list
ipython notebook

import tensorflow as tf

hello = tf.constant('Hello TensorFlow')
sess = tf.Session()
print(sess.run(hello))

a = tf.constant(10)
b = tf.constant(32)
c = sess.run(a + b)
print(c)

TensorFlow Example

import tensorflow as tf
import numpy as np

# Create 100 phony x, y data points in NumPy, y = x * 0.1 + 0.3
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

# Try to find values for W and b that compute y_data = W * x_data + b
# (We know that W should be 0.1 and b 0.3, but TensorFlow will
# figure that out for us.)
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b

# Minimize the mean squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# Before starting, initialize the variables. We will 'run' this first.
init = tf.initialize_all_variables()

# Launch the graph.
sess = tf.Session()
sess.run(init)

# Fit the line.
for step in xrange(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))

# Learns best fit is W: [0.1], b: [0.3]


TensorFlow TensorBoard Graphs
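To see a graph like these in TensorBoard, the session's graph must first be written to a log directory. A minimal sketch, assuming the pre-1.0 TensorFlow API used throughout this deck (the log path is illustrative):

import tensorflow as tf

a = tf.constant(10, name='a')
b = tf.constant(32, name='b')
c = tf.add(a, b, name='c')

sess = tf.Session()
writer = tf.train.SummaryWriter('/tmp/tf_logs', sess.graph)  # write the graph for TensorBoard
print(sess.run(c))
writer.close()
# then run: tensorboard --logdir=/tmp/tf_logs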

TensorFlow Example: MNIST Softmax

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

# Import data
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_string('data_dir', '/tmp/data/', 'Directory for storing data')

mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)

sess = tf.InteractiveSession()

# Create the model
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Define loss and optimizer
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

# Train
tf.initialize_all_variables().run()
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    train_step.run({x: batch_xs, y_: batch_ys})

# Test trained model
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels}))
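For reference, the TensorFlow tutorial this example comes from reports about 92% accuracy on the MNIST test set for this softmax model.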


TensorFlow: Deep MNIST for Experts

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import gzip
import os
import tempfile

import numpy
from six.moves import urllib
from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets

# Setup (from the same tutorial; needed for the snippet below to run)
mnist = read_data_sets('MNIST_data', one_hot=True)
sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# First convolutional layer
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1, 28, 28, 1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

# Second convolutional layer
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

# Densely connected layer
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# Dropout
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# Readout layer
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

# Train and evaluate
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.initialize_all_variables())
for i in range(20000):
    batch = mnist.train.next_batch(50)
    if i % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        print("step %d, training accuracy %g" % (i, train_accuracy))
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print("test accuracy %g" % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
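For reference, the Deep MNIST tutorial reports about 99.2% test accuracy for this convolutional model after the full 20,000 training steps.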


Deep Learning Software
- Theano: CPU/GPU symbolic expression compiler in Python (from the MILA lab at the University of Montreal)
- Keras: a Theano-based deep learning library
- TensorFlow: an open source software library for numerical computation using data flow graphs
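As a taste of Keras (a hedged sketch against the Keras 1.x API current at the time of these slides), the same kind of softmax classifier as the MNIST example above can be declared in a few lines:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(10, input_dim=784, activation='softmax'))  # one softmax layer, 784 -> 10
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(x_train, y_train) would then train it on MNIST-style data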

References
Sunila Gollapudi (2016), Practical Machine Learning, Packt Publishing.
Sebastian Raschka (2015), Python Machine Learning, Packt Publishing.
TensorFlow.
Rajat Monga (2016), TensorFlow: Machine Learning for Everyone.
Jeff Dean (2016), Large-Scale Deep Learning For Building Intelligent Computer Systems, The 9th ACM International Conference on Web Search and Data Mining (WSDM 2016), San Francisco, California, USA, February 22-25, 2016.
Deep Learning Basics: Neural Networks Demystified (video series).
Deep Learning SIMPLIFIED (video series).
Theano.
Keras.