Review By Artineer

Review: Derivative (power rule). f = x^n, ∂f/∂x = n·x^(n-1)

Review: Derivative (chain rule). f = a(b(x)), ∂f/∂x = a′(b(x)) · b′(x)

Review: Derivative (partial derivatives). f = 2x + y, ∂f/∂x = 2, ∂f/∂y = 1
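A quick worked example (added here, not on the original slide) that combines the power rule and the chain rule, which is exactly the differentiation needed for the cost function later: for e = (y − (w·x + b))^2 with y, x, b held constant, ∂e/∂w = 2(y − (w·x + b)) · (−x) = −2x(y − (w·x + b)).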

Review: Hypothesis. y = ax + b, written with a weight as y = wx + b

Review: Cost Function. e = (1/(2·10)) · Σ_{i=1..10} (y_i − (a·x_i + b))^2

Review: Gradient Descent on the Cost Function. w_{n+1} = w_n − α·∂e/∂w, b_{n+1} = b_n − α·∂e/∂b
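A minimal NumPy sketch (an added illustration, not from the slides) of these update rules fitting y = w·x + b to a small toy dataset; all names and values here are illustrative:

import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])

w, b = 0.0, 0.0
alpha = 0.01                          # learning rate
for step in range(10000):
    err = (w * x + b) - y             # prediction error for every sample
    grad_w = (err * x).mean()         # ∂e/∂w for e = 1/(2n)·Σ(err^2)
    grad_b = err.mean()               # ∂e/∂b
    w -= alpha * grad_w               # w_{n+1} = w_n − α·∂e/∂w
    b -= alpha * grad_b               # b_{n+1} = b_n − α·∂e/∂b
print(w, b)                           # w converges to ~2, b to ~0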

Multiple Linear Regression By Artineer

Concept: in Simple Linear Regression a single input x predicts y; in Multiple Linear Regression several inputs x1, x2, x3, … predict y.

Concept: Simple Linear Regression y = wx + b; Multiple Linear Regression y = w1x1 + w2x2 + w3x3 + … + b

Concept: Simple Linear Regression vs. Multiple Linear Regression.

Hypothesis: Y = w1x1 + w2x2 + w3x3 + … + b. Written out term by term, this quickly becomes unwieldy as the number of features grows.

Hypothesis: Y = w1x1 + w2x2 + w3x3 + … + b can be rewritten as a row vector of inputs times a column vector of weights: Y = (x1 x2 x3 …)·(w1, w2, w3, …)^T + b

Hypothesis: Y = (x1 x2 x3 …)·(w1, w2, w3, …)^T + b, i.e. Y = XW + b
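A short NumPy illustration (added, not from the slides) of this matrix form; the numbers are arbitrary:

import numpy as np

X = np.array([[1.0, 2.0, 3.0]])        # 1 x 3 row of features (x1, x2, x3)
W = np.array([[0.5], [1.0], [-2.0]])   # 3 x 1 column of weights (w1, w2, w3)
b = 0.1

Y = X @ W + b                          # same as w1*x1 + w2*x2 + w3*x3 + b
print(Y)                               # [[-3.4]]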

Cost Function: Simple Linear Regression vs. Multiple Linear Regression.

Cost Function. Simple Linear Regression: e = (1/2) · Σ_{i=1..10} (y_i − (w·x_i + b))^2. Multiple Linear Regression: e = (1/2) · Σ_{i=1..10} (y_i − (w1·x1_i + w2·x2_i + w3·x3_i + b))^2. (n = the number of x values.)

Cost Function: e = (1/2) · Σ_{i=1..10} (y_i − (w1·x1_i + w2·x2_i + w3·x3_i + b))^2
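Applying the chain rule from the review (an added step, not on the slides), the partial derivative of this cost with respect to each parameter is, for example: ∂e/∂w1 = −Σ_{i=1..10} x1_i·(y_i − (w1·x1_i + w2·x2_i + w3·x3_i + b)), and ∂e/∂b = −Σ_{i=1..10} (y_i − (w1·x1_i + w2·x2_i + w3·x3_i + b)).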

Gradient Descent: Simple Linear Regression vs. Multiple Linear Regression.
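In the multiple-variable case each parameter gets its own update, mirroring the review slide (added for completeness): w_{j,n+1} = w_{j,n} − α·∂e/∂w_j for j = 1, 2, 3, …, and b_{n+1} = b_n − α·∂e/∂b.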

Tensorflow

import tensorflow as tf

x_data = [1, 2, 3]
y_data = [1, 2, 3]

w = tf.Variable(tf.random_normal([1]), name='w')

Tensorflow

import tensorflow as tf

x_data = [1, 2, 3]
y_data = [2, 4, 6]

a = tf.Variable(tf.random_normal([1]), name='a')
b = tf.Variable(tf.random_normal([1]), name='b')

hypothesis = a * x_data + b                            # y = ax + b
e = tf.reduce_mean(tf.square(hypothesis - y_data))     # cost function

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(e)                          # gradient descent step

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for step in range(2001):
    sess.run(train)
    if step % 20 == 0:
        print("iteration : ", step, "e : ", sess.run(e), " ( y = ", sess.run(a), "x + ", sess.run(b), " )")

Tensorflow
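As a follow-on sketch (not taken from the original slides), the same TF1-style training loop can be extended to multiple linear regression using the matrix form Y = XW + b; the toy data below is illustrative and was generated from y = 1·x1 + 2·x2 + 3·x3:

import tensorflow as tf

# 4 samples, 3 features each
x_data = [[1., 2., 0.], [0., 1., 3.], [3., 0., 1.], [2., 3., 4.]]
y_data = [[5.], [11.], [6.], [20.]]

X = tf.placeholder(tf.float32, shape=[None, 3])
Y = tf.placeholder(tf.float32, shape=[None, 1])
W = tf.Variable(tf.random_normal([3, 1]), name='W')    # one weight per feature
b = tf.Variable(tf.random_normal([1]), name='b')

hypothesis = tf.matmul(X, W) + b                       # Y = XW + b
e = tf.reduce_mean(tf.square(hypothesis - Y))          # mean squared error

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(e)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for step in range(2001):
    _, e_val = sess.run([train, e], feed_dict={X: x_data, Y: y_data})
    if step % 200 == 0:
        print("iteration : ", step, "e : ", e_val)
print(sess.run(W), sess.run(b))                        # learned weights and bias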