
雲端計算 (Cloud Computing)

TensorFlow

Install matplotlib:

pip install matplotlib

Install scikit-learn (the current PyPI package name is scikit-learn; "sklearn" is a deprecated alias):

pip install scikit-learn

Load the necessary libraries

import math

from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset

tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format

Load the data set and randomize the data

california_housing_dataframe = pd.read_csv(
    "https://download.mlcc.google.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
    np.random.permutation(california_housing_dataframe.index))
california_housing_dataframe["median_house_value"] /= 1000.0
print(california_housing_dataframe)

reindex(): reorders the rows; np.random.permutation(): shuffles the order of the data. Dividing median_house_value by 1000.0 scales the label to units of thousands of dollars.
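For intuition, here is a minimal toy example (hypothetical data, not the housing set) of what the reindex() plus np.random.permutation() combination does:

import numpy as np
import pandas as pd

# Toy DataFrame: same rows come back, in a random order.
df = pd.DataFrame({"a": [10, 20, 30, 40]})
shuffled = df.reindex(np.random.permutation(df.index))
print(shuffled)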

Define features and configure feature columns

Type of data each feature contains:
- Categorical data: text
- Numerical data: a number (integer or float)

In TensorFlow, we indicate a feature's data type using a construct called a feature column. Feature columns store only a description of the feature data; they do not contain the feature data itself.
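As a hedged illustration of the two kinds of columns ("total_rooms" is from the slides; "ocean_proximity" and its vocabulary are hypothetical examples of a text-valued feature):

# Feature columns hold only metadata about a feature, never its values.
numeric_col = tf.feature_column.numeric_column("total_rooms")
categorical_col = tf.feature_column.categorical_column_with_vocabulary_list(
    "ocean_proximity", vocabulary_list=["INLAND", "NEAR BAY", "NEAR OCEAN"])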

Define features and configure feature columns

# Define the input feature: total_rooms.
my_feature = california_housing_dataframe[["total_rooms"]]

# Configure a numeric feature column for total_rooms.
feature_columns = [tf.feature_column.numeric_column("total_rooms")]

# Define the label (target) the model will learn to predict.
targets = california_housing_dataframe["median_house_value"]

Configure the LinearRegressor

# Use gradient descent as the optimizer for training the model.
# Set a learning rate of 0.0000001 for gradient descent.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.0000001)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)

# Configure the linear regression model with our feature columns and optimizer.
linear_regressor = tf.estimator.LinearRegressor(
    feature_columns=feature_columns,
    optimizer=my_optimizer
)

Gradient clipping ensures that the magnitude of the gradients does not become too large during training, which can cause gradient descent to fail.
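For intuition only, here is a minimal NumPy sketch of the clip-by-norm rule; the estimator applies the same idea to each gradient tensor:

import numpy as np

def clip_by_norm(grad, clip_norm=5.0):
    # Scale the gradient down so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(grad)
    return grad * (clip_norm / norm) if norm > clip_norm else grad

print(clip_by_norm(np.array([30.0, 40.0])))  # norm 50 -> rescaled to [3., 4.] (norm 5)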

Define the input function

def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
    """Trains a linear regression model of one feature.

    Args:
      features: pandas DataFrame of features
      targets: pandas DataFrame of targets
      batch_size: Size of batches to be passed to the model
      shuffle: True or False. Whether to shuffle the data.
      num_epochs: Number of epochs for which data should be repeated.
                  None = repeat indefinitely
    Returns:
      Tuple of (features, labels) for the next data batch
    """
    # Convert pandas data into a dict of np arrays.
    features = {key: np.array(value) for key, value in dict(features).items()}

    # Construct a dataset, and configure batching/repeating.
    ds = Dataset.from_tensor_slices((features, targets))  # warning: 2GB limit
    ds = ds.batch(batch_size).repeat(num_epochs)

    # Shuffle the data, if specified.
    if shuffle:
        ds = ds.shuffle(buffer_size=10000)

    # Return the next batch of data.
    features, labels = ds.make_one_shot_iterator().get_next()
    return features, labels

buffer_size: size of the dataset from which shuffle() will randomly sample.
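One way to sanity-check the input function (a sketch, assuming my_feature and targets are defined as above) is to pull a single batch inside a TF 1.x session:

# TF 1.x graph mode: run the batch tensors once and inspect the values.
features_batch, labels_batch = my_input_fn(my_feature, targets, batch_size=2)
with tf.Session() as sess:
    f, l = sess.run([features_batch, labels_batch])
    print(f["total_rooms"], l)  # two feature values and their two labels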

Train the model

_ = linear_regressor.train(
    input_fn=lambda: my_input_fn(my_feature, targets),
    steps=100
)
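The slides train in a single call; a common variation (a sketch, not part of the slides) is to train in short periods and print the RMSE after each one to watch convergence:

# Sketch: 10 periods of 10 steps each, printing training RMSE per period.
for period in range(10):
    linear_regressor.train(
        input_fn=lambda: my_input_fn(my_feature, targets), steps=10)
    preds = linear_regressor.predict(
        input_fn=lambda: my_input_fn(my_feature, targets,
                                     num_epochs=1, shuffle=False))
    preds = np.array([item['predictions'][0] for item in preds])
    rmse = math.sqrt(metrics.mean_squared_error(preds, targets))
    print("period %02d : RMSE %0.2f" % (period, rmse))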

Evaluate the model

# Create an input function for predictions.
prediction_input_fn = lambda: my_input_fn(my_feature, targets, num_epochs=1, shuffle=False)

# Make predictions.
predictions = linear_regressor.predict(input_fn=prediction_input_fn)

# Format predictions as a NumPy array, so we can calculate error metrics.
predictions = np.array([item['predictions'][0] for item in predictions])

# Compute the error.
mean_squared_error = metrics.mean_squared_error(predictions, targets)
root_mean_squared_error = math.sqrt(mean_squared_error)
print("Mean Squared Error (on training data): %0.3f" % mean_squared_error)
print("Root Mean Squared Error (on training data): %0.3f" % root_mean_squared_error)
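As a sanity check, the same RMSE can be computed directly with NumPy; this should match the metrics result above:

# RMSE = sqrt(mean((prediction - target)^2))
manual_rmse = np.sqrt(np.mean((predictions - np.array(targets)) ** 2))
print("Manual RMSE: %0.3f" % manual_rmse)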

Evaluate the model

min_house_value = california_housing_dataframe["median_house_value"].min()
max_house_value = california_housing_dataframe["median_house_value"].max()
min_max_difference = max_house_value - min_house_value

print("Min. Median House Value: %0.3f" % min_house_value)
print("Max. Median House Value: %0.3f" % max_house_value)
print("Difference between Min. and Max.: %0.3f" % min_max_difference)
print("Root Mean Squared Error: %0.3f" % root_mean_squared_error)

Evaluate the model

calibration_data = pd.DataFrame()
calibration_data["predictions"] = pd.Series(predictions)
calibration_data["targets"] = pd.Series(targets)
calibration_data.describe()

Evaluate the model

sample = california_housing_dataframe.sample(n=300)

x_0 = sample["total_rooms"].min()
x_1 = sample["total_rooms"].max()

# Retrieve the final weight and bias generated during training.
weight = linear_regressor.get_variable_value('linear/linear_model/total_rooms/weights')[0]
bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')

# Get the predicted median_house_values for the min and max total_rooms values.
y_0 = weight * x_0 + bias
y_1 = weight * x_1 + bias

# Plot our regression line from (x_0, y_0) to (x_1, y_1).
plt.plot([x_0, x_1], [y_0, y_1], c='r')

plt.ylabel("median_house_value")
plt.xlabel("total_rooms")

# Plot a scatter plot from our data sample.
plt.scatter(sample["total_rooms"], sample["median_house_value"])

plt.show()

Tweak the model hyperparameters
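The tweaking itself is left as an exercise. One hedged starting point, assuming the definitions above, is to wrap the pipeline in a helper function so learning_rate, steps, and batch_size are easy to vary (train_model is a hypothetical name; the parameter values shown are illustrative only):

def train_model(learning_rate, steps, batch_size):
    # Rebuild the optimizer and regressor with the given hyperparameters.
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    optimizer = tf.contrib.estimator.clip_gradients_by_norm(optimizer, 5.0)
    regressor = tf.estimator.LinearRegressor(
        feature_columns=feature_columns, optimizer=optimizer)
    # Train, then measure RMSE on the training data.
    regressor.train(
        input_fn=lambda: my_input_fn(my_feature, targets, batch_size=batch_size),
        steps=steps)
    preds = regressor.predict(
        input_fn=lambda: my_input_fn(my_feature, targets,
                                     num_epochs=1, shuffle=False))
    preds = np.array([item['predictions'][0] for item in preds])
    return math.sqrt(metrics.mean_squared_error(preds, targets))

# Illustrative values only; try different combinations to push the RMSE down.
print(train_model(learning_rate=0.0001, steps=500, batch_size=5))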

驗收 (Checkpoint): Achieve an RMSE of 180 or below

驗收 (Checkpoint)