Failure Prediction in Hardware Systems
Douglas Turnbull, Neil Alldrin
CSE 221: Operating Systems Final Project, Fall 2003



2 Background
Using sensors from a high-end server, can we predict system board failures? If we can predict a failure, we can take preventative action and avoid a costly outage.
System specifications:
18 hot-swappable system boards
4 processors per board
18 sensors per board, measuring various temperatures and voltages

3 Sensor Logs
Each board has an associated sensor log: about once a minute, the sensors are sampled and the measurements are stored in the log. System board failures are also recorded in the sensor log.
We need to extract a data set from these logs that represents failure events (positive examples) and normal operating conditions (negative examples). We accomplish this using a windowing abstraction.

4 Windowing Abstraction
Sensor Window – a run of adjacent entries in the sensor log that are used to predict failures.
Potential Failure Window – the interval immediately following a sensor window; an example is labeled positive if a failure occurs in the potential failure window, and negative otherwise.
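The windowing abstraction can be sketched as follows. The log format (a list of `(readings, failure_flag)` entries) and the window sizes are illustrative assumptions, not the actual format of the server's sensor logs.

```python
# Sketch of the windowing abstraction. Hypothetical log format: each entry
# is a (sensor_readings, failure_flag) pair; window sizes are illustrative.

def make_examples(log, sensor_window=3, failure_window=2):
    """Slide a sensor window over the log; label each example positive if a
    failure occurs in the potential failure window that follows it."""
    examples = []
    for i in range(len(log) - sensor_window - failure_window + 1):
        window = [readings for readings, _ in log[i:i + sensor_window]]
        future = log[i + sensor_window:i + sensor_window + failure_window]
        label = any(failed for _, failed in future)
        examples.append((window, label))
    return examples
```

Each resulting `(window, label)` pair becomes one training or test example for the classifier.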

5 Feature Vectors
Feature vectors are created from the data in a sensor window. There are two types of feature vectors:
Raw Feature Vectors – a vector of all the sensor measurements in a sensor window.
Summary Feature Vectors – the mean, standard deviation, range, and slope of each sensor over the sensor window.
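The per-sensor summary statistics might be computed as below; a full summary feature vector would concatenate these four values over all 18 sensors. The slope computation (least squares against the time index) is an assumption about how "slope" is defined here.

```python
import statistics

def summary_features(window):
    """window: list of readings for one sensor over a sensor window.
    Returns (mean, standard deviation, range, slope)."""
    n = len(window)
    mean = statistics.fmean(window)
    stdev = statistics.pstdev(window)
    rng = max(window) - min(window)
    # Least-squares slope of reading vs. time index (assumed definition).
    t_mean = (n - 1) / 2
    num = sum((t - t_mean) * (x - mean) for t, x in enumerate(window))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    return mean, stdev, rng, slope
```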

6 Classification
A classifier assigns labels (positive or negative) to novel feature vectors after it has been trained on a set of feature vectors with known labels. Many classifiers could be used, such as SVMs, Bayesian mixture models, and neural networks. We use a Radial Basis Function (RBF) network, a special form of neural network, because it is computationally efficient.

7 Evaluation
Confusion matrix (ground truth vs. prediction):

                      Predicted failure   Predicted non-failure
Actual failure        true positives      false negatives
Actual non-failure    false positives     true negatives

We must consider two rates when evaluating our prediction system:
True Positive Rate (tpr) – a measure of our ability to correctly predict true failures.
tpr = correctly predicted failures / total number of true failures
False Positive Rate (fpr) – a measure of how often we mispredict.
fpr = incorrectly predicted failures / total number of non-failures
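The two rates follow directly from the confusion matrix; a minimal sketch:

```python
def tpr_fpr(predictions, truth):
    """predictions, truth: parallel lists of booleans (True = failure)."""
    tp = sum(p and t for p, t in zip(predictions, truth))
    fp = sum(p and not t for p, t in zip(predictions, truth))
    fn = sum(not p and t for p, t in zip(predictions, truth))
    tn = sum(not p and not t for p, t in zip(predictions, truth))
    tpr = tp / (tp + fn)   # correctly predicted failures / true failures
    fpr = fp / (fp + tn)   # false alarms / non-failures
    return tpr, fpr
```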

8 Preliminary Results
Observations:
1. Summary feature vectors have lower false positive rates than raw feature vectors.
2. Window size does not seem to matter.
How can we improve these results?

9 Feature Subset Selection
We can further improve prediction accuracy (and reduce computation) by reducing the number of features used by our classifier. Features are selected automatically using forward stepwise selection.
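Forward stepwise selection greedily grows the feature subset, at each step adding the feature that most improves classifier performance. A sketch, where `evaluate` stands in for training and scoring the classifier on a candidate subset (the stopping rule and scoring function are assumptions):

```python
def forward_stepwise(features, evaluate, max_features=None):
    """Greedy forward selection: repeatedly add the feature whose addition
    yields the best score from evaluate(subset); stop when no remaining
    feature improves the score."""
    selected = []
    best_score = float("-inf")
    remaining = list(features)
    while remaining and (max_features is None or len(selected) < max_features):
        scored = [(evaluate(selected + [f]), f) for f in remaining]
        score, best_f = max(scored)
        if score <= best_score:
            break  # no candidate improves on the current subset
        best_score = score
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```

In the project's setting, `evaluate` would train the RBF network on the candidate subset and return a validation score.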

10 Results

11 Best Results
We obtain the best prediction results with summary feature vectors using 2/3 of the summary features:
0.87 true positive rate (tpr)
0.10 false positive rate (fpr)
Our data set assumes that a failure is as likely as a non-failure. Since real hardware systems experience very few failures, even a low false positive rate will produce many false positives in absolute terms.

12 Future Work
Implement other classifiers – SVMs, Bayesian mixture models.
Develop a larger data set with more examples of failures.
Apply the framework to other hardware systems, such as personal computers.
Modify the operating system to take advantage of failure prediction:
migrate processes to other system boards, run diagnostic tests, turn off suspect system boards, back up data.

13 The End Questions?

14 RBF Network
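As a sketch of how an RBF network like the one above produces a prediction: Gaussian hidden units respond to distance from learned centers, and a linear output unit combines them. The centers, widths, and output weights shown are hypothetical placeholders for parameters that would be fit during training (e.g. clustering for the centers, least squares for the output weights).

```python
import math

def rbf_predict(x, centers, widths, weights, bias=0.0):
    """Forward pass of a simple RBF network. A positive linear output is
    read as a predicted failure."""
    # Gaussian activation of each hidden unit over squared distance to its center.
    activations = [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * w ** 2))
        for c, w in zip(centers, widths)
    ]
    out = bias + sum(wk * a for wk, a in zip(weights, activations))
    return out > 0
```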

15 Value of a Prediction System
The value of a prediction system can be summarized as:
Value = (benefit of a predicted failure) * tpr - (cost of a mispredicted failure) * fpr
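Scaling the formula by how often each case actually occurs makes the base-rate caveat from the results concrete. The benefit and cost figures and the failure counts below are hypothetical, chosen only to illustrate how rare failures let false alarms dominate.

```python
def total_value(tpr, fpr, n_failures, n_non_failures, benefit, cost):
    """Value = (benefit of a predicted failure) * tpr
             - (cost of a mispredicted failure) * fpr,
    scaled by how often failures and non-failures actually occur."""
    return benefit * tpr * n_failures - cost * fpr * n_non_failures
```

With the reported tpr = 0.87 and fpr = 0.10, a balanced data set yields positive value, but when non-failures outnumber failures by orders of magnitude the accumulated false-alarm cost can turn the value negative.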
