Slide 1: Machine Learning in Performance Management
Irina Rish, IBM T.J. Watson Research Center
January 24, 2001
Slide 2: Outline
- Introduction
- Machine learning applications in Performance Management
- Bayesian learning tools: extending ABLE
- Advancing theory
- Summary and future directions
Slide 3: Learning problems: examples
Pattern discovery, classification, diagnosis, and prediction:
- System event mining (figure: events from hosts plotted over time)
- End-user transaction recognition (figure: a stream of Remote Procedure Calls (RPCs) segmented into Transaction 1 and Transaction 2, with candidate labels BUY? SELL? OPEN_DB? SEARCH?)
Slide 4: Approach: Bayesian learning
- Learn (probabilistic) dependency models: Bayesian networks (figure: a network over nodes S, C, B, X, D with factors P(S), P(C|S), P(B|S), P(X|C,S), P(D|C,B))
- Diagnosis: P(cause | symptom) = ?
- Prediction: P(symptom | cause) = ?
- Pattern classification: P(class | data) = ?
- Numerous important applications: medicine, stock market, bio-informatics, eCommerce, military, ...
Slide 5: Outline
- Introduction
- Machine-learning applications in Performance Management
  - Transaction recognition
  - In progress: event mining, probe placement, etc.
- Bayesian learning tools: extending ABLE
- Advancing theory
- Summary and future directions
Slide 6: End-user transaction recognition: why is it important?
(Figure: a client workstation issues End-User Transactions (EUTs) such as OpenDB, Search, SendMail; each EUT generates Remote Procedure Calls (RPCs) sent over a session (connection) to a server: Web, DB, or Lotus Notes.)
- Realistic workload models (for performance testing)
- Resource management (anticipating requests)
- Quantifying end-user perception of performance (response times)
Examples: Lotus Notes, Web/eBusiness (on-line stores, travel agencies, trading): database transactions, buy/sell, search, email, etc.
Slide 7: Why is it hard? Why learn from data?
Example: EUTs and RPCs in Lotus Notes --
MoveMsgToFolder: 1. OPEN_COLLECTION, 2. UPDATE_COLLECTION, 3. DB_REPLINFO_GET, 4. GET_MOD_NOTES, 5. READ_ENTRIES
FindMailByKey: 6. OPEN_COLLECTION, 7. FIND_BY_KEY, 8. READ_ENTRIES
- Many RPC and EUT types (92 RPCs and 37 EUTs)
- Large (unlimited) data sets (10,000+ transaction instances); manual classification of a data subset took about a month
- Non-deterministic and unknown EUT-to-RPC mapping: "noise" comes from client/server states
- No client-side instrumentation, so EUT boundaries are unknown
Slide 8: Our approach: classification + segmentation
- Classification (similar to text classification)
- Segmentation (similar to speech understanding, image segmentation)
Slide 9: How to represent transactions?
"Feature vectors":
- RPC counts (how many times each RPC type occurs in the transaction)
- RPC occurrences (binary indicators of whether each RPC type occurs at all)
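To make the two representations concrete, here is a minimal Python sketch (illustrative only; the vocabulary below is hypothetical, while the real Lotus Notes data has 92 RPC types):

```python
from collections import Counter

# Hypothetical RPC vocabulary; the real system has 92 RPC types.
RPC_VOCAB = ["OPEN_COLLECTION", "UPDATE_COLLECTION", "FIND_BY_KEY", "READ_ENTRIES"]

def rpc_features(rpc_sequence, vocab=RPC_VOCAB):
    """Turn one transaction's RPC sequence into the two feature vectors
    named on the slide: per-RPC counts and binary occurrence indicators."""
    counts = Counter(rpc_sequence)
    count_vec = [counts[r] for r in vocab]           # "RPC counts"
    occur_vec = [int(counts[r] > 0) for r in vocab]  # "RPC occurrences"
    return count_vec, occur_vec

# Example: the FindMailByKey transaction from the previous slide.
seq = ["OPEN_COLLECTION", "FIND_BY_KEY", "READ_ENTRIES"]
print(rpc_features(seq))  # ([1, 0, 1, 1], [1, 0, 1, 1])
```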
Slide 10: Classification scheme
- Training phase: training data (RPCs labeled with EUTs) -> feature extraction -> learning -> classifier
- Operation phase: "test" data (unlabeled RPCs) -> feature extraction -> classification -> EUTs
Slide 11: Our classifier: naïve Bayes (NB)
Simplifying ("naïve") assumption: feature independence given the class:
$P(f_1, \dots, f_n \mid c) = \prod_{i=1}^{n} P(f_i \mid c)$
1. Training: estimate the parameters $P(c)$ and $P(f_i \mid c)$ (e.g., maximum-likelihood estimates).
2. Classification: given an (unlabeled) instance $f = (f_1, \dots, f_n)$, choose the most likely class (Bayes decision rule):
$c^* = \arg\max_c \; P(c) \prod_{i=1}^{n} P(f_i \mid c)$
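A minimal sketch of this scheme for binary occurrence features; Laplace smoothing (alpha) is added so that unseen features do not zero out a class, which goes slightly beyond the plain ML estimates the slide specifies:

```python
import numpy as np

def train_nb(X, y, n_classes, alpha=1.0):
    """Estimate P(class) and P(feature=1 | class) from binary occurrence
    vectors X (n_samples x n_features) and integer labels y.
    alpha > 0 adds Laplace smoothing to the ML estimates."""
    priors = np.array([(y == c).mean() for c in range(n_classes)])
    cond = np.array([(X[y == c].sum(axis=0) + alpha) /
                     ((y == c).sum() + 2 * alpha)
                     for c in range(n_classes)])
    return priors, cond

def classify_nb(x, priors, cond):
    """Bayes decision rule: argmax_c  log P(c) + sum_i log P(f_i | c)."""
    log_post = (np.log(priors)
                + x @ np.log(cond).T
                + (1 - x) @ np.log(1 - cond).T)
    return int(np.argmax(log_post))

# Tiny usage example with made-up data.
X = np.array([[1, 0, 1], [1, 1, 0], [0, 0, 1], [0, 1, 0]])
y = np.array([0, 0, 1, 1])
priors, cond = train_nb(X, y, n_classes=2)
print(classify_nb(np.array([1, 0, 1]), priors, cond))  # -> 0
```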
Slide 12: Classification results on Lotus CoC data
(Figure: accuracy vs. training-set size for NB with Bernoulli, multinomial, or geometric feature models and for NB with a shifted-geometric model.)
- Significant improvement over the baseline classifier (75%), which always selects the most frequent transaction
- NB is simple, efficient, and comparable to state-of-the-art classifiers: SVM 85-87%, decision tree 90-92%
- The best-fit distribution (shifted geometric) is not necessarily the best classifier! (?)
Slide 13: Transaction recognition: segmentation + classification
- Naïve Bayes classifier scores candidate segments
- Dynamic programming (Viterbi search) finds the best segmentation
(Recursive) DP equation: with $V(t)$ the score of the best segmentation of the first $t$ RPCs $r_1, \dots, r_t$,
$V(t) = \max_{s < t} \; V(s) \cdot \max_{c} P(c)\, P(r_{s+1}, \dots, r_t \mid c), \qquad V(0) = 1.$
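A sketch of that dynamic program. Here `log_score(segment, c)` is a placeholder for the naïve Bayes log-score log P(c) + log P(segment | c), and the `max_len` bound on segment length is my own assumption to keep the search cheap, not part of the original method:

```python
import math

def segment(rpcs, classes, log_score, max_len=20):
    """Viterbi-style DP over segment boundaries: best[t] is the log-score of
    the best segmentation of rpcs[:t]; log_score(segment, c) plays the role
    of the naive-Bayes score for transaction class c."""
    n = len(rpcs)
    best = [0.0] + [-math.inf] * n
    back = [None] * (n + 1)
    for t in range(1, n + 1):
        for s in range(max(0, t - max_len), t):
            for c in classes:
                score = best[s] + log_score(rpcs[s:t], c)
                if score > best[t]:
                    best[t], back[t] = score, (s, c)
    # Recover (start, end, class) triples by walking the back-pointers.
    segments, t = [], n
    while t > 0:
        s, c = back[t]
        segments.append((s, t, c))
        t = s
    return segments[::-1]

# Toy score standing in for the NB log-probability: prefer length-3 segments.
print(segment(list(range(7)), ["tx"], lambda seg, c: -abs(len(seg) - 3)))
# -> [(0, 3, 'tx'), (3, 7, 'tx')]
```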
Slide 14: Transaction recognition results

Model         Classification   Segmentation
Multinomial   best             third best
Geometric     best             fourth best
Shift. geom.  worst            best
Bernoulli     best             second best

(Figure: recognition accuracy vs. training-set size.)
- Good EUT recognition accuracy: 64% (a harder problem than classification!)
- Reversed order of results: the best classifier is not necessarily the best recognizer (?) -- further research!
Slide 15: EUT recognition: summary
- A novel approach: learning EUTs from RPCs
- Patent, conference paper (AAAI-2000), prototype system
- Successful results on Lotus Notes data (Lotus CoC):
  - Classification with naïve Bayes: up to 87% accuracy
  - EUT recognition with Viterbi + Bayes: up to 64% accuracy
- Work in progress:
  - Better feature selection (RPC subsequences?)
  - Selecting the "best classifier" for the segmentation task
  - Learning more sophisticated classifiers (Bayesian networks)
  - Information-theoretic approach to segmentation (MDL)
Slide 16: Outline
- Introduction
- Machine-learning applications in Performance Management
  - Transaction recognition
  - In progress: event mining, probing strategy, etc.
- Bayesian learning tools: extending ABLE
- Advancing theory
- Summary and future directions
Slide 17: Event mining: analyzing system event sequences
What is it? (Figure: events from hosts plotted over time, in seconds.)
Example: USAA data -- 858 hosts, 136 event types, 67,184 data points (13 days, by second).
Event examples:
- High-severity events: 'Cisco_Link_Down', 'chassisMinorAlarm_On', etc.
- Low-severity events: 'tcpConnectClose', 'duplicate_ip', etc.
Why is it important? Learning system behavior patterns enables better performance management.
Why is it hard?
- Large, complex systems (networks) with many dependencies; prior models are not always available
- Many events and hosts; data sets are huge and constantly growing
Slide 18: 1. Learning event dependency models
(Figure: Event 1, Event 2, ..., Event M, ..., Event N linked by unknown dependencies.)
- Current approach: learn dynamic probabilistic graphical models (temporal, or dynamic, Bayesian networks)
- Predict: time to failure; event co-occurrence; existence of hidden nodes, i.e., "root causes"
- Recognize sequences of high-level system states: an unsupervised version of the EUT recognition problem
- Important issue: incremental learning from data streams
Slide 19: 2. Clustering hosts by their history
(Figure: event timelines for "problematic" hosts vs. "silent" hosts.)
- Group hosts with similar event sequences: what is an appropriate similarity ("distance") metric?
- One example: distance between "compressed" sequences, i.e., between per-host event distribution models.
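One concrete instance of such a metric (an illustrative choice, not necessarily the one used in the project): the Jensen-Shannon divergence between two hosts' event-type distributions:

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    """One candidate 'distance' between two hosts' compressed histories:
    Jensen-Shannon divergence between their event-type distributions
    p and q (vectors of per-event-type frequencies)."""
    p = np.asarray(p, float) + eps; p /= p.sum()
    q = np.asarray(q, float) + eps; q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))  # KL divergence
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# A "problematic" host vs. a "silent" host over 4 event types.
print(js_distance([40, 30, 20, 10], [1, 0, 0, 99]))
```

The resulting pairwise distance matrix can then be handed to any standard clustering routine.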
Slide 20: Probing strategy (EPP)
(Figure: response time over time; probes sample the system and should catch availability violations.)
Objectives: find the probe frequency F that minimizes
1. E(Tprobe - Tstart) -- the failure detection delay, or
2. E(total "failure" time - total "estimated" failure time) -- which gives an accurate performance estimate,
subject to a constraint on the additional load induced by probes: L(F) < MaxLoad.
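A toy illustration of objective 1, under two assumptions the slide does not state: probes are periodic, and probe load grows linearly with frequency. Then the expected detection delay is 1/(2F), so the best feasible choice is simply the largest F the load budget allows:

```python
def best_probe_frequency(max_load, load_per_probe):
    """Toy version of objective 1: with periodic probes at frequency F,
    a failure starting at a uniformly random time waits 1/(2F) on average
    until the next probe, so delay falls as F grows; the load constraint
    L(F) = load_per_probe * F < max_load caps F."""
    f = max_load / load_per_probe       # highest feasible frequency
    expected_delay = 1.0 / (2.0 * f)    # E[Tprobe - Tstart]
    return f, expected_delay

# E.g., probes costing 0.1% load each and a 5% load budget.
print(best_probe_frequency(0.05, 0.001))  # -> (50.0, 0.01)
```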
Slide 21: Outline
- Introduction
- Machine-learning applications in Performance Management
- Bayesian learning tools: extending ABLE
- Advancing theory
- Summary and future directions
Slide 22: ABLE: Agent Building and Learning Environment
Slide 23: What is ABLE? What is my contribution?
- A Java toolbox for building reasoning and learning agents
- Provides: visual environment, boolean and fuzzy rules, neural networks, genetic search
- My contributions: naïve Bayes classifier (batch and incremental); discretization
- Future releases: general Bayesian learning and inference tools
- Available at alphaWorks: www.alphaWorks.ibm.com/tech; project page: w3.rchland.ibm.com/projects/ABLE
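To illustrate what the "incremental" mode of a naïve Bayes learner involves (a generic sketch, not ABLE's actual Java API): keeping raw counts as sufficient statistics lets each new labeled instance update the model in place, with no stored training set:

```python
from collections import defaultdict

class IncrementalNB:
    """Counts-based naive Bayes supporting incremental updates: each
    labeled instance adjusts sufficient statistics in O(n_features),
    so old data never needs to be revisited. Illustrative only."""
    def __init__(self):
        self.class_counts = defaultdict(int)
        self.feat_counts = defaultdict(int)  # (class, feature index) -> count

    def update(self, x, y):
        """x: binary feature vector, y: class label."""
        self.class_counts[y] += 1
        for i, v in enumerate(x):
            if v:
                self.feat_counts[(y, i)] += 1

    def prob(self, y, i, alpha=1.0):
        """Smoothed estimate of P(f_i = 1 | class = y)."""
        return ((self.feat_counts[(y, i)] + alpha)
                / (self.class_counts[y] + 2 * alpha))

nb = IncrementalNB()
nb.update([1, 0, 1], "OpenDB")
print(nb.prob("OpenDB", 0))  # -> 0.666...
```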
Slide 24: How does it work?
Slide 25: Who is using the naïve Bayes tools? Impact on other IBM projects
- Video character recognition (with C. Dorai): naïve Bayes reaches 84% accuracy and beats SVM on some pairs of characters (SVM average: 87%); current work combines naïve Bayes with SVMs
- Environmental data analysis (with Yuan-Chi Chang): learning mortality rates from data on air pollutants; naïve Bayes is currently being evaluated
- Performance management: event mining in progress; EUT recognition with successful results
Slide 26: Outline
- Introduction
- Machine learning in Performance Management
- Bayesian learning tools: extending ABLE
- Advancing theory
  - Analysis of the naïve Bayes classifier
  - Inference in Bayesian networks
- Summary and future directions
Slide 27: Why does naïve Bayes do well? And when?
- Class-conditional feature independence, $P(f_1, \dots, f_n \mid c) = \prod_i P(f_i \mid c)$, is an unrealistic assumption -- but why/when does it work?
- When do the independence assumptions not hurt classification?
- Intuition: wrong probability estimates do not necessarily imply wrong classification!
- The Bayes-optimal decision uses the true $P(c \mid f)$; naïve Bayes uses its estimate $P_{NB}(c \mid f)$ -- the two decisions agree whenever both rank the same class highest.
Slide 28: Case 1: functional dependencies
Lemma 1: Naïve Bayes is optimal when features are functionally dependent given the class.
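A tiny numeric illustration of Lemma 1 (my own example, assuming equal class priors): duplicate a feature, the most extreme functional dependency. Naïve Bayes double-counts the evidence and its posterior estimate is badly off, yet the argmax, and hence the classification, is unchanged:

```python
# Two classes, one informative binary feature f1, and f2 = f1: a functional
# dependency that naive Bayes wrongly treats as independent evidence.
p_f_given_c = {0: 0.8, 1: 0.3}   # P(f1 = 1 | class)
prior = {0: 0.5, 1: 0.5}         # equal priors (assumed for this example)

def true_posterior(f1):
    """f2 adds nothing new, so P(c | f1, f2=f1) uses the evidence once."""
    num = {c: prior[c] * (p_f_given_c[c] if f1 else 1 - p_f_given_c[c])
           for c in (0, 1)}
    z = sum(num.values())
    return {c: num[c] / z for c in (0, 1)}

def nb_posterior(f1):
    """Naive Bayes multiplies the same evidence twice (squares it)."""
    num = {c: prior[c] * (p_f_given_c[c] if f1 else 1 - p_f_given_c[c]) ** 2
           for c in (0, 1)}
    z = sum(num.values())
    return {c: num[c] / z for c in (0, 1)}

# The estimates differ (P(c=0 | f1=1): 0.73 true vs. 0.88 NB),
# but the argmax -- and so the classification -- is identical.
print(true_posterior(1), nb_posterior(1))
```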
Slide 29: Case 2: "almost-functional" (low-entropy) distributions
Lemma 2: Naïve Bayes is a "good approximation" under "almost-functional" dependencies.
Formally: if, for each $i = 1, \dots, n$, there is a value $a_i$ such that $P(f_i = a_i \mid c) \ge 1 - \delta$, then the naïve Bayes estimate approaches the true posterior as $\delta \to 0$.
Related practical examples:
- RPC occurrences within EUTs are often almost-deterministic (and NB does well)
- Successful "local inference" in almost-deterministic Bayesian networks (turbo coding, "mini-buckets"; see Dechter & Rish 2000)
Slide 30: Experimental results support the theory
1. Less "noise" (smaller δ) brings NB closer to optimal.
Random problem generator: uniform P(class); random P(f|class):
  1. A randomly selected entry in P(f|class) is assigned probability 1 - δ.
  2. The remaining entries are filled by uniform random sampling plus normalization.
2. Feature dependence does NOT correlate with NB error.
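A sketch of that problem generator. Note one assumption: the slide text cuts off after "is assigned", and the value 1 − δ is inferred from the role δ plays above:

```python
import numpy as np

def random_cpd(n_values, delta, rng=np.random.default_rng(0)):
    """Random P(f | class) as on the slide: one randomly selected entry is
    assigned probability 1 - delta (assumed value); the remaining mass
    delta is spread over the other entries by uniform random sampling
    plus normalization."""
    rest = rng.uniform(size=n_values - 1)
    rest = delta * rest / rest.sum()
    return np.insert(rest, rng.integers(n_values), 1.0 - delta)

print(random_cpd(4, 0.1))  # smaller delta -> lower entropy, less "noise"
```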
Slide 31: Outline
- Introduction
- Machine-learning in Performance Management
  - Transaction recognition
  - Event mining
- Bayesian learning tools: extending ABLE
- Advancing theory
  - Analysis of the naïve Bayes classifier
  - Inference in Bayesian networks
- Summary and future directions
Slide 32: From naïve Bayes to Bayesian networks
- Naïve Bayes model: independent features given the class
- Bayesian network (BN) model: any joint probability distribution
Example (lung cancer network): nodes Smoking (S), lung Cancer (C), Bronchitis (B), X-ray (X), Dyspnoea (D), with
P(S, C, B, X, D) = P(S) P(C|S) P(B|S) P(X|C,S) P(D|C,B)

CPD for P(D | C, B):
  C  B  | D=0  D=1
  0  0  | 0.1  0.9
  0  1  | 0.7  0.3
  1  0  | 0.8  0.2
  1  1  | 0.9  0.1

Query: P(lung cancer = yes | smoking = no, dyspnoea = yes) = ?
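The query can be answered by brute-force enumeration over the factorization. In the sketch below only the P(D | C, B) table comes from the slide; the remaining CPDs are hypothetical numbers for illustration:

```python
from itertools import product

# The slide's lung-cancer network: P(S)P(C|S)P(B|S)P(X|C,S)P(D|C,B).
P_S = 0.3                                                   # hypothetical
P_C = {0: 0.01, 1: 0.10}                                    # P(C=1|S), hypothetical
P_B = {0: 0.05, 1: 0.30}                                    # P(B=1|S), hypothetical
P_X = {(0, 0): 0.05, (0, 1): 0.1, (1, 0): 0.9, (1, 1): 0.95}  # P(X=1|C,S), hypothetical
P_D = {(0, 0): 0.9, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}    # P(D=1|C,B): slide CPT

def joint(s, c, b, x, d):
    """Chain-rule factorization from the slide."""
    def f(p1, v): return p1 if v else 1 - p1
    return (f(P_S, s) * f(P_C[s], c) * f(P_B[s], b)
            * f(P_X[(c, s)], x) * f(P_D[(c, b)], d))

# P(cancer=yes | smoking=no, dyspnoea=yes): enumerate hidden B and X.
num = sum(joint(0, 1, b, x, 1) for b, x in product((0, 1), repeat=2))
den = sum(joint(0, c, b, x, 1) for c, b, x in product((0, 1), repeat=3))
print(num / den)
```

This exhaustive enumeration is exponential in the number of hidden variables, which is exactly why the NP-completeness and approximation issues on the following slides matter.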
Slide 33: Example: printer troubleshooting (Microsoft Windows 95) [Heckerman, 95]
(Figure: a Bayesian network whose nodes include Print Output OK, Application Output OK, Print Data OK, GDI Data Input OK, GDI Data Output OK, Correct Driver, Uncorrupted Driver, Correct Driver Settings, Print Spooling On, Spool Process OK, Spooled Data OK, Local Path OK, Net Path OK, Network Up, Correct Printer Path, Net Cable Connected, Local Cable Connected, Correct Local Port, Correct Printer Selected, Net/Local Printing, PC to Printer Transport OK, Printer On and Online, Printer Data OK, Printer Memory Adequate, Paper Loaded, Local Disk Space Adequate.)
Slide 34: How to use Bayesian networks?
(Figure: causes and symptoms linked in a network.)
- Diagnosis: P(cause | symptom) = ?
- Prediction: P(symptom | cause) = ?
- Classification: P(class | data) = ?
- MEU decision-making (given a utility function)
- Applications: medicine, stock market, bio-informatics, eCommerce, performance management, etc.
- These are NP-complete inference problems, hence approximate algorithms.
Slide 35: Local approximation scheme: "mini-buckets" (paper submitted to JACM)
- Idea: reduce the complexity of inference by ignoring some dependencies
- Successfully used for approximating the Most Probable Explanation (MPE); very efficient on real-life (medical, decoding) and synthetic problems
- Less "noise" yields higher accuracy, similarly to naïve Bayes! (Figure: approximation accuracy vs. noise.)
- General theory needed: independence assumptions and "almost-deterministic" distributions
- Potential impact: efficient inference in complex performance-management models (e.g., event mining, system dependency models)
Slide 36: Summary
- Theory and algorithms: analysis of naïve Bayes accuracy (research report); approximate Bayesian inference (submitted paper); patent on meta-learning
- Machine-learning tools (alphaWorks): extended ABLE with a Bayesian classifier; applied the classifier to other IBM projects (video character recognition, environmental data analysis)
- Performance management: end-user transaction recognition (Lotus CoC) -- novel method, patent, paper, applied to Lotus Notes; in progress: event mining (USAA), probing strategies (EPP)
Slide 37: Future directions
Research interest: automated learning and inference, spanning practical problems, generic tools, and theory.
- Performance management:
  - Transaction recognition: better feature selection, segmentation
  - Event mining: Bayes net models, clustering
  - Web log analysis: segmentation / classification / clustering
  - Modeling system dependencies: Bayes nets
  - "Technology transfer": a generic approach to "event streams" (EUTs, system events, web page accesses)
- ML library / ABLE:
  - Bayesian learning: general Bayes nets, temporal BNs, incremental learning
  - Bayesian inference: exact inference, approximations
  - Other tools: SVMs, decision trees; combined tools, meta-learning tools
- Analysis of algorithms:
  - Naïve Bayes accuracy: other distribution types
  - Accuracy of local inference approximations
  - Comparing model selection criteria (e.g., for Bayes net learning)
  - Relative analysis and combination of classifiers (Bayes / max-margin / decision trees)
  - Incremental learning
Slide 38: Collaborations
- Transaction recognition: J. Hellerstein, T. Jayram (Watson)
- Event mining: J. Hellerstein, R. Vilalta, S. Ma, C. Perng (Watson)
- ABLE: J. Bigus, R. Vilalta (Watson)
- Video character recognition: C. Dorai (Watson)
- MDL approach to segmentation: B. Dom (Almaden)
- Approximate inference in Bayes nets: R. Dechter (UCI)
- Meta-learning: R. Vilalta (Watson)
- Environmental data analysis: Y. Chang (Watson)
Slide 39: Machine learning discussion group
- Weekly seminars: 11:30-2:30 (with lunch) in 1S-F40
- Active group members: Mark Brodie, Vittorio Castelli, Joe Hellerstein, Daniel Oblinger, Jayram Thathachar, Irina Rish (more people joined recently)
- Agenda: discussions of recent ML papers and book chapters ("Pattern Classification" by Duda, Hart, and Stork, 2000); brainstorming sessions on particular ML topics
- Recent discussions: accuracy of Bayesian classifiers (naïve Bayes)
- Web site: http://reswat4.research.ibm.com/projects/mlreadinggroup/mlreadinggroup.nsf/main/toppage