UH-DMML: Ongoing Data Mining Research (2006-2009). Data Mining and Machine Learning Group, Computer Science Department, University of Houston

Presentation transcript:

UH-DMML: Ongoing Data Mining Research
Data Mining and Machine Learning Group, Computer Science Department, University of Houston, TX
August 8, 2008
Abraham Bagherjeiran*, Ulvi Celepcikay, Chun-Sheng Chen, Ji Yeon Choo*, Wei Ding*, Paulo Martins, Christian Giusti*, Rachsuda Jiamthapthaksin, Dan Jiang*, Seungchan Lee, Rachana Parmar*, Vadeerat Rinsurongkawong, Justin Thomas*, Banafsheh Vaezian*, Jing Wang*, Dr. Christoph F. Eick

Current Topics Investigated
- Development of clustering algorithms with plug-in fitness functions
- Discovering regional knowledge in geo-referenced datasets
- Shape-aware clustering algorithms
- Discovering risk patterns of arsenic
- Emergent pattern discovery
- Machine learning: distance function learning, adaptive clustering, using machine learning for spacecraft simulation
- Cougar^2: Open Source DMML Framework
- Multi-run multi-objective clustering
[Diagram: the Region Discovery Framework and its applications, linking Spatial Databases, a Database Integration Tool, a Data Set, a Domain Expert, a Measure of Interestingness Acquisition Tool, a Fitness Function, a Family of Clustering Algorithms, a Region Discovery Display, Visualization Tools, and a Ranked Set of Interesting Regions and their Properties.]

1. Development of Clustering Algorithms with Plug-in Fitness Functions

Clustering with Plug-in Fitness Functions
Motivation:
- Finding subgroups in geo-referenced datasets has many applications.
- However, in many applications the subgroups to be searched for do not share the characteristics considered by traditional clustering algorithms, such as cluster compactness and separation.
- Consequently, it is desirable to develop clustering algorithms that provide plug-in fitness functions, allowing domain experts to express the desirable characteristics of the subgroups they are looking for.
- Only very few clustering algorithms published in the literature provide plug-in fitness functions; consequently, our research modifies and extends existing clustering paradigms to provide such capabilities.
- Many other applications for clustering with plug-in fitness functions exist.
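To make the idea concrete, here is a minimal, hypothetical sketch of a representative-based clusterer that accepts an arbitrary plug-in fitness function and keeps the clustering that scores best. It is not the actual SCEC/SRIDHCR/CLEVER code; class and method names are illustrative, and it assumes k does not exceed the number of points.

```java
import java.util.*;
import java.util.function.ToDoubleFunction;

// Illustrative sketch: a representative-based clusterer driven entirely by a
// plug-in fitness function q(clustering) rather than by compactness/separation.
public class PluginClusterer {

    /** A clustering is a list of clusters; each cluster is a list of point indices. */
    public static List<List<Integer>> cluster(double[][] points,
                                              int k,
                                              ToDoubleFunction<List<List<Integer>>> fitness,
                                              int restarts,
                                              long seed) {
        Random rnd = new Random(seed);
        List<List<Integer>> best = null;
        double bestQ = Double.NEGATIVE_INFINITY;
        for (int r = 0; r < restarts; r++) {
            // Pick k distinct random representatives and assign points to the closest one.
            int[] reps = rnd.ints(0, points.length).distinct().limit(k).toArray();
            List<List<Integer>> clustering = assign(points, reps);
            double q = fitness.applyAsDouble(clustering);   // plug-in interestingness, not compactness
            if (q > bestQ) { bestQ = q; best = clustering; }
        }
        return best;
    }

    private static List<List<Integer>> assign(double[][] pts, int[] reps) {
        List<List<Integer>> clusters = new ArrayList<>();
        for (int i = 0; i < reps.length; i++) clusters.add(new ArrayList<>());
        for (int p = 0; p < pts.length; p++) {
            int bestRep = 0;
            double bestD = Double.MAX_VALUE;
            for (int i = 0; i < reps.length; i++) {
                double d = 0;
                for (int j = 0; j < pts[p].length; j++) {
                    double diff = pts[p][j] - pts[reps[i]][j];
                    d += diff * diff;
                }
                if (d < bestD) { bestD = d; bestRep = i; }
            }
            clusters.get(bestRep).add(p);
        }
        return clusters;
    }
}
```

A domain expert would supply the fitness argument, for example a function that rewards clusters with high purity with respect to a class attribute of interest.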

Current Suite of Clustering Algorithms
- Representative-based: SCEC, SRIDHCR, SPAM, CLEVER
- Grid-based: SCMRG
- Agglomerative: MOSAIC
- Density-based: SCDE (not really plug-in, but some fitness functions can be simulated)
[Diagram: taxonomy of clustering algorithms with branches for representative-based, grid-based, agglomerative-based, and density-based approaches.]

2. Discovering Regional Knowledge in Geo-Referenced Datasets

Mining Regional Knowledge in Spatial Datasets
Objective: Develop and implement an integrated framework to automatically discover interesting regional patterns in spatial datasets.
[Diagram: Framework for Mining Regional Knowledge, linking Spatial Databases, an Integrated Data Set, Domain Experts, Measures of Interestingness, Fitness Functions, a Family of Clustering Algorithms, and Regional Association Mining Algorithms to produce a Ranked Set of Interesting Regions and their Properties and, ultimately, Regional Knowledge. Annotated examples: hierarchical grid-based and density-based algorithms; spatial risk patterns of arsenic.]

Finding Regional Co-location Patterns in Spatial Datasets
Objective: Find co-location regions using various clustering algorithms and novel fitness functions.
Applications:
1. Finding regions on planet Mars where shallow and deep ice are co-located, using point and raster datasets. In Figure 1, regions in red have very high co-location and regions in blue have anti co-location.
2. Finding co-location patterns involving chemical concentrations with values on the wings of their statistical distribution in Texas' ground water supply. Figure 2 indicates discovered regions and their associated chemical patterns.
Figure 1: Co-location regions involving deep and shallow ice on Mars. Figure 2: Chemical co-location patterns in the Texas water supply.

Regional Pattern Discovery via Principal Component Analysis (Oner Ulvi Celepcikay)
Objective: Discover regions and regional patterns using Principal Component Analysis (PCA).
Applications: Region discovery, regional pattern discovery in spatial data (e.g., finding interesting sub-regions in Texas where arsenic is highly correlated with fluoride and pH), and regional regression.
Idea: Correlation patterns among attributes tend to be hidden globally, but with the help of statistical approaches and our region discovery framework, interesting regional correlations among the attributes can be discovered.
[Diagram: a Region Discovery step (calculate principal components and variance captured; apply a PCA-based fitness function and assign rewards; discover regions and globally hidden regional patterns) followed by a Post-Processing step.]

Regional Pattern Discovery via Principal Component Analysis (continued)
[Diagram: the same Region Discovery / Post-Processing pipeline, with the post-processing step detailed.]
Post-processing using PCA results: (a) a PCA-based distance matrix; (b) a Highest Correlated Attributes Set (HCAS) distance matrix.
Post-processing using regression analysis: a global regression model, a regional effects model, and a t-statistics model (to test whether the difference between regions is statistically significant).
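The sketch below illustrates one plausible PCA-based interestingness measure for a candidate region: it rewards regions whose first principal component captures a large share of the attribute variance, i.e., regions with strong internal correlation structure. This is an assumption-laden illustration; the fitness function actually used in the framework may be defined differently.

```java
import java.util.Arrays;

// Hedged sketch of a PCA-based reward for a candidate region: the fraction of
// the region's attribute variance captured by its first principal component.
public class PcaRegionReward {

    /** rows = objects in the region, cols = non-spatial attributes; assumes at least 2 rows. */
    public static double reward(double[][] region) {
        int n = region.length, d = region[0].length;
        double[] mean = new double[d];
        for (double[] row : region)
            for (int j = 0; j < d; j++) mean[j] += row[j] / n;

        // Sample covariance matrix of the region's attributes.
        double[][] cov = new double[d][d];
        for (double[] row : region)
            for (int a = 0; a < d; a++)
                for (int b = 0; b < d; b++)
                    cov[a][b] += (row[a] - mean[a]) * (row[b] - mean[b]) / (n - 1);

        // Dominant eigenvalue via power iteration = variance along the first PC.
        double[] v = new double[d];
        Arrays.fill(v, 1.0 / Math.sqrt(d));
        double lambda = 0;
        for (int it = 0; it < 100; it++) {
            double[] w = new double[d];
            for (int a = 0; a < d; a++)
                for (int b = 0; b < d; b++) w[a] += cov[a][b] * v[b];
            double norm = 0;
            for (double x : w) norm += x * x;
            norm = Math.sqrt(norm);
            if (norm == 0) break;                 // degenerate region, no variance
            lambda = norm;
            for (int a = 0; a < d; a++) v[a] = w[a] / norm;
        }
        double trace = 0;
        for (int a = 0; a < d; a++) trace += cov[a][a];
        return trace == 0 ? 0 : lambda / trace;   // share of variance on the first PC
    }
}
```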

3. Shape-Aware Clustering Algorithms

Discovering Clusters of Arbitrary Shapes (Rachsuda Jiamthapthaksin, Christian Giusti, and Jiyeon Choo)
Objective: Detect arbitrary-shape clusters effectively and efficiently.
- 1st approach: Develop cluster evaluation measures for non-spherical cluster shapes. Derive a shape signature for a given shape (boundary-based, region-based, or skeleton-based shape representation), then transform the shape signature into a fitness function and use it in a clustering algorithm.
- 2nd approach: Approximate arbitrary shapes using unions of small convex polygons.
- 3rd approach: Employ density estimation techniques for discovering arbitrary-shape clusters.
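As one illustration of a boundary-based shape signature, the sketch below computes the circularity of a cluster's boundary polygon (1 for a circle, smaller for elongated or ragged shapes); such a value could become one term of a shape-aware fitness function. The specific signature is an illustrative assumption, not necessarily one of the measures developed in this work.

```java
// Hedged sketch of a boundary-based shape signature: circularity = 4*pi*Area / Perimeter^2.
public class ShapeSignature {

    /** Boundary polygon given as arrays of x and y coordinates of its vertices, in order. */
    public static double circularity(double[] x, double[] y) {
        int n = x.length;
        double area2 = 0, perimeter = 0;
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;                       // next vertex, wrapping around
            area2 += x[i] * y[j] - x[j] * y[i];        // shoelace formula (twice the signed area)
            perimeter += Math.hypot(x[j] - x[i], y[j] - y[i]);
        }
        double area = Math.abs(area2) / 2.0;
        return perimeter == 0 ? 0 : 4 * Math.PI * area / (perimeter * perimeter);
    }
}
```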

4. Discovering Risk Patterns of Arsenic

Discovering Spatial Patterns of Risk from Arsenic: A Case Study of Texas Ground Water (Wei Ding, Vadeerat Rinsurongkawong, and Rachsuda Jiamthapthaksin)
Objective: Analysis of arsenic contamination and its causes.
- Collaboration with Dr. Bridget Scanlon and her research group at the University of Texas at Austin.
- Our approach
- Experimental results

5. Emergent Pattern Discovery

Objectives of Emergent Pattern Discovery
- Emergent patterns capture how the most recent data differ from data in the past; emergent pattern discovery finds what is new in the data.
- Challenges of emergent pattern discovery include:
  - The development of a formal framework that characterizes different types of emergent patterns
  - The development of a methodology to detect emergent patterns in spatio-temporal datasets
  - The capability to find emergent patterns in regions of arbitrary shape and granularity
  - The development of scalable emergent pattern discovery algorithms that can cope with large data sizes and large numbers of patterns
[Figure: emergent pattern discovery for earthquake data, showing the data at time 0 and time 1 and the change from time 0 to time 1.]

Change Analysis by Comparing Clusters

Change Predicates
- Agreement(r, r') = |r ∩ r'| / |r ∪ r'|
- Containment(r, r') = |r ∩ r'| / |r|
- Novelty(r') = r' − (r1 ∪ … ∪ rk)
- Relative-Novelty(r') = |r' − (r1 ∪ … ∪ rk)| / |r'|
- Disappearance(r) = r − (r'1 ∪ … ∪ r'k)
- Relative-Disappearance(r) = |r − (r'1 ∪ … ∪ r'k)| / |r|
Remark: |·| denotes the size (cardinality) operator; r, r1, …, rk are the regions obtained at the earlier time and r', r'1, …, r'k those obtained at the later time.
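These predicates map directly onto set operations. A small sketch follows, with regions represented as sets of object ids; the class and helper names are illustrative.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Direct implementation of the change predicates above, with a region r modeled
// as the set of objects it contains and |.| as set cardinality.
public class ChangePredicates {

    public static <T> double agreement(Set<T> r, Set<T> rPrime) {
        return intersect(r, rPrime).size() / (double) union(List.of(r, rPrime)).size();
    }

    public static <T> double containment(Set<T> r, Set<T> rPrime) {
        return intersect(r, rPrime).size() / (double) r.size();
    }

    /** Novelty(r') = r' minus the union of the old regions r1..rk. */
    public static <T> Set<T> novelty(Set<T> rPrime, List<Set<T>> oldRegions) {
        Set<T> result = new HashSet<>(rPrime);
        result.removeAll(union(oldRegions));
        return result;
    }

    public static <T> double relativeNovelty(Set<T> rPrime, List<Set<T>> oldRegions) {
        return novelty(rPrime, oldRegions).size() / (double) rPrime.size();
    }

    /** Disappearance(r) = r minus the union of the new regions r'1..r'k (same difference, roles swapped). */
    public static <T> Set<T> disappearance(Set<T> r, List<Set<T>> newRegions) {
        return novelty(r, newRegions);
    }

    public static <T> double relativeDisappearance(Set<T> r, List<Set<T>> newRegions) {
        return disappearance(r, newRegions).size() / (double) r.size();
    }

    private static <T> Set<T> intersect(Set<T> a, Set<T> b) {
        Set<T> s = new HashSet<>(a);
        s.retainAll(b);
        return s;
    }

    private static <T> Set<T> union(List<Set<T>> sets) {
        Set<T> s = new HashSet<>();
        for (Set<T> x : sets) s.addAll(x);
        return s;
    }
}
```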

6. Machine Learning

Online Learning of Spacecraft Simulation Models
- Developed an online machine learning methodology for increasing the accuracy of spacecraft simulation models
- Directly applied to the International Space Station for use in the Johnson Space Center Mission Control Center
- Approach
  - Use a regional sliding-window technique, a contribution of this research, that regionally maintains the most recent data
  - Build new system models incrementally from streaming sensor data using the best training approach (regression trees, model trees, artificial neural networks, etc.)
  - Use a knowledge fusion approach, also a contribution of this research, to reduce predictive error spikes when making predictions in situations that are quite different from the training scenarios
- Benefits
  - Increases the effectiveness of NASA mission planning, real-time mission support, and training
  - Reacts to the dynamic and complex behavior of the International Space Station (ISS)
  - Removes the need for the current approach of refining models manually
- Results
  - Substantial error reductions, up to 76% in our experimental evaluation on the ISS Electrical Power System
  - Cost reductions due to complete automation of the previously manually-intensive approach
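A hedged sketch of the regional sliding-window idea follows: only the most recent samples are kept per region of the input space, and that region's model is rebuilt from them as sensor data stream in. The region keys, window size, and the Model/Trainer interfaces are illustrative assumptions, not the actual ISS implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: per-region sliding windows of recent samples, with the
// region's model rebuilt whenever its window changes.
public class RegionalSlidingWindow {

    public interface Model { double predict(double[] x); }
    public interface Trainer { Model train(Iterable<double[]> samples); }

    private final int windowSize;
    private final Trainer trainer;
    private final Map<String, Deque<double[]>> windows = new HashMap<>();
    private final Map<String, Model> models = new HashMap<>();

    public RegionalSlidingWindow(int windowSize, Trainer trainer) {
        this.windowSize = windowSize;
        this.trainer = trainer;
    }

    /** Add one streaming sensor sample to its region and rebuild that region's model. */
    public void observe(String regionKey, double[] sample) {
        Deque<double[]> w = windows.computeIfAbsent(regionKey, k -> new ArrayDeque<>());
        w.addLast(sample);
        if (w.size() > windowSize) w.removeFirst();   // discard the oldest sample in this region
        models.put(regionKey, trainer.train(w));      // any trainer: regression tree, model tree, ANN, ...
    }

    public double predict(String regionKey, double[] x) {
        Model m = models.get(regionKey);
        if (m == null) throw new IllegalStateException("no model for region " + regionKey);
        return m.predict(x);
    }
}
```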

Distance Function Learning Using Intelligent Weight Updating and Supervised Clustering
Distance function: measures the similarity between objects.
Objective: Construct a good distance function using AI and machine learning techniques that learn attribute weights.
[Figure: a bad distance function Θ1 versus a good distance function Θ2; diagram of the learning loop in which a distance function Q is used to cluster the dataset X, the clustering of X is evaluated to obtain the goodness q(X) of Q, and a weight updating scheme / search strategy proposes the next candidate distance function.]
The framework:
- Generate a distance function: apply weight updating schemes / search strategies to find a good distance function candidate.
- Clustering: use this distance function candidate in a clustering algorithm to cluster the dataset.
- Evaluate the distance function: evaluate the goodness of the distance function by evaluating the clustering result according to a predefined evaluation function.
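The weight-updating loop can be sketched as a simple search over the attribute weights of a weighted Euclidean distance. The perturbation scheme and the evaluation callback q are illustrative placeholders for the weight updating schemes and clustering evaluation described above, not the group's actual algorithms.

```java
import java.util.Arrays;
import java.util.Random;
import java.util.function.ToDoubleFunction;

// Illustrative sketch: hill-climb attribute weights of a weighted Euclidean
// distance; q(weights) clusters the data with that distance and scores the result.
public class DistanceFunctionLearner {

    public static double weightedDistance(double[] a, double[] b, double[] w) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += w[i] * (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(d);
    }

    /** q scores the clustering produced with a given weight vector (higher is better). */
    public static double[] learnWeights(int dims,
                                        ToDoubleFunction<double[]> q,
                                        int iterations,
                                        long seed) {
        Random rnd = new Random(seed);
        double[] best = new double[dims];
        Arrays.fill(best, 1.0);                          // start from uniform weights
        double bestScore = q.applyAsDouble(best);
        for (int it = 0; it < iterations; it++) {
            double[] cand = best.clone();
            int i = rnd.nextInt(dims);
            cand[i] = Math.max(1e-6, cand[i] * (0.5 + rnd.nextDouble()));  // perturb one weight
            double s = q.applyAsDouble(cand);
            if (s > bestScore) { bestScore = s; best = cand; }             // keep improvements
        }
        return best;
    }
}
```

In use, q would run the chosen clustering algorithm with weightedDistance under the candidate weights and return a clustering evaluation score such as cluster purity.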

7. Cougar^2: Open Source Data Mining and Machine Learning Framework

Cougar^2: Open Source Data Mining and Machine Learning Framework
Rachana Parmar, Justin Thomas, Rachsuda Jiamthapthaksin, Oner Ulvi Celepcikay
Department of Computer Science, University of Houston, Houston TX

ABSTRACT
Cougar^2 [1] is a new framework for data mining and machine learning. Its goal is to simplify the transition of algorithms from paper to actual implementation. It provides an intuitive API for researchers. Its design is based on object-oriented design principles and patterns. Developed using a test-first development (TFD) approach, it advocates TFD for new algorithm development. The framework has a unique design that separates the learning algorithm configuration, the actual algorithm itself, and the results produced by the algorithm. It allows easy storage and sharing of experiment configurations and results.

MOTIVATION
Typically, machine learning and data mining algorithms are written using software like Matlab, Weka, or RapidMiner (formerly YALE). Software like Matlab simplifies the process of converting an algorithm to code with little programming, but one often has to sacrifice speed and usability. At the other extreme, software like Weka and RapidMiner increases usability by providing a GUI and plug-ins, which requires researchers to develop GUI code. Cougar^2 tries to address some of the issues with these software packages.

BENEFITS OF COUGAR^2
- Reusable and efficient software
- Test-first development
- Platform independent
- Supports research efforts into new algorithms
- Analyze experiments by reading and reusing learned models
- Intuitive API for researchers rather than a GUI for end users
- Easy to share experiments and experiment results

FRAMEWORK ARCHITECTURE
The framework architecture follows object-oriented design patterns and principles. It has been developed using the test-first development approach, and adding new code with unit tests is easy. There are two major components of the framework: Dataset and Learning Algorithm. Datasets deal with how to read and write data. We have two types of datasets: NumericDataset, where all values are of type double, and NominalDataset, where all values are of type int and each integer value is mapped to a value of a nominal attribute. We provide a high-level interface for Dataset, so one can write code against this interface and switching from one type of dataset to another becomes easy. Learning algorithms work on these data and return reusable results. Using a learning algorithm requires configuring the learner, running the learner, and using the model it builds. We have separated these tasks into three parts: the Factory, which does the configuration; the Learner, which performs the actual learning/data mining task and builds the model; and the Model, which can be applied to new datasets or analyzed.
[Diagram: a parameter configuration is passed to a Factory, which creates a Learner; the Learner uses a Dataset and builds a Model; the Model is applied to a (new) Dataset.]

CURRENT WORK
Several algorithms have been implemented using the framework, including SPAM, CLEVER, and SCDE. The MOSAIC algorithm is currently under development. A region discovery framework and various interestingness measures, such as purity, variance, and mean squared error, have also been implemented using the framework.
Developed using: Java, JUnit, EasyMock. Hosted at:

A SUPERVISED LEARNING EXAMPLE
[Diagram: a Decision Tree Factory creates a Decision Tree Learner, which builds a Model (a decision tree, e.g. splitting on Outlook (Sunny/Overcast) and Temperature (Hot/Cold) into Yes/No leaves) from a Dataset; the Model is then applied to a new Dataset.]

A REGION DISCOVERY EXAMPLE
[Diagram: a Region Discovery Factory creates a Region Discovery Algorithm, which builds a Region Discovery Model from a Dataset.]

[1] The first version of Cougar^2 was developed by a Ph.D. student of the research group, Abraham Bagherjeiran.
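The Factory / Learner / Model separation described in the architecture section can be sketched as follows. The interfaces below are illustrative only and are not the actual Cougar^2 API.

```java
// Hedged sketch of the Factory / Learner / Model separation (illustrative, not the Cougar^2 API).
public class FactoryLearnerModelSketch {

    interface Dataset { int size(); double[] row(int i); }

    /** The Model is the reusable result of learning; it can be stored, shared, and re-applied. */
    interface Model { double[] apply(Dataset test); }

    /** The Learner performs the actual learning/data mining task and builds a Model. */
    interface Learner { Model learn(Dataset train); }

    /** The Factory holds the parameter configuration and creates configured Learners. */
    interface Factory { Learner create(); }

    /** Typical experiment flow: configure, learn, then apply the learned model to new data. */
    static double[] runExperiment(Factory factory, Dataset train, Dataset test) {
        Learner learner = factory.create();   // configuration lives in the factory
        Model model = learner.learn(train);   // the learner builds the model
        return model.apply(test);             // the model is applied to new data
    }

    /** Toy example: a factory for a learner whose model predicts the training mean of one column. */
    static Factory meanPredictorFactory(int targetColumn) {
        return () -> train -> {
            double sum = 0;
            for (int i = 0; i < train.size(); i++) sum += train.row(i)[targetColumn];
            final double mean = sum / train.size();
            return test -> {
                double[] out = new double[test.size()];
                java.util.Arrays.fill(out, mean);
                return out;
            };
        };
    }
}
```

Because the configuration lives in the Factory and the result lives in the Model, an experiment's configuration and its learned model can be stored and shared independently of the Learner that produced them, which matches the storage-and-sharing goal stated in the abstract.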

8. Multi-Run Multi-Objective Clustering

Objectives of MRMO-Clustering
1. Provide a system that automatically conducts experiments: different clustering algorithm and fitness function parameters are selected using reinforcement learning, experiments are run, the promising results are stored, more experiments are run, and finally the results are summarized and presented to the user.
2. Improve clustering results by using clusters obtained in different runs of a clustering algorithm; the final clustering result is constructed by choosing clusters that have been obtained in different runs.
3. Support finding clusters that are good with respect to multiple objective (fitness) functions.
4. Overcome the initialization problems that most clustering algorithms face.

An MRMO System Architecture
[Diagram: a loop over geo-referenced datasets consisting of (1) a parameter selecting unit, (2) clustering algorithms, (3) a utilities computing unit, (4) evaluation of all results (are more results needed? yes: loop back, no: continue), (5) a storage unit, and (6) a summary generation unit. Parameter selection is driven by reinforcement learning with state A_PARAM, state transition operators over A_PARAM, and a utility function based on the fitness function (cross_quality + novelty + computing_time); the storage unit keeps A_PARAM and the clustering results.]
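A hedged sketch of the multi-run loop: each run produces clusters under different parameters, each cluster is scored by a utility combining its quality and its novelty relative to the clusters already stored, and promising clusters are kept. Random parameter seeds stand in for the reinforcement-learning-based parameter selection of the actual system; names and the threshold scheme are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.function.Function;
import java.util.function.ToDoubleFunction;

// Illustrative sketch of multi-run clustering with a quality + novelty utility.
public class MultiRunClustering {

    /** A cluster is the set of object ids it contains. */
    public static List<Set<Integer>> run(int runs,
                                         Function<Long, List<Set<Integer>>> clusterWithParams,
                                         ToDoubleFunction<Set<Integer>> quality,
                                         double utilityThreshold) {
        List<Set<Integer>> stored = new ArrayList<>();
        for (long run = 0; run < runs; run++) {
            for (Set<Integer> c : clusterWithParams.apply(run)) {     // run number stands in for selected parameters
                double utility = quality.applyAsDouble(c) + novelty(c, stored);
                if (utility > utilityThreshold) stored.add(c);        // keep only promising clusters
            }
        }
        return stored;   // the final clustering/summary is assembled from the stored clusters
    }

    /** Novelty: fraction of a cluster's objects not covered by any stored cluster. */
    private static double novelty(Set<Integer> c, List<Set<Integer>> stored) {
        long uncovered = c.stream()
                .filter(o -> stored.stream().noneMatch(s -> s.contains(o)))
                .count();
        return c.isEmpty() ? 0 : uncovered / (double) c.size();
    }
}
```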