
1 DATA MINING: Introductory and Advanced Topics, Part I. References from Dunham.

2 Data Mining Outline (© Prentice Hall)
PART I
–Introduction
–Related Concepts
–Data Mining Techniques
PART II
–Classification
–Clustering
–Association Rules
PART III
–Web Mining
–Spatial Mining
–Temporal Mining

3 Introduction Outline
Define data mining
Data mining vs. databases
Basic data mining tasks
Data mining development
Data mining issues
Goal: Provide an overview of data mining.

4 Introduction
Data is growing at a phenomenal rate.
Users expect more sophisticated information.
How? UNCOVER HIDDEN INFORMATION: DATA MINING

5 Data Mining Definition
Finding hidden information in a database
Fit data to a model
Similar terms:
–Exploratory data analysis
–Data driven discovery
–Deductive learning

6 Data Mining Algorithm
Objective: Fit data to a model
–Descriptive
–Predictive
Preference: technique to choose the best model
Search: technique to search the data
–“Query”

7 Database Processing vs. Data Mining Processing
Query: Database: well defined (SQL). Data mining: poorly defined; no precise query language.
Data: Database: operational data. Data mining: not operational data.
Output: Database: precise; a subset of the database. Data mining: fuzzy; not a subset of the database.

8 Query Examples
Database:
–Find all customers who have purchased milk.
–Find all credit applicants with last name of Smith.
–Identify customers who have purchased more than $10,000 in the last month.
Data Mining:
–Find all items which are frequently purchased with milk. (association rules)
–Find all credit applicants who are poor credit risks. (classification)
–Identify customers with similar buying habits. (clustering)

9 Data Mining Models and Tasks

10 Basic Data Mining Tasks
Classification maps data into predefined groups or classes.
–Supervised learning
–Pattern recognition
–Prediction
Regression is used to map a data item to a real valued prediction variable.
Clustering groups similar data together into clusters.
–Unsupervised learning
–Segmentation
–Partitioning

11 Classification

12 Classification Examples

13 Basic Data Mining Tasks (cont’d)
Summarization maps data into subsets with associated simple descriptions.
–Characterization
–Generalization
Link Analysis uncovers relationships among data.
–Affinity Analysis
–Association Rules
–Sequential Analysis determines sequential patterns.

14 Ex: Time Series Analysis
Example: Stock Market
Predict future values
Determine similar patterns over time
Classify behavior

15 Data Mining vs. KDD
Knowledge Discovery in Databases (KDD): the process of finding useful information and patterns in data.
Data Mining: the use of algorithms to extract the information and patterns derived by the KDD process.

16 KDD Process (Reference: Dunham)
Selection: Obtain data from various sources.
Preprocessing: Cleanse data.
Transformation: Convert to a common format; transform to a new format.
Data Mining: Obtain desired results.
Interpretation/Evaluation: Present results to the user in a meaningful manner.
Modified from [FPSS96C]

17 KDD Process Ex: Web Log
Selection:
–Select log data (dates and locations) to use
Preprocessing:
–Remove identifying URLs
–Remove error logs
Transformation:
–Sessionize (sort and group)
Data Mining:
–Identify and count patterns
–Construct data structure
Interpretation/Evaluation:
–Identify and display frequently accessed sequences
Potential User Applications:
–Cache prediction
–Personalization

18 Knowledge Discovery (KDD) Process (Data Mining: Concepts and Techniques, Kamber; February 24, 2016)
–Data mining is the core of the knowledge discovery process.
Pipeline: Databases → Data Cleaning → Data Integration → Data Warehouse → Task-relevant Data Selection → Data Mining → Pattern Evaluation

19 Data Mining Development
Contributing areas and techniques: Similarity Measures, Hierarchical Clustering, IR Systems, Imprecise Queries, Textual Data, Web Search Engines, Bayes Theorem, Regression Analysis, EM Algorithm, K-Means Clustering, Time Series Analysis, Neural Networks, Decision Tree Algorithms, Algorithm Design Techniques, Algorithm Analysis, Data Structures, Relational Data Model, SQL, Association Rule Algorithms, Data Warehousing, Scalability Techniques

20 KDD Issues
Human Interaction: Interfaces may be needed with both domain and technical experts.
Overfitting: When the model does not fit future database states.
Outliers: Data entries that do not fit nicely into the derived model.
Interpretation: Experts are required to interpret the results.
Visualization: View and understand the outputs of data mining algorithms.
Large Datasets: Sampling and parallelization are effective tools to attack the scalability problem.
High Dimensionality: Dimensionality reduction is used to address the curse of dimensionality.

21 KDD Issues (cont’d)
Multimedia Data: complicates or invalidates many proposed algorithms.
Missing Data: may be replaced by estimates.
Irrelevant Data
Noisy Data
Changing Data
Integration
Application

22 Social Implications of DM
Privacy
Profiling
Unauthorized use

23 Data Mining Metrics
Usefulness
Return on Investment (ROI)
Accuracy
Space/Time

24 Database Perspective on Data Mining
Scalability
Real World Data
Updates
Ease of Use

25 Architecture: Typical Data Mining System
Graphical User Interface → Pattern Evaluation → Data Mining Engine → Database or Data Warehouse Server (data cleaning, integration, and selection) → Databases, Data Warehouse, World-Wide Web, and other information repositories; a Knowledge Base supports each stage.

26 Related Concepts Outline
Database/OLTP Systems
Fuzzy Sets and Logic
Information Retrieval (Web Search Engines)
Dimensional Modeling
Data Warehousing
OLAP/DSS
Statistics
Machine Learning
Pattern Matching
Goal: Examine some areas which are related to data mining.

27 DB & OLTP Systems
Schema: (ID, Name, Address, Salary, JobNo)
Data Model: ER, Relational
Transaction
Query: SELECT Name FROM T WHERE Salary > 100000
DM: Only imprecise queries.

28 Fuzzy Sets and Logic
Fuzzy Set: the set membership function is a real valued function with output in the range [0,1].
f(x): probability that x is in F.
1 - f(x): probability that x is not in F.
Ex:
–T = {x | x is a person and x is tall}
–Let f(x) be the probability that x is tall
–Here f is the membership function
DM: Prediction and classification are fuzzy.
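A minimal sketch of a membership function for the "tall" fuzzy set above. The sigmoid shape, the 180 cm midpoint, and the spread are illustrative assumptions, not from the slides; any real valued function into [0,1] would qualify.

```python
import math

def tall_membership(height_cm, midpoint=180.0, spread=10.0):
    # f(x) in [0,1]: degree to which a person of this height is "tall".
    # midpoint and spread are hypothetical tuning parameters.
    return 1.0 / (1.0 + math.exp(-(height_cm - midpoint) / spread))
```

At the midpoint the membership is exactly 0.5, and it rises smoothly toward 1 for taller heights rather than jumping at a crisp threshold.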

29 Fuzzy Sets

30 Classification/Prediction is Fuzzy (figure: simple vs. fuzzy accept/reject decisions over loan amount)

31 Information Retrieval
Information Retrieval (IR): retrieving desired information from textual data.
Library Science
Digital Libraries
Web Search Engines
Traditionally keyword based
Sample query: Find all documents about “data mining”.
DM: Similarity measures; mine text/Web data.

32 Information Retrieval (cont’d)
Similarity: measure of how close a query is to a document.
Documents which are “close enough” are retrieved.
Metrics:
–Precision = |Relevant and Retrieved| / |Retrieved|
–Recall = |Relevant and Retrieved| / |Relevant|
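The two metrics above can be computed directly as set operations; a small sketch (the example document IDs are made up):

```python
def precision_recall(relevant, retrieved):
    # Precision = |Relevant and Retrieved| / |Retrieved|
    # Recall    = |Relevant and Retrieved| / |Relevant|
    relevant, retrieved = set(relevant), set(retrieved)
    hits = len(relevant & retrieved)
    return hits / len(retrieved), hits / len(relevant)

# Hypothetical result set: 4 relevant docs, 3 retrieved, 2 in common
p, r = precision_recall(relevant=[1, 2, 3, 4], retrieved=[3, 4, 5])
```

Here precision is 2/3 (two of the three retrieved documents are relevant) and recall is 1/2 (two of the four relevant documents were found).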

33 IR Query Result Measures and Classification (figure comparing IR retrieval outcomes with classification outcomes)

34 Dimensional Modeling
View data in a hierarchical manner, more as business executives might.
Useful in decision support systems and mining.
Dimension: collection of logically related attributes; an axis for modeling data.
Facts: the data stored.
Ex: Dimensions: products, locations, date. Facts: quantity, unit price.
DM: May view data as dimensional.

35 Relational View of Data

36 Dimensional Modeling Queries
Roll Up: move to a more general dimension level.
Drill Down: move to a more specific dimension level.
Dimension (Aggregation) Hierarchy
SQL uses aggregation.
Decision Support Systems (DSS): computer systems and tools to assist managers in making decisions and solving problems.

37 Cube View of Data

38 Aggregation Hierarchies

39 Star Schema

40 Data Warehousing
“Subject-oriented, integrated, time-variant, nonvolatile” (William Inmon)
Operational Data: data used in the day-to-day needs of a company.
Informational Data: supports other functions such as planning and forecasting.
Data mining tools often access data warehouses rather than operational data.
DM: May access data in a warehouse.

41 Operational vs. Informational Data
Operational Data vs. Data Warehouse:
–Application: OLTP vs. OLAP
–Use: precise queries vs. ad hoc
–Temporal: snapshot vs. historical
–Modification: dynamic vs. static
–Orientation: application vs. business
–Data: operational values vs. integrated
–Size: gigabytes vs. terabytes
–Level: detailed vs. summarized
–Access: often vs. less often
–Response: few seconds vs. minutes
–Data Schema: relational vs. star/snowflake

42 OLAP
OnLine Analytic Processing (OLAP): provides more complex queries than OLTP.
OnLine Transaction Processing (OLTP): traditional database/transaction processing.
Dimensional data; cube view.
Visualization of operations:
–Slice: examine a sub-cube.
–Dice: rotate the cube to look at another dimension.
–Roll Up/Drill Down
DM: May use OLAP queries.

43 OLAP Operations (figure: single cell, multiple cells, slice, dice, roll up, drill down)

44 Statistics
Simple descriptive models
Statistical inference: generalizing a model created from a sample of the data to the entire dataset.
Exploratory Data Analysis:
–Data can actually drive the creation of the model.
–Opposite of the traditional statistical view.
Data mining is targeted to the business user.
DM: Many data mining methods come from statistical techniques.

45 Machine Learning
Machine Learning: the area of AI that examines how to write programs that can learn.
Often used in classification and prediction.
Supervised Learning: learns by example.
Unsupervised Learning: learns without knowledge of correct answers.
Machine learning often deals with small, static datasets.
DM: Uses many machine learning techniques.

46 Pattern Matching (Recognition)
Pattern Matching: finds occurrences of a predefined pattern in the data.
Applications include speech recognition, information retrieval, and time series analysis.
DM: A type of classification.

47 DM vs. Related Topics

48 Data Mining Techniques Outline
Statistical
–Point Estimation
–Models Based on Summarization
–Bayes Theorem
–Hypothesis Testing
–Regression and Correlation
Similarity Measures
Decision Trees
Neural Networks
–Activation Functions
Genetic Algorithms
Goal: Provide an overview of basic data mining techniques.

49 Point Estimation
Point Estimate: estimate a population parameter.
May be made by calculating the parameter for a sample.
May be used to predict a value for missing data.
Ex:
–R contains 100 employees
–99 have salary information
–Mean salary of these is $50,000
–Use $50,000 as the value of the remaining employee’s salary. Is this a good idea?

50 Estimation Error
Bias: the difference between the expected value and the actual value.
Mean Squared Error (MSE): the expected value of the squared difference between the estimate and the actual value.
Why square?
Root Mean Squared Error (RMSE)
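For a finite set of estimates and actual values, MSE and RMSE can be sketched as below (the sample numbers in the usage line are made up):

```python
import math

def mse(estimates, actuals):
    # Mean Squared Error: average squared difference between estimate and actual.
    # Squaring keeps errors positive and penalizes large errors more heavily.
    return sum((e - a) ** 2 for e, a in zip(estimates, actuals)) / len(actuals)

def rmse(estimates, actuals):
    # Root Mean Squared Error: square root of the MSE, back in the original units
    return math.sqrt(mse(estimates, actuals))

err = mse([1, 2, 3], [1, 2, 5])   # one estimate is off by 2, so MSE = 4/3
```

Squaring also makes the error differentiable everywhere, which matters later when error surfaces are minimized (e.g. in regression and neural network learning).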

51 Jackknife Estimate
Jackknife Estimate: an estimate of a parameter obtained by omitting one value from the set of observed values.
Ex: estimate of the mean for X = {x1, …, xn}
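A sketch of the leave-one-out idea for the mean: each jackknife estimate is the mean of the sample with one observation omitted.

```python
def jackknife_means(xs):
    # Leave-one-out means: the i-th estimate omits xs[i].
    # Computed from the total so each estimate is O(1).
    n = len(xs)
    total = sum(xs)
    return [(total - x) / (n - 1) for x in xs]

estimates = jackknife_means([1, 2, 3])   # omit each value in turn
```

For {1, 2, 3} the leave-one-out means are 2.5, 2.0, and 1.5; the spread of these estimates is what the jackknife uses to assess the estimator's variability.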

52 Maximum Likelihood Estimate (MLE)
Obtain parameter estimates that maximize the probability that the sample data occurs for the specific model.
Likelihood function: the joint probability of observing the sample data, obtained by multiplying the individual probabilities.
Maximize L.

53 MLE Example
Coin toss five times: {H,H,H,H,T}
Assuming a perfect coin with H and T equally likely, the likelihood of this sequence is (1/2)^5 = 0.03125.
However, if the probability of an H is 0.8, then the likelihood is (0.8)^4(0.2) = 0.08192.

54 MLE Example (cont’d)
General likelihood formula: L(p) = p^k (1 - p)^(n - k), where k is the number of heads in n tosses.
The estimate for p is then 4/5 = 0.8.
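The coin-toss MLE can be checked numerically: evaluate L(p) = p^4(1 - p) over a grid and take the maximizing p. This is a sketch for illustration; the analytic answer is simply k/n = 4/5.

```python
def likelihood(p, heads=4, tails=1):
    # L(p) = p^heads * (1 - p)^tails for the observed sequence {H,H,H,H,T}
    return p ** heads * (1 - p) ** tails

# Grid search over p in [0, 1]; the maximum sits at the analytic MLE 0.8
grid = [i / 1000 for i in range(1001)]
p_hat = max(grid, key=likelihood)
```

The fair-coin likelihood, likelihood(0.5) = 0.03125, is indeed smaller than likelihood(0.8) = 0.08192, matching the values on the previous slide.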

55 Expectation-Maximization (EM)
Solves estimation problems with incomplete data.
Obtain initial estimates for the parameters.
Iteratively use the estimates for the missing data and continue until convergence.

56 EM Example

57 EM Algorithm

58 Models Based on Summarization
Visualization: frequency distribution, mean, variance, median, mode, etc.
Box Plot

59 Scatter Diagram

60 Bayes Theorem
Posterior Probability: P(h1|xi)
Prior Probability: P(h1)
Bayes Theorem: P(h1|xi) = P(xi|h1) P(h1) / P(xi)
Assigns probabilities of hypotheses given a data value.

61 Bayes Theorem Example
Credit authorizations (hypotheses): h1 = authorize purchase, h2 = authorize after further identification, h3 = do not authorize, h4 = do not authorize but contact police.
Assign twelve data values for all combinations of credit and income.
From training data: P(h1) = 60%; P(h2) = 20%; P(h3) = 10%; P(h4) = 10%.

62 Bayes Example (cont’d)
Training Data

63 Bayes Example (cont’d)
Calculate P(xi|hj) and P(xi).
Ex: P(x7|h1) = 2/6; P(x4|h1) = 1/6; P(x2|h1) = 2/6; P(x8|h1) = 1/6; P(xi|h1) = 0 for all other xi.
Predict the class for x4:
–Calculate P(hj|x4) for all hj.
–Place x4 in the class with the largest value.
–Ex: P(h1|x4) = P(x4|h1) P(h1) / P(x4) = (1/6)(0.6)/0.1 = 1.
–So x4 is in class h1.
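The posterior computation on this slide is a single application of Bayes theorem; a minimal sketch using the slide's numbers:

```python
def posterior(likelihood, prior, evidence):
    # Bayes theorem: P(h|x) = P(x|h) * P(h) / P(x)
    return likelihood * prior / evidence

# Slide values: P(x4|h1) = 1/6, P(h1) = 0.6, P(x4) = 0.1
p_h1_given_x4 = posterior(1 / 6, 0.6, 0.1)   # matches the slide's value of 1
```

Since this posterior is the largest over all hypotheses, x4 is assigned to class h1.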

64 Hypothesis Testing
Find a model to explain behavior by creating and then testing a hypothesis about the data.
The exact opposite of the usual DM approach.
H0: null hypothesis; the hypothesis to be tested.
H1: alternative hypothesis.

65 Chi Squared Statistic
χ² = Σ (O - E)² / E
O: observed value
E: expected value based on the hypothesis
Ex:
–O = {50, 93, 67, 78, 87}
–E = 75
–χ² = 15.55, and therefore significant
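Reproducing the slide's example with the χ² formula (here E is a single expected value shared by every cell):

```python
def chi_squared(observed, expected):
    # χ² = Σ (O - E)² / E, with a common expected value E for each observation
    return sum((o - expected) ** 2 for o in observed) / expected

stat = chi_squared([50, 93, 67, 78, 87], 75)   # ≈ 15.55, as on the slide
```

The deviations (−25, 18, −8, 3, 12) give squared terms summing to 1166, and 1166/75 ≈ 15.55.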

66 Regression
Predict future values based on past values.
Linear regression assumes a linear relationship exists:
y = c0 + c1x1 + … + cnxn
Find the coefficient values that best fit the data.

67 Linear Regression
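For a single predictor, the best-fit coefficients c0 and c1 have a closed ordinary-least-squares form; a sketch (the sample points are made up and lie exactly on y = 1 + 2x):

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = c0 + c1*x (one predictor)
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x
    c1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
          / sum((x - mean_x) ** 2 for x in xs))
    c0 = mean_y - c1 * mean_x   # intercept passes through the mean point
    return c0, c1

c0, c1 = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```

On these points the fit recovers c0 = 1 and c1 = 2 exactly; with noisy data the same formula returns the line minimizing the sum of squared errors.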

68 Correlation
Examine the degree to which the values for two variables behave similarly.
Correlation coefficient r:
–1 = perfect correlation
–-1 = perfect but opposite correlation
–0 = no correlation
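A sketch of the Pearson correlation coefficient, which realizes the r described above:

```python
import math

def pearson_r(xs, ys):
    # r = Σ(x - x̄)(y - ȳ) / sqrt(Σ(x - x̄)² * Σ(y - ȳ)²)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A variable that is an increasing linear function of another gives r = 1; a decreasing one gives r = -1.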

69 Similarity Measures
Determine the similarity between two objects.
Similarity characteristics
Alternatively, a distance measure measures how unlike or dissimilar objects are.

70 Similarity Measures

71 Distance Measures
Measure the dissimilarity between objects.
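Two standard distance measures, sketched for numeric tuples (the choice of Euclidean and Manhattan as examples is mine; the slides list distance measures generically):

```python
import math

def euclidean(a, b):
    # Euclidean distance: sqrt(Σ (a_i - b_i)²)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # Manhattan (city-block) distance: Σ |a_i - b_i|
    return sum(abs(x - y) for x, y in zip(a, b))
```

For the points (0, 0) and (3, 4), the Euclidean distance is 5 while the Manhattan distance is 7; which measure is appropriate depends on how dissimilarity should accumulate across attributes.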

72 Twenty Questions Game

73 Decision Trees
Decision Tree (DT):
–A tree where the root and each internal node is labeled with a question.
–The arcs represent each possible answer to the associated question.
–Each leaf node represents a prediction of a solution to the problem.
A popular technique for classification; the leaf node indicates the class to which the corresponding tuple belongs.

74 Decision Tree Example

75 Decision Trees
A Decision Tree Model is a computational model consisting of three parts:
–A decision tree
–An algorithm to create the tree
–An algorithm that applies the tree to data
Creation of the tree is the most difficult part.
Processing is basically a search similar to that in a binary search tree (although a DT may not be binary).
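The third part of the model, applying the tree to data, is the tree-search described above. A sketch with a small hypothetical credit tree (the attributes, answers, and class labels are invented for illustration):

```python
# Internal nodes hold a question (attribute test); the branches map each
# possible answer to a child; leaves are plain class-label strings.
tree = {
    "question": "income",
    "branches": {
        "high": "authorize",
        "low": {
            "question": "credit",
            "branches": {"good": "authorize", "bad": "reject"},
        },
    },
}

def classify(node, record):
    # Walk from the root, following the arc that matches each answer,
    # until a leaf (a class label) is reached.
    while isinstance(node, dict):
        answer = record[node["question"]]
        node = node["branches"][answer]
    return node
```

Note the tree need not be binary: any number of answers per question simply becomes more branches.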

76 Decision Tree Algorithm

77 DT Advantages/Disadvantages
Advantages:
–Easy to understand.
–Easy to generate rules.
Disadvantages:
–May suffer from overfitting.
–Classifies by rectangular partitioning.
–Does not easily handle nonnumeric data.
–Can be quite large; pruning is necessary.

78 Neural Networks
Based on the observed functioning of the human brain (Artificial Neural Networks, ANN).
Our view of neural networks is very simplistic: we view a neural network (NN) from a graphical viewpoint.
Alternatively, a NN may be viewed from the perspective of matrices.
Used in pattern recognition, speech recognition, computer vision, and classification.

79 Neural Networks
A Neural Network (NN) is a directed graph F = <V, A> with vertices V = {1, 2, …, n} and arcs A = {<i, j> | 1 <= i, j <= n}, with the following restrictions:
–V is partitioned into a set of input nodes VI, hidden nodes VH, and output nodes VO.
–The vertices are also partitioned into layers.
–Any arc <i, j> must have node i in layer h-1 and node j in layer h.
–Arc <i, j> is labeled with a numeric value wij.
–Node i is labeled with a function fi.

80 Neural Network Example

81 NN Node

82 NN Activation Functions
Functions associated with the nodes in the graph.
Output may be in the range [-1,1] or [0,1].
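Two common activation functions realizing the two output ranges mentioned above (the choice of the sigmoid and tanh as examples is mine):

```python
import math

def sigmoid(s):
    # Logistic (sigmoid) activation: squashes any input into (0, 1)
    return 1.0 / (1.0 + math.exp(-s))

def tanh_activation(s):
    # Hyperbolic tangent activation: squashes any input into (-1, 1)
    return math.tanh(s)
```

Both are smooth and monotonic, which is what allows weight adjustments during learning to move the output gradually toward the desired value.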

83 NN Activation Functions

84 NN Learning
Propagate input values through the graph.
Compare the output to the desired output.
Adjust the weights in the graph accordingly.

85 Neural Networks
A Neural Network Model is a computational model consisting of three parts:
–Neural network graph
–Learning algorithm that indicates how learning takes place.
–Recall techniques that determine how information is obtained from the network.
We will look at propagation as the recall technique.

86 NN Advantages
Learning
Can continue learning even after the training set has been applied.
Easy parallelization
Solves many problems

87 NN Disadvantages
Difficult to understand
May suffer from overfitting
The structure of the graph must be determined a priori.
Input values must be numeric.
Verification is difficult.

88 Genetic Algorithms
Optimization search type algorithms.
Creates an initial feasible solution and iteratively creates new, “better” solutions.
Based on human evolution and survival of the fittest.
Must represent a solution as an individual.
Individual: string I = I1, I2, …, In where Ij is in a given alphabet A.
Each character Ij is called a gene.
Population: set of individuals.

89 Genetic Algorithms
A Genetic Algorithm (GA) is a computational model consisting of five parts:
–A starting set of individuals, P.
–Crossover: a technique to combine two parents to create offspring.
–Mutation: randomly change an individual.
–Fitness: determine the best individuals.
–An algorithm which applies the crossover and mutation techniques to P iteratively, using the fitness function to determine the best individuals in P to keep.
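The crossover and mutation operators for string individuals can be sketched as follows; single-point crossover and the 0/1 alphabet in the usage line are illustrative choices, not the only possibilities:

```python
import random

def crossover(parent1, parent2, point):
    # Single-point crossover: children swap gene suffixes after `point`
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

def mutate(individual, alphabet, rate=0.1, rng=random):
    # Each gene is independently replaced by a random symbol
    # from the alphabet with probability `rate`
    return "".join(rng.choice(alphabet) if rng.random() < rate else gene
                   for gene in individual)

child1, child2 = crossover("00000", "11111", point=2)   # "00111", "11000"
```

A full GA would alternate these operators with fitness-based selection over the population P until a stopping criterion is met.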

90 Crossover Examples

91 Genetic Algorithm

92 GA Advantages/Disadvantages
Advantages:
–Easily parallelized
Disadvantages:
–Difficult to understand and explain to end users.
–Abstraction of the problem and the method to represent individuals is quite difficult.
–Determining the fitness function is difficult.
–Determining how to perform crossover and mutation is difficult.

