National Centre for Agricultural Economics and Policy Research (NCAP), New Delhi
Rajni Jain (rajni@ncap.res.in)
Expert System: Major Constraint
Expert systems are automated systems that provide knowledge like experts and are useful when it is difficult to get expert advice.
Development of an expert system is challenging because it is hard to build a complete rule base that can work like an expert, owing to:
– Non-availability of experts
– Scarcity of time
– Cost of their time
DM: An Aid for Development of Expert Systems
Data mining (DM): extraction of potentially useful, meaningful and novel patterns from data using suitable algorithms.
Data mining tools and techniques can be used to extract rules and to reduce the time required for interactions with the domain experts.
Rules should never be used without validation from experts.
Introduction
– Importance of rules
– Rules directly from decision tables
– DM: DT, RS, Fuzzy, ANN
Decision Tree
Used to represent a sequence of decisions to arrive at a particular result.
[Figure: example tree branching on Colour (White / Red) with Y / N outcomes]
Decision Tree (ID3)
A decision tree is a set of nodes and leaves where each node tests the value of an attribute and branches on all its possible values.
ID3 is one of the earliest and most popular decision-tree building algorithms.
A statistical property called information gain is used to decide which attribute is best for the node under consideration.
ID3 is a greedy algorithm: all attributes remain available until the examples reach a leaf node, but the attribute used at the parent node is eliminated along that branch.
CLS (Concept Learning System)
Step 1: If all instances in C are positive, then create a YES node and halt.
– If all instances in C are negative, create a NO node and halt.
– Otherwise select a feature F with values v1, ..., vn and create a decision node.
Step 2: Partition the training instances in C into subsets C1, C2, ..., Cn according to the values of F.
Step 3: Apply the algorithm recursively to each of the sets Ci.
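As a rough, illustrative sketch (added here, not from the original slides), the CLS procedure can be written in Python with the feature-selection step left as a pluggable function, since CLS itself leaves the choice of the feature F to the trainer:

```python
# Minimal, illustrative CLS sketch (assumed structure, not the original authors' code).
# `choose_feature` is pluggable because CLS leaves the choice of F to the trainer/expert.

def cls(instances, features, choose_feature):
    """instances: list of (attribute_dict, label); features: list of feature names."""
    labels = {label for _, label in instances}
    if labels == {"yes"}:                     # Step 1: all positive -> YES node
        return "YES"
    if labels == {"no"}:                      # Step 1: all negative -> NO node
        return "NO"
    if not features:                          # safety fallback: majority class
        return max(labels, key=[l for _, l in instances].count)
    feature = choose_feature(instances, features)      # trainer-selected feature F
    node = {feature: {}}
    for value in {attrs[feature] for attrs, _ in instances}:
        subset = [(a, l) for a, l in instances if a[feature] == value]   # Step 2
        rest = [f for f in features if f != feature]
        node[feature][value] = cls(subset, rest, choose_feature)         # Step 3
    return node
```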
CLS (contd.)
Note that in CLS the trainer (the expert) decides which feature to select.
ID3 Algorithm
ID3 improves on CLS by adding a feature-selection heuristic. ID3 searches through the attributes of the training instances and extracts the attribute that best separates the given examples. If the attribute perfectly classifies the training sets then ID3 stops; otherwise it recursively operates on the n (where n = number of possible values of the attribute) partitioned subsets to get their "best" attribute. The algorithm uses a greedy search: it picks the best attribute and never looks back to reconsider earlier choices.
Data Requirements for ID3
– Attribute-value description: the same attributes must describe each example and have a fixed number of values.
– Predefined classes: an example's classes must already be defined, as they are not learned by ID3.
– Discrete classes.
– Sufficient examples.
Attribute Selection in ID3
How does ID3 decide which attribute is the best?
– Information gain, a statistical property, is used.
– Information gain measures how well a given attribute separates training examples into the target classes.
– The attribute with the highest information gain is selected.
– Entropy measures the amount of information in a collection.
Entropy
Given a collection S of examples belonging to c classes,
Entropy(S) = - Σ p(i) log2 p(i)
where p(i) is the proportion of S belonging to class i, the sum runs over the c classes, and log2 is log base 2.
Note that S is not an attribute but the entire sample set.
Example 1: Entropy
If S is a collection of 14 examples with 9 YES and 5 NO examples, then
Entropy(S) = - (9/14) log2(9/14) - (5/14) log2(5/14) = 0.940
Notice that entropy is 0 if all members of S belong to the same class (the data is perfectly classified). The range of entropy is 0 ("perfectly classified") to 1 ("totally random").
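A quick numerical check of this example in Python (an illustration added here, not part of the original slides):

```python
import math

def entropy(class_counts):
    """Entropy of a collection, given the number of examples in each class."""
    total = sum(class_counts)
    return -sum((n / total) * math.log2(n / total)
                for n in class_counts if n > 0)

print(round(entropy([9, 5]), 3))   # 0.94  -> the 9 YES / 5 NO collection above
print(entropy([7, 7]))             # 1.0   -> evenly split ("totally random")
```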
Information Gain
The information gain of an example set S on an attribute A is defined as
Gain(S, A) = Entropy(S) - Σ_v (|S_v| / |S|) * Entropy(S_v)
where:
– the sum runs over each value v of all possible values of attribute A
– S_v = subset of S for which attribute A has value v
– |S_v| = number of elements in S_v
– |S| = number of elements in S
Example 2: Information Gain
Suppose S is a set of 14 examples in which one of the attributes is wind speed. The values of Wind can be Weak or Strong. The classification of these 14 examples is 9 YES and 5 NO. For attribute Wind, suppose there are 8 occurrences of Wind = Weak and 6 occurrences of Wind = Strong. For Wind = Weak, 6 of the examples are YES and 2 are NO. For Wind = Strong, 3 are YES and 3 are NO.
Entropy(S_weak) = - (6/8) log2(6/8) - (2/8) log2(2/8) = 0.811
Entropy(S_strong) = - (3/6) log2(3/6) - (3/6) log2(3/6) = 1.00
Therefore
Gain(S, Wind) = Entropy(S) - (8/14) Entropy(S_weak) - (6/14) Entropy(S_strong)
= 0.940 - (8/14)(0.811) - (6/14)(1.00) = 0.048
For each attribute, the gain is calculated and the attribute with the highest gain is used in the decision node.
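The same calculation scripted in Python (illustrative only; the class counts are those of the Wind example above, and the script reproduces the 0.048 figure):

```python
import math

def entropy(class_counts):
    total = sum(class_counts)
    return -sum((n / total) * math.log2(n / total)
                for n in class_counts if n > 0)

def gain(parent_counts, partition_counts):
    """Information gain, given class counts of S and of each subset S_v."""
    total = sum(parent_counts)
    remainder = sum(sum(part) / total * entropy(part) for part in partition_counts)
    return entropy(parent_counts) - remainder

# S: 9 YES / 5 NO; Wind=Weak: 6 YES / 2 NO; Wind=Strong: 3 YES / 3 NO
print(round(gain([9, 5], [[6, 2], [3, 3]]), 3))   # 0.048
```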
Example 3: ID3
Suppose we want ID3 to decide whether the weather is amenable to playing baseball. Over the course of two weeks, data is collected to help ID3 build a decision tree (see the table below). The target classification is "should we play baseball?", which can be yes or no. The weather attributes are outlook, temperature, humidity, and wind speed. They can have the following values:
outlook = {sunny, overcast, rain}
temperature = {hot, mild, cool}
humidity = {high, normal}
wind = {weak, strong}
Day  Outlook   Temperature  Humidity  Wind    Play ball
D1   Sunny     Hot          High      Weak    No
D2   Sunny     Hot          High      Strong  No
D3   Overcast  Hot          High      Weak    Yes
D4   Rain      Mild         High      Weak    Yes
D5   Rain      Cool         Normal    Weak    Yes
D6   Rain      Cool         Normal    Strong  No
D7   Overcast  Cool         Normal    Strong  Yes
D8   Sunny     Mild         High      Weak    No
D9   Sunny     Cool         Normal    Weak    Yes
D10  Rain      Mild         Normal    Weak    Yes
D11  Sunny     Mild         Normal    Strong  Yes
D12  Overcast  Mild         High      Strong  Yes
D13  Overcast  Hot          Normal    Weak    Yes
D14  Rain      Mild         High      Strong  No
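To choose the root of the tree, ID3 evaluates the gain of every attribute over all 14 examples. A self-contained sketch of that step on the table above (illustrative code, not the original implementation):

```python
import math

def entropy(labels):
    total = len(labels)
    return -sum((labels.count(c) / total) * math.log2(labels.count(c) / total)
                for c in set(labels))

def gain(rows, attribute, target="play"):
    """Information gain of `attribute` for predicting `target` over `rows`."""
    base = entropy([r[target] for r in rows])
    for value in {r[attribute] for r in rows}:
        subset = [r[target] for r in rows if r[attribute] == value]
        base -= len(subset) / len(rows) * entropy(subset)
    return base

names = ("outlook", "temperature", "humidity", "wind", "play")
data = [
    ("Sunny", "Hot", "High", "Weak", "No"),          ("Sunny", "Hot", "High", "Strong", "No"),
    ("Overcast", "Hot", "High", "Weak", "Yes"),      ("Rain", "Mild", "High", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Weak", "Yes"),       ("Rain", "Cool", "Normal", "Strong", "No"),
    ("Overcast", "Cool", "Normal", "Strong", "Yes"), ("Sunny", "Mild", "High", "Weak", "No"),
    ("Sunny", "Cool", "Normal", "Weak", "Yes"),      ("Rain", "Mild", "Normal", "Weak", "Yes"),
    ("Sunny", "Mild", "Normal", "Strong", "Yes"),    ("Overcast", "Mild", "High", "Strong", "Yes"),
    ("Overcast", "Hot", "Normal", "Weak", "Yes"),    ("Rain", "Mild", "High", "Strong", "No"),
]
rows = [dict(zip(names, example)) for example in data]

for attribute in names[:-1]:
    print(attribute, round(gain(rows, attribute), 3))
# Outlook scores highest (about 0.247, vs. roughly 0.029, 0.152 and 0.048 for
# temperature, humidity and wind), so ID3 places Outlook at the root.
```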
Real Example: Summary of the Dataset
NSSO carried out a nationwide survey on cultivation practices as part of its 54th round (January-June 1998).
The enquiry was carried out in the rural areas of India through a household survey to ascertain the problems, the utilization of facilities, and the level of adoption on farms of different sizes in the country.
The focus was on the spread of improved agricultural technology.
Data were extracted for the state of Haryana.
Dataset Summary
– 39 independent variables from Haryana
– 1 dependent variable (ifpesticide) with values 1, 2
  – 1: adoption of PIF
  – 2: non-adoption of PIF
– 36 nominal and 4 real-valued attributes
– 1832 cases: 48% adopting PIF, 52% not adopting
Why Haryana?
– Reasonable dataset size (1832 cases)
– The class variable (ifpesticide) is somewhat uniformly distributed in Haryana
Distribution of the class variable across states (% adopting):
Lakshadweep    0    Arunachal      35
Mizoram        1    Bihar          40
Sikkim         2    Maharashtra    43
Dadra          8    A&N            43
Nagaland      11    Haryana        48
Himachal      14    Goa            52
Meghalaya     16    Karnataka      55
Kerala        20    Tripura        58
UP            23    Gujarat        65
Rajasthan     24    Punjab         71
MP            27    Daman & Diu    74
J&K           29    Andhra         79
Manipur       31    TN             80
Assam         32    WB             82
Orissa        32    Pondicherry    88
Delhi         33    Chandigarh    100
Variables in the Haryana Farmers Dataset (NSSO, 54th Round)
#   Attribute name        Value set                   #   Attribute name         Value set
1   SnoCropGroup          {1,2,3,4,5}                 21  jointforest            {1,2}
2   CropCode              {1,2,3,4,5,6,7,8,9,11,99}   22  managetanks            {1,2}
3   Season                {1,2}                       23  iftreepatta            {1,2}
4   NumAreaSown           {1,2,3,4,5,6}               24  iftimberright          {1,2}
5   iftractor             {1,2,3}                     25  howoftenright          {1,2,9}
6   ifepump               {1,2,3}                     26  schhhdcpr              {1,2,9}
7   ifoilpump             {1,2,3}                     27  anymemberpreventcpr    {1,2}
8   ifmanure              {1,2,3}                     28  LandWithRightOfSale    {1,2,3,4,5}
9   iffertilizers         {1,2,3}                     29  TotLandOwnedNum        {1,2,3,4,5}
10  ifimproveseed         {0,1,2,3}                   30  LandPossessNum         {1,2,3,4,5}
11  seedtype              {1,2,3,4,9}                 31  NASNum                 {1,2,3,4,5}
12  typemanure            {1,2,3,9}                   32  ifsoiltested           {1,2}
13  ifhired               {1,2,9}                     33  ifsoilRecfollow        {1,2,9}
14  ifirrigate            {1,2}                       34  ifhhdtubewell          {1,2}
15  ifhiredirri           {1,2,9}                     35  iftubewellunused       {1,2,9}
16  ifweedicide           {1,2}                       36  irrigatesource         {1,2,9}
17  ifunusdelectricpump   {1,2,9}                     37  ifdieselpump           {1,2}
18  harvested             {1,2,3}                     38  ifunuseddieselpump     {1,2,9}
19  woodpurpose           {1,2,3,4,5,9}               39  ifelectricpump         {1,2}
20  iflivestock           {1,2}                       40  ifpesticide            {1,2}
Hypothetical Farmer Dataset
ID  Crop    Seedtype  AreaSown  Iffertilizer  Ifpesticide
X1  wheat   2         small     no            yes
X2  wheat   3         medium    yes           no
X3  wheat   1         medium    yes           no
X4  rice    1         medium    no            yes
X5  fodder  2         large     no            yes
X6  rice    3         large     no
X7  rice    2         large     no
X8  wheat   1         small     yes           no
Decision Tree (DT) Using the RDT Architecture for the Hypothetical Farmer Data
[Figure: decision tree splitting first on Seedtype and then on Crop (rice / fodder / wheat), with yes / no / ? leaves]
The tree is simple to comprehend.
The tree can be easily traversed to generate rules.
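Traversing root-to-leaf paths is all that rule generation requires. The sketch below is a generic illustration: the nested-dict tree is a simplified toy version only loosely based on the figure, not the exact RDT model:

```python
def tree_to_rules(node, conditions=()):
    """Yield (conditions, class) for every root-to-leaf path of a nested-dict tree."""
    if not isinstance(node, dict):                       # reached a leaf
        yield list(conditions), node
        return
    (attribute, branches), = node.items()                # one attribute per node
    for value, subtree in branches.items():
        yield from tree_to_rules(subtree, conditions + ((attribute, value),))

# Toy tree only loosely following the figure above (values are illustrative).
toy_tree = {"Seedtype": {
    "1": {"Crop": {"wheat": "no", "rice": "yes"}},
    "2": {"Crop": {"wheat": "yes", "fodder": "yes"}},
}}

for conditions, label in tree_to_rules(toy_tree):
    clause = " and ".join(f"{attr}={val}" for attr, val in conditions)
    print(f"If {clause} then ifpesticide={label}")
```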
Description of the Codes of the Attributes in the Approximate-Core-Based RDT Model
Attribute name  #   Code specifications
Ifweedicide     16  yes - 1, no - 2
CropCode        2   paddy - 1, wheat - 2, other cereals - 3, pulses - 4, oil seeds - 5, mixed crop - 6, sugarcane - 7, vegetables - 8, fodder - 9, fruits & nuts - 10, other cash crops - 11, others - 99
Iffertilizer    9   entirely - 1, partly - 2, none - 3
Ifpesticide     40  yes - 1, no - 2
Approximate-Core-Based RDT Model for Farmers Adopting Plant Protection Chemicals
[Figure: decision tree with Ifweedicide at the root, CropCode at the next level and Iffertilizer below it; leaves carry the Ifpesticide codes 1 (adopter) and 2 (non-adopter)]
Ifweedicide: yes - 1, no - 2
Ifpesticide: yes - 1, no - 2
CropCode: paddy - 1, wheat - 2, other cereals - 3, pulses - 4, oil seeds - 5, mixed crop - 6, sugarcane - 7, vegetables - 8, fodder - 9, fruits & nuts - 10, other cash crops - 11, others - 99
Iffertilizer: entirely - 1, partly - 2, none - 3
Decision Rules for Adopters
1. If weedicide = yes then pesticide = yes
2. If weedicide = no and crop = paddy then pesticide = yes
3. If weedicide = no and crop = vegetables and fertilizer = entirely then pesticide = yes
4. If weedicide = no and crop = vegetables and fertilizer = none then pesticide = yes
5. If weedicide = no and crop = cash crops and fertilizer = entirely then pesticide = yes
6. If weedicide = no and crop = cash crops and fertilizer = partly then pesticide = yes
Decision Rules for Non-Adopters
1. If weedicide = no and crop = (wheat, cereals other than rice, pulses, oil seeds, mixed crop, sugarcane, fodder, other crops) then pesticide = no
2. If weedicide = no and crop = vegetables and fertilizer = partly then pesticide = no
3. If weedicide = no and crop = other cash crops and fertilizer = none then pesticide = no
A Case Study of Forewarning Powdery Mildew of Mango (PWM) Disease
Dataset: PWM status for the years 1987-1997 and 2000 and the corresponding temperature and humidity conditions
Year  T811   H811   T812   H812   T813   H813   T814   H814   Status
1987  28.20  91.25  28.48  88.60  29.50  85.50  30.14  82.86  1
1988  32.05  75.75  31.64  80.60  31.27  77.83  30.66  79.57  0
1989  26.60  86.25  26.28  87.20  26.47  87.33  26.31  89.14  0
1990  27.50  91.25  28.12  91.20  28.17  92.00  28.43  91.00  1
1991  28.43  87.00  28.70  86.20  29.00  83.50  29.57  80.57  0
1992  30.12  69.23  30.45  68.58  30.80  68.31  31.25  67.82  1
1993  30.50  61.75  30.48  61.13  30.37  60.56  30.33  61.76  0
1994  30.45  89.25  30.56  85.80  30.63  83.17  30.71  81.14  1
1995  28.63  61.38  29.10  61.20  29.58  61.17  30.71  61.57  0
1996  31.63  60.33  31.90  60.87  32.67  60.89  33.07  59.76  1
1997  32.13  71.00  32.20  69.40  31.67  69.00  31.50  68.29  0
2000  29.00  78.33  29.23  78.60  29.36  78.83  29.52  79.14  0
Techniques for Rule-Based Learning
FS, DT, RS, or hybrids like RDT
Selection of Algorithms
– Accuracy: fraction of test instances that are predicted correctly
– Interpretability: ability to understand the model
– Complexity: number of conditions in the model, number of rules, number of variables
Selection of an algorithm involves a trade-off among these parameters.
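These criteria are straightforward to compute once a model is available as a set of rules; a minimal sketch (the helper names are hypothetical, added here for illustration):

```python
def accuracy(predict, test_set):
    """Fraction of test instances whose class is predicted correctly."""
    return sum(predict(x) == y for x, y in test_set) / len(test_set)

def complexity(rules):
    """rules: list of (conditions, predicted_class); conditions = list of (attr, value)."""
    n_rules = len(rules)
    n_conditions = sum(len(conds) for conds, _ in rules)
    n_variables = len({attr for conds, _ in rules for attr, _ in conds})
    return n_rules, n_conditions, n_variables
```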
DT: Advantage
A decision tree can be easily mapped to rules.
Train and Test Data
Train     Test
1987-94   1995, 1996, 1997, 2000
1987-95   1996, 1997, 2000
1987-96   1997, 2000
1987-97   2000
The Prediction Model for PWM Epidemic Obtained Using the CJP Algorithm with 1987-97 Data as the Training Dataset
[Figure: decision tree with H811 at the root (split at 87) and T814 below it (split at 30.7); H811 > 87 leads to status 1, H811 <= 87 and T814 <= 30.7 to status 0, and H811 <= 87 and T814 > 30.7 to status 1]
Rules
If (H811 > 87) then Status = 1
If (H811 <= 87) and (T814 <= 30.71) then Status = 0
If (H811 <= 87) and (T814 > 30.71) then Status = 1
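For illustration, these rules translate directly into a small predictor; the spot checks below use values taken from the dataset table (the function name is ours, not from the original study):

```python
def predict_pwm_status(h811, t814):
    """PWM forewarning rules; returns the Status code (1 / 0) as in the dataset table."""
    if h811 > 87:
        return 1
    if t814 <= 30.71:
        return 0
    return 1                      # H811 <= 87 and T814 > 30.71

# Spot checks against rows of the dataset table:
print(predict_pwm_status(h811=91.25, t814=30.14))   # 1987 -> 1 (matches Status)
print(predict_pwm_status(h811=61.38, t814=30.71))   # 1995 -> 0 (matches Status)
print(predict_pwm_status(h811=60.33, t814=33.07))   # 1996 -> 1 (matches Status)
```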
Conclusions
Rule induction can be very useful for the development of expert systems.
Strong linkages and collaboration are required among expert system developers, domain experts, and data mining experts.