PART 7 Constructing Fuzzy Sets
1. Direct/one-expert
2. Direct/multi-expert
3. Indirect/one-expert
4. Indirect/multi-expert
5. Construction from samples


FUZZY SETS AND FUZZY LOGIC: Theory and Applications

2 Direct/one-expert
An expert is expected to assign to each given element x a membership grade A(x) that, in his or her opinion, best captures the meaning of the linguistic term represented by the fuzzy set A. This can be done either by
1. defining the membership function completely in terms of a justifiable mathematical formula, or
2. exemplifying it for some selected elements of X.

3 Direct/multi-expert
When a direct method is extended from one expert to multiple experts, the opinions of the individual experts must be appropriately aggregated. One of the most common aggregation methods is based on a probabilistic interpretation of membership functions.

4 Direct/multi-expert
Each expert is asked whether the proposition "x belongs to A" is true or false, where A is a fuzzy set on X that represents a linguistic term associated with a given linguistic variable. Let a_i(x) denote the answer of expert i, with a_i(x) = 1 when expert i values the proposition as true and a_i(x) = 0 when it is valued as false. If n is the number of experts, the membership grade is interpreted as the fraction of positive answers:

A(x) = (1/n) Σ_{i=1..n} a_i(x).

5 Direct/multi-expert
This interpretation of A(x) can be generalized by allowing one to distinguish degrees of competence, c_i, of the individual experts:

A(x) = Σ_{i=1..n} c_i a_i(x),

where c_i ≥ 0 for all i and Σ_{i=1..n} c_i = 1.
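A minimal sketch of this polling interpretation; the function name, example answers, and competence values below are illustrative, not from the text:

```python
def membership_from_poll(answers, competences=None):
    """Aggregate binary expert answers a_i(x) into a membership grade A(x).

    With no competences given, A(x) is the fraction of experts answering "true";
    with competences c_i (nonnegative, summing to 1), A(x) = sum_i c_i * a_i(x).
    """
    n = len(answers)
    if competences is None:
        competences = [1.0 / n] * n   # equal competence reduces to the plain mean
    assert abs(sum(competences) - 1.0) < 1e-9
    return sum(c * a for c, a in zip(competences, answers))

# Four experts asked whether "x belongs to A"; three answer true.
print(membership_from_poll([1, 1, 1, 0]))                                  # 0.75
print(round(membership_from_poll([1, 1, 1, 0], [0.4, 0.3, 0.2, 0.1]), 2))  # 0.9
```

The unweighted case recovers slide 4's formula; the weighted case is slide 5's generalization.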

6 Direct/multi-expert
Let A and B be two fuzzy sets defined on the same universal set X. We can calculate A(x) and B(x) for each x ∈ X, and then choose appropriate fuzzy operators to calculate the complements of A and B, the union A ∪ B, the intersection A ∩ B, and so forth. Let

A(x) = (1/n) Σ_{i=1..n} a_i(x)

7 Direct/multi-expert
and

B(x) = (1/n) Σ_{i=1..n} b_i(x),

where a_i(x) and b_i(x) are the answers of expert i concerning membership of x in A and in B, respectively.
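A short sketch of applying standard fuzzy operators to the polled grades; the expert answers are invented, and max/min/1−x is just one common choice of operators:

```python
def poll_mean(answers):
    """A(x) as the fraction of positive expert answers (slide 4's interpretation)."""
    return sum(answers) / len(answers)

# Answers of five experts about "x belongs to A" and "x belongs to B" at one x.
A_x = poll_mean([1, 1, 0, 1, 0])   # 0.6
B_x = poll_mean([0, 1, 0, 0, 1])   # 0.4

# One common choice of fuzzy operators (standard complement, max-union, min-intersection):
complement_A = 1 - A_x             # 0.4
union = max(A_x, B_x)              # 0.6
intersection = min(A_x, B_x)       # 0.4
```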

8 Indirect/one-expert
Given a linguistic term in a particular context, let A denote a fuzzy set that is supposed to capture the meaning of this term. Let x_1, …, x_n be elements of the universal set X for which we want to estimate the grades of membership in A.

9 Indirect/one-expert
Our problem is to determine the values a_i = A(x_i). Instead of asking the expert to estimate the values a_i directly, we ask him or her to compare the elements x_1, …, x_n in pairs according to their relative weights of belonging to A.

10 Indirect/one-expert: pairwise comparisons
The pairwise comparisons are expressed by a square matrix P = [p_ij], i, j ∈ N_n, which has positive entries everywhere. Assume first that it is possible to obtain perfect values p_ij. In this case, p_ij = a_i / a_j, and matrix P is consistent in the sense that

p_ij p_jk = p_ik

for all i, j, k ∈ N_n, which implies that p_ii = 1 and p_ij = 1 / p_ji.

11 Indirect/one-expert
Furthermore,

Σ_{j=1..n} p_ij a_j = n a_i

for all i ∈ N_n, or, in matrix form,

P a = n a,

where a = [a_1, a_2, …, a_n]^T.

12 Indirect/one-expert
P a = n a means that n is an eigenvalue of P and a is the corresponding eigenvector. It can also be rewritten in the form

(P − nI) a = 0,

where I is the identity matrix.

13 Indirect/one-expert
If we assume that

Σ_{i=1..n} a_i = 1,

then a_j for any j ∈ N_n can be determined by the following simple procedure:

Σ_{i=1..n} p_ij = Σ_{i=1..n} a_i / a_j = 1 / a_j;

hence,

a_j = 1 / Σ_{i=1..n} p_ij.
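The reciprocal-column-sum procedure can be checked numerically; the membership vector below is made up for illustration:

```python
# A hypothetical membership vector, normalized so that its entries sum to 1.
a = [0.5, 0.3, 0.2]
n = len(a)

# Perfect (consistent) pairwise comparisons p_ij = a_i / a_j.
P = [[a[i] / a[j] for j in range(n)] for i in range(n)]

# Since sum_i a_i = 1, each a_j is recovered as the reciprocal of column sum j.
recovered = [1.0 / sum(P[i][j] for i in range(n)) for j in range(n)]
print(recovered)   # approximately [0.5, 0.3, 0.2]
```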

14 Indirect/one-expert
In practice, the comparison values p_ij elicited from the expert are not fully consistent, so the problem of estimating vector a from matrix P becomes the problem of finding the largest eigenvalue λ_max of P and the associated eigenvector. That is, the estimated vector a must satisfy the equation

P a = λ_max a,

where λ_max is usually close to n.
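For an approximately consistent matrix, λ_max and its eigenvector can be found by power iteration; this is a generic numerical method (not prescribed by the text), and the comparison matrix below is hypothetical:

```python
def principal_eigen(P, iters=500):
    """Power iteration for a positive matrix P: returns (lambda_max, a),
    with a normalized so that sum(a) = 1 and P a ~= lambda_max * a."""
    n = len(P)
    a = [1.0 / n] * n
    lam = float(n)
    for _ in range(iters):
        Pa = [sum(P[i][j] * a[j] for j in range(n)) for i in range(n)]
        lam = sum(Pa)                 # since sum(a) = 1, sum(Pa) estimates lambda_max
        a = [v / lam for v in Pa]
    return lam, a

# Slightly inconsistent pairwise comparisons (invented expert data).
P = [[1.0,  2.0,   4.0],
     [0.5,  1.0,   3.0],
     [0.25, 1 / 3, 1.0]]
lam, a = principal_eigen(P)
print(lam)   # close to n = 3, as the slide notes
print(a)     # estimated membership grades a_1 > a_2 > a_3
```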

15 Indirect/multi-expert
Let us illustrate the methods in this category by describing an interesting method that also enables us to determine the degrees of competence of the participating experts. It is based on the assumption that, in general, the concept in question is n-dimensional (based on n distinct features), each defined on R. Hence, the universal set on which the concept is defined is R^n.

16 Indirect/multi-expert
The full opinion of expert i regarding the relevance of elements (n-tuples) of R^n to the concept is expressed by the hyperparallelepiped

H_i = I_i1 × I_i2 × … × I_in,

where I_ij denotes the interval of values of feature j that, in the opinion of expert i, relate to the concept in question (i ∈ N_m, j ∈ N_n).

17 Indirect/multi-expert
We obtain m hyperparallelepipeds of this form, one for each of the m experts. The membership function of the fuzzy set by which the concept is to be represented is then constructed by the following algorithmic procedure:

18–21 Indirect/multi-expert
(The steps of the procedure were presented graphically on these slides.)
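As a much-simplified stand-in for the procedure (not the book's algorithm), one can grade an n-tuple x by the competence-weighted fraction of experts whose hyperparallelepiped contains it. The intervals and competence values below are invented for illustration:

```python
def contains(hyperbox, x):
    """True if the n-tuple x lies in the hyperparallelepiped (a product of intervals)."""
    return all(lo <= xi <= hi for (lo, hi), xi in zip(hyperbox, x))

def membership(x, hyperboxes, competences):
    """Competence-weighted fraction of experts whose box contains x."""
    return sum(c for box, c in zip(hyperboxes, competences) if contains(box, x))

# Three experts, a 2-feature concept (one interval per feature); competences sum to 1.
boxes = [[(0, 10), (5, 9)],
         [(2, 8),  (4, 10)],
         [(1, 9),  (6, 8)]]
comp = [0.5, 0.3, 0.2]
print(membership((4, 7), boxes, comp))   # all three boxes contain (4, 7) -> 1.0
print(membership((9, 7), boxes, comp))   # boxes of experts 1 and 3 only -> 0.7
```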

22 Construction from samples: Lagrange interpolation
A curve-fitting method in which the constructed function is assumed to be expressed by a suitable polynomial form. The function f employed for the interpolation of given sample data ⟨x_1, a_1⟩, …, ⟨x_n, a_n⟩ has, for all x ∈ R, the form

f(x) = a_1 L_1(x) + a_2 L_2(x) + … + a_n L_n(x),

where

L_i(x) = [(x − x_1) … (x − x_{i−1})(x − x_{i+1}) … (x − x_n)] / [(x_i − x_1) … (x_i − x_{i−1})(x_i − x_{i+1}) … (x_i − x_n)]

23 Construction from samples
for all i ∈ N_n. Since the values f(x) need not be in [0, 1] for some x ∈ R, function f cannot be directly considered as the sought membership function A. We may convert f to A for each x ∈ R by the formula

A(x) = max(0, min(1, f(x))).
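A compact sketch of the interpolate-then-clip construction; the sample data for a fuzzy set "around 2" are invented:

```python
def lagrange(xs, ys, x):
    """Lagrange interpolating polynomial f through the points (x_i, a_i), evaluated at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def A(xs, ys, x):
    """Clip f into [0, 1] to obtain a valid membership grade."""
    return max(0.0, min(1.0, lagrange(xs, ys, x)))

# Hypothetical sample data <x_i, a_i> for a fuzzy set "around 2".
xs, ys = [0, 1, 2, 3, 4], [0.0, 0.5, 1.0, 0.5, 0.0]
print(A(xs, ys, 2))     # 1.0 (the interpolation matches the samples exactly)
print(A(xs, ys, 1.5))   # an interpolated grade between the samples
```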

24 Construction from samples An advantage of this method is that the membership function matches the sample data exactly. Its disadvantage is that the complexity of the resulting function (expressed by the degree of the polynomial involved) increases with the number of data samples.

25–27 Construction from samples
(These slides contained graphical examples of the interpolation method.)

28 Construction from samples: Least-squares curve fitting
The method of least-squares curve fitting selects, from a given class of functions f(x; α, β, …), the function f(x; α_0, β_0, …) for which the sum of squared errors

E(α_0, β_0, …) = Σ_{i=1..n} [f(x_i; α_0, β_0, …) − a_i]^2

reaches its minimum. Then

A(x) = max(0, min(1, f(x; α_0, β_0, …)))

for all x ∈ R.

29 Construction from samples
The class of bell-shaped functions

f(x; α, β, γ) = γ e^{−(x − α)^2 / β}

is frequently used for this purpose, where α controls the position of the center of the bell, (β/2)^{1/2} defines the inflection points, and γ controls the height of the bell (Fig. 10.4a).

30 Construction from samples
Given sample data ⟨x_1, a_1⟩, …, ⟨x_n, a_n⟩, we determine (by any effective optimization method) the values α_0, β_0, γ_0 of the parameters α, β, γ for which

E(α_0, β_0, γ_0) = Σ_{i=1..n} [γ_0 e^{−(x_i − α_0)^2 / β_0} − a_i]^2

reaches its minimum. The bell-shaped membership function A that best conforms to the sample data is then given by the formula

A(x) = max(0, min(1, γ_0 e^{−(x − α_0)^2 / β_0}))

for all x ∈ R.
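A sketch of fitting the bell to samples; the slide allows "any effective optimization method", so a crude grid search stands in for a real optimizer here, and the sample data and grids are invented:

```python
import math

def bell(x, alpha, beta, gamma):
    """Bell-shaped candidate function gamma * exp(-(x - alpha)**2 / beta)."""
    return gamma * math.exp(-(x - alpha) ** 2 / beta)

def sse(params, data):
    """Sum of squared errors E(alpha, beta, gamma) over the sample data."""
    a_, b_, g_ = params
    return sum((bell(x, a_, b_, g_) - y) ** 2 for x, y in data)

# Hypothetical sample data <x_i, a_i>.
data = [(0, 0.1), (1, 0.4), (2, 1.0), (3, 0.4), (4, 0.1)]
best = min(((a_, b_, g_)
            for a_ in [1.8, 2.0, 2.2]
            for b_ in [0.5, 1.0, 2.0, 4.0]
            for g_ in [0.8, 1.0, 1.2]),
           key=lambda p: sse(p, data))

def A(x):
    """Membership function: the fitted bell, clipped into [0, 1]."""
    return max(0.0, min(1.0, bell(x, *best)))
```

A proper least-squares routine (e.g. a nonlinear optimizer) would replace the grid search in practice.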

31 Construction from samples
Another class of functions frequently used for representing linguistic terms is the class of trapezoidal-shaped functions. The meaning of its five parameters is illustrated in Fig. 10.4b.
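A minimal trapezoidal membership function; the parameter names are illustrative, with the five parameters of Fig. 10.4b assumed to correspond to four breakpoints plus a plateau height:

```python
def trapezoid(x, a, b, c, d, h=1.0):
    """Trapezoidal-shaped function: 0 outside [a, d], rising linearly on [a, b],
    equal to the plateau height h on [b, c], and falling linearly on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return h
    if x < b:
        return h * (x - a) / (b - a)
    return h * (d - x) / (d - c)

print(trapezoid(2.5, 1, 2, 3, 4))   # 1.0 (on the plateau)
print(trapezoid(1.5, 1, 2, 3, 4))   # 0.5 (halfway up the rising edge)
```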

32–37 Construction from samples
(These slides contained figures and worked examples.)

38 Construction from samples: Neural networks

39 Construction from samples
Following the backpropagation learning algorithm, we first initialize the weights in the network by assigning a small random number to each weight. Then we apply the pairs ⟨x_p, t_p⟩ of the training set to the learning algorithm in some order.

40 Construction from samples
For each x_p, we calculate the actual output y_p and the square error

E_p = (1/2)(y_p − t_p)^2.

Using E_p, we update the weights in the network according to the backpropagation algorithm described in Appendix A. We also calculate the cumulative cycle error

E = Σ_p E_p.

41 Construction from samples
At the end of each cycle, we compare the cumulative error E with the largest acceptable error, E_max, specified by the user.
– E ≤ E_max: the neural network represents the desired membership function.
– E > E_max: we initiate a new cycle.
The algorithm terminates when either we obtain a solution or the number of cycles exceeds a limit specified by the user.
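The cycle structure above can be sketched as a self-contained toy implementation; the network size, learning rate, error bound, and training samples are all invented, and the single-hidden-layer architecture is one simple choice rather than the book's specific network:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical training pairs <x_p, t_p>: samples of a membership function.
training = [(0.0, 0.0), (0.25, 0.5), (0.5, 1.0), (0.75, 0.5), (1.0, 0.0)]

random.seed(0)
H = 6                                            # hidden sigmoid units
w1 = [random.uniform(-1, 1) for _ in range(H)]   # input -> hidden weights
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]   # hidden -> output weights
b2 = random.uniform(-1, 1)
eta, E_max, max_cycles = 0.7, 0.05, 5000

def forward(x):
    h = [sigmoid(w1[i] * x + b1[i]) for i in range(H)]
    y = sigmoid(sum(w2[i] * h[i] for i in range(H)) + b2)
    return h, y

errors = []
for cycle in range(max_cycles):
    E = 0.0                              # cumulative cycle error
    for x, t in training:
        h, y = forward(x)
        E += 0.5 * (y - t) ** 2          # square error E_p
        # Backpropagation: output delta, then hidden deltas (using the old w2).
        dy = (y - t) * y * (1 - y)
        for i in range(H):
            dh = dy * w2[i] * h[i] * (1 - h[i])
            w2[i] -= eta * dy * h[i]
            w1[i] -= eta * dh * x
            b1[i] -= eta * dh
        b2 -= eta * dy
    errors.append(E)
    if E <= E_max:                       # acceptable cumulative error reached
        break
```

The sigmoid output keeps every computed grade in [0, 1], so the trained network can be read directly as a membership function.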

42–52 Construction from samples
(These slides contained figures and worked examples.)

53 Exercise