1
A Review of Research Topics of the AI Department in Kharkov (MetaIntelligence Laboratory)
Vagan Terziyan, Helen Kaikova
November 25 - December 5, 1999, Vrije Universiteit Amsterdam (The Netherlands)
First Joint Research Seminar of the AI Departments of Ukraine and the Netherlands
2
Authors
Helen Kaikova, Vagan Terziyan
MetaIntelligence Laboratory, Department of Artificial Intelligence, Kharkov State Technical University of Radioelectronics, UKRAINE
vagan@kture.kharkov.ua, helen@jytko.jyu.fi
In cooperation with:
3
Contents
- A Metasemantic Network
- Metasemantic Algebra of Contexts
- The Law of Semantic Balance
- Metapetrinets
- Multidatabase Mining and Ensemble of Classifiers
- Trends of Uncertainty, Expanding Context and Discovering Knowledge
- Recursive Arithmetic
- Similarity Evaluation in Multiagent Systems
- On-Line Learning
4
A Metasemantic Network
5
A Semantic Metanetwork
A semantic metanetwork is formally defined as a set of semantic networks stacked on each other in such a way that the links of every previous semantic network are at the same time the nodes of the next network.
6
An Example of a Semantic Metanetwork
7
How it Works
In a semantic metanetwork, every higher level controls the semantic structure of the level below it. Simple control rules might state, for example, in which contexts a certain link of a semantic structure can exist and in which contexts it should be deleted from the semantic structure. Such a multilevel network can be used in an adaptive control system whose structure changes automatically following changes in the context of the environment. An algebra for reasoning with a semantic metanetwork has also been developed.
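As an illustration only, here is a minimal Python sketch of a two-level structure in this spirit, where the metalevel states in which context each basic-level link exists; the names and the control rule are assumptions of the sketch, not the authors' model.

```python
# A minimal sketch of a two-level semantic metanetwork: links of the basic
# network are nodes of the metalevel network, and metalevel facts decide
# which basic links may exist in a given context.

class SemanticNetwork:
    def __init__(self):
        self.links = {}          # link_id -> (node_a, relation, node_b)

    def add_link(self, link_id, a, rel, b):
        self.links[link_id] = (a, rel, b)

basic = SemanticNetwork()
basic.add_link("L1", "pump", "feeds", "boiler")
basic.add_link("L2", "valve", "blocks", "boiler")

# Metalevel: its nodes are the link ids of the basic level; a metalevel link
# states in which context a basic-level link exists.
meta = SemanticNetwork()
meta.add_link("M1", "L1", "exists_in", "normal_mode")
meta.add_link("M2", "L2", "exists_in", "emergency_mode")

def active_links(basic, meta, context):
    """Keep only the basic links whose metalevel node is tied to the context."""
    allowed = {a for (a, rel, b) in meta.links.values()
               if rel == "exists_in" and b == context}
    return {lid: triple for lid, triple in basic.links.items() if lid in allowed}

print(active_links(basic, meta, "normal_mode"))   # only L1 survives
```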
8
Published and Further Developed in
Terziyan V., Multilevel Models for Knowledge Bases Control and Their Applications to Automated Information Systems, Doctor of Technical Sciences Degree Thesis, Kharkov State Technical University of Radioelectronics, 1993.
Puuronen S., Terziyan V., A Metasemantic Network, In: E. Hyvonen, J. Seppanen and M. Syrjanen (eds.), STeP-92 - New Directions in Artificial Intelligence, Publications of the Finnish AI Society, Otaniemi, Finland, 1992, Vol. 1, pp. 136-143.
9
A Metasemantic Algebra for Managing Contexts
10
A Semantic Predicate
A semantic predicate describes a piece of knowledge (a relation or a property) by the expression P(Ai, Lk, Aj) = true if there is knowledge that a relation with name Lk holds between objects Ai and Aj.
11
Example of Knowledge
12
Semantic Operations: Inversion
13
Semantic Operations: Negation
P(Ai, Lk, Aj) = false is the same as: P(Ai, ¬Lk, Aj) = true.
14
Semantic Operations: Composition
If P(Ai, Lk, Aj) and P(Aj, Lm, An) are true, then it is also true that: P(Ai, Lk ∘ Lm, An).
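The operations of inversion, negation and composition can be illustrated with a minimal Python sketch over knowledge triples; the link-name constructors used here ("inv", "not", "*") merely stand in for the algebraic symbols of the slides and are assumptions of the sketch.

```python
# A minimal sketch of semantic operations over predicates P(Ai, Lk, Aj),
# each fact represented as a (subject, link, object) triple.

def inversion(triple):
    a, l, b = triple
    return (b, "inv " + l, a)             # P(Ai, Lk, Aj) -> P(Aj, ~Lk, Ai)

def negation(triple):
    a, l, b = triple
    return (a, "not " + l, b)             # P(...) = false  <=>  P(not ...) = true

def composition(t1, t2):
    a, l1, b1 = t1
    b2, l2, c = t2
    if b1 != b2:
        raise ValueError("middle objects must match")
    return (a, l1 + " * " + l2, c)        # chain two relations through Aj

print(inversion(("A1", "loves", "A2")))                            # ('A2', 'inv loves', 'A1')
print(composition(("A1", "loves", "A2"), ("A2", "employs", "A3"))) # ('A1', 'loves * employs', 'A3')
```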
15
Semantic Operations: Intersection
P(Ai, Lk, Aj) + P(Ai, Lm, Aj) = P(Ai, Lk + Lm, Aj).
16
Semantic Operations: Interpretation
17
Interpreting Knowledge in a Context
18
Example of Interpretation
The interpreted knowledge about the relation between A1 and A3, taking all contexts and metacontexts into account, is as follows:
19
Decontextualization
Suppose that your colleague, whose context you know well, has described a situation to you. You use knowledge about this person's context to interpret the “real” situation. The example is more complicated if several persons describe the same situation to you. In this case, the context of the situation is the semantic sum over all the personal contexts.
20
Context Recognition
Suppose that someone sends you a message describing a situation that you know well. You compare your own knowledge with the knowledge you received. Usually you can derive an opinion about the sender of this message. The knowledge about the source of the message that you derive can be considered as a certain context in which the real situation has been interpreted, and this can help you to recognize the source, or at least its motivation to change the reality.
21
Lifting (Relative Decontextualization) This means deriving knowledge interpreted in some context if it is known how this knowledge was interpreted in another context.
22
Published and Further Developed in
Terziyan V., Puuronen S., Multilevel Context Representation Using Semantic Metanetwork, In: Context-97 - Proceedings of the International and Interdisciplinary Conference on Modeling and Using Context, Rio de Janeiro, Brazil, Febr. 4-6, 1997, pp. 21-32.
Terziyan V., Puuronen S., Reasoning with Multilevel Contexts in Semantic Metanetworks, In: D.M. Gabbay (Ed.), Formal Aspects in Context, Kluwer Academic Publishers, 1999, pp. 173-190.
23
The Law of Semantic Balance
24
An Object in a Possible World
25
Internal and External View of an Object
26
Internal Semantics of an Object
The internal semantics of an object is equal to the semantic sum of all chains of semantic relations that start and finish on the shell of this object and pass inside it:
27
External Semantics of an Object
The external semantics of an object is equal to the internal semantics of the World when the object is considered as an atom in this World (i.e. after removing the internal structure of the object from the World):
28
The Law of Semantic Balance
The external and internal semantics of any object, as evolutionary knowledge, are equivalent to each other in the limit.
29
The Evolution of Knowledge
30
Published and Further Developed in
Terziyan V., Multilevel Models for Knowledge Bases Control and Their Applications to Automated Information Systems, Doctor of Technical Sciences Degree Thesis, Kharkov State Technical University of Radioelectronics, 1993.
Grebenyuk V., Kaikova H., Terziyan V., Puuronen S., The Law of Semantic Balance and its Use in Modeling Possible Worlds, In: STeP-96 - Genes, Nets and Symbols, Publications of the Finnish AI Society, Vaasa, Finland, 1996, pp. 97-103.
Terziyan V., Puuronen S., Knowledge Acquisition Based on Semantic Balance of Internal and External Knowledge, In: I. Imam, Y. Kodratoff, A. El-Dessouki and M. Ali (Eds.), Multiple Approaches to Intelligent Systems, Lecture Notes in Artificial Intelligence, Springer-Verlag, V. 1611, 1999, pp. 353-361.
31
Metapetrinets for Flexible Modelling and Control of Complicated Dynamic Processes
32
A Metapetrinet
A metapetrinet is able not only to change the marking of a petrinet but also to reconfigure its structure dynamically. Each level of the new structure is an ordinary petrinet of some traditional type. A basic-level petrinet simulates the process of some application. The second level, i.e. the metapetrinet, is used to simulate and help control the configuration changes at the basic level.
33
How it Works
- There is a correspondence between the places of the second-level structure and the places or transitions of the basic-level structure.
- One possible control rule is that a certain place or transition is removed from the present configuration of the basic level if the corresponding place at the metalevel becomes empty.
- If at least one token appears in an empty metalevel place, then the originally defined corresponding basic-level place or transition is immediately restored to the configuration (a sketch of this rule follows).
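A minimal Python sketch of this control rule, with invented place and transition names; a real metapetrinet would also simulate token firing at both levels.

```python
# Each level is an ordinary Petri net. A metalevel place with zero tokens
# disables (removes) the corresponding basic-level transition, and a token
# appearing there restores it.

basic_transitions = {"t_fill": True, "t_drain": True}   # current configuration
meta_marking = {"m_fill": 1, "m_drain": 0}              # metalevel tokens
conformity = {"m_fill": "t_fill", "m_drain": "t_drain"} # metalevel place -> basic element

def apply_meta_control(meta_marking, conformity, basic_transitions):
    """Reconfigure the basic net according to the metalevel marking."""
    for meta_place, basic_elem in conformity.items():
        basic_transitions[basic_elem] = meta_marking[meta_place] > 0
    return basic_transitions

print(apply_meta_control(meta_marking, conformity, dict(basic_transitions)))
# {'t_fill': True, 't_drain': False} -- t_drain is removed while m_drain is empty
```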
34
Example of a Metapetrinet
35
Controlling Interactions between Metapetrinet’s Levels
36
Published and Further Developed in
Terziyan V., Multilevel Models for Knowledge Bases Control and Their Applications to Automated Information Systems, Doctor of Technical Sciences Degree Thesis, Kharkov State Technical University of Radioelectronics, 1993.
Savolainen V., Terziyan V., Metapetrinets for Controlling Complex and Dynamic Processes, International Journal of Information and Management Sciences, V. 10, No. 1, March 1999, pp. 13-32.
37
Mining Several Databases with an Ensemble of Classifiers
38
Problem
39
Case ONE:MANY
40
Dynamic Integration of Classifiers
- The final classification is made by weighted voting of the classifiers from the ensemble;
- the weights of the classifiers are recalculated for every new instance;
- the weighting is based on the predicted errors of the classifiers in the neighborhood area of the instance (a sketch follows).
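A minimal sketch of such dynamic weighting, assuming the classifiers' errors on the sample instances are already known and the local error is estimated as the mean error over the k nearest neighbors; the data and names are invented for illustration.

```python
import numpy as np

def dynamic_weighted_vote(x_new, X_sample, errors, predictions, k=3):
    """
    X_sample   : (n, d) sample instances
    errors     : (n, m) 0/1 error of each of the m classifiers on each sample instance
    predictions: (m,)   class predicted by each classifier for x_new
    """
    d = np.linalg.norm(X_sample - x_new, axis=1)
    nn = np.argsort(d)[:k]                          # neighborhood of x_new
    predicted_error = errors[nn].mean(axis=0)       # local error of each classifier
    weights = 1.0 - predicted_error                 # low local error -> high weight
    votes = {}
    for cls, w in zip(predictions, weights):
        votes[cls] = votes.get(cls, 0.0) + w
    return max(votes, key=votes.get)

X = np.array([[0., 0.], [1., 0.], [0., 1.], [5., 5.]])
E = np.array([[0, 1], [0, 1], [1, 0], [1, 0]])      # classifier 0 is good near the origin
print(dynamic_weighted_vote(np.array([0.2, 0.1]), X, E,
                            predictions=np.array(["a", "b"])))   # 'a'
```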
41
Case MANY:ONE
42
Integration of Databases
- The final classification of an instance is obtained by weighted voting of the predictions made by the classifier for every database separately;
- the weighting is based on taking the integral of the error function of the classifier across every database.
43
Case MANY:MANY
44
Solutions for MANY:MANY
45
Decontextualization of Predictions
- Sometimes the actual value cannot be predicted as a weighted mean of the individual predictions of the classifiers from the ensemble;
- this means that the actual value lies outside the area of the predictions;
- it happens if the classifiers are affected by the same type of context with different power;
- this results in a trend among the predictions, from the less powerful context to the most powerful one;
- in this case the actual value can be obtained as the result of “decontextualization” of the individual predictions.
46
Neighbor Context Trend
[Figure: at point x_i, the prediction y−(x_i) made in the (1,2) neighbor context (“worse context”) and the prediction y+(x_i) made in the (1,2,3) neighbor context (“better context”) form a trend towards the actual value y(x_i) (“ideal context”).]
47
Main Decontextualization Formula
y− - prediction in the worse context; y+ - prediction in the better context; y′ - decontextualized prediction; y - actual value. With the corresponding errors δ−, δ+ and δ′, the decontextualized error is δ′ = (δ− · δ+)/(δ− + δ+), so that δ′ < δ−; δ′ < δ+; δ+ < δ−.
48
Some Notes
- Dynamic integration of classifiers based on locally adaptive weights of the classifiers allows handling the case «One Dataset - Many Classifiers»;
- integration of databases based on their integral weights relative to classification accuracy allows handling the case «One Classifier - Many Datasets»;
- successive or parallel application of the two above algorithms allows a variety of solutions for the case «Many Classifiers - Many Datasets»;
- decontextualization, as the opposite of weighted voting for integrating classifiers, allows handling the context of classification in the case of a trend.
49
Published in Puuronen S., Terziyan V., Logvinovsky A., Mining Several Data Bases with an Ensemble of Classifiers, In: T. Bench-Capon, G. Soda and M. Tjoa (Eds.), Database and Expert Systems Applications, Lecture Notes in Computer Science, Springer-Verlag, V. 1677, 1999, pp. 882-891.
50
Other Related Publications Terziyan V., Tsymbal A., Puuronen S., The Decision Support System for Telemedicine Based on Multiple Expertise, International Journal of Medical Informatics, Elsevier, V. 49, No.2, 1998, pp. 217-229. Tsymbal A., Puuronen S., Terziyan V., Arbiter Meta-Learning with Dynamic Selection of Classifiers and its Experimental Investigation, In: J. Eder, I. Rozman, and T. Welzer (Eds.), Advances in Databases and Information Systems, Lecture Notes in Computer Science, Springer-Verlag, Vol. 1691, 1999, pp. 205-217. Skrypnik I., Terziyan V., Puuronen S., Tsymbal A., Learning Feature Selection for Medical Databases, In: Proceedings of the 12th IEEE Symposium on Computer-Based Medical Systems CBMS'99, Stamford, CT, USA, June 1999, IEEE CS Press, pp.53-58. Puuronen S., Terziyan V., Tsymbal A., A Dynamic Integration Algorithm for an Ensemble of Classifiers, In: Zbigniew W. Ras, Andrzej Skowron (Eds.), Foundations of Intelligent Systems: 11th International Symposium ISMIS'99, Warsaw, Poland, June 1999, Lecture Notes in Artificial Intelligence, V. 1609, Springer-Verlag, pp. 592-600.
51
An Interval Approach to Discover Knowledge from Multiple Fuzzy Estimations
52
The Problem of Interval Estimation
- Measurements (as well as expert opinions) are not absolutely accurate.
- The measurement result is expected to lie in an interval around the actual value.
- The inaccuracy leads to the need to estimate the resulting inaccuracy of data processing.
- When experts are used to estimate the value of some parameter, intervals are commonly used to describe degrees of belief.
53
Noise of an Interval Estimation
- In many real-life cases there is also some noise which does not allow direct measurement of parameters.
- The noise can be considered an undesirable effect (context) on the evaluation of a parameter.
- Different measurement instruments, as well as different experts, possess different resistance to the influence of noise.
- Using measurements from several different instruments, as well as estimations from multiple experts, we try to discover the effect caused by the noise and thus be able to derive the decontextualized measurement result.
54
Decontextualization of Noise in Pattern Recognition with Multiple Estimations
[Figure: several noisy estimations (1-4) of a pattern are decontextualized into the recognized pattern.]
55
Basic Assumption
- The estimation of some parameter x given by a more accurate knowledge source (i.e. a source that guarantees a smaller upper bound of measurement error) is supposed to be closer to the actual value of x (i.e. the source is more resistant to the noise of estimation).
- This assumption allows us to derive trends in cases where multiple estimations yield successively shorter estimation intervals.
56
Basic Idea of Decontextualization
57
An Example
58
Some Notes
- If you have several opinions (estimations, recognition results, solutions, etc.) with different values of uncertainty, you can select the most precise one; however,
- it seems more reasonable to order the opinions from the worst to the best and try to recognize a trend of uncertainty, which helps you derive an opinion more precise than the best one (see the sketch below).
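For illustration, a minimal Python sketch of this idea on invented intervals: the estimates are ordered by width and the trend of their midpoints is extrapolated to width zero. The linear fit is an assumption of this sketch, not the interval formulas of the papers.

```python
import numpy as np

# Hypothetical interval estimates, from the least to the most accurate source.
intervals = [(2.0, 8.0), (3.5, 7.5), (4.4, 6.8)]
widths = np.array([b - a for a, b in intervals])
mids = np.array([(a + b) / 2 for a, b in intervals])

# Fit the midpoint as a linear function of the interval width and evaluate it
# at width = 0, i.e. at a hypothetical perfectly precise source.
slope, intercept = np.polyfit(widths, mids, 1)
print(intercept)   # extrapolated point estimate beyond the best interval's midpoint
```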
59
Application of the Trend of Uncertainty to Image Restoration
60
Published and Further Developed in Terziyan V., Puuronen S., Kaikova H., Interval-Based Parameter Recognition with the Trends in Multiple Estimations, Pattern Recognition and Image Analysis: Advances of Mathematical Theory and Applications, Interperiodica Publishing, V. 9, No. 4, August 1999. Terziyan V., Puuronen S., Kaikova H., Handling Uncertainty by Decontextualizing Estimated Intervals, In: Proceedings of MISC'99 Workshop on Applications of Interval Analysis to Systems and Control with special emphasis on recent advances in Modal Interval Analysis, 24-26 February 1999, Universitat de Girona, Girona, Spain, pp. 111-121.
61
Flexible Arithmetic for Huge Numbers with Recursive Series of Operations
62
Infinite Series of Arithmetical Operations: the general case
63
Some Results
- A recursive expansion of the set of ordinary arithmetical operations was investigated;
- a recursive arithmetical operation was defined, where n is the level of recursion starting with ordinary + (n = 1);
- the basic properties of recursive operations were investigated, and an algorithm for the calculation of these operations was considered;
- “recursive counters” were proposed for representing huge integers, which result from recursive operations, in restricted memory (a sketch of the recursive operation follows).
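To illustrate the recursion levels, here is a minimal Python sketch of such an operation in the hyperoperation spirit: level n = 1 is ordinary addition and each next level iterates the previous one. It is a plain recursive definition for illustration, not the authors' calculation algorithm or their recursive-counter representation.

```python
def rec_op(a, b, n):
    """Recursive arithmetical operation: n = 1 is +, each next level repeats it."""
    if n == 1:
        return a + b                      # level 1: ordinary addition
    result = a
    for _ in range(b - 1):                # level n: apply level n-1, b-1 more times
        result = rec_op(result, a, n - 1)
    return result

print(rec_op(3, 4, 1))   # 7   (3 + 4)
print(rec_op(3, 4, 2))   # 12  (3 * 4)
print(rec_op(3, 4, 3))   # 81  (3 ** 4)
```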
64
Published in
Terziyan V., Tsymbal A., Puuronen S., Flexible Arithmetic For Huge Numbers with Recursive Series of Operations, In: 13th AAECC Symposium on Applied Algebra, Algebraic Algorithms, and Error-Correcting Codes, 15-19 November 1999, Hawaii, USA.
65
A Similarity Evaluation Technique for Cooperative Problem Solving with a Group of Agents
66
Goal
- The goal of this research is to develop a simple similarity evaluation technique to be used for cooperative problem solving based on the opinions of several agents.
- Problem solving here means finding an appropriate solution for the problem among the available ones, based on the opinions of several agents.
67
Basic Concepts: Virtual Training Environment (VTE)
The VTE of a group of agents is a quadruple <D, C, S, P>, where:
D is the set of problems D1, D2, …, Dn in the VTE;
C is the set of solutions C1, C2, …, Cm that are used to solve the problems;
S is the set of agents S1, S2, …, Sr who select solutions to solve the problems;
P is the set of semantic predicates that define relationships between D, C and S.
68
External Similarity Values
External Similarity Values (ESV): binary relations DC, SC and SD between the elements of (sub)sets of D and C, S and C, and S and D. ESV are based on the total support among all the agents voting for the appropriate connection (or refusing to vote).
69
Internal Similarity Values
Internal Similarity Values (ISV): binary relations between two subsets of D, two subsets of C, and two subsets of S. ISV are based on the total support among all the agents voting for the appropriate connection (or refusing to vote).
70
Why do we Need Similarity Values (or a Distance Measure)?
- Distance between problems is used by agents to recognize the nearest solved problems for any new problem;
- distance between solutions is necessary to compare and evaluate the solutions made by different agents;
- distance between agents is useful to evaluate the weights of all the agents so that they can be integrated by weighted voting (a sketch of support-based voting follows).
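As a small illustration of support-based voting, here is a minimal Python sketch in which the external DC value between a problem and a solution is taken to be the number of agents voting for that connection minus those refusing it. The data and the exact support formula are assumptions of this sketch, not the formula of the paper.

```python
votes = {
    # (agent, problem, solution) -> True if selected, False if refused
    ("S1", "D1", "C1"): True,
    ("S2", "D1", "C1"): True,
    ("S3", "D1", "C1"): False,
    ("S1", "D1", "C2"): False,
}

def esv_dc(votes, problem, solution):
    """Total support for connecting a problem with a solution."""
    support = 0
    for (agent, d, c), selected in votes.items():
        if d == problem and c == solution:
            support += 1 if selected else -1
    return support

print(esv_dc(votes, "D1", "C1"))   # 1: two agents for, one against
```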
71
Deriving External Relation DC: How Well a Solution Fits the Problem
72
Deriving External Relation SC: Measures an Agent's Competence in the Area of Solutions
The value of the relation (Sk, Cj) in a way represents the total support that the agent Sk obtains by selecting (or refusing to select) the solution Cj to solve all the problems.
73
Deriving External Relation SD: Measures an Agent's Competence in the Problem Area
The value of the relation (Sk, Di) represents the total support that the agent Sk receives by selecting (or refusing to select) all the solutions to solve the problem Di.
74
Agent’s Evaluation: Competence Quality in the Problem Area - a measure of the abilities of an agent in the area of problems from the support point of view
75
Agent’s Evaluation: Competence Quality in the Solutions Area - a measure of the abilities of an agent in the area of solutions from the support point of view
76
Quality Balance Theorem
The evaluation of an agent's competence (ranking, weighting, quality evaluation) does not depend on whether the competence area is the “virtual world of problems” or the “conceptual world of solutions”, because both competence values are always equal.
77
Internal Similarity for Agents: Problems-Based Similarity
78
Internal Similarity for Agents: Solutions-Based Similarity
79
Internal Similarity for Agents: Solutions-Problems-Based Similarity
80
Conclusion
- We discussed methods of deriving the total support of each binary similarity relation. This can be used, for example, to derive the most supported solution and to evaluate the agents according to their competence.
- We also discussed relations between elements taken from the same set: problems, solutions, or agents. This can be used, for example, to divide agents into groups of similar competence relative to the problems-solutions environment.
81
Published in
Puuronen S., Terziyan V., A Similarity Evaluation Technique for Cooperative Problem Solving with a Group of Agents, In: M. Klusch, O. M. Shehory, G. Weiss (Eds.), Cooperative Information Agents III, Lecture Notes in Artificial Intelligence, Springer-Verlag, V. 1652, 1999, pp. 163-174.
82
On-Line Incremental Instance-Based Learning
83
The Problems Addressed
The following problems have been investigated, for on-line learning both with human experts and with artificial predictors:
- how to derive the most supported knowledge (on-line prediction or classification) from multiple experts (an ensemble of classifiers);
- how to evaluate the quality of the most supported opinion (of the ensemble prediction);
- how to make, evaluate, use and refine the ranks (weights) of all the experts (predictors) to improve the results of the on-line learning algorithm (a sketch of rank refinement follows).
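As an illustration of the third problem, here is a minimal Python sketch of one plausible rank-refinement scheme in the weighted-majority spirit; the multiplicative update is an assumption of this sketch, not necessarily one of the two strategies studied in the papers.

```python
def online_vote_and_refine(ranks, expert_votes, actual, beta=0.5):
    """ranks: dict expert -> weight; expert_votes: dict expert -> predicted class."""
    votes = {}
    for expert, cls in expert_votes.items():
        votes[cls] = votes.get(cls, 0.0) + ranks[expert]
    prediction = max(votes, key=votes.get)        # most supported opinion
    for expert, cls in expert_votes.items():      # refine ranks from feedback
        if cls != actual:
            ranks[expert] *= beta                 # demote experts that voted wrong
    return prediction, ranks

ranks = {"e1": 1.0, "e2": 1.0, "e3": 1.0}
pred, ranks = online_vote_and_refine(
    ranks, {"e1": "ok", "e2": "fault", "e3": "ok"}, actual="fault")
print(pred, ranks)   # 'ok' was predicted; e1 and e3 are demoted afterwards
```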
84
Results Published in
Kaikova H., Terziyan V., Temporal Knowledge Acquisition From Multiple Experts, In: Shoval P. & Silberschatz A. (Eds.), Proceedings of NGITS’97 - The Third International Workshop on Next Generation Information Technologies and Systems, Neve Ilan, Israel, June - July, 1997, pp. 44-55.
Puuronen S., Terziyan V., Omelayenko B., Experimental Investigation of Two Rank Refinement Strategies for Voting with Multiple Experts, Artificial Intelligence, Donetsk Institute of Artificial Intelligence, V. 2, 1998, pp. 25-41.
Omelayenko B., Terziyan V., Puuronen S., Managing Training Examples for Fast Learning of Classifiers Ranks, In: CSIT’99 - International Workshop on Computer Science and Information Technologies, January 1999, Moscow, Russia, pp. 139-148.
Puuronen S., Terziyan V., Omelayenko B., Multiple Experts Voting: Two Rank Refinement Strategies, In: Integrating Technology & Human Decisions: Global Bridges into the 21st Century, Proceedings of the D.S.I.’99 5th International Conference, 4-7 July 1999, Athens, Greece, V. 1, pp. 634-636.
85
We will be Happy to Cooperate with You!
86
Advanced Diagnostics Algorithms in Online Field Device Monitoring Vagan Terziyan (editor) http://www.cs.jyu.fi/ai/Metso_Diagnostics.ppt “Industrial Ontologies” Group: http://www.cs.jyu.fi/ai/OntoGroup/index.html “Industrial Ontologies” Group, Agora Center, University of Jyväskylä, 2003
87
Contents
- Introduction: OntoServ.Net – Global “Health-Care” Environment for Industrial Devices;
- Bayesian Metanetworks for Context-Sensitive Industrial Diagnostics;
- Temporal Industrial Diagnostics with Uncertainty;
- Dynamic Integration of Classification Algorithms for Industrial Diagnostics;
- Industrial Diagnostics with Real-Time Neuro-Fuzzy Systems;
- Conclusion.
88
Vagan Terziyan, Oleksiy Khriyenko, Oleksandr Kononenko, Andriy Zharko
89
Web Services for Smart Devices
Smart industrial devices can also be Web Service “users”. Their embedded agents are able to monitor the state of the device, and to communicate and exchange data with other agents. There is a good reason to launch special Web Services for such smart industrial devices to provide the necessary online condition monitoring, diagnostics, maintenance support, etc.
OntoServ.Net: “Semantic Web Enabled Network of Maintenance Services for Smart Devices”, Industrial Ontologies Group, Tekes Project Proposal, March 2003.
90
Global Network of Maintenance Services
OntoServ.Net: “Semantic Web Enabled Network of Maintenance Services for Smart Devices”, Industrial Ontologies Group, Tekes Project Proposal, March 2003.
91
Embedded Maintenance Platforms
Based on the online diagnostics, a service agent selected for the specific emergency situation moves to the embedded platform to help the host agent manage it and to carry out the predictive maintenance activities.
[Figure: a maintenance service, its service agents, and a host agent on the embedded platform.]
92
OntoServ.Net Challenges
- A new group of Web service users: smart industrial devices.
- Internal (embedded) and external (Web-based) agent-enabled service platforms.
- The “Mobile Service Component” concept supposes that any service component can move, be executed and learn at any platform in the Service Network, including the service requestor side.
- The Semantic Peer-to-Peer concept for service network management assumes ontology-based decentralized service network management.
93
Agents in Semantic Web
1. “I feel bad, pressure more than 200, headache, … Who can advise what to do?”
2. “I think you should stop drinking beer for a while.”
3. “Wait a bit, I will give you some pills.”
4. “Never had such experience. No idea what to do.”
Agents in the Semantic Web are supposed to understand each other because they will share a common standard, platform, ontology and language.
94
The Challenge: Global Understanding eNvironment (GUN)
How can entities from our physical world be made to understand each other when necessary? … It's elementary! But not easy!! Just make agents of them!!!
95
GUN Concept
Entities will interoperate through OntoShells, which are “supplements” that raise these entities to the level of Semantic Web enabled agents.
1. “I feel bad, temperature 40, pain in stomach, … Who can advise what to do?”
2. “I have some pills for you.”
96
Semantic Web: Before GUN
Semantic Web applications “understand”, (re)use, share and integrate Semantic Web resources.
97
GUN Concept: All GUN resources “understand” each other
Real World Object + OntoAdapter + OntoShell = GUN Resource. Real World objects of the new generation come with the OntoAdapter inside.
98
Read Our Reports
- Semantic Web: The Future Starts Today (a collection of research papers and presentations of the Industrial Ontologies Group for the period November 2002 - April 2003)
- Semantic Web and Peer-to-Peer: Integration and Interoperability in Industry
- Semantic Web Enabled Web Services: State-of-Art and Challenges
- Distributed Mobile Web Services Based on Semantic Web: Distributed Industrial Product Maintenance System
Available online at: http://www.cs.jyu.fi/ai/OntoGroup/index.html
Industrial Ontologies Group: V. Terziyan, A. Zharko, O. Kononenko, O. Khriyenko
99
Vagan Terziyan, Oleksandra Vitko
100
Example of a Simple Bayesian Network
- Conditional (in)dependence rule: P(Y|X,Z) = P(Y|X) when Y is conditionally independent of Z given X;
- joint probability rule: P(X,Y) = P(Y|X)·P(X);
- marginalization rule: P(Y) = Σ_X P(X,Y);
- Bayesian rule: P(X|Y) = P(Y|X)·P(X)/P(Y).
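For concreteness, a minimal numeric illustration of these rules in Python on a hypothetical two-node network A -> B with invented probabilities.

```python
P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: {True: 0.9, False: 0.1},    # P(B|A=a)[b]
               False: {True: 0.2, False: 0.8}}

# Joint probability rule: P(A,B) = P(B|A) * P(A)
P_AB = {(a, b): P_B_given_A[a][b] * P_A[a] for a in P_A for b in (True, False)}

# Marginalization rule: P(B) = sum over a of P(A=a, B)
P_B = {b: sum(P_AB[(a, b)] for a in P_A) for b in (True, False)}

# Bayesian rule: P(A|B=True) = P(B=True|A) * P(A) / P(B=True)
P_A_given_B = {a: P_B_given_A[a][True] * P_A[a] / P_B[True] for a in P_A}

print(P_B[True])            # 0.9*0.3 + 0.2*0.7 = 0.41
print(P_A_given_B[True])    # 0.27 / 0.41 ≈ 0.659
```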
101
Contextual and Predictive Attributes
[Figure: a machine X with predictive attributes x1, …, x7 measured by sensors, and contextual attributes of the environment: air pressure, dust, humidity, temperature, emission.]
102
Contextual Effect on Conditional Probability
Assume a conditional dependence between predictive attributes x_k and x_r (a causal relation between physical quantities). Some contextual attribute x_t may directly affect this conditional dependence between the predictive attributes, but not the attributes themselves.
103
Contextual Effect on Conditional Probability
X = {x1, x2, …, xn} – predictive attribute with n values;
Z = {z1, z2, …, zq} – contextual attribute with q values;
P(Y|X) = {p1(Y|X), p2(Y|X), …, pr(Y|X)} – conditional dependence attribute (a random variable) between X and Y with r possible values;
P(P(Y|X)|Z) – conditional dependence between attribute Z and attribute P(Y|X).
104
Contextual Effect on Unconditional Probability
Assume some predictive attribute x_k is a random variable with an appropriate probability distribution P(X) over its values. Some contextual attribute x_t may directly affect the probability distribution of the predictive attribute.
105
Contextual Effect on Unconditional Probability
X = {x1, x2, …, xn} – predictive attribute with n values;
Z = {z1, z2, …, zq} – contextual attribute with q values, and P(Z) – probability distribution for the values of Z;
P(X) = {p1(X), p2(X), …, pr(X)} – probability distribution attribute for X (a random variable) with r possible values (different possible probability distributions for X), and P(P(X)) – probability distribution over the values of attribute P(X);
P(Y|X) – conditional probability distribution of Y given X;
P(P(X)|Z) – conditional probability distribution for attribute P(X) given Z.
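A minimal Python sketch of this two-level computation with invented numbers: the contextual level mixes the candidate distributions for X, and the predictive level then yields P(Y).

```python
P_Z = {"z1": 0.6, "z2": 0.4}
candidate_PX = {"p1": {"x1": 0.8, "x2": 0.2},        # possible distributions for X
                "p2": {"x1": 0.3, "x2": 0.7}}
P_PX_given_Z = {"z1": {"p1": 0.9, "p2": 0.1},        # P(P(X)|Z)
                "z2": {"p1": 0.2, "p2": 0.8}}
P_Y_given_X = {"x1": {"y1": 0.7, "y2": 0.3},
               "x2": {"y1": 0.1, "y2": 0.9}}

# Contextual level: effective P(X) is a mixture of the candidate distributions
P_X = {x: sum(P_Z[z] * P_PX_given_Z[z][p] * candidate_PX[p][x]
              for z in P_Z for p in candidate_PX)
       for x in ("x1", "x2")}

# Predictive level: P(Y) = sum over x of P(Y|X=x) * P(X=x)
P_Y = {y: sum(P_Y_given_X[x][y] * P_X[x] for x in P_X) for y in ("y1", "y2")}
print(P_X, P_Y)   # P_X = {'x1': 0.61, 'x2': 0.39}
```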
106
Bayesian Metanetworks for Advanced Diagnostics Terziyan V., Vitko O., Probabilistic Metanetworks for Intelligent Data Analysis, Artificial Intelligence, Donetsk Institute of Artificial Intelligence, Vol. 3, 2002, pp. 188-197. Terziyan V., Vitko O., Bayesian Metanetwork for Modelling User Preferences in Mobile Environment, In: German Conference on Artificial Intelligence (KI-2003), Hamburg, Germany, September 15-18, 2003.
107
Two-level Bayesian Metanetwork for managing conditional dependencies
108
Causal Relation between Conditional Probabilities
There might be a causal relationship between two pairs of conditional probabilities: the conditional probability P(X_r|X_k), with possible values P1(X_r|X_k), P2(X_r|X_k), …, may depend on the conditional probability P(X_n|X_m), with possible values P1(X_n|X_m), P2(X_n|X_m), P3(X_n|X_m), …, through P(P(X_r|X_k)|P(X_n|X_m)).
109
Example of Bayesian Metanetwork
The nodes of the 2nd-level network correspond to the conditional probabilities of the 1st-level network, P(B|A) and P(Y|X). The arc in the 2nd-level network corresponds to the conditional probability P(P(Y|X)|P(B|A)).
110
Other Cases of Bayesian Metanetwork (1) Unconditional probability distributions associated with nodes of the predictive level network depend on probability distributions associated with nodes of the contextual level network
111
Other Cases of Bayesian Metanetwork (2) The metanetwork on the contextual level models conditional dependence particularly between unconditional and conditional probabilities of the predictive level
112
Other Cases of Bayesian Metanetwork (3) The combination of cases 1 and 2
113
2-level Relevance Bayesian Metanetwork (for modelling relevant feature selection)
114
Simple Relevance Bayesian Metanetwork
We consider relevance as the probability that a variable is important to the inference of the target attribute in the given context. With such a definition, relevance inherits all the properties of a probability.
115
Example of a 2-level Relevance Bayesian Metanetwork
In a relevance network, the relevancies are considered random variables between which conditional dependencies can be learned.
116
More Complicated Case of Managing Relevance (1)
117
More Complicated Case of Managing Relevance (2)
118
General Case of Managing Relevance (1)
Predictive attributes: X1 with values {x1_1, x1_2, …, x1_nx1}; X2 with values {x2_1, x2_2, …, x2_nx2}; …; XN with values {xN_1, xN_2, …, xN_nxN}.
Target attribute: Y with values {y1, y2, …, yny}.
Probabilities: P(X1), P(X2), …, P(XN); P(Y|X1, X2, …, XN).
Relevancies: ψX1 = P(ψ(X1) = “yes”); ψX2 = P(ψ(X2) = “yes”); …; ψXN = P(ψ(XN) = “yes”).
Goal: to estimate P(Y).
119
General Case of Managing Relevance (2)
120
Example of Relevance Metanetwork
[Figure: the relevance level controls the predictive level of the metanetwork.]
121
Combined Bayesian Metanetwork
In a combined metanetwork, two controlling (contextual) levels affect the basic level.
122
Learning Bayesian Metanetworks from Data
- Learning the Bayesian metanetwork structure (conditional, contextual and relevance (in)dependencies at each level);
- learning the Bayesian metanetwork parameters (conditional and unconditional probabilities and relevancies at each level).
Vitko O., Multilevel Probabilistic Networks for Modelling Complex Information Systems under Uncertainty, Ph.D. Thesis, Kharkov National University of Radioelectronics, June 2003. Supervisor: Terziyan V.
123
When Bayesian Metanetworks?
1. A Bayesian metanetwork can be considered a very powerful tool in cases where the structure (or strengths) of the causal relationships between the observed parameters of an object essentially depends on context (e.g. external environment parameters);
2. it can also be considered a useful model for an object whose diagnosis depends on different sets of observed parameters, depending on the context.
124
Vagan Terziyan, Vladimir Ryabov
125
Temporal Diagnostics of Field Devices
The approach to temporal diagnostics uses the algebra of uncertain temporal relations*. Uncertain temporal relations are formalized using a probabilistic representation. Relational networks are composed of uncertain relations between some events (a set of symptoms). A number of relational networks can be combined into a temporal scenario describing some particular course of events (a diagnosis). Subsequently, a newly composed relational network can be compared with the existing temporal scenarios, and the probabilities of belonging to each particular scenario are derived.
* Ryabov V., Puuronen S., Terziyan V., Representation and Reasoning with Uncertain Temporal Relations, In: A. Kumar and I. Russel (Eds.), Proceedings of the Twelfth International Florida AI Research Society Conference - FLAIRS-99, AAAI Press, California, 1999, pp. 449-453.
126
Conceptual Schema for Temporal Diagnostics
Generating temporal scenarios: we compose a temporal scenario S by combining a number of relational networks (N1, …, N5) consisting of the same set of symptoms and possibly different temporal relations between them.
Recognition of temporal scenarios: we estimate the probability that a particular relational network N belongs to each of the known temporal scenarios S1, S2, …, Sn.
Terziyan V., Ryabov V., Abstract Diagnostics Based on Uncertain Temporal Scenarios, International Conference on Computational Intelligence for Modelling Control and Automation CIMCA’2003, Vienna, Austria, 12-14 February 2003, 6 pp.
127
Industrial Temporal Diagnostics (conceptual schema)
[Figure: temporal data from an industrial object is composed into a relational network; estimation and recognition against a DB of scenarios yield the diagnosis, and learning updates the DB.]
Ryabov V., Terziyan V., Industrial Diagnostics Using Algebra of Uncertain Temporal Relations, IASTED International Conference on Artificial Intelligence and Applications, Innsbruck, Austria, 10-13 February 2003, 6 pp.
128
Imperfect Relation Between Temporal Point Events: Definition
An imperfect temporal relation between temporal point events Event 1 and Event 2 is given by: P(event 1, before, event 2) = a1; P(event 1, same time, event 2) = a2; P(event 1, after, event 2) = a3.
Ryabov V., Handling Imperfect Temporal Relations, Ph.D. Thesis, University of Jyvaskyla, December 2002. Supervisors: Puuronen S., Terziyan V.
129
Example of Imperfect Relation
An imperfect temporal relation between temporal points: P(event 1, before, event 2) = 0.5; P(event 1, same time, event 2) = 0.2; P(event 1, after, event 2) = 0.3.
[Figure: the relation R(Event 1, Event 2) shown as a probability distribution over <, =, >.]
130
Operations for Reasoning with Temporal Relations
Three operations over uncertain relations between events a, b, c: inversion (deriving r_b,a from r_a,b), sum (combining two relations between the same pair of events), and composition (r_a,c = r_a,b ∘ r_b,c).
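For illustration, a minimal Python sketch of two of these operations on uncertain point relations represented as probability vectors over (before, same time, after); the normalized product used for the sum is a simplification, not the exact algebra of the papers.

```python
import numpy as np

BEFORE, SAME, AFTER = 0, 1, 2

def inversion(r):
    """~r: swap the probabilities of 'before' and 'after'."""
    return np.array([r[AFTER], r[SAME], r[BEFORE]])

def rel_sum(r1, r2):
    """Combine two uncertain relations between the same pair of events."""
    s = r1 * r2                       # keep the mass the two relations agree on
    return s / s.sum()                # renormalize to a probability vector

r_ab = np.array([0.5, 0.2, 0.3])      # the example relation from the slide above
r_ab2 = np.array([0.6, 0.3, 0.1])     # a second, independent estimate
print(inversion(r_ab))                # [0.3 0.2 0.5]
print(rel_sum(r_ab, r_ab2))           # normalized elementwise product
```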
131
Temporal Interval Relations
The basic interval relations are Allen's thirteen relations:
A before (b) B / B after (bi) A
A meets (m) B / B met-by (mi) A
A overlaps (o) B / B overlapped-by (oi) A
A starts (s) B / B started-by (si) A
A during (d) B / B contains (di) A
A finishes (f) B / B finished-by (fi) A
A equals (eq) B / B equals A
132
Imperfect Relation Between Temporal Intervals: Definition
An imperfect temporal relation between temporal intervals interval 1 and interval 2 is given by: P(interval 1, before, interval 2) = a1; P(interval 1, meets, interval 2) = a2; P(interval 1, overlaps, interval 2) = a3; …; P(interval 1, equals, interval 2) = a13.
133
Industrial Temporal Diagnostics (composing a network of relations)
[Figure: temporal relations between the symptoms reported by Sensors 1-3 of an industrial object are estimated and composed into a relational network representing the particular case.]
134
Industrial Temporal Diagnostics (generating temporal scenarios)
Relational networks N1, N2, N3 (from Objects A, B, C) are combined into Scenario S in the DB of scenarios. Generating the temporal scenario for “Failure X”:
1. for i = 1 to n do
2.   for j = i+1 to n do
3.     if ∃(R1_ij) or … or ∃(Rk_ij) then
4.     begin
5.       for g = 1 to k do
6.         if not ∃(Rg_ij) then Reasoning(Rg_ij)
7.         // if “Reasoning” = False then (Rg_ij) = TUR
8.       (R_ij) = ⊕(Rt_ij), where t = 1,…,k
9.     end
10.    else go to line 2
135
Recognition of Temporal Scenario
[Figure: each uncertain relation R_A,B is reduced to a balance point Bal(R_A,B), computed from its probability values over the ordered Allen relations (b, m, o, fi, di, si, eq, s, d, f, oi, mi, bi) with weights such as w_b = 0, w_eq = 0.5, w_f = 0.75, w_bi = 1; balance points for R_A,B and R_C,D are then compared.]
136
When Temporal Diagnostics?
1. Temporal diagnostics considers not only a static set of symptoms, but also the time during which they were monitored. This often gives a broader view of the situation, and sometimes only the temporal relations between different symptoms can give us a hint towards a precise diagnosis;
2. this approach might be useful, for example, in cases when the appropriate causal relationships between events (symptoms) are not yet known and the only thing available for study is the temporal relationships;
3. a combination of Bayesian diagnostics (based on probabilistic causal knowledge) and temporal diagnostics would be quite a powerful diagnostic tool.
137
Vagan Terziyan
Terziyan V., Dynamic Integration of Virtual Predictors, In: L.I. Kuncheva, F. Steimann, C. Haefke, M. Aladjem, V. Novak (Eds), Proceedings of the International ICSC Congress on Computational Intelligence: Methods and Applications - CIMA'2001, Bangor, Wales, UK, June 19-22, 2001, ICSC Academic Press, Canada/The Netherlands, pp. 463-469.
138
The Problem
During the past several years, in a variety of application domains, researchers in machine learning, computational learning theory, pattern recognition and statistics have tried to combine efforts to learn how to create and combine an ensemble of classifiers. The primary goal of combining several classifiers is to obtain a more accurate prediction than can be obtained from any single classifier alone.
139
Approaches to Integrating Multiple Classifiers
[Diagram: integrating multiple classifiers splits into Selection – global (static) or local (dynamic) – and Combination – global (voting-type), local (“virtual” classifier), or decontextualization.]
140
Inductive Learning with Integration of Predictors
[Figure: sample instances from the learning environment train the predictors/classifiers P1, P2, …, Pn, whose outputs are integrated into the prediction y_t.]
141
Virtual Classifier
A Virtual Classifier is a group of seven cooperative agents:
TC - Team Collector
TM - Training Manager
TP - Team Predictor
TI - Team Integrator
FS - Feature Selector
DE - Distance Evaluator
CL - Classification Processor
142
Classification Team: Feature Selector (FS)
143
Feature Selector: finds the minimally sized feature subset that is sufficient for correct classification of the instance.
144
Classification Team: Distance Evaluator (DE)
145
Distance between Two Instances with Heterogeneous Attributes (example)
where: d(“red”, “yellow”) = 1; d(15°, 25°) = 10°/((+50°) - (-50°)) = 0.1
146
Distance Evaluator: measures the distance between instances based on their numerical or nominal attribute values.
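A minimal Python sketch of such a heterogeneous distance, matching the example above: nominal attributes contribute 0/1 and numeric attributes the range-normalized absolute difference. Summing the per-attribute distances into a single value is an assumption of this sketch.

```python
def heterogeneous_distance(x, y, ranges):
    """x, y: attribute tuples; ranges: None for nominal, (lo, hi) for numeric."""
    total = 0.0
    for xi, yi, rng in zip(x, y, ranges):
        if rng is None:                               # nominal attribute
            total += 0.0 if xi == yi else 1.0
        else:                                         # numeric attribute
            lo, hi = rng
            total += abs(xi - yi) / (hi - lo)
    return total

# d("red","yellow") = 1 and d(15, 25) = 10/100 = 0.1, as in the example above
print(heterogeneous_distance(("red", 15.0), ("yellow", 25.0),
                             [None, (-50.0, 50.0)]))   # 1.1
```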
147
Classification Team: Classification Processor (CL)
148
Classification Processor: predicts the class of a new instance based on its selected features and its location relative to the sample instances.
149
Team Instructors: Team Collector (TC) - completes Classification Teams for training
150
Team Collector completes classification teams (FS_i, DE_j, CL_k) for future training, drawing from the available feature selection methods, distance evaluation functions and classification rules.
151
Team Instructors: Training Manager (TM) - trains all completed teams on sample instances
152
Training Manager trains all the completed classification teams (FS_i1, DE_j1, CL_k1), …, (FS_in, DE_jn, CL_kn) on the sample instances, producing sample metadata.
153
Team Instructors: Team Predictor (TP) - predicts weights for every classification team in a certain location
154
Team Predictor (e.g. a WNN algorithm) predicts the weights of the classification teams for a given location, based on the sample metadata.
155
Team Prediction: Locality Assumption
Each team has certain subdomains in the space of instance attributes where it is more reliable than the others. This assumption is supported by experience: classifiers usually work well not only at certain points of the domain space, but in certain subareas of it [Quinlan, 1993]. If a team does not work well with the instances near a new instance, then it is quite probable that it will not work well with this new instance either.
156
Team Instructors: Team Integrator (TI) - produces the classification result for a new instance by integrating the appropriate outcomes of the learned teams
157
Team Integrator produces the classification result y_t for a new instance by integrating the outcomes y_t1, y_t2, … of the learned classification teams (FS_i1, DE_j1, CL_k1), …, (FS_in, DE_jn, CL_kn), weighted by the teams' predicted weights in the location of the new instance.
158
Static Selection of a Classifier
Static selection means that we try all the teams on a sample set and, for further classification, select the one that achieved the best classification accuracy over the whole sample set. Thus we select a team only once and then use it to classify all new domain instances.
159
Dynamic Selection of a Classifier
Dynamic selection means that a team is selected for every new instance separately, depending on where this instance is located. If it has been predicted that a certain team can classify this new instance better than the other teams, then that team is used. In such a case we say that the new instance belongs to the “competence area” of that classification team (a sketch follows).
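A minimal sketch of this dynamic selection, assuming the team weights in a location are predicted as local accuracies over the k nearest sample instances; the paper's WNN-based predictor is more elaborate.

```python
import numpy as np

def dynamic_select(x_new, X_sample, team_correct, k=3):
    """team_correct: (n, teams) 1 if the team classified sample instance i correctly."""
    d = np.linalg.norm(X_sample - x_new, axis=1)
    nn = np.argsort(d)[:k]                           # neighborhood of the new instance
    local_accuracy = team_correct[nn].mean(axis=0)   # predicted team weights
    return int(np.argmax(local_accuracy))            # index of the selected team

X = np.array([[0., 0.], [1., 1.], [4., 4.], [5., 5.]])
C = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])       # team 0 is competent near the origin
print(dynamic_select(np.array([0.5, 0.5]), X, C))    # 0
```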
160
Conclusion
- Knowledge discovery with an ensemble of classifiers is known to be more accurate than with any single classifier alone [e.g. Dietterich, 1997].
- If a classifier in effect consists of a certain feature selection algorithm, distance evaluation function and classification rule, then why not consider these parts as ensembles too, making the classifier itself more flexible?
- We expect that classification teams assembled from different feature selection, distance evaluation and classification methods will be more accurate than any ensemble of known classifiers alone, and we focus our research and implementation on this assumption.
161
Yevgeniy Bodyanskiy, Volodymyr Kushnaryov
162
Online Stochastic Fault Prediction
Control Systems Research Laboratory, AI Department, Kharkov National University of Radioelectronics. Head: Prof. E. Bodyanskiy. Carries out research on the development of mathematical and algorithmic support for systems for control, diagnostics, forecasting and emulation:
1. Neural network architectures and real-time algorithms for observation and sensor data processing (smoothing, filtering, prediction) under substantial uncertainty;
2. neural networks in polyharmonic sequence analysis with unknown non-stationary parameters;
3. analysis of chaotic time series; adaptive algorithms and neural network architectures for early fault detection and diagnostics of stochastic processes;
4. adaptive multivariable predictive control algorithms for stochastic systems under various types of constraints;
5. adaptive neuro-fuzzy control of non-stationary nonlinear systems;
6. adaptive forecasting of non-stationary nonlinear time series by means of neuro-fuzzy networks;
7. fast real-time adaptive learning procedures for various types of neural and neuro-fuzzy networks.
Bodyanskiy Y., Vorobyov S., Recurrent Neural Network Detecting Changes in the Properties of Non-Linear Stochastic Sequences, Automation and Remote Control, V. 1, No. 7, 2000, pp. 1113-1124.
Bodyanskiy Y., Vorobyov S., Cichocki A., Adaptive Noise Cancellation for Multi-Sensory Signals, Fluctuation and Noise Letters, V. 1, No. 1, 2001, pp. 12-23.
Bodyanskiy Y., Kolodyazhniy V., Stephan A., An Adaptive Learning Algorithm for a Neuro-Fuzzy Network, In: B. Reusch (ed.), Computational Intelligence. Theory and Applications, Berlin-Heidelberg-New York: Springer, 2001, pp. 68-75.
163
Existing Tools
Most existing (neuro-)fuzzy systems used for fault diagnosis or classification are based on offline learning with the use of genetic algorithms or modifications of error back-propagation. When the number of features and possible fault situations is large, tuning of the classifying system becomes very time consuming. Moreover, such systems perform very poorly in high dimensions of the input space, so special modifications of the known architectures are required.
164
Neuro-Fuzzy Fault Diagnostics
Successful application of neuro-fuzzy synergism to the fault diagnosis of complex systems demands the development of an online diagnosing system that quickly learns from examples even with a large amount of data, and maintains high processing speed and high classification accuracy when the number of features is large as well.
165
Challenge: Growing (Learning) Probabilistic Neuro-Fuzzy Network (1)
[Figure: network architecture - an input layer with n inputs, a 1st hidden layer with N neurons, a 2nd hidden layer with (m+1) elements, and an output layer with m divisors.]
Bodyanskiy Ye., Gorshkov Ye., Kolodyazhniy V., Wernstedt J., Probabilistic Neuro-Fuzzy Network with Non-Conventional Activation Functions, In: Knowledge-Based Intelligent Information & Engineering Systems, Proceedings of the Seventh International Conference KES’2003, 3-5 September, Oxford, United Kingdom, LNAI, Springer-Verlag, 2003.
Bodyanskiy Ye., Gorshkov Ye., Kolodyazhniy V., Resource-Allocating Probabilistic Neuro-Fuzzy Network, In: Proceedings of the International Conference on Fuzzy Logic and Technology, 10-12 September, Zittau, Germany, 2003.
166
Challenge: Growing (Learning) Probabilistic Neuro-Fuzzy Network (2)
A unique combination of features:
- implements fuzzy reasoning and classification (fuzzy classification network);
- automatically creates neurons based on the training set (growing network);
- learns the free parameters of the network based on the training set (learning network);
- guarantees high classification precision based on fast learning (high-performance network);
- able to process huge volumes of data with limited computational resources (powerful and economical network);
- able to work in real time (real-time network).
Tested on real data in comparison with a classical probabilistic neural network.
167
Tests for Neuro-Fuzzy Algorithms
The Industrial Ontologies Group (Kharkov Branch), the Data Mining Research Group and the Control Systems Research Laboratory of the Artificial Intelligence Department of Kharkov National University of Radioelectronics have essential theoretical and practical experience in implementing the neuro-fuzzy approach, and specifically real-time probabilistic neuro-fuzzy systems for simulation, modeling, forecasting, diagnostics, clustering and control. We are interested in cooperation with Metso in this area, and we are ready to demonstrate the performance of our algorithms on real data taken from any of Metso's products, comparing them with the algorithms Metso already uses.
168
Inventions we can offer (1)
- A method of intelligent preventive or predictive diagnostics and forecasting of the technical condition of industrial equipment, machines, devices, systems, etc. in real time, based on the analysis of non-stationary stochastic signals (e.g. from sensors of temperature, pressure, current, shifting, frequency, energy consumption, and other parameters with threshold values).
- The method is based on advanced data mining techniques, which utilize fuzzy-neuro technologies, and differs from existing tools by a flexible self-organizing network structure and by the optimization of computational resources during learning.
169
Inventions we can offer (2)
- A method of intelligent real-time preventive or predictive diagnostics and forecasting of the technical condition of industrial equipment, machines, devices, systems, etc., based on the analysis of signals with non-stationary and non-multiplied periodical components (e.g. from sensors of vibration, noise, frequencies of rotation, current, voltage, etc.).
- The method is based on the optimization of computational resources during learning, through an intelligent reduction of the number of signal components being analyzed.
170
Inventions we can offer (3)
- A method and mechanism for the optimal control of the dosage and real-time infusion of anti-wear oil additives into industrial machines, based on their real-time condition monitoring.
171
Summary of problems we can solve
- A rather global system for condition monitoring and preventive maintenance based on OntoServ.Net technologies (global, agent-based, ontology-based, Semantic Web services-based, semantic P2P search-based) and on modern, advanced data-mining methods and tools, with knowledge creation, warehousing and updating not only during a device's lifetime, but also utilizing (for various maintenance needs) knowledge obtained afterwards from broken-down, worn-out or aged components of the same type, through testing and investigation techniques other than the information taken from a “living” device's sensors.
172
Recently Performed Case Studies (1)
- Ontology Development for Gas Compressing Equipment Diagnostics Realized by Neural Networks
- Available at: http://www.cs.jyu.fi/ai/OntoGroup/docs/July2003.pdf
Volodymyr Kushnaryov, Semen Simkin
173
Recently Performed Case Studies (2)
- The Use of Ontologies for Faults and State Description of Gas-Transfer Units
- Available at: http://www.cs.jyu.fi/ai/OntoGroup/docs/July2003.pdf
Volodymyr Kushnaryov, Konstantin Tatarnikov
175
Conclusion
The Industrial Ontologies Research Group (University of Jyvaskyla), which is piloting the OntoServ.Net concept of a global Semantic Web-based system for industrial maintenance, also has powerful branches in Kharkov (e.g. the IOG Kharkov Branch, the Control Systems Research Laboratory, the Data Mining Research Group, etc.), with experts and experience in various challenging areas (data mining and knowledge discovery, online diagnostics, forecasting and control, model learning and integration, etc.), which can reasonably be utilized within the ongoing cooperation between Metso and the Industrial Ontologies Group.
176
Find out about our recent activities at:
- http://www.cs.jyu.fi/ai/OntoGroup/projects.htm
- http://www.cs.jyu.fi/ai/vagan/papers.html