
1 Qualitative Induction Dorian Šuc and Ivan Bratko Artificial Intelligence Laboratory Faculty of Computer and Information Science University of Ljubljana, Slovenia

2 Overview
–Learning of qualitative models
–Our learning problem: qualitative trees and qualitatively constrained functions
–Learning of qualitatively constrained functions
–Learning of qualitative trees (ep-QUIN, QUIN)
–QUIN in skill reconstruction (container crane)
–Conclusions and further work

3 Learning of qualitative models
Motivation: building a qualitative model is a time-consuming process that requires significant knowledge
Learning from examples of a system's behaviour:
–GENMODEL (Coiera, 89); KARDIO (Mozetič, 87; Bratko et al., 89)
–MISQ (Kraan et al., 91; Richards et al., 92)
–ILP approaches (Bratko et al., 91; Džeroski & Todorovski, 93)
Learning of QDE or logical models

4 Our approach
Inductive learning of qualitative trees from numerical examples; the qualitatively constrained functions are based on the qualitative proportionality predicates of QPT (Forbus, 84)
Motivation for learning qualitative trees: experiments with reconstruction of human control skill and qualitative control strategies (crane, acrobot; Šuc and Bratko 99, 00)

5 Learning problem
A usual classification learning setting, but with qualitative trees as the induced models:
–leaves contain qualitatively constrained functions (QCFs); a QCF constrains how the class changes in response to changes in the attributes
–internal nodes (splits) define a partition of the attribute space into areas with common qualitative behaviour of the class variable

6 A qualitative tree example
A qualitative tree for the function z = x² - y². For positive x and y, z is monotonically increasing in its dependence on x and monotonically decreasing in its dependence on y; that is, z is positively related to x and negatively related to y.
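The behaviour of such a tree can be checked numerically. Below is a minimal, hypothetical sketch (not QUIN's actual representation): leaves hold QCF signs chosen by the signs of x and y, and a prediction combines the signed influences of the attribute changes.

```python
# Hypothetical sketch of a qualitative tree for z = x**2 - y**2.
# In a leaf, +1 means z increases with that attribute, -1 that it decreases.

def leaf_qcf(x, y):
    """Pick the QCF leaf by the signs of x and y (the tree's splits)."""
    sx = 1 if x > 0 else -1
    sy = 1 if y > 0 else -1
    # z = x^2 - y^2: dz/dx = 2x and dz/dy = -2y, so the leaf signs are:
    return {"x": sx, "y": -sy}

def predict_dz(x, y, dx_sign, dy_sign):
    """Predicted qualitative change of z: +1, -1, 0, or None if ambiguous."""
    qcf = leaf_qcf(x, y)
    contribs = {qcf["x"] * dx_sign, qcf["y"] * dy_sign} - {0}
    if not contribs:
        return 0                  # nothing changed
    if len(contribs) == 1:
        return contribs.pop()     # all influences agree
    return None                   # opposing influences: ambiguous

# In the region x > 0, y > 0 the leaf is z = M+,-(x, y):
print(predict_dz(2.0, 3.0, +1, 0))   # x increases -> z increases: 1
print(predict_dz(2.0, 3.0, 0, +1))   # y increases -> z decreases: -1
print(predict_dz(2.0, 3.0, +1, +1))  # both increase -> ambiguous: None
```
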

7 Qualitatively constrained functions (QCFs)
M+(x): an arbitrary monotonically increasing function of x
A QCF is a generalization of M+, similar to the qualitative proportionality predicates used in QPT (Forbus, 84)
Gas in a container: Pres = c Temp / Vol, c = nR > 0
–Temp = std & Vol↓ ⇒ Pres↑
–Temp↑ & Vol = std ⇒ Pres↑
–Temp↑ & Vol↓ ⇒ Pres↑
QCF: Pres = M+,-(Temp, Vol)
–Temp↑ & Vol↑ ⇒ Pres?
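The gas-law example can be verified numerically. The sketch below (with an assumed constant c = 2, purely for illustration) checks that Pres = c·Temp/Vol behaves as the QCF Pres = M+,-(Temp, Vol) predicts, including the ambiguous case where both arguments increase.

```python
# Numeric check that the gas law satisfies the QCF Pres = M+,-(Temp, Vol):
# Pres rises with Temp, falls with Vol, and is unconstrained when both rise.
def pres(temp, vol, c=2.0):        # c = nR, assumed 2.0 for the demo
    return c * temp / vol

base = pres(300.0, 50.0)
assert pres(310.0, 50.0) > base    # Temp up,  Vol std  -> Pres up
assert pres(300.0, 45.0) > base    # Temp std, Vol down -> Pres up
assert pres(310.0, 45.0) > base    # Temp up,  Vol down -> Pres up
# Temp up & Vol up: the QCF predicts nothing; either sign can occur.
assert pres(310.0, 55.0) < base    # here Pres happens to fall
assert pres(330.0, 51.0) > base    # here Pres happens to rise
```
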

8 Learning QCFs
Example data for Pres = 2 Temp / Vol:
Temp Vol Pres
315.00 56.00 11.25
315.00 62.00 10.16
330.00 50.00 13.20
300.00 50.00 12.00
300.00 55.00 10.90
Learning of the "most consistent" QCF:
1) For each pair of examples form a qualitative change vector
2) Select the QCF with minimal error-cost
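Step 1 can be sketched as follows, using the example values from the slide; the dictionary representation of a qualitative change vector is an assumption for illustration.

```python
from itertools import combinations

def sign(d, eps=1e-9):
    """Qualitative direction of a numeric change."""
    return "pos" if d > eps else "neg" if d < -eps else "zero"

# Examples from the slide: Pres = 2*Temp/Vol, columns (Temp, Vol, Pres).
examples = [
    (315.0, 56.0, 11.25),
    (315.0, 62.0, 10.16),
    (330.0, 50.0, 13.20),
    (300.0, 50.0, 12.00),
    (300.0, 55.0, 10.90),
]

# Step 1: one qualitative change vector per pair of examples.
vectors = []
for (t1, v1, p1), (t2, v2, p2) in combinations(examples, 2):
    vectors.append({"qTemp": sign(t2 - t1),
                    "qVol":  sign(v2 - v1),
                    "qPres": sign(p2 - p1)})

print(len(vectors))   # 10 pairs from 5 examples
print(vectors[0])     # {'qTemp': 'zero', 'qVol': 'pos', 'qPres': 'neg'}
```
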

9 Learning QCFs
QCF | Incons. | Amb.
M+(Temp) | 3 | 1
M-(Temp) | 2,4 | 1
M+(Vol) | 1,2,3 | /
M-(Vol) | 4 | /
M+,+(Temp,Vol) | 1,3 | 2
M+,-(Temp,Vol) | / | 3,4
M-,+(Temp,Vol) | 1,2 | 3,4
M-,-(Temp,Vol) | 4 | 2
Select the QCF with minimal QCF error-cost
Example qualitative change vector: qTemp = neg, qVol = neg, qPres = pos
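Step 2, selecting the QCF with minimal error-cost, might look like the sketch below. It is a simplification under stated assumptions: the candidate set is restricted to the four two-argument QCFs, and the illustrative change vectors and the 0.5 ambiguity weight are my own, not the slide's exact values.

```python
from itertools import product

SGN = {"pos": 1, "neg": -1, "zero": 0}

def allowed_changes(signs, vec):
    """Class-change signs a QCF (attribute -> +1/-1) allows for one vector."""
    contribs = {s * SGN[vec[a]] for a, s in signs.items()} - {0}
    if contribs == {1}:
        return {1}
    if contribs == {-1}:
        return {-1}
    if not contribs:
        return {0}            # no constrained attribute changed
    return {-1, 0, 1}         # opposing influences: ambiguous

def error_cost(signs, vectors, w_amb=0.5):
    """Inconsistent vectors cost 1, ambiguous ones w_amb (assumed weights)."""
    cost = 0.0
    for vec in vectors:
        allowed = allowed_changes(signs, vec)
        if SGN[vec["class"]] not in allowed:
            cost += 1.0       # observed class change contradicts the QCF
        elif len(allowed) > 1:
            cost += w_amb     # the QCF makes no prediction for this vector
    return cost

# Illustrative qualitative change vectors (class = Pres), not the slide's:
vectors = [
    {"Temp": "zero", "Vol": "pos",  "class": "neg"},
    {"Temp": "pos",  "Vol": "neg",  "class": "pos"},
    {"Temp": "neg",  "Vol": "neg",  "class": "pos"},
    {"Temp": "neg",  "Vol": "zero", "class": "neg"},
]

candidates = {f"M{'+' if st > 0 else '-'},{'+' if sv > 0 else '-'}(Temp,Vol)":
              {"Temp": st, "Vol": sv} for st, sv in product((1, -1), repeat=2)}
best = min(candidates, key=lambda name: error_cost(candidates[name], vectors))
print(best)   # M+,-(Temp,Vol)
```
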

10 Learning qualitative trees
For every possible split, split the examples into two subsets, find the "most consistent" QCF for each subset, and select the split minimizing the tree-error cost (based on MDL)
Algorithm ep-QUIN uses every pair of examples
An improvement: the heuristic QUIN algorithm, which also considers the locality and consistency of qualitative change vectors
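The top-level ep-QUIN-style loop can be sketched as below. This is a greatly simplified, hypothetical version: a single attribute, a pair-counting error-cost, and a fixed split penalty standing in for the MDL-based tree-error cost.

```python
from itertools import combinations

def sgn(d):
    return (d > 0) - (d < 0)

def qcf_cost(examples, s):
    """Count pairs whose class change contradicts a one-attribute QCF of sign s."""
    bad = 0
    for (x1, y1), (x2, y2) in combinations(examples, 2):
        dx, dy = sgn(x2 - x1), sgn(y2 - y1)
        if dx != 0 and dy != 0 and dy != s * dx:
            bad += 1
    return bad

def best_qcf(examples):
    """Most consistent QCF sign and its error-cost."""
    return min(((s, qcf_cost(examples, s)) for s in (1, -1)), key=lambda t: t[1])

def induce(examples, penalty=1):
    """Greedy tree induction: keep a leaf QCF unless some split costs less."""
    s, cost = best_qcf(examples)
    best = ("leaf", "M+" if s > 0 else "M-")
    for t in sorted(x for x, _ in examples)[1:]:
        left = [e for e in examples if e[0] < t]
        right = [e for e in examples if e[0] >= t]
        if not left or not right:
            continue
        c = best_qcf(left)[1] + best_qcf(right)[1] + penalty
        if c < cost:
            cost, best = c, ("split", t, induce(left), induce(right))
    return best

# y rises until x = 5, then falls; the induced tree splits at x = 5:
data = [(x, -(x - 5) ** 2) for x in range(10)]
print(induce(data))   # ('split', 5, ('leaf', 'M+'), ('leaf', 'M-'))
```
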

11 Algorithm ep-QUIN: an example
–12 learning examples that correspond to 3 linear functions
–The induced qualitative tree does not correspond to intuition
–ep-QUIN does not consider the locality of qualitative changes

12 Improvement: algorithm QUIN
The heuristic QUIN algorithm considers the locality and consistency of qualitative change vectors
A human notices 3 groups of near-by points; QUIN considers the proximity of examples
Qualitative change vectors of near-by points are weighted more

13 Improvement: algorithm QUIN, cont.
QUIN considers the consistency of the class's qualitative change at the k nearest neighbours of each point
QUIN: the same algorithm as ep-QUIN, but with an improved tree-error cost (weighted qualitative change vectors)
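The weighting idea can be illustrated as follows; the Gaussian kernel and its width are assumptions for the sketch, not QUIN's actual formula.

```python
import math
from itertools import combinations

# Hypothetical weighting in the spirit of QUIN: qualitative change vectors
# from near-by example pairs count more than those from distant pairs.

def weighted_pairs(examples, width=1.0):
    """Attach a proximity weight to every pair of (x, y) examples."""
    out = []
    for (x1, y1), (x2, y2) in combinations(examples, 2):
        w = math.exp(-((x2 - x1) / width) ** 2)   # nearer pairs -> larger w
        out.append(((x1, y1), (x2, y2), w))
    return out

pairs = weighted_pairs([(0.0, 0.0), (0.1, 0.2), (5.0, 1.0)])
# The near pair (x = 0.0 and x = 0.1) dominates the two distant ones:
print([round(w, 3) for _, _, w in pairs])   # [0.99, 0.0, 0.0]
```
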

14 Experimental evaluation
On a set of artificial domains:
–QUIN gives better results than ep-QUIN
–QUIN can handle noisy data
–In simple domains QUIN finds qualitative relations corresponding to our intuition
QUIN in skill reconstruction:
–QUIN used to induce qualitative control strategies from examples of human control performance
–Experiments in the crane domain

15 Skill reconstruction and behavioural cloning
Motivation:
–understanding of the human skill
–development of an automatic controller
ML approach to skill reconstruction: learn a control strategy from data logged from skilled human operators (execution traces). Later called behavioural cloning (Michie, 93). Used in domains such as:
–pole balancing (Michie et al., 90)
–piloting (Sammut et al., 92; Camacho, 95)
–container cranes (Urbančič & Bratko, 94)

16 Learning problem for skill reconstruction
Execution traces are used as examples for ML to induce:
–a control strategy (comprehensible, symbolic)
–an automatic controller (criterion of success)
Operator's execution trace: a sequence of system states and the corresponding operator's actions, logged to a file at a certain frequency
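Trace logging can be sketched as below; the function and its dummy dynamics are hypothetical, and only the (time, action, state) record shape follows the slides.

```python
def log_trace(step, policy, n_steps, dt_ms=500):
    """Sample (time_ms, action, state) at a fixed frequency (assumed 2 Hz)."""
    trace, t, state = [], 0, step(None, None)   # step(None, None): initial state
    for _ in range(n_steps):
        action = policy(state)                  # operator's control forces
        trace.append((t, action, state))
        state = step(state, action)
        t += dt_ms
    return trace

# Dummy dynamics just to show the trace shape (not the crane model):
trace = log_trace(lambda s, a: 0.0 if s is None else s + 0.1,
                  lambda s: (2500.0, 0.0), n_steps=3)
print(trace[0])   # (0, (2500.0, 0.0), 0.0)
```
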

17 Container crane
Used in ports for load transportation
Control forces: Fx, FL
State: X, dX, φ, dφ, L, dL
Based on previous work of Urbančič (94)
Control task: transport the load from the start to the goal position

18 Learning problem, cont.
Fx FL X dX φ dφ L dL
0 0 0.00 0.00 0.00 0.00 20.00 0.00
2500 0 0.00 0.00 -0.00 -0.01 20.00 0.00
6000 0 0.00 0.01 -0.01 -0.02 20.00 0.00
10000 0 0.02 0.10 -0.07 -0.27 20.00 0.00
14500 0 0.12 0.31 -0.32 -0.85 20.00 0.00
14500 0 0.35 0.59 -0.95 -1.49 20.00 0.01
… … … … … … … …
Usual approach: induce decision trees; the key requirement is COMPREHENSIBILITY

19 QUIN in skill reconstruction, crane domain
Qualitative trees induced from execution traces
Experiments with traces of 2 operators using different control styles
Crane control requires trolley and rope control
Rope control, QUIN: Ldes = f(X, dX, φ, dφ, dL)
Often a very simple strategy is induced: Ldes = M+(X), i.e. bring down the load as the trolley moves from the start to the goal position

20 Trolley control
QUIN: dXdes = f(X, φ, dφ)
More diversity in the induced strategies
An induced tree with splits X < 20.7 and X < 60.1 and leaves M+(X), M-(X), M+(φ):
–First the trolley velocity is increasing (dXdes = M+(X))
–From about the middle distance from the goal (X = 20.7) until the goal (X = 60.1) the trolley velocity is decreasing (dXdes = M-(X))
–At the goal, reduce the swing of the rope: dXdes = M+(φ), i.e. accelerate the trolley when the rope angle increases

21 Trolley control, cont.
QUIN: dXdes = f(X, φ, dφ)
A second induced strategy uses additional splits (X < 29.3, dφ < -0.02) and leaves such as M-,+(X, φ) and M+,+,-(X, φ, dφ), besides M+(X), M-(X) and M+(φ)
Enables reconstruction of individual differences in control styles

22 QUIN in skill reconstruction
Qualitative control strategies:
–are comprehensible
–enable the reconstruction of individual differences in the control styles of different operators
–define sets of quantitative strategies and can be used as search spaces for controller optimization
QUIN is able to detect very subtle and important aspects of human control strategies

23 Further work
–Qualitative simulation to generate possible explanations of a qualitative strategy
–(Semi-)qualitative reasoning to find the necessary conditions for the success of a qualitative strategy
–Reducing the space of admissible controllers by qualitative reasoning
–QUIN as a general tool for qualitative system identification; applying QUIN in different domains
