Lifelong Machine Learning and Reasoning


1 Lifelong Machine Learning and Reasoning
Daniel L. Silver, Acadia University, Wolfville, NS, Canada
CoCo NIPS 2015, Montreal, Canada - Dec 12, 2015

2 Significant contributions by
Jane Gomes, Moh. Shameer Iqbal, Ti Wang, Xiang Jiang, Geoffrey Mason, Hossein Parvar

3 Talk Outline
Overview
Lifelong Machine Learning
Role of Deep Learning
Connection to Knowledge Representation and Reasoning
Learning to Reason (L2R)
Empirical Studies
Conclusion and Future Work

4 Overview
It is now appropriate to seriously consider the nature of systems that learn and reason over a lifetime.
We advocate a systems approach in the context of an agent that can:
- Acquire new knowledge through learning
- Retain and consolidate that knowledge
- Use it in future learning, reasoning and other aspects of AI
[D. Silver, Q. Yang, L. Li 2013]

5 Overview
Machine learning has made great strides in Learning to Classify (L2C) in a probabilistic manner, in accord with the environment. [Plot: a probability distribution P(x) over the input space x]

6 Overview
Propose: Learning to Reason, or L2R.
As per L. Valiant, D. Roth, R. Khardon and L. Bottou, reasoning has to be adequate in a PAC sense. [Plot: a probability distribution P(x) over the input space x]

7 Overview
Motivation for Learning to Reason (L2R), connecting LML and KR:
- New insights into how best to represent common background knowledge acquired over time and over the input space
- KR places additional constraints on internal representation, in the same way as LML does
- Generative deep learning, to use the wealth of unlabelled examples and provide greater plasticity

8 Lifelong Machine Learning (LML)
Considers systems that can learn many tasks over a lifetime:
- From impoverished training sets
- Across a diverse domain of tasks
- Where practice of tasks happens
Able to effectively and efficiently:
- Consolidate (retain and integrate) learned knowledge
- Transfer prior knowledge when learning a new task

9 Lifelong Machine Learning (LML)
[Diagram: examples xi drawn from the space of examples X; hypotheses hj from the space of hypotheses H; hypothesis spaces h'k from the space of hypothesis spaces H']

10 Lifelong Machine Learning (LML)
[Diagram: the LML framework. An inductive learning system (short-term memory) receives training examples (x, f(x)) from the instance space X and produces a model/classifier h with h(x) ~ f(x) on testing examples. Domain knowledge (long-term memory) handles retention and consolidation of learned knowledge, and knowledge selection provides knowledge transfer as an inductive bias to the inductive learning system.]

11 Lifelong Machine Learning (LML)
[Diagram: the same LML framework as the previous slide.]

12 Lifelong Machine Learning (LML)
[Diagram: the LML framework with long-term memory realized as a consolidated MTL network holding tasks f1(x), f2(x), ..., f9(x), ..., fk(x), and the short-term learner realized as a multiple task learning (MTL) network [R. Caruana 1997] with shared inputs x1 ... xn and output heads for several tasks (e.g., f2(x), f5(x), fk(x)).]
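For contrast with the csMTL approach on the next slide, here is a minimal sketch of a standard MTL network in the Caruana sense: a shared hidden layer feeding one output head per task. The layer sizes and names are illustrative assumptions, not code from the cited work.

```python
# Hypothetical sketch of a standard MTL network: a shared hidden layer
# and one output per task f1(x) ... fk(x). Not the authors' implementation.
import torch.nn as nn

def mtl_net(n_inputs: int, n_tasks: int, n_hidden: int = 20) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(n_inputs, n_hidden),   # shared internal representation
        nn.Sigmoid(),
        nn.Linear(n_hidden, n_tasks),    # one output head per task
        nn.Sigmoid(),
    )
```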

13 csMTL and An Environmental Example
Stream flow rate prediction: x = weather data, f(x) = flow rate [Gaudette, Silver, Spooner 2006]

14 Context Sensitive MTL (csMTL)
We have developed an alternative approach that is meant to overcome limitations of MTL networks [Silver, Poirier and Currie, 2008] (see the sketch below):
- Uses a single output for all tasks, y = f(c, x), over primary inputs x1 ... xn and context inputs c1 ... ck
- Context inputs associate an example with a task, or indicate the absence of a primary input
- Develops a fluid domain of task knowledge indexed by the context inputs
- Supports consolidation of knowledge
- Facilitates practicing a task
- More easily supports tasks with vector outputs
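To make the csMTL architecture concrete, here is a minimal sketch assuming a plain feed-forward network in PyTorch and a one-hot context code; the names and sizes are illustrative assumptions, not the authors' implementation.

```python
# A minimal csMTL-style network (illustrative sketch): primary inputs x and
# one-hot context inputs c are concatenated, pass through a shared hidden
# layer, and drive a single output y = f(c, x) used for all tasks.
import torch
import torch.nn as nn

class CsMTLNet(nn.Module):
    def __init__(self, n_primary, n_context, n_hidden=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_primary + n_context, n_hidden),
            nn.Sigmoid(),
            nn.Linear(n_hidden, 1),      # one output shared by all tasks
            nn.Sigmoid(),
        )

    def forward(self, x, c):
        # c is a one-hot vector indexing the task (or flagging a missing input)
        return self.net(torch.cat([x, c], dim=-1))

# Example: 5 primary attributes, 3 tasks encoded by 3 context bits
model = CsMTLNet(n_primary=5, n_context=3)
x = torch.rand(4, 5)                           # a small batch of primary inputs
c = torch.eye(3)[torch.tensor([0, 1, 2, 0])]   # one-hot task/context codes
y = model(x, c)                                # y has shape (4, 1)
```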

15 csMTL and Tasks with Multiple Outputs
Image morphing: inductive transfer between tasks that have multiple outputs (Liangliang Tu, 2010). Transforms 30x30 greyscale images using inductive transfer; three mapping tasks (NA, NH, NS) [Tu and Silver, 2010]

16 Two more Morphed Images
[Images: a passport photo morphed to an angry expression (filtered) and to a sad expression (filtered)]

17 Domain Knowledge Network
[Diagram: LML via csMTL. A long-term Consolidated Domain Knowledge (CDK) network and a short-term learning network share context inputs c1 ... ck and standard inputs x1 ... xn, each with one output for all tasks (f'(c,x), f1(c,x)). Functional transfer via virtual examples (task rehearsal) supports slow consolidation into the CDK network; representational transfer from the CDK network supports rapid learning in the short-term network.]

18 LML via csMTL
Consolidation via task rehearsal can be achieved very efficiently:
- Need only train on a few virtual examples (as few as one) selected at random during each training iteration
- Maintains stable prior functionality while allowing representational plasticity for integration of the new task
[Silver, Mason and Eljabu 2015]
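A hedged sketch of that rehearsal loop, assuming long_term is a trained csMTL-style network (such as the earlier sketch), prior_contexts holds one-hot context codes for previously learned tasks, and a single virtual example is generated per iteration; all names and hyperparameters here are illustrative, not the cited implementation.

```python
# Illustrative consolidation-by-rehearsal loop for a csMTL-style network.
import torch

def consolidate(long_term, new_task_batch, prior_contexts, n_primary,
                iters=1000, lr=0.01):
    # new_task_batch = (x_new, c_new, y_new), with y_new a column of targets
    x_new, c_new, y_new = new_task_batch
    opt = torch.optim.SGD(long_term.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(iters):
        # 1) Generate one virtual (rehearsal) example for a randomly chosen
        #    prior task: random input, target = the long-term net's own output.
        c_old = prior_contexts[torch.randint(len(prior_contexts), (1,))]
        x_old = torch.rand(1, n_primary)
        with torch.no_grad():
            y_old = long_term(x_old, c_old)   # preserves prior functionality
        # 2) Train on the new-task examples plus the single virtual example.
        x = torch.cat([x_new, x_old])
        c = torch.cat([c_new, c_old])
        y = torch.cat([y_new, y_old])
        opt.zero_grad()
        loss = loss_fn(long_term(x, c), y)
        loss.backward()
        opt.step()
    return long_term
```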

19 Deep Learning and LML Stacked RBMs develop a rich feature space from unlabelled examples using unsupervised algorithms [Source: Caner Hazibas – slideshare]

20 Deep Learning and LML
Transfer learning and consolidation work better with a deep learning csMTL network [Jiang and Silver, 2015]:
- Generative models are built using an RBM stack and unlabelled examples
- Inputs include context and primary attributes (context inputs c1 ... ck, primary inputs x1 ... xn, one output y = f(c, x) for all tasks)
- Can produce a rich variety of features indexed by the context nodes
- Supervised learning is used to fine-tune all or a portion of the weights for multiple-task knowledge transfer or consolidation
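As a concrete but hypothetical illustration of the unsupervised pretraining step, here is a single contrastive-divergence (CD-1) update for one RBM whose visible layer concatenates the context and primary inputs. This is a textbook CD-1 step with illustrative names, not the code used in the cited work.

```python
# One CD-1 update for an RBM over concatenated context + primary inputs.
import torch

def cd1_step(v0, W, b_vis, b_hid, lr=0.05):
    # v0: batch of visible vectors (context bits + primary attributes)
    ph0 = torch.sigmoid(v0 @ W + b_hid)          # hidden probabilities
    h0 = torch.bernoulli(ph0)                    # sampled hidden states
    pv1 = torch.sigmoid(h0 @ W.t() + b_vis)      # reconstruction probabilities
    v1 = torch.bernoulli(pv1)
    ph1 = torch.sigmoid(v1 @ W + b_hid)
    # CD-1 gradient estimates (positive minus negative phase)
    W += lr * (v0.t() @ ph0 - v1.t() @ ph1) / v0.shape[0]
    b_vis += lr * (v0 - v1).mean(0)
    b_hid += lr * (ph0 - ph1).mean(0)
    return W, b_vis, b_hid

# Example usage with random data (12 visible units = 3 context + 9 primary):
W = 0.01 * torch.randn(12, 8)
b_vis, b_hid = torch.zeros(12), torch.zeros(8)
v = torch.bernoulli(torch.rand(32, 12))
W, b_vis, b_hid = cd1_step(v, W, b_vis, b_hid)
```

Stacking several such RBMs, each trained on the hidden activations of the one below, gives the unsupervised feature hierarchy that is then fine-tuned with supervised csMTL training.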

21 Deep Learning and LML
Experiments using the MNIST dataset, with the same csMTL architecture (primary inputs x1 ... xn, context inputs c1 ... ck, one output y = f(c, x) for all tasks) [Jiang and Silver, 2015]

22 Deep Learning and LML http://ml3cpu.acadiau.ca [Wang and Silver, 2015]
[Iqbal and Silver, in press]

23 Deep Learning and LML Stimulates new ideas about:
- How knowledge of the world is learned, consolidated, and then used for future learning and reasoning
- How best to learn and represent common background knowledge
- Important to big AI problem solving ... such as reasoning

24 Knowledge Representation and Reasoning
Focuses on the representation of information that can be used for reasoning It enables an entity to determine consequences by thinking rather than acting Traditionally requires a reasoning/inference engine to answer queries about beliefs

25 Knowledge Representation and Reasoning
Reasoning could be considered “algebraic [systematic] manipulation of previously acquired knowledge in order to answer a new question” (L. Bottou 2011) Requires a method of acquiring and storing knowledge Learning from the environment is the obvious choice …

26 Learning to Reason (L2R)
Concerned with the process of learning a knowledge base and reasoning with it [Khardon and Roth 97]
Reasoning is subject to errors that can be bounded in terms of the inverse of the effort invested in the learning process
Requires knowledge representations that are learnable and facilitate reasoning
“This statement is false”

27 Learning to Reason (L2R)
Takes a probabilistic perspective on learning and reasoning [Khardon and Roth 97]
The agent need not answer all possible knowledge queries, only those that are relevant to the environment in a PAC sense [Valiant 08, Juba 12 & 13]

28 Learning to Reason (L2R)
Valiant and Khardon show formally:
- L2R allows efficient learning of Boolean logical assertions in the PAC sense
- Learned knowledge can be used to reason efficiently, and to an expected level of accuracy and confidence
We wish to demonstrate that:
- A knowledge base of Boolean functions is PAC learnable from examples using a csMTL network
- Even when the examples provide information about only a portion of the input space
- ... and to explore an LML approach: consolidation over time and over the input space

29 Learning to Reason (L2R)
Propositional logic functions, represented as truth-table examples over inputs A, B, C, ... with True/False (1/0) values:
- Simple terms and clauses: ~A, B, C, (~A v B), (~B v C)
- More complex functions: (~A v B) v (~B v C), (~A v C)
- Functions of functions: ~(~A v B) v ~(~B v C) v (~A v C)
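As an illustration of how such functions become training data, here is a small sketch that enumerates the truth table of the clause (~A v B) over inputs A, B, C; the encoding is an assumption for illustration only, not the study's exact format.

```python
# Generate truth-table examples for a simple propositional clause.
from itertools import product

def clause(a, b, c):
    return (not a) or b        # (~A v B); C does not appear in this clause

examples = []
for a, b, c in product([0, 1], repeat=3):
    examples.append(((a, b, c), int(clause(a, b, c))))

# Each example pairs a truth-table row (A, B, C) with the function's value,
# e.g. ((1, 0, 0), 0) because (~A v B) is False when A=1 and B=0.
print(examples)
```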

30 L2R with LML – Study 1 Consider the Law of Syllogism
KB: (A → B) ∧ (B → C)
Q: (A → C)

31 L2R with LML – Study 1 Learning the Law of Syllogism:
[Diagram: the KB training set and the query set Q encoded over primary inputs (A, B, C) and context inputs (cA, cB, cC)]

32 L2R with LML – Study 1 Learning the Law of Syllogism:
[Diagram: the KB training set and query set Q from the previous slide, learned with a csMTL network]
Results: average over 30 runs: 89% correct
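A simplified, hypothetical sketch of the Study 1 idea: learn a model of KB = (A → B) ∧ (B → C) from its truth-table examples, then "reason" by testing the learned model against the truth table of the query Q = (A → C). It uses a plain scikit-learn MLP rather than the csMTL encoding from the study, so it only illustrates the reasoning-as-testing procedure, not the reported 89% result.

```python
# Learn KB from truth-table examples, then test the query Q against the model.
from itertools import product
from sklearn.neural_network import MLPClassifier

def implies(p, q): return (not p) or q
def kb(a, b, c): return implies(a, b) and implies(b, c)
def q(a, b, c): return implies(a, c)

rows = list(product([0, 1], repeat=3))
X = [list(r) for r in rows]
y = [int(kb(*r)) for r in rows]

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000).fit(X, y)

# Reasoning step: check Q on every row the learned model classifies as
# satisfying KB; if the model has learned KB well, Q should hold on all of them.
entailed = all(q(*r) for r, pred in zip(rows, model.predict(X)) if pred == 1)
print("KB |= Q according to the learned model:", entailed)
```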

33 L2R with LML – Study 2
Objective: learn the Law of Syllogism over 10 literals.
KB: ((A∧B∨C) → (D∨E∨~F)) ∧ ((D∨E∨~F) → (G∨(~H∧I)∨~J))
Q: (A∧B∨C) → (G∨(~H∧I)∨~J)
Training set: 100% of sub-KB examples
(A∧B∨C) → (D∨E∨~F)
(D∨E∨~F) → (G∨(~H∧I)∨~J)
Query set Q: (A∧B∨C) → (G∨(~H∧I)∨~J)
Results: average over 10 runs: 78% accuracy

34 L2R with LML – Study 3
Objective: to learn a given knowledge base (KB formula shown on the slide) in two different ways:
- From examples of the KB (1024 in total)
- From examples of sub-clauses of the KB (sub-KB)
Training set: all possible sub-KB examples

35 L2R with LML – Study 3
Objective: to learn the same knowledge base.
Results: tested on all KB examples (averaged over 5 runs). [Plot: mean accuracy vs. % of examples used for training]

36 Conclusion Learning to Reason (L2R) using a csMTL neural network:
- Uses examples to learn a model of logical functions in a probabilistic manner
- Consolidates knowledge from examples that represent only a portion of the input space
- Reasoning = testing the model using the truth table of Q
- Relies on context nodes to select inputs that are relevant
- Results on a simple Boolean logic domain suggest promise

37 Future work
- Create a scope for determining those tasks that a trained network finds TRUE
- Thoroughly examine the effect of a probability distribution over the input space (train and test sets)
- Combine csMTL with deep learning architectures to learn hierarchies of abstract features (which tend to be DNF)
- Consider other learning algorithms
- Consider more complex knowledge bases, beyond propositional logic

38 Thank You! danny.silver@acadiau.ca http://tinyurl/dsilver
References:
L. G. Valiant. Knowledge infusion: In pursuit of robustness in artificial intelligence. FSTTCS, 2008.
Brendan Juba. Implicit learning of common sense for reasoning. IJCAI, 2013.
Roni Khardon and D. Roth. Learning to reason. Journal of the ACM, 44(5), 1997.
D. Silver, R. Poirier and D. Currie. Inductive transfer with context sensitive neural networks. Machine Learning - Special Issue on Inductive Transfer, Springer, 73(3), 2008.
Silver, D., Mason, G. and Eljabu, L. Consolidation using Sweep Task Rehearsal: Overcoming the Stability-Plasticity Problem. Advances in Artificial Intelligence, 28th Conference of the Canadian Artificial Intelligence Association (AI 2015), Springer, LNAI 9091, 2015.
Wang, T. and Silver, D. Learning Paired-associate Images with An Unsupervised Deep Learning Architecture. LNAI 9091, 2015.
Gomes, J. and Silver, D. Learning to Reason in A Probably Approximately Correct Manner. Proceedings of the CCECE 2014, Halifax, NS, May 2015, IEEE Press.
Silver, D. The Consolidation of Task Knowledge for Lifelong Machine Learning. Proceedings of the AAAI Spring Symposium on Lifelong Machine Learning, Stanford University, CA, AAAI, March 2013, pp 46–48.
Silver, D., Yang, Q. and Li, L. Lifelong machine learning systems: Beyond learning algorithms. Proceedings of the AAAI Spring Symposium on Lifelong Machine Learning, Stanford University, CA, AAAI, March 2013, pp 49–55.
Silver, D. and Tu, L. Image Morphing: Transfer Learning between Tasks that have Multiple Outputs. Advances in Artificial Intelligence, 25th Conference of the Canadian Artificial Intelligence Association (AI 2012), Toronto, ON, May 2012, Springer, LNAI 7310.
Silver, D., Spooner, I. and Gaudette, L. Inductive Transfer Applied to Modeling River Discharge in Nova Scotia. Atlantic Geology: Journal of the Atlantic Geoscience Society, (45) 191–203.

