TIGHTENING BOUNDS FOR BAYESIAN NETWORK STRUCTURE LEARNING
Xiannian Fan, Changhe Yuan and Brandon Malone

Abstract
A recent breadth-first branch and bound algorithm (BFBnB) for learning Bayesian network structures (Malone et al. 2011) uses two bounds to prune the search space for better efficiency: a lower bound calculated from pattern database heuristics, and an upper bound obtained by a hill-climbing search. Whenever the lower bound of a search path exceeds the upper bound, the path is guaranteed to lead to suboptimal solutions and is discarded immediately. This paper introduces methods for tightening both bounds. The lower bound is tightened by using more informed variable groupings when creating the pattern databases, and the upper bound is tightened using an anytime learning algorithm. Empirical results show that these bounds improve the efficiency of Bayesian network learning by two to three orders of magnitude.

Bayesian Network Structure Learning
Representation. A joint probability distribution over a set of variables.
Structure. A DAG storing conditional dependencies: vertices correspond to variables; edges indicate relationships among the variables.
Parameters. Conditional probability distributions.
Learning. Find the network with the minimal score for a complete dataset D. We often omit D for brevity.
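
Scores such as MDL or BDeu are decomposable: the network score is a sum of per-family scores, which is what lets the search below reason about one leaf at a time. A minimal sketch, where `local_score` is a hypothetical callable supplied by the scoring module:

```python
def network_score(parent_sets, local_score):
    """Decomposable score: a sum of family scores, one per variable.

    `parent_sets` maps each variable X to its parent set in the DAG;
    `local_score(X, parents)` is assumed to be precomputed from the
    dataset D (e.g., an MDL or BDeu family score).
    """
    return sum(local_score(X, parents) for X, parents in parent_sets.items())
```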

Dynamic Programming Intuition
All DAGs must have a leaf. Optimal networks for a single variable are trivial. Recursively add new leaves and select their optimal parents until all variables have been added; all orderings have to be considered. Concretely: begin with a single variable; pick one variable as a leaf and find its optimal parents; pick another leaf and find its optimal parents from the current variables; continue picking leaves and finding optimal parents until the network is complete.
Recurrences. Score(U) = min over X in U of [ Score(U \ {X}) + BestScore(X, U \ {X}) ], where BestScore(X, C) is the score of the best parent set for X chosen from the candidate set C.
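
To make the recurrence concrete, here is a minimal Python sketch of the dynamic programming pass. `best_score(X, candidates)`, the score of X's optimal parent set drawn from `candidates`, is a hypothetical helper assumed to be precomputed from the local scores:

```python
from itertools import combinations

def dp_optimal_score(variables, best_score):
    """Evaluate Score(U) bottom-up over all 2^n subsets.

    Returns Score(V) for the full variable set; recording the
    minimizing leaf for each subset would recover the network itself.
    """
    variables = frozenset(variables)
    score = {frozenset(): 0.0}
    for size in range(1, len(variables) + 1):
        for combo in combinations(sorted(variables), size):
            U = frozenset(combo)
            # Try each X in U as the leaf: the rest form an optimal
            # subnetwork over U \ {X}, and X picks its parents from it.
            score[U] = min(score[U - {X}] + best_score(X, U - {X})
                           for X in U)
    return score[variables]
```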

Graph Search Formulation
The dynamic programming can be visualized as a search through an order graph.
The Order Graph
Calculation. Score(U), the score of the best subnetwork over U.
Node. One node per subset U, storing Score(U).
Successor. Add X as a leaf to U.
Path. Induces an ordering on the variables.
Size. 2^n nodes, one for each subset.
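
A small sketch of successor generation in the order graph; each arc out of node U commits one more variable X as the next leaf, with its optimal parents chosen from U:

```python
def successors(U, variables):
    """Yield (X, U | {X}) pairs, the order-graph arcs leaving node U.

    Both arguments are frozensets; the arc cost would be
    BestScore(X, U), the score of X's best parent set within U.
    """
    for X in variables - U:
        yield X, U | {X}
```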

Admissible Heuristic Search Formulation
Start Node. The top node, {}.
Goal Node. The bottom node, V.
Shortest Path. Corresponds to an optimal structure.
g(U). Score(U).
h(U). Relaxes acyclicity: each variable not yet in U selects its optimal parents from among all the other variables.
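
For contrast with the pattern database heuristic below, the fully relaxed baseline can be sketched as follows: every remaining variable picks optimal parents from all other variables, ignoring cycles, so the estimate never overestimates the remaining cost. A sketch, with `best_score` as before:

```python
def h_simple(U, variables, best_score):
    # Full relaxation of acyclicity: each variable still to be added
    # may use any of the other variables as parents, cycles ignored.
    return sum(best_score(X, variables - {X}) for X in variables - U)
```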

Tightening the Lower Bound
The lower bound is calculated from a pattern database heuristic called the k-cycle conflict heuristic. In particular, the static k-cycle conflict pattern database was shown to perform well.
Computing the k-Cycle Conflict Heuristic. The main idea is to relax the acyclicity constraint between groups of variables, while still enforcing acyclicity among the variables within each group. For an 8-variable problem, partition the variables by Simple Grouping (SG) into two groups: G1 = {X1, X2, X3, X4} and G2 = {X5, X6, X7, X8}. We created the pattern databases with a backward breadth-first search in the order graph for each group.
E.g., how do we calculate the heuristic for the pattern {X2, X3, X5, X7}?
P1 = h1({X2, X3}) = BestScore(X2, {X1, X4} ∪ G2) + BestScore(X3, {X1, X2, X4} ∪ G2)
P2 = h2({X5, X7}) = BestScore(X5, {X6, X8} ∪ G1) + BestScore(X7, {X5, X6, X8} ∪ G1)
Additive pattern database heuristic: h({X2, X3, X5, X7}) = h1({X2, X3}) + h2({X5, X7}) = P1 + P2.
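
Once the per-group databases have been built by backward breadth-first search, the heuristic reduces to one table lookup per group. A sketch under the assumption that `pdbs[i]` maps each pattern (the still-unordered part of group i) to its precomputed cost:

```python
def h_kcycle(U, variables, groups, pdbs):
    """Static k-cycle conflict heuristic: sum group-wise pattern costs.

    The groups are disjoint, so adding their costs keeps the bound
    admissible (Felner et al. 2004).
    """
    remaining = variables - U
    return sum(pdb[frozenset(remaining & group)]
               for group, pdb in zip(groups, pdbs))
```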

More Informed Grouping Strategies
Rather than using SG (grouping the first half of the variables versus the second half), we developed more informed grouping strategies.
1. Maximize the correlation between the variables within each group and minimize the correlation between groups; see the sketch after this list.
   a) Family Grouping (FG): create a correlation graph with the Max-Min Parents and Children (MMPC) algorithm, weight the edges by negative p-values, and then perform graph partitioning.
   b) Parents Grouping (PG): create a correlation graph by considering only each variable's optimal parent set out of all the other variables, weight the edges by negative p-values, and then perform graph partitioning.
2. Use topological ordering information.
   a) Topology Grouping (TG): take the topological ordering of an anytime Bayesian network solution found by AWA* (see Tightening the Upper Bound below), and partition the variables according to that ordering.
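
The poster does not name the partitioning algorithm, so the sketch below stands in a standard min-cut bisection (networkx's Kernighan-Lin) for the actual partitioner; `best_parents` and `p_value` are hypothetical helpers for the optimal parent sets and the independence-test p-values:

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def parents_grouping(variables, best_parents, p_value):
    """Parents Grouping (PG) sketch: link each variable to its optimal
    parents, weight edges by negative p-value, then bisect."""
    G = nx.Graph()
    G.add_nodes_from(variables)
    for X in variables:
        for Y in best_parents(X):
            # Weakly correlated pairs (large p-value) get the most
            # negative weights and so are the cheapest edges to cut.
            G.add_edge(X, Y, weight=-p_value(X, Y))
    return kernighan_lin_bisection(G, weight="weight")
```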

Tightening the Upper Bound
Anytime window A* (AWA*) was shown to find high-quality, often optimal, solutions very quickly, thus providing a tight upper bound.
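
The two bounds interact through the pruning rule described in the abstract; as a one-line sketch:

```python
def prune(g_U, h_U, upper_bound):
    # Discard the path through U as soon as its admissible lower bound
    # g(U) + h(U) exceeds the best complete network found so far.
    return g_U + h_U > upper_bound
```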

Experiments
We measured the running times (in seconds) and the numbers of nodes expanded by BFBnB, compared against the previous heuristic.
Figure: The effect of upper bounds generated by running AWA* for different amounts of time on the performance of the BFBnB search.
Figure: The effect of different grouping strategies on the number of expanded nodes and the running time. The four grouping methods are simple grouping (SG), FG, PG, and TG.

Selected References
1. Yuan, C.; Malone, B.; and Wu, X. 2011. Learning Optimal Bayesian Networks Using A* Search. In IJCAI '11.
2. Malone, B.; and Yuan, C. 2011. Improving the Scalability of Optimal Bayesian Network Learning with Frontier Breadth-First Branch and Bound Search. In UAI '11.
3. Felner, A.; Korf, R. E.; and Hanan, S. 2004. Additive Pattern Database Heuristics. Journal of Artificial Intelligence Research (JAIR) 22.
4. Malone, B.; and Yuan, C. 2013. Evaluating Anytime Algorithms for Learning Optimal Bayesian Networks. In UAI '13.