An Introduction to Black-Box Complexity


An Introduction to Black-Box Complexity Winfried Just Department of Mathematics Ohio University

Abstract This talk will introduce the notion of black-box complexity of an optimization problem. The major tool for studying this notion, Yao’s Minimax Principle, will be proved and applied to a simple case. Moreover, I will discuss the relevance of black-box complexity to the study of biological evolution. The exposition given here follows the textbook “Komplexitätstheorie” by Ingo Wegener, Springer Verlag, 2003.

Classical Complexity Theory In complexity theory, we study how much time or memory space any algorithm needs to solve instances of size n of a decision or optimization problem. Results are obtained either for the hardest of all instances of a given size (worst-case complexity) or for the average running time or memory usage of an algorithm, where the average is taken over all instances of size n. Worst-case complexity is the more extensively developed branch of complexity theory, but even in this branch the really big questions remain open (e.g., is P = NP?). A common feature of all branches of classical complexity theory is that the algorithm is allowed to make use of complete information about the problem instance.

Why Black-Box Complexity? Think of an engineer who wants to optimize the parameter settings for a complicated system. It may be relatively easy to simulate the output of the system for any given parameter configuration, but it may not be feasible to write an algorithm that takes advantage of problem-specific knowledge (the workings of the system); in such cases, heuristic approaches such as hill-climbing, simulated annealing, and evolutionary algorithms will be more cost-efficient. In these approaches, the algorithm picks random parameter settings, simulates the output of the system for these settings, and then searches for better settings based on the output of the system for the settings that have been tested so far. Black-box complexity tries to model this process.

Biological evolution as optimization Biological evolution can be viewed as an optimization procedure for solving problems of locomotion, feeding, reproduction, and defense against predators. I am interested in the following general question: How difficult is it for biological evolution to hit upon certain solutions of these and similar problems? As a matter of feasibility, I want to look first at problems of optimizing chemical reaction networks that make all of the above ultimately possible.

Black-box complexity and evolution Note that biological evolution is similar to the black-box approach taken by engineers who optimize parameters with heuristic algorithms: Evolutionary mechanisms do not consciously design organisms that will be good at solving certain problems of “vital interest.” Evolution simply relies on the mechanisms of random mutation and crossover (and a few other mechanisms that are not currently well understood) to produce organisms that differ from their parents in their ability to “solve” problems of vital importance. The “fittest” problem solvers will in turn have the most offspring. The role of the “black box” is played here by the genotype-phenotype map, our current understanding of which is still very rudimentary.

Ingredients of black-box optimization
- A problem size n
- A search space Sn
- A finite set {1, … , N} of fitness values (where N depends on n)
- The set Fn of all possible “fitness functions” f from Sn into the set {1, … , N}
The fitness function f will be unknown to the algorithm and will act as the “black box.”

The first step in black-box optimization Pick x1 randomly from Sn (with respect to a probability distribution m1). Use the black box to compute f(x1).

Step number t in black-box optimization Given: x1, x2, … , xt-1 in Sn and the values f(x1), f(x2), … , f(xt-1) in {1, … , N}.
1. Decide whether to terminate the algorithm. If so, output an xi with f(xi) maximal; else go to steps 2-4.
2. Compute a probability distribution mt on Sn. This distribution must assign probability zero to each of x1, x2, … , xt-1.
3. Pick xt randomly from Sn (with respect to mt).
4. Use the black box to compute f(xt).
A black-box algorithm is deterministic if every mt assigns probability 1 to a single point xt.
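The loop above can be sketched in a few lines of Python. This is an illustrative sketch only: the helper name black_box_search and the choice of mt as the uniform distribution over the not-yet-queried points are my assumptions, not part of the formal model (which allows any mt that avoids previously queried points).

```python
import random

def black_box_search(search_space, f):
    # Query points one at a time, never revisiting a point (each m_t
    # puts probability zero on already-queried points), until the whole
    # search space has been seen; then report a queried maximizer.
    queried, values = [], []
    remaining = list(search_space)
    while remaining:
        x = random.choice(remaining)      # step 3: sample x_t from m_t
        remaining.remove(x)
        queried.append(x)
        values.append(f(x))               # step 4: one black-box query
    best = max(range(len(queried)), key=lambda i: values[i])
    return queried[best], len(queried)
```

Running it on, say, f(x) = -(x - 5)**2 over {0, … , 7} returns x = 5 after exactly 8 queries, since this variant always exhausts the search space before terminating.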

Expected black-box running time For the purpose of analyzing the complexity of black-box algorithms, we will assume that the algorithm terminates only when the whole search space has been examined. Note that such an algorithm will always find the optimal solution. Note also that there are only finitely many deterministic black-box algorithms A for a given search space Sn. Moreover, an arbitrary black-box algorithm Aμ can be conceptualized as picking a deterministic black-box algorithm according to a probability distribution μ on the set of all such algorithms. For a given f, we then define the expected running time E(f, Aμ) of Aμ on f as the expected number of black-box queries until an xt with optimal f(xt) has been found.

Black-box complexity The black-box complexity of a given set Fn is defined as minμ maxf E(f, Aμ), where f ranges over Fn and μ ranges over all probability distributions on the set of deterministic algorithms. Note that this is a worst-case notion of complexity.

Black-box complexity, Bob, and Alice One can understand black-box complexity better by picturing a game between Bob and Alice. Alice picks a deterministic algorithm A; Bob acts as the spoiler who tries to pick a function f from Fn that he expects to be particularly difficult for A. Each player makes his or her decision without knowing the opponent’s choice. Alice then pays Bob a dollar amount equal to the number of black-box queries until her algorithm has first found the optimum of f. This is a finite two-person zero-sum game. A classical result of game theory says that there are probability distributions μ* on the set of deterministic algorithms and ν* on the set Fn such that: maxν minμ E(fν, Aμ) = minμ maxν E(fν, Aμ) = E(fν*, Aμ*) = V. The pair (ν*, μ*) is called a Nash equilibrium, and the number V is called the value of the game.

Yao’s Minimax Principle Theorem (Yao, 1977): For each probability distribution ν on Fn and for each probability distribution μ on the set of deterministic black-box algorithms we have: minA E(fν, A) ≤ maxf E(f, Aμ). Proof: Note that minA E(fν, A) ≤ minμ maxν E(fν, Aμ) = maxν minμ E(fν, Aμ) ≤ maxf E(f, Aμ).

An application: Needle in the haystack problems Search space: Sn = {0, 1}^n. For a in Sn we define fa(x) = 1 if x = a and fa(x) = 0 otherwise. The class of all these functions will be denoted by Nn and called the class of needle-in-the-haystack functions. Lemma: The black-box complexity of Nn is at most 2^(n-1) + 0.5. Proof: Query the points of Sn in a uniformly random order. The needle is equally likely to be found at each of the 2^n positions, so the expected number of queries is (1 + 2^n)/2 = 2^(n-1) + 0.5.

An application: Needle in the haystack problems Theorem: The black-box complexity of Nn is at least 2^(n-1) + 0.5. Proof: Let ν be the uniform distribution on Nn. By Yao’s Minimax Principle, minA E(fν, A) is a lower bound for the black-box complexity minμ maxf E(f, Aμ). But for every deterministic algorithm A, the expected search time under the uniform ν is still 2^(n-1) + 0.5, since A queries the points of Sn in some fixed order and the needle is equally likely to sit at each of the 2^n positions. The theorem follows.
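The value 2^(n-1) + 0.5 is easy to check empirically. The sketch below (the function name needle_queries and the trial count are illustrative assumptions) draws a random needle and a random query order, and averages the number of black-box queries until the needle is found.

```python
import itertools
import random

def needle_queries(n, trials=2000):
    # Average, over random needles a and random query orders, the number
    # of queries until f_a(x) = 1 is first observed.
    space = list(itertools.product([0, 1], repeat=n))
    total = 0
    for _ in range(trials):
        needle = random.choice(space)
        order = random.sample(space, len(space))  # a random query order
        total += order.index(needle) + 1          # queries until found
    return total / trials
```

For n = 4 the theoretical value is 2^3 + 0.5 = 8.5, and the empirical average lands close to it.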

Unimodal functions It should come as no surprise that needle-in-the-haystack functions are difficult for black-box optimization. Here is a class of functions that appears to be especially easy for black-box algorithms: Definition: A function f on {0, 1}^n is unimodal if for each nonmaximal x in {0, 1}^n there is a y in {0, 1}^n with Hamming distance 1 from x such that y has higher fitness than x. Let Un denote the set of all unimodal functions on {0, 1}^n. Theorem: Let g(n) be an integer function such that g(n) = o(n). Then the black-box complexity of Un is at least 2^g(n).
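On a unimodal function, a simple hill climber that repeatedly moves to a strictly better Hamming-1 neighbor is guaranteed to stop only at the global maximum; the theorem above says that this walk can nevertheless be exponentially long for some unimodal functions. Below is a minimal sketch; the helper name hill_climb and the use of OneMax (f(x) = number of ones) as the example unimodal function are my assumptions for illustration.

```python
import random

def hill_climb(f, n, max_steps=10_000):
    # From the current point, scan the n Hamming-1 neighbors and move
    # to the first strictly better one; stop when no neighbor improves.
    # By the definition of unimodality, the stopping point is then the
    # global maximum.
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(max_steps):
        improved = False
        for i in range(n):
            y = x[:]
            y[i] ^= 1                   # flip one bit
            if f(y) > f(x):
                x, improved = y, True
                break
        if not improved:
            break                       # no better neighbor: x is optimal
    return x

onemax = sum   # f(x) = number of ones: unimodal, maximum at (1, ..., 1)
```

On OneMax the climber reaches the all-ones string after at most n improving moves, which is why the exponential lower bound for Un as a whole is surprising.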

Two challenging questions (Wegener): Problem 1: Show for some well-known NP-hard optimization problem that its black-box complexity is exponential in n, without assuming that P is not equal to NP. Problem 2: Develop techniques for proving lower bounds on black-box complexity that go beyond Yao’s Minimax Principle or that do not use this theorem at all.