How to Stall a Motor: Information-Based Optimization for Safety Refutation of Hybrid Systems
Todd W. Neller, Knowledge Systems Laboratory, Stanford University

Outline
Defining the problem: Will the critical satellite motor stall?
Generalizing the problem: Hybrid Systems
Reformulating the problem: Optimizing for failure
Describing the tool we need: Information-Based Optimization
Exciting Conclusion: Why should a power screwdriver be inspiring?

Stepper Motors, a.k.a. “step motors”

The Problem
Dan Goldin, head of NASA: “Smaller, Faster, Better, Cheaper” → microsatellites, autonomy, C.O.T.S.
SSDL’s OPAL: Orbiting Picosatellite Automated Launcher
Problem: Will the motor stall while accelerating the picosatellite?
How to find good research problems: specific → general

Hybrid Systems
Hybrid = Discrete + Continuous
Example: Bouncing Ball
Fast Continuous Change → Discrete Change
More Interesting Example: Mode-Switching Controllers
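To make the bouncing-ball example concrete, here is a minimal simulation sketch: continuous free-fall dynamics integrated with small Euler steps, plus a discrete transition (the bounce) when the ball reaches the floor. The step size, gravity constant, and restitution coefficient are illustrative choices, not values from the talk.

```python
DT = 0.001   # integration step (s) - illustrative choice
G = 9.81     # gravitational acceleration (m/s^2)
E = 0.8      # coefficient of restitution (fraction of speed kept per bounce)

def simulate_ball(h0, v0, t_end):
    """Return the trajectory [(t, h, v), ...] of a ball dropped from height h0."""
    t, h, v = 0.0, h0, v0
    traj = [(t, h, v)]
    while t < t_end:
        # continuous evolution: free fall under gravity
        h += v * DT
        v -= G * DT
        # discrete transition: impact with the floor reverses and damps velocity
        if h <= 0.0 and v < 0.0:
            h = 0.0
            v = -E * v
        t += DT
        traj.append((t, h, v))
    return traj

traj = simulate_ball(h0=1.0, v0=0.0, t_end=2.0)
```

The interleaving of a differential-equation step with an instantaneous state reset is exactly the discrete + continuous structure the slide describes.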

Safety
Safety property: something that is always true about a system
Another view: a set of states the system never leaves
Safe/unsafe states, desired/undesired states
Initial safety property: safety over an initial duration of time

Verification, Refutation
Verification of safety: proving that the system can never leave safe states
Verification through simulation?
Refutation of safety: proving that the system can leave safe states
Proof by counterexample

Stepper Motor Safety Refutation
Given:
- Stepper motor simulator and acceleration table
- Bounds on stepper motor system parameters and initial state
- Set of stall states
Find: parameters and initial conditions such that the motor enters a stall state during acceleration

General Problem Statement
Given:
- Hybrid system simulator for initial time duration
- Bounds on initial conditions (parameters and variable assignments)
- Set of unsafe states
Find: initial conditions such that the system enters an unsafe state during the initial time

Generate and Test Tools for Initial Safety Refutation of Hybrid Systems (There has to be a better way, right?)

Distance from Unsafe States
Make use of simple knowledge of the problem domain to provide a landscape helpful to search

Refutation through Optimization
Transform the refutation problem into an optimization problem with a heuristic (i.e., estimated) measure of relative safety
Apply efficient global optimization

Problem Reformulation
Given:
- Hybrid system simulator for initial time t
- Possible initial conditions I
- Heuristic evaluation function f which takes an initial condition as input and returns a relative safety ranking of the resulting trajectory
Find: initial condition x in I such that f(x) = 0
[diagram: initial condition → (simulation) → trajectory → (evaluation) → ranking]
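A hedged sketch of this reformulation: the safety measure f simulates a trajectory and returns its distance from the unsafe set, so f reaches 0 exactly when an unsafe state is entered. The toy system (a damped oscillator), the unsafe threshold, and the distance measure below are all illustrative assumptions, not the talk's actual stepper-motor model.

```python
STEP, HORIZON = 0.01, 10.0   # integration step and initial time duration
UNSAFE = 2.0                 # |x| >= UNSAFE is considered "unsafe" (assumption)

def simulate(x0, v0):
    """Lightly damped oscillator; returns the position trajectory."""
    x, v, xs = x0, v0, []
    t = 0.0
    while t < HORIZON:
        v += (-x - 0.05 * v) * STEP
        x += v * STEP
        xs.append(x)
        t += STEP
    return xs

def f(x0, v0):
    """Heuristic relative safety ranking: distance from the simulated
    trajectory to the unsafe set.  f == 0 iff an unsafe state is entered."""
    xs = simulate(x0, v0)
    return max(0.0, min(UNSAFE - abs(x) for x in xs))
```

Refutation then becomes the search for an initial condition where f evaluates to zero.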

Problem: Simulation Isn’t Cheap
f(x) is usually assumed cheap to compute. Most methods store and use very little data.
Solution: use simulation intelligently.
General principle: information gained at great cost should be treated with great value.
Example evaluations: f(6.27)=0.34, f(6.35)=0.92, f(7.11)=1.85, f(9.24)=7.90

Satisficing
General optimization seeks an unknown optimum. We don’t know our optimum, but we have a goal value we’re seeking to satisfy.
Satisficing (a blend of “satisfy” and “suffice”, coined by economist Herbert Simon)
This knowledge can be leveraged to make our optimization more efficient.

Information-Based Approach
Assume: continuous, flat functions are more likely

Information-Based Optimization
(Neimark and Strongin, 1966; Strongin and Sergeyev, 1992; Mockus, 1994)
Previous function evaluations shape a probability distribution over possible functions.
But we needn’t deal with probabilities. Ranking candidates is enough.
Prefer smooth functions → prefer the candidate which minimizes slope at the goal value
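One reading of "minimize slope at the goal value" in 1-D can be sketched as follows: for each interval between evaluated points, the least-steep function that dips down to the goal value and comes back needs slope (f1 + f2) / (x2 - x1), so we repeatedly sample the interval where that required slope is smallest, at the point where the least-steep V-shaped descent would touch the goal. This criterion, the bounds, and the budget are my assumptions for illustration, not the talk's exact algorithm.

```python
def info_search_1d(f, lo, hi, goal=0.0, tol=1e-6, max_evals=100):
    """Return (x, f(x)) with f(x) - goal <= tol, or the best point found."""
    pts = sorted([(lo, f(lo)), (hi, f(hi))])
    for _ in range(max_evals - 2):
        x_best, f_best = min(pts, key=lambda p: p[1])
        if f_best - goal <= tol:
            return x_best, f_best
        # rank intervals by the minimal slope a goal-reaching function needs
        def required_slope(pair):
            (x1, f1), (x2, f2) = pair
            return ((f1 - goal) + (f2 - goal)) / (x2 - x1)
        (x1, f1), (x2, f2) = min(zip(pts, pts[1:]), key=required_slope)
        # sample where the least-steep V-shaped descent touches the goal value
        m = x1 + (f1 - goal) * (x2 - x1) / ((f1 - goal) + (f2 - goal))
        pts = sorted(pts + [(m, f(m))])
    return min(pts, key=lambda p: p[1])
```

Note how flatness is preferred: intervals that can reach the goal with a gentle slope are probed first, matching the "flat functions more likely" assumption.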

Problem: Only Good for One Dimension
In 1-D, candidates are ranked with respect to immediate neighbors.
What are “immediate neighbors” in multi-dimensional space?
Intuition: closer points have greater relevance.

Solution: Shadowing
Point b shadows point a from point d if:
- b is closer to d than a is, and
- the slope between a and b is greater than the slope between a and d.
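The shadowing test can be written directly from this definition. Here "slope" between two evaluated points is taken as |difference in f| divided by Euclidean distance; that choice of slope measure is an assumption for illustration.

```python
import math

def dist(p, q):
    """Euclidean distance between points given as coordinate tuples."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def slope(p, fp, q, fq):
    """Absolute slope of f between two evaluated points (assumed measure)."""
    return abs(fp - fq) / dist(p, q)

def shadows(b, fb, a, fa, d, fd):
    """True if point b shadows point a from point d:
    b is closer to d than a is, and the slope between a and b
    exceeds the slope between a and d."""
    return (dist(b, d) < dist(a, d) and
            slope(a, fa, b, fb) > slope(a, fa, d, fd))
```

Unshadowed points then play the role that immediate neighbors play in 1-D.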

Multidimensional Information-Based Optimization
Choose an initial point x and evaluate f(x)
Iterate: pick the next point x according to ranking function g(x) and evaluate f(x)
Excellent for efficiently finding zeros when they are not rare.
Problem: slow convergence for rare zeros; points cluster near minima
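The loop above can be sketched as a satisficing skeleton. The ranking function g is the heart of the method and is left abstract here; the candidate pool, search bounds, and budget are placeholder assumptions, and this sketch only shows the control flow: evaluate, stop at the goal value, otherwise let g pick the next point.

```python
import random

def satisfice(f, g, x0, goal=0.0, tol=1e-9, max_evals=1000, n_cands=50):
    """Return (x, f(x)) with f(x) <= goal + tol, or the best point found.
    g(candidate, evaluated) ranks candidates using past evaluations."""
    evaluated = [(x0, f(x0))]
    for _ in range(max_evals - 1):
        x, fx = min(evaluated, key=lambda e: e[1])
        if fx <= goal + tol:
            return x, fx                     # goal satisfied: refutation found
        # candidate pool drawn from an assumed search box [-3, 3]^n
        cands = [tuple(random.uniform(-3, 3) for _ in range(len(x0)))
                 for _ in range(n_cands)]
        nxt = max(cands, key=lambda c: g(c, evaluated))  # g ranks candidates
        evaluated.append((nxt, f(nxt)))
    return min(evaluated, key=lambda e: e[1])
```

With an oracle ranking such as g(c, ev) = -f(c) this degenerates to best-of-pool sampling; the interesting choices of g use only the stored evaluations, e.g. via the shadowing relation.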

Solution: Multilevel Optimization
Perform a local optimization for each top-level function evaluation
Summarize information → tractability
Generalize to n levels, with each level expediting search for the level above
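A two-level version of this idea can be sketched as follows: each top-level evaluation launches a cheap local optimization and is summarized by the local minimum it finds. The local method (a greedy pattern search), the bounds, and the budgets are illustrative assumptions; the top level here is plain random sampling rather than the talk's information-based method.

```python
import random

def local_descent(f, x, step=0.5, shrink=0.5, iters=40):
    """Greedy pattern search: try +/- step moves along each axis,
    shrinking the step when no move improves."""
    x = list(x)
    fx = f(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                y = list(x)
                y[i] += delta
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= shrink
    return x, fx

def two_level(f, dim, n_top=20, lo=-5.0, hi=5.0):
    """Top level proposes points; each is summarized by its local minimum."""
    summaries = []
    for _ in range(n_top):
        x0 = [random.uniform(lo, hi) for _ in range(dim)]
        summaries.append(local_descent(f, x0))
    return min(summaries, key=lambda s: s[1])
```

The summarization step is what buys tractability: the top level reasons over a few (local minimum, value) pairs instead of every simulation it triggered.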

Summary
Initial safety refutation of a hybrid system can be reformulated as satisficing optimization given a heuristic measure of relative safety.
Information-based optimization is suited to such optimization, and can be extended to multiple dimensions with shadowing and sampling.
Convergence to rare unsafe trajectories: multilevel optimization

Using an Optimization Toolbox
You have a set of optimization methods:
- Monte Carlo Optimization
- Monte Carlo with Local Optimization
- Information-Based Optimization
- Information-Based with Local Optimization
You have a set of observations during optimization (e.g. function evals, local minima).

Challenge Problem: Method Switching
Given:
- a set of iterative optimization procedures
- a distribution of optimization problems
- a set of optimization features
Learn: a policy for dynamically switching between procedures which minimizes time to solution for such a distribution

Conclusion
The computer is a power tool for the mind.
Power screwdrivers with Phillips bits don’t work well with slotted screws.
Understand the assumptions of the tools you apply.
You can design new bits suited to new tasks.
One new bit can change the world of computing!

Other Approaches
Few minima: Random Local Optimization
Many minima: Simulated Annealing with Local Optimization (Desai and Patil, 1996)
For higher dimensions, you’re forever searching corners!
Direction Set Methods: successive 1-D minimizations in different directions
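A direction set method can be sketched as repeated 1-D minimizations, here along the coordinate axes using a simple golden-section line search. The bracketing interval and iteration counts are illustrative assumptions (practical direction set methods such as Powell's also update the directions themselves).

```python
import math

def golden_min(h, lo, hi, iters=60):
    """1-D golden-section minimization of h on [lo, hi]."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    for _ in range(iters):
        if h(c) < h(d):
            b, d = d, c                  # minimum lies in [a, d]
            c = b - phi * (b - a)
        else:
            a, c = c, d                  # minimum lies in [c, b]
            d = a + phi * (b - a)
    return (a + b) / 2

def direction_set_minimize(f, x0, sweeps=20):
    """Repeated 1-D minimizations along the coordinate axes."""
    x = list(x0)
    for _ in range(sweeps):
        for i in range(len(x)):
            def along(t, i=i):           # restrict f to direction i
                y = list(x)
                y[i] = t
                return f(y)
            x[i] = golden_min(along, -10.0, 10.0)
    return x
```

Each sweep reduces the multidimensional problem to a handful of cheap 1-D searches, which is exactly what makes these methods attractive when 1-D machinery is strong.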

How to Stall a Motor: Information-Based Optimization for Safety Refutation of Hybrid Systems
Todd W. Neller, Knowledge Systems Laboratory, Stanford University
Presented at: Gettysburg College (January 21, 2000); Colgate University (January 25, 2000); Lafayette College (January 27, 2000); Bowdoin College (January 31, 2000); Williams College (February 11, 2000)