Optimization of Motorized Mirrors with SciPy Minimization Methods


Optimization of Motorized Mirrors with SciPy Minimization Methods
NIST Student Symposium, July 28th-29th, 2016
Melanie Sawyer1,2, Sae Woo Nam2, Krister Shalm2, Marty Stevens2
1 Fairview High School, Boulder, CO 80305
2 National Institute of Standards and Technology, Applied Physics Department (#686), 325 Broadway, Boulder, CO 80305

Abstract
The purpose of this project was to determine the algorithm that would deliver the greatest number of laser-produced photons to an optical fiber. These photons originated from a laser beam and were reflected through a series of motorized mirrors until they reached their destination. Figure 1 demonstrates an example of a laser-mirror system.

Powell's Conjugate Direction Method
The Powell Method, or Powell's Conjugate Direction Method, is a derivative-free optimization method.5 It optimizes a series of variables using search vectors, essentially searching along a 1-dimensional line in some direction N.6 Once the algorithm finds the maximum point along that search vector, it selects another direction. The algorithm continues generating search vectors until the differences in output are smaller than some tolerance.7

Figure 3: Visualization of a Powell search in 3 dimensions.

Basinhopping Algorithm
The basinhopping algorithm minimizes a function by moving among various "basins", or local minima, and ultimately determining the lowest point.10 This algorithm is excellent at avoiding getting caught in a local minimum, since it samples many of the local minima of a given function and keeps the lowest.11 Basinhopping is relatively similar to simulated annealing, which simulates heating a metal and then slowly cooling it through different states.12

Figure 7: Visualization of basinhopping in a two-dimensional landscape.
Figure 8: The path of the Nelder-Mead algorithm when implemented with SciPy.
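Both methods are exposed by SciPy as scipy.optimize.minimize(method="Powell") and scipy.optimize.basinhopping. The sketch below is a minimal illustration, not the poster's actual code: the real objective moved the mirror motors and returned the negative measured power, so here a smooth stand-in with hypothetical target angles plays that role.

```python
import numpy as np
from scipy.optimize import minimize, basinhopping

# Stand-in objective: in the experiment this would move the motors to x and
# return the *negative* measured power (SciPy minimizes). The target angles
# below are hypothetical, chosen only so the example has a known optimum.
TARGET = np.array([1.0, -0.5, 0.3, 0.8])  # 2 mirrors x 2 axes

def neg_power(x):
    return float(np.sum((x - TARGET) ** 2))

x0 = np.zeros(4)

# Powell: derivative-free line searches along an evolving set of directions.
res_powell = minimize(neg_power, x0, method="Powell", options={"ftol": 1e-6})

# Basinhopping: random hop, then a local minimization from each new point.
res_bh = basinhopping(neg_power, x0, niter=25, seed=0,
                      minimizer_kwargs={"method": "Nelder-Mead"})

print(res_powell.x, res_bh.x)
```

Both calls return an optimization result whose `.x` attribute holds the best motor positions found; in the real setup those would be sent to the mirror controllers.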
Figure 1: An example of a laser-mirror system, with 3 mirrors.1

In my setup there were only 2 mirrors, and the resulting beam was coupled into a fiber attached to a power-measurement device. Since there is already significant uncertainty present when dealing with photons, it is imperative that the maximum number of photons be coupled into the optical fiber with as much precision as possible. When these mirrors are optimized by hand, it is difficult to be precise, and often impossible to know whether the global maximum has been reached. Controlling the mirrors computationally has numerous advantages, notably the precision and speed with which the computer can move the mirrors and analyze the results, and the adaptability of the algorithm to more complex systems. To find the most efficient and effective way to computationally determine the global maximum for a mirror system with 2 devices and 4 axes, I compared 10 different minimization algorithms from the SciPy minimization module and, using the results, established the most effective solution.

Comparison of Algorithms
Figure 10: Runtime vs. Final Power for Powell, Nelder-Mead, and Basinhopping.

Another key characteristic of an ideal optimization algorithm is the number of times it must build upon itself (iterate) to find the maximum.

Figure 11: Number of iterations each algorithm requires to settle.

From the graphs above, it is clear that the Basinhopping algorithm performs best: it has a relatively short runtime, reliably finds the maximum, and does not require many iterations to complete.

Figure 4: The path of the Powell algorithm when implemented with SciPy.

Introduction
Optimization and minimization of functions is extremely valuable when engineering effective solutions to problems. One example of an application of minimization is noise reduction in electrical signals.
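A runtime-and-evaluations comparison like the one summarized in Figures 10 and 11 can be sketched with a small timing harness. This is a hypothetical reconstruction with a stand-in objective, not the poster's measurement code; basinhopping has its own driver function and would be timed the same way.

```python
import time
import numpy as np
from scipy.optimize import minimize

# Hypothetical harness in the spirit of Figures 10-11: time each local
# method on the same stand-in objective and record function evaluations.
TARGET = np.array([1.0, -0.5, 0.3, 0.8])  # assumed optimal mirror angles

def neg_power(x):
    return float(np.sum((x - TARGET) ** 2))

results = {}
for method in ("Powell", "Nelder-Mead"):
    t0 = time.perf_counter()
    res = minimize(neg_power, np.zeros(4), method=method)
    results[method] = {"runtime_s": time.perf_counter() - t0,
                       "evals": res.nfev, "final": res.fun}

for name, r in results.items():
    print(f"{name}: {r['runtime_s']:.4f} s, {r['evals']} evals, "
          f"final value {r['final']:.2e}")
```

In the real experiment each function evaluation involved physically moving the motors and reading the power meter, so the evaluation count (`nfev`), not CPU time, dominated the wall-clock runtime.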
One key feature of an optimization algorithm is that it must not get tripped up by local maxima or minima, but must instead identify the global maximum or minimum. To assess the ability of various algorithms to escape local minima, "test functions" are used. These functions are designed to trap optimization algorithms in local minima that the algorithm may mistake for the global minimum. An example of such a function is shown below.

Figure 2: An example of a function designed to test the effectiveness of optimization algorithms. Due to the high number of local minima, algorithms may never see the global minimum.

The Python SciPy module provides 11 different minimization methods. However, many of these algorithms require a Jacobian, a matrix of partial derivatives.2 Since calculating the Jacobian for a given function takes significant time and processing power, I decided to eliminate these algorithms. Some of the remaining algorithms (such as L-BFGS) had very limited options and could not be tweaked, so I eliminated those as well. The efficacy of an algorithm depends entirely on the system it is used for: the Basinhopping algorithm tends to work well with complex systems with many local extrema, while Nelder-Mead works more effectively with fewer local extrema.3,4 Additionally, when using these algorithms in Python, they can be adjusted with various options and keyword arguments (discussed in detail in the "Manipulation of Algorithms" section).

Nelder-Mead Optimization
Nelder-Mead uses a simplex, a polytope, to measure values at various locations.8 The simplex for Nelder-Mead has n+1 vertices, where n is the number of dimensions of the function.9 Therefore, the simplex for a mirror system has 2m+1 vertices, where m is the number of mirrors (each mirror has 2 "dimensions": horizontal and vertical).
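A deceptive landscape of the kind shown in Figure 2 can be mimicked with the Rastrigin function (my choice of test function, not necessarily the one on the poster). The sketch below shows the behavior described above: a purely local method like Nelder-Mead settles into the nearest basin, while basinhopping escapes it.

```python
import numpy as np
from scipy.optimize import minimize, basinhopping

# Rastrigin function: a standard deceptive test function (a stand-in for
# Figure 2): a bowl overlaid with cosine ripples, creating a dense grid of
# local minima around the single global minimum at the origin.
def rastrigin(x):
    x = np.asarray(x)
    return 10.0 * x.size + float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

x0 = np.array([3.2, -2.7])  # deliberately starts near a non-global basin

# A purely local method settles into the nearest basin...
local = minimize(rastrigin, x0, method="Nelder-Mead")

# ...while basinhopping repeatedly perturbs and re-minimizes, so it can
# escape that basin and find much lower ones.
glob = basinhopping(rastrigin, x0, niter=200, stepsize=1.0, seed=1,
                    minimizer_kwargs={"method": "Nelder-Mead"})

print(f"local method found {local.fun:.3f}, basinhopping found {glob.fun:.3f}")
```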
While running the various algorithms I was using 2 mirrors, so the Nelder-Mead method generated a simplex with 5 vertices to determine the minimum.

Figure 5: Visualization of Nelder-Mead in 2-dimensional space, where the simplex is a triangle. The algorithm starts by generating triangles toward the outer edge, and the triangle slowly moves inward, toward the minimum.

Manipulation of Algorithms
To make the algorithms work efficiently, it is necessary to adjust the default SciPy settings for each algorithm. Table 1 below describes various options that can be adjusted, their effect, and the values for each that I found to be the most effective.

Table 1: Description of minimization adjustments.

Ftol
  Description: Function tolerance, i.e. the maximum difference between subsequent results at which the algorithm terminates.
  Success values: 1e-6 (Powell), 1e-2 (Nelder-Mead), 1e-5 (Basinhopping)

Bounds
  Description: Boundaries for input points. Not implemented through options, but as a separate helper function, checkBounds, called whenever the objective function was called (see Figure 9).

Step Size
  Description: Not implemented within the minimize function call, but in the objective function; it scales the input by a constant, thus changing the step size for searching.
  Success values: 1e2 (Powell), 1e5 (Nelder-Mead), 1e6 (Basinhopping)

Figure 6: The path of the Nelder-Mead algorithm when implemented with SciPy.
Figure 9: Bounds-checking mechanism. It prevented unnecessary time spent checking far-away points that do not contain the maximum.

References
1 Smith, Raymond. Laser and Mirror Setup. LensDigital. Accessed 24 July 2016.
2 Brett M. Averick. Computing Large Sparse Jacobian Matrices Using Automatic Differentiation. SIAM Journal of Scientific Computing. Accessed 24 July 2016.
3,10,11 David J. Wales. Global Optimization by Basinhopping and the Lowest Energy Structure of Lennard-Jones Clusters Containing up to 100 Atoms. Cornell University Library. Accessed 24 July 2016.
4,8,9 John E. Dennis. Optimization on Microcomputers: The Nelder-Mead Simplex Algorithm. New Computing Environments: Microcomputers in Large-Scale Computing. Accessed 24 July 2016.
5,6 M.J.D. Powell. A New Algorithm for Unconstrained Optimization. Nonlinear Programming: Proceedings of a Symposium Conducted by the Mathematics Research Center, the University of Wisconsin, Madison, May 4-6, 1970. Accessed 24 July 2016.
7 SciPy Minimize Documentation. http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html. Accessed 24 July 2016.
12 P. J. M. van Laarhoven. Simulated Annealing: Theory and Applications. Acta Applicandae Mathematica. Accessed 24 July 2016.
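The bounds check and step-size scaling described in Table 1 and Figure 9 might be sketched as follows. This is a hypothetical reconstruction: the names check_bounds, SCALE, and the bound values are assumptions, not the poster's actual code, and the objective is again a stand-in for the negative measured power.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical reconstruction of the Table 1 / Figure 9 scheme. Out-of-bounds
# points return a large penalty so no time is wasted measuring far-away
# positions; inputs are scaled so the optimizer's default step maps onto a
# smaller physical motor step.
LOWER, UPPER = -5.0, 5.0   # assumed motor travel limits
SCALE = 1e2                # Table 1's reported step-size value for Powell

def check_bounds(x):
    return bool(np.all(x >= LOWER) and np.all(x <= UPPER))

def objective(x_scaled):
    x = np.asarray(x_scaled) / SCALE   # undo scaling before "moving mirrors"
    if not check_bounds(x):
        return 1e9                     # penalty: skip out-of-bounds points
    # stand-in for -measured_power(x), optimum at hypothetical angles:
    return float(np.sum((x - np.array([1.0, -0.5, 0.3, 0.8])) ** 2))

res = minimize(objective, np.zeros(4), method="Powell",
               options={"ftol": 1e-8})
best = res.x / SCALE                   # convert back to physical positions
print(best)
```

Because the optimizer works in the scaled coordinates, a unit-sized default step corresponds to a physical move of 1/SCALE, which is one way to read Table 1's "Step Size" row.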