Digital Optimization Martynas Vaidelys
Optimization
Mathematical optimization (optimization or mathematical programming) is the selection of a best element s* (with regard to some criterion) from a set of available alternatives S:

s* ∈ S* = { s | s = arg min_{s∈S} f(s) }.

Optimization means finding the "best available" values of an objective function over a defined domain (or a set of constraints), covering a variety of objective-function types and domain types.
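The arg-min definition above can be sketched directly for a small finite set. This is a minimal illustration, not part of the original slides; the set S and the objective f(s) = (s − 3)² are illustrative assumptions:

```python
# Brute-force arg min over a finite set of alternatives S:
# pick the element s* that minimizes the objective f.

def argmin(S, f):
    """Return the element of S that minimizes f."""
    return min(S, key=f)

S = [0, 1, 2, 3, 4, 5]          # illustrative set of alternatives
f = lambda s: (s - 3) ** 2      # illustrative objective function

s_star = argmin(S, f)
print(s_star)  # → 3
```

For a finite set this enumeration is exact; the rest of the slides deal with the case where the set is too large for enumeration to be practical.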
Objective function
The function f is called, variously:
- an objective function
- a loss function or cost function (minimization)
- an indirect utility function (minimization)
- a utility function (maximization)
- a fitness function (maximization)
- an energy function
A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an optimal solution.
Continuous vs. Digital
Continuous optimization – the variables used in the objective function can assume real values, e.g. values from intervals of the real line.
- Analytical optimization methods
- Gradient descent, linear and non-linear optimization, …
Digital (discrete) optimization – the variables used in the mathematical program are restricted to discrete values, such as the integers.
- Combinatorial optimization
- Integer programming
Continuous to Digital
Some continuous problems can be transformed into digital optimization problems by:
- Changing the objective function's argument type to binary form
- Rounding real values to integers
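Both transformations above can be sketched in a few lines. The threshold 0.5 and the example vector are illustrative assumptions, not from the slides:

```python
# Turning a continuous candidate solution into a discrete one:
# (a) rounding real values to integers,
# (b) thresholding real values to binary form.

def to_integer(x):
    """Round each real-valued variable to the nearest integer."""
    return [round(v) for v in x]

def to_binary(x, threshold=0.5):
    """Map each real-valued variable to 0/1 by thresholding."""
    return [1 if v >= threshold else 0 for v in x]

x = [0.2, 1.7, 3.49, 0.8]       # illustrative continuous solution
print(to_integer(x))  # → [0, 2, 3, 1]
print(to_binary(x))   # → [0, 1, 1, 1]
```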
Optimization problems
- Local minima (local optima)
- Constraints
- Feasibility
- Existence
- Multi-objective, multi-variable optimization
- Multi-modal optimization
- Computing speed: some discrete optimization problems are NP-complete
Classical optimization
- The solution is identified by means of enumeration or differential calculus
- Allows an analytical solution (e.g. LS estimation)
- Existence of a (unique) solution is presumed
- Classical optimization methods converge to the solution of the corresponding first-order conditions
- The solution can be approximated reasonably well by standard algorithms (e.g. gradient methods)
- However, straightforward application of standard methods will not always provide a good approximation of the global optimum
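A standard gradient method, as mentioned above, can be sketched on a convex one-dimensional problem. The objective f(x) = (x − 2)², the step size and the iteration count are illustrative assumptions:

```python
# Gradient descent: repeatedly step in the direction of the
# negative gradient until (hopefully) reaching a minimum.

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function given its gradient, starting from x0."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f(x) = (x - 2)**2, so f'(x) = 2 * (x - 2); the minimum is at x = 2.
x_opt = gradient_descent(lambda x: 2 * (x - 2), x0=10.0)
print(round(x_opt, 4))  # → 2.0
```

On a convex problem like this one, the method converges to the global optimum; on a multi-modal objective it may stop at whichever local minimum the starting point leads to, which is exactly the limitation noted in the last bullet.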
Heuristic methods Based on concepts found in nature Have become feasible as a consequence of growing computational power Although aiming at high quality solution, they cannot pretend to produce the exact solution in every case with certainty Nevertheless, a stochastic high-quality approximation of a global optimum is probably more valuable than a deterministic poor-quality local minimum provided by a classical method or no solution at all. Easy to implement to different problems Side constraints on the solution can be taken into account at low additional costs
Classical vs. Heuristic
Optimization methods
- Exhaustive search
- Simulated annealing
- Tabu search
- Ant colony optimization
- Memetic algorithms
- Scatter search
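Of the heuristics listed, simulated annealing is compact enough to sketch here. The objective f(s) = (s − 7)² over the integers 0…20, the initial temperature, the cooling rate and the ±1 neighbour move are all illustrative assumptions:

```python
# Simulated annealing on a small discrete problem: accept improving
# moves always, and worsening moves with probability exp(-delta / T),
# where the temperature T is gradually lowered ("cooled").
import math
import random

def simulated_annealing(f, s0, lo, hi, T=10.0, cooling=0.95, steps=500):
    random.seed(0)  # fixed seed so the sketch is reproducible
    s = s0
    for _ in range(steps):
        cand = min(hi, max(lo, s + random.choice([-1, 1])))  # neighbour
        delta = f(cand) - f(s)
        if delta <= 0 or random.random() < math.exp(-delta / T):
            s = cand          # accept better (or, early on, worse) moves
        T *= cooling          # cool down: worsening moves become rarer
    return s

print(simulated_annealing(lambda s: (s - 7) ** 2, s0=0, lo=0, hi=20))
# converges to the minimum at s = 7 once the temperature is low
```

Early on, the high temperature lets the search escape local minima; as T shrinks, the method degenerates into a greedy local search, which is what eventually pins it to an optimum.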
Genetic algorithm
- Imitates the evolutionary process of reproducing species
- Does not operate on a single current solution but on a set of current solutions (a population)
- New individuals are generated by crossover: part of the genetic patrimony of each parent is combined and a random mutation is applied
- If a new individual (child) inherits good characteristics from its parents → it has a higher probability of surviving
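The population / crossover / mutation / survival cycle above can be sketched on the classic "OneMax" toy problem (maximize the number of 1-bits in a bit string). The fitness function, population size, mutation rate and generation count are illustrative assumptions:

```python
# Genetic-algorithm sketch: a population of bit-string individuals,
# one-point crossover, random bit-flip mutation, and survival of the
# fitter half of the population each generation.
import random

random.seed(1)
N_BITS, POP, GENS, MUT = 16, 20, 60, 0.05
fitness = lambda ind: sum(ind)  # OneMax: count the 1-bits (maximize)

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                  # fitter half survives
    children = []
    while len(children) < POP - len(parents):
        p1, p2 = random.sample(parents, 2)
        cut = random.randrange(1, N_BITS)     # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [b ^ (random.random() < MUT) for b in child]  # mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))  # high fitness, typically near the maximum of 16
```

Because the fitter half always survives, the best fitness in the population never decreases; crossover and mutation supply the new genetic material that drives it upward.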
Particle swarm optimization
The PSO algorithm works by having a population (called a swarm) of candidate solutions (called particles). Particles are moved around in the search space according to a few simple formulae. The movements of the particles are guided by their own best-known position in the search space as well as the entire swarm's best-known position. When improved positions are discovered, these then come to guide the movements of the swarm. The process is repeated, and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.
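The description above can be sketched in one dimension: each particle keeps a velocity, its personal best position, and is pulled toward both its own best and the swarm's global best. The objective f(x) = x², the coefficients (w, c1, c2) and the swarm size are illustrative assumptions:

```python
# Particle-swarm sketch: velocities updated from an inertia term plus
# random pulls toward each particle's personal best and the global best.
import random

random.seed(2)
f = lambda x: x * x                     # illustrative objective, min at 0
W, C1, C2, N, STEPS = 0.7, 1.5, 1.5, 10, 100

xs = [random.uniform(-10, 10) for _ in range(N)]   # positions
vs = [0.0] * N                                     # velocities
pbest = xs[:]                                      # personal bests
gbest = min(pbest, key=f)                          # swarm's global best

for _ in range(STEPS):
    for i in range(N):
        r1, r2 = random.random(), random.random()
        vs[i] = (W * vs[i]
                 + C1 * r1 * (pbest[i] - xs[i])    # pull toward own best
                 + C2 * r2 * (gbest - xs[i]))      # pull toward swarm best
        xs[i] += vs[i]
        if f(xs[i]) < f(pbest[i]):
            pbest[i] = xs[i]
    gbest = min(pbest, key=f)

print(gbest)  # close to the optimum x = 0
```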
Applications
- Task scheduling
- Identification of an optimal set of time lags
- Shortest-path search
- Sequential ordering
- Identification of parameters
Conclusions
- In digital optimization, variables are restricted to discrete values
- The objective function is the main component of the optimization process
- Local minima, constraints and optimization speed are the main problems
- Heuristic algorithms are guaranteed neither to find an optimal solution nor to run quickly; however, they are still better than a low-quality deterministic local minimum provided by a classical method
Questions
- The importance of objective functions, with examples.
- What is the difference between digital and continuous optimization?
- What are the main optimization problems?
- What are the advantages of heuristic methods compared with classical ones?
- The principles of the genetic algorithm.
- Application areas of digital optimization.