Stochastic optimization of energy systems
Cosmin Petra, Argonne National Laboratory

A) Project Overview
Real-time optimization (power dispatch and unit commitment) of the power grid in the presence of uncertainty (renewable energy, smart grid, weather)
Stochastic formulations reduce both short-term (production) and long-term (reserve) costs, stabilize prices, and increase reliability
Team: Mihai Anitescu, Cosmin Petra, Miles Lubin (algorithms and implementation); Victor Zavala and Emil Constantinescu (modeling and data)
Funding: DOE Applied Math, DOE ASCR MMICC center, DOE INCITE Award - 10 million core-hours for 2012

B) Science Lesson
What does the application do, and how?
Stochastic optimization: decisions taken now are influenced by future random conditions (multiple scenarios)
Unit commitment: determine the optimal on/off schedule of thermal (coal, natural gas, nuclear) generators; sets day-ahead market prices (solved hourly)
Economic dispatch: sets real-time market prices (solved every min.)
Scenario-based parallelization; the "now" decisions couple the scenarios
PIPS suite (PIPS-IPM, PIPS-S): parallel implementations that exploit the stochastic structure at the linear algebra level

C) Parallel Programming Model
MPI + OpenMP
–Scenario computations accelerated with OpenMP (sparse linear algebra)
–Inter-scenario communication with MPI
–Distributed dense linear algebra for the coupling (done with Elemental)
C++, CMake build system
Runs on the "Fusion" cluster and the "Intrepid" BG/P
An asynchronous implementation may require a new programming model (X+SMP). Yeah, I know … 99.99% X will be MPI

D) Computational Methods
Standard interior-point method (PIPS-IPM) and dual simplex (PIPS-S)
In-house parallel linear algebra
Linear algebra kernels
–Sparse: MA57, WSMP, PARDISO
–Dense: LAPACK, Elemental
Next: PIPS-L – Lagrangian decomposition for integer problems
–"Dual decomposition" method
–Based on multi-threaded integer programming kernels (CBC, SCIP) and PIPS-IPM
–Asynchronous master-worker framework to deal with load imbalance across scenarios

E) I/O Patterns and Strategy
I/O requirements are minimal: one input file per MPI process at startup
The end result is the optimal cost (a double) and the decision variables (vectors of relatively small size)
Restarting is done by saving the intermediate iterates (vectors)
Future plans: parallel algebraic specification of the problem
–Generating the input data IN PARALLEL from an algebraic/mathematical description of the problem (AMPL-like script)
–Currently done serially

F) Visualization and Analysis
Output is small; no special analysis required

G) Performance
Bottlenecks to better performance
–SMP sparse kernels (PIPS-IPM)
–Memory bandwidth (PIPS-S)
Bottlenecks to better scaling
–Dense kernels (PIPS-IPM)
–Load imbalance (PIPS-S, PIPS-L)
Collaboration with Olaf Schenk: PARDISO – SMP solves with sparse right-hand sides
PIPS-L – asynchronous optimization algorithms

H) Tools
How do you debug your code?
–cerr, cout

I) Status and Scalability
PIPS-IPM scaling
Efficiency is likely to decrease with faster SMP scenario computations
Factors that adversely affect scalability
–Serial bottlenecks: dense linear algebra for the "now" decisions
–Using Elemental improves scaling for some problems

I) Status and Scalability
PIPS-S scaling efficiency is
–31% on Fusion from 1 to 256 cores
–35% on Intrepid from 2048 to 8192 cores
Factors that adversely affect scalability
–Serial bottleneck (the "now" decisions)
–Communication (10 collectives per iteration; the cost of one iteration is O(ms))
–Load imbalance
Intended to be used on up to a few hundred cores
PIPS-S is the first HPC implementation of the simplex method

J) Roadmap
2 years from now? Solve grid optimization models with
–Better resolution and a larger time horizon
–A larger network: the continental US grid
–More uncertainty
–Integer variables