Stochastic optimization in high dimension. Stéphane Vialle, Xavier Warin. Sophia, 21/10/08.

EDF R&D, 21 October 2008

Slide 2: EDF and its customers
The assets:
- 58 nuclear power plants on 19 sites (86.6%)
- 14 thermal power plants (4.6%)
- 440 hydro plants and 220 dams (8.8%)
- solar energy and wind power (< 0.5%)
The customers:
- many different kinds of customers
- many styles of contracts (for example swing contracts, where the producer can suspend delivery)

Slide 3: Asset management at EDF
- Manage water stocks, fuel, and customer contracts.
- Goals:
  - maximize the expected cash flow
  - minimize risk
- Under constraints:
  - satisfy the customer load
  - respect pollution constraints

Slide 4: Asset management at EDF
Hazards:
- demand
- hydraulicity (inflows)
- weather patterns (cold means high demand)
- market prices
- asset outages
This is a stochastic control problem in high dimension: the number of state variables is linked to
- the number of hazards
- the number of stocks to be dealt with

Slide 5: Associated numerical methods
- Decomposition methods (Lemaréchal)
  - very effective for time resolution
  - duality gap (non-convexity)
- Dynamic programming (Bellman, 1957)
  - very general (non-convex, binary, ...)
  - faces the curse of dimensionality; global risk constraints are difficult to implement
- Stochastic Dual Dynamic Programming (Pereira)
  - approximates convex Bellman values for the stocks (requires convexity)
  - Benders cuts lead to a linear programming problem
  - global risk constraints are difficult to implement

Slide 6: EDF process
- Optimize the cost function J with approximations and keep all the optimal commands at each step (no asset constraints such as ramp constraints, minimum time before restarting, etc.)
- Use a Monte Carlo simulator with all the assets and constraints to calculate accurate average earnings and risk measures
Goal: incorporate more stocks in the optimizer to be more accurate. A way to do it:
- use parallelism for the stochastic programming optimization
- see the influence of parallelizing the optimization on the simulation

Slide 7: Dynamic programming implementation
- Use Monte Carlo simulations for the hazards (flexible, easy to use for risk)
- Backward algorithm (Longstaff-Schwartz version)
- At t = 0, interpolate J for the current stock c and the current uncertainty s
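The backward sweep with stock interpolation can be sketched for a single stock. The following toy is illustrative only, not the EDF code: deterministic prices stand in for the Monte Carlo hazard scenarios, and all names are made up for the example.

```cpp
#include <algorithm>
#include <vector>

// Linear interpolation of grid values v (regular grid on [0, cmax]) at stock c.
double interp(const std::vector<double>& v, double cmax, double c) {
    int n = static_cast<int>(v.size()) - 1;
    double x = std::clamp(c / cmax * n, 0.0, static_cast<double>(n));
    int i = std::min(static_cast<int>(x), n - 1);
    double w = x - i;
    return (1.0 - w) * v[i] + w * v[i + 1];
}

// Backward recursion J_t(c) = max_u { price_t * u + J_{t+1}(c - u) },
// keeping the stock in [0, cmax]; returns J_0 on the grid (terminal J_T = 0).
std::vector<double> backwardDP(const std::vector<double>& price,
                               const std::vector<double>& commands,
                               double cmax, int nGrid) {
    std::vector<double> next(nGrid + 1, 0.0);
    for (int t = static_cast<int>(price.size()) - 1; t >= 0; --t) {
        std::vector<double> cur(nGrid + 1);
        for (int i = 0; i <= nGrid; ++i) {          // the c (stock) nest
            double c = cmax * i / nGrid;
            double best = -1e300;
            for (double u : commands) {             // the nc (command) nest
                double cNew = c - u;
                if (cNew < 0.0 || cNew > cmax) continue;  // infeasible command
                best = std::max(best, price[t] * u + interp(next, cmax, cNew));
            }
            cur[i] = best;
        }
        next = cur;
    }
    return next;
}
```

With prices {1, 2}, commands {0, 1} and a full stock of 2, the recursion releases one unit per step, selling more when the price is high.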

Slide 8: Algorithmic issues
- Sequential in time
- Rather sequential for the nc (command) nest
- Parallel for the c (stock) nest, if all values are available in memory for all (c, s)
- With N discretization points in each direction, N^d is the number of stock points c to explore
IDEA: parallelize the c nest by splitting the hypercube of stock levels.
The same communication scheme is used for the optimization and for the simulation (the commands are spread over the processors along with the stock levels).
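The nesting described above can be sketched as a loop skeleton (all names are illustrative, not from the EDF code; the OpenMP pragma marks where the c nest parallelizes, the time loop stays sequential):

```cpp
#include <algorithm>
#include <vector>

// Skeleton of the backward sweep: sequential time loop, parallel stock loop,
// sequential command reduction. `payoff` is a stand-in for the one-step value.
void backwardSweep(int nSteps, int nStockPoints,
                   const std::vector<double>& commands,
                   std::vector<double>& values,   // J_{t+1}, updated in place
                   double (*payoff)(int t, int c, double u,
                                    const std::vector<double>& next)) {
    for (int t = nSteps - 1; t >= 0; --t) {       // sequential in time
        std::vector<double> cur(nStockPoints);
        #pragma omp parallel for                   // parallel over the c nest
        for (int c = 0; c < nStockPoints; ++c) {
            double best = -1e300;
            for (double u : commands)              // sequential nc nest
                best = std::max(best, payoff(t, c, u, values));
            cur[c] = best;
        }
        values = cur;                              // roll J_{t+1} <- J_t
    }
}
```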

Slide 9: Example splitting for 3 stocks
[Figure: the cube of (stock-1, stock-2, stock-3) levels is split into sub-cubes, one per processor Pi; each processor's sub-cube at t_n has an influence area on the t_{n+1} data needed for its computations.]
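The splitting of the stock hypercube can be sketched as a block decomposition, assuming a regular grid and a Cartesian processor grid (the names and the remainder-handling policy are illustrative, not from the EDF code):

```cpp
#include <algorithm>
#include <array>

struct SubCube {                 // [lo, hi) index range in each dimension
    std::array<int, 3> lo, hi;
};

// Split points[d] grid points over procs[d] processors in each dimension,
// giving processor (p0, p1, p2) a contiguous block; the remainder is spread
// over the first blocks so sizes differ by at most one point per dimension.
SubCube ownedBlock(const std::array<int, 3>& points,
                   const std::array<int, 3>& procs,
                   const std::array<int, 3>& p) {
    SubCube s{};
    for (int d = 0; d < 3; ++d) {
        int q = points[d] / procs[d], r = points[d] % procs[d];
        s.lo[d] = p[d] * q + std::min(p[d], r);
        s.hi[d] = s.lo[d] + q + (p[d] < r ? 1 : 0);
    }
    return s;
}
```

For example, 225 water-stock levels split over 4 processors give blocks of 57 or 56 consecutive levels each.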

Slide 10: 2D example of routing (receive)
[Figure: a 4 × 4 processor grid P0…P15; the sub-cubes that P5 receives are highlighted.]
Routing plan: what happens on P5 (for example)?
It determines all the 2D sub-cubes it has to receive from the other processors.

Slide 11: 2D example of routing (send)
[Figure: the 4 × 4 processor grid; the influence area of P0 overlaps P5's sub-cube.]
Routing plan: what happens on P5 (for example)?
It determines all the 2D sub-cubes it has to send to the other processors:
- compute the "influence area" of P0
- compute its intersection with P5's own t_{n+1} 2D sub-cube of data

Slide 12: 2D example of routing (send, continued)
[Figure: the 4 × 4 processor grid.]
Routing plan: what happens on P5 (for example)?
It determines all the 2D sub-cubes it has to send to the other processors:
- repeat with the other processors…
The routing plan of P5 is then complete: execute it quickly!
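The send side of the routing plan amounts to rectangle intersections. In this sketch the influence area is modelled as a processor's block grown by a fixed halo width, which is an assumption for illustration; the real scheme depends on the commands tested:

```cpp
#include <algorithm>
#include <optional>

struct Rect { int lo[2], hi[2]; };   // [lo, hi) index ranges in 2D

// Influence area of a block: the block grown by `halo` points on each side,
// clipped to the global grid [0, gridHi).
Rect influenceArea(const Rect& b, int halo, const int gridHi[2]) {
    Rect r;
    for (int d = 0; d < 2; ++d) {
        r.lo[d] = std::max(0, b.lo[d] - halo);
        r.hi[d] = std::min(gridHi[d], b.hi[d] + halo);
    }
    return r;
}

// Intersection of another processor's influence area with my own block:
// the sub-rectangle of my t_{n+1} data I must send it (nullopt if empty).
std::optional<Rect> toSend(const Rect& mine, const Rect& theirInfluence) {
    Rect r;
    for (int d = 0; d < 2; ++d) {
        r.lo[d] = std::max(mine.lo[d], theirInfluence.lo[d]);
        r.hi[d] = std::min(mine.hi[d], theirInfluence.hi[d]);
        if (r.lo[d] >= r.hi[d]) return std::nullopt;
    }
    return r;
}
```

Repeating `toSend` over every other processor yields the complete send half of the routing plan; the receive half is the same computation with the roles swapped.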

Slide 13: C++ implementation
Parallelization:
- MPI: MPICH-1, Open MPI, IBM MPI
- communication routines: MPI_Issend, MPI_Irecv, MPI_Wait
  - overlap all communications when executing a routing plan (to speed up)
  - do not use "extra communication buffers" (to size up)
- plus multithreading: Intel TBB or OpenMP
  - to speed up and to size up further than with message passing alone
Scientific computing libraries: Blitz++, Boost, CLAPACK, SPRNG.
Total: lines of C++ code, with 10% for parallelization management.
Parallelization can be removed by preprocessing for small debug cases.
The same source code runs on the PC cluster, Blue Gene/L and Blue Gene/P.

Slide 14: Test case presentation
- Optimization and simulation over 518 days with a time step of one day
- One water stock:
  - 225 discretization points (c)
  - 5 commands (0 to 5000 MW each day) for nc
- 6 stocks of monthly future products with delivery of energy (peak and off-peak hours):
  - 5 discretization points for each one
  - 5 commands ( MW (sell) to 2000 MW (buy)), tested every 2 weeks
- Aggregated view of the thermal assets
- Up to 225 × 5^6 discretization points and 5^7 command combinations to test
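The quoted sizes follow directly from the discretizations above (helper names are illustrative):

```cpp
#include <cstdint>

// One water stock with 225 levels times six future-product stocks with
// 5 levels each: the number of stock grid points per time step.
std::int64_t stockPoints() {
    std::int64_t n = 225;
    for (int i = 0; i < 6; ++i) n *= 5;
    return n;                      // 225 * 5^6
}

// Seven command choices (1 water + 6 futures) with 5 options each:
// the number of command combinations to test per stock point.
std::int64_t commandCombos() {
    std::int64_t n = 1;
    for (int i = 0; i < 7; ++i) n *= 5;
    return n;                      // 5^7
}
```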

Slide 15: Results
Intel cluster: 256 × 2 cores; Blue Gene: 8192 × 4 cores.
Comparison of Blue Gene and the cluster, without multithreading.
[Performance plots not reproduced in the transcript.]

Slide 16: Results
Comparison of Blue Gene and the cluster, with multithreading.
[Performance plots not reproduced in the transcript.]

Slide 17: Results
- Some optimizations carried out for Blue Gene should also improve the results on Intel.
- ICC should be used instead of GCC on Intel.
- Some further optimizations on Blue Gene should bring the optimization part to around 1000 s with 8192 MPI processes of 4 threads each.

Slide 18: Conclusion
- A tool developed for stochastic optimization with a limited number of stocks (< 10).
- It will provide reference calculations for other methods (for example, methods assuming convexity), giving results on how far from optimality they are.
- To be tested on EDF data without approximation of the assets.
- A candidate for optimization on GPU clusters.