Multidimensional Integration Package: DICE and its parallelization

Presentation transcript:

Multidimensional Integration Package: DICE and its parallelization
F.Yuasa / KEK, K.Tobimatsu / Kogakuin Univ., S.Kawabata / KEK
ACAT2002, June 2002 at MSU, Moscow

BASES: Multidimensional Integration Package
- Stratified and importance sampling method (a sketch follows below)
- Singular functions can be integrated
- Up to 100 dimensions
- Heavily used in the GRACE framework
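The importance-sampling part of the method can be sketched in a few lines of Fortran, the language the integrands in this talk are written in. This is only an illustration of the technique, not the BASES code; the one-dimensional integrand exp(x)/sqrt(x), the sampling density and the sample size are choices made for the example. Points are drawn with a density that mimics the singularity, so the weights f/g stay finite and the variance stays small.

  ! Minimal importance-sampling sketch (illustration only, not BASES).
  program importance_demo
    implicit none
    integer, parameter :: n = 100000
    integer :: i
    double precision :: u, x, w, s, s2, est, err
    s  = 0d0
    s2 = 0d0
    do i = 1, n
       call random_number(u)
       if (u <= 0d0) cycle          ! avoid the endpoint x = 0
       x = u*u                      ! x is distributed with density g(x) = 1/(2*sqrt(x))
       w = f(x) / g(x)              ! importance-sampling weight
       s  = s  + w
       s2 = s2 + w*w
    end do
    est = s / n
    err = sqrt((s2/n - est*est) / n)
    print *, 'integral of exp(x)/sqrt(x) over (0,1) ~', est, '+/-', err
  contains
    double precision function f(x)  ! integrand with an integrable singularity at x = 0
      double precision, intent(in) :: x
      f = exp(x) / sqrt(x)
    end function f
    double precision function g(x)  ! sampling density matching the singularity
      double precision, intent(in) :: x
      g = 1d0 / (2d0*sqrt(x))
    end function g
  end program importance_demo

Because g absorbs the 1/sqrt(x) behaviour, the weights are bounded and the usual 1/sqrt(n) Monte Carlo error applies to a well-behaved quantity; this is the sense in which singular functions can be integrated.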

[Figure: variable transformation from the (x, y) coordinates to new (X, Y) coordinates.]
When singularities run along a diagonal line, an appropriate variable transformation is needed.
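One standard transformation of this kind (an illustrative assumption; the slide does not spell out the formula) is a rotation of the integration variables: with X = (x + y)/sqrt(2) and Y = (x - y)/sqrt(2) the Jacobian is 1, and a singularity along the diagonal x = y becomes a singularity along the coordinate line Y = 0, which axis-aligned stratification and importance sampling can then handle.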

DICE
- Developed by K.Tobimatsu and S.Kawabata
  – First version of DICE in 1992
  – Research Reports of Kogakuin Univ. No.72 (1992)
- Divides the integration region into 2**Ndim hypercubes (see the sketch below)
- Two kinds of sampling method
- DICE input: Ndim, expected error, # of sampling points, maximum division level, maximum # of iterations
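As a small sketch of the 2**Ndim division (the idea only, not the DICE source), the subcubes obtained by halving a region once along every axis can be enumerated with the bits of an index; presumably the division level counts how many times this halving is repeated inside a selected subcube.

  ! Sketch: enumerate the 2**ndim subcubes of the unit cube (not the DICE source).
  program divide_demo
    implicit none
    integer, parameter :: ndim = 3
    integer :: icube, iaxis
    double precision :: lo(ndim), hi(ndim)
    do icube = 0, 2**ndim - 1
       do iaxis = 1, ndim
          ! bit (iaxis-1) of icube selects the lower or upper half along that axis
          if (btest(icube, iaxis - 1)) then
             lo(iaxis) = 0.5d0
             hi(iaxis) = 1.0d0
          else
             lo(iaxis) = 0.0d0
             hi(iaxis) = 0.5d0
          end if
       end do
       print *, 'subcube', icube, ':', (lo(iaxis), hi(iaxis), iaxis = 1, ndim)
    end do
  end program divide_demo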

How to divide the hypercube
[Figure: division of the Ndim = 2 hypercube at Level = 2 and Level = 3, showing regular and random sampling points.]

F.Yuasa at ACAT2002 Example 1


Example 2


Example 3


Example 4


Results of I4

Package      Eps = 10**(-1)    Eps = 10**(-2)
DICE-mpi     ( )E-02           ( )E-02
ParInt
BASES        ( )E-02           ( )E-02
Analytical results

Results of I4 (part 2)

Package      Eps = 10**(-3)    Eps = 10**(-4)
DICE-mpi     ( )E-02           ( )E-02
ParInt
BASES        ( )E-02           ( )E-02
Analytical results

Example 5: a more complicated integrand
- # of dimensions = 4
- # of lines in FORTRAN = about 300

Results of Example 5

Package                    Result    # of sample points
DICE-mpi (1 processor)     ( )E
ParInt1.1 (1 processor)    ( )E
BASES                      ( )E

Results of Example 5 (part 2)

Package                    Result    # of sample points
DICE-mpi (1 processor)     ( )E
ParInt1.1 (1 processor)    ( )E
BASES                      ( )E

Results of Example 5 (part 3)

Package                    Result            # of sample points
DICE-mpi (1 processor)     ( )E %
ParInt1.1 (1 processor)    we did not try
BASES                      we did not try

Parallelization
- We use MPI for the parallelization.
- Parallelization is useful for higher-dimensional integrands.
- Parallelization is useful for complicated integrands.
- Example 5 was calculated with the parallelized DICE, as sketched below.
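A minimal sketch of how such an MPI parallelization can look (an illustration under the assumption that the sample points are simply split across the ranks; this is not the DICE-mpi source, and ftest, ntot and the seeding are made up for the example):

  ! Sketch of MPI-parallel Monte Carlo integration (not the DICE-mpi source).
  program mc_mpi_demo
    implicit none
    include 'mpif.h'
    integer, parameter :: ntot = 1000000
    integer :: ierr, myrank, nprocs, nloc, i, nseed
    integer, allocatable :: seed(:)
    double precision :: x(4), fsum, gsum
    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
    call random_seed(size=nseed)            ! crude per-rank seeding so that
    allocate(seed(nseed))                   ! the ranks draw different points
    seed = 12345 + myrank
    call random_seed(put=seed)
    nloc = ntot / nprocs                    ! this rank's share of the sample points
    fsum = 0d0
    do i = 1, nloc
       call random_number(x)                ! uniform point in the 4-dim unit cube
       fsum = fsum + ftest(x)
    end do
    call MPI_REDUCE(fsum, gsum, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0, &
                    MPI_COMM_WORLD, ierr)
    if (myrank == 0) print *, 'estimate =', gsum / dble(nloc*nprocs)
    call MPI_FINALIZE(ierr)
  contains
    double precision function ftest(x)      ! placeholder 4-dimensional integrand
      double precision, intent(in) :: x(4)
      ftest = 1d0 / (1d0 + sum(x))
    end function ftest
  end program mc_mpi_demo

Because the sample points are independent, the ranks communicate only once, when the partial sums are reduced; the more expensive the integrand (as in Example 5), the more the run time is dominated by the evaluations themselves, which is why the scalability is good.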

Speed Up (Example 5)

# of CPUs         1    2    4    8
CPU time [sec]
Speed up
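For reference, the speed-up quoted here is presumably the usual ratio T(1 CPU) / T(p CPUs), which equals p for ideal scaling; as an illustration with made-up numbers, 800 s on one CPU and 110 s on eight CPUs would correspond to a speed-up of about 7.3.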

Summary
- We have developed DICE.
- DICE can run on vector processors.
- DICE can run on parallel processors.
- We used MPI for the parallelization.
- For complicated integrands, the parallelization shows good scalability.