
A PARALLEL FORMULATION OF THE SPATIAL AUTO-REGRESSION MODEL FOR MINING LARGE GEO-SPATIAL DATASETS
HPDM 2004 Workshop at SIAM Data Mining Conference
Barış M. Kazar, Shashi Shekhar, David J. Lilja, Daniel Boley
Army High Performance Computing Research Center (AHPCRC), Minnesota Supercomputing Institute (MSI), Digital Technology Center (DTC), University of Minnesota

Overview
- Motivation
- Classical and New Data-Mining Techniques
- Problem Definition
- Our Approach
- Experimental Results
- Conclusions and Future Work

Motivation
Widespread use of spatial databases is driving the mining of spatial patterns. Examples:
- The 1855 Asiatic cholera outbreak in London [Griffith]
- Fair lending [NYT, R. Nader]: correlation of bank locations with loan activity in poor neighborhoods
- Retail outlets [NYT; Walmart, McDonald's, etc.]: determining store locations by relating neighborhood maps to customer databases
- Crime hot-spot analysis [NYT, NIJ CML]: explaining clusters of sexual assaults by locating the addresses of sex offenders
- Ecology [Uygar]: explaining the locations of bird nests from structural environmental variables

Key Concept: Neighborhood Matrix (W)
- Given: a spatial framework (a regular grid) and its attributes
- The slide's figure shows a 4-neighborhood on the grid, together with the 6th row of the binary W and the 6th row of the row-normalized W
- W also allows other neighborhood definitions: distance-based, 8 or more neighbors
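To make the definition concrete, here is a minimal sketch (not the authors' Fortran code) that builds the binary and row-normalized W for a p-by-q regular grid with the 4-neighborhood; the cell-to-index mapping i*q + j is an assumption made here for illustration.

```python
# Minimal sketch of the neighborhood matrix W on a regular p-by-q grid with
# the 4-neighborhood. Cell (i, j) maps to row/column index i*q + j.
import numpy as np

def neighborhood_matrix(p, q):
    n = p * q
    W = np.zeros((n, n))
    for i in range(p):
        for j in range(q):
            r = i * q + j
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4 neighbors
                ni, nj = i + di, j + dj
                if 0 <= ni < p and 0 <= nj < q:
                    W[r, ni * q + nj] = 1.0           # binary W
    # Row-normalize: each row sums to 1 (boundary cells have fewer neighbors)
    W_norm = W / W.sum(axis=1, keepdims=True)
    return W, W_norm

W_bin, W_row = neighborhood_matrix(4, 4)  # a 16-by-16 W, as in the later slides
```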

Classical and New Data-Mining Techniques
Solving the spatial auto-regression model y = ρWy + Xβ + ε:
- ρ = 0, β ≠ 0: reduces to a least squares problem
- ρ ≠ 0, β = 0: reduces to an eigenvalue problem
- General case (ρ ≠ 0, β ≠ 0): computationally expensive maximum likelihood estimation
- Needs a parallel implementation to scale up
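For intuition on why eigenvalues appear in the general case, here is a hedged Python sketch of the standard concentrated SAR log-likelihood (illustrative only, not the paper's Fortran implementation; the function and parameter names are made up here): the log-determinant ln|I − ρW| reduces to Σᵢ ln(1 − ρλᵢ) over the eigenvalues λᵢ of W, so all eigenvalues can be computed once and reused for every candidate ρ.

```python
# Concentrated SAR log-likelihood for y = rho*W*y + X*beta + eps, sketched
# with numpy. 'lam' holds the precomputed eigenvalues of W; rho is assumed to
# lie in the stable interval so that 1 - rho*lam_i > 0 for all i.
import numpy as np

def concentrated_log_likelihood(rho, lam, W, X, y):
    n = len(y)
    z = y - rho * (W @ y)                           # (I - rho*W) y
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)    # embedded least squares
    sse = np.sum((z - X @ beta) ** 2)
    log_det = np.sum(np.log(1.0 - rho * lam))       # ln|I - rho*W| via eigenvalues
    return log_det - 0.5 * n * np.log(sse / n)
```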

Related Work & Our Contributions
- Related work: Li, 1996. Limitation: solved only the 1-D problem.
- Our contributions:
  - A parallel solution for 2-D problems
  - Portable software in Fortran 77
  - An application of hybrid parallelism: the MPI messaging system plus OpenMP compiler directives

A Serial Solution
Three stages: (A) compute the eigenvalues of W, (B) golden section search to calculate the ML function, (C) least squares.
Stage A (compute eigenvalues):
- Produces the dense neighborhood matrix W
- Forms synthetic data y
- Makes W symmetric
- Householder transformation: converts the dense symmetric matrix to a tridiagonal matrix
- QL transformation: computes all eigenvalues of the tridiagonal matrix
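A minimal sketch of Stage A, under the assumption that the row-normalized W = D⁻¹B (B the symmetric binary neighborhood matrix, D the diagonal matrix of row sums) is similar to the symmetric matrix D^(-1/2) B D^(-1/2) and therefore shares its real eigenvalues; scipy's symmetric eigensolver stands in for the slides' Householder and QL transformations.

```python
# Sketch of Stage A (not the paper's Fortran): symmetrize, then compute all
# eigenvalues. D^{-1/2} B D^{-1/2} is symmetric and similar to W = D^{-1} B,
# so both have the same real eigenvalues.
import numpy as np
from scipy.linalg import eigvalsh

def stage_a_eigenvalues(B):
    d = B.sum(axis=1)                      # neighbor counts per cell
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    W_sym = D_inv_sqrt @ B @ D_inv_sqrt    # symmetric, same spectrum as D^{-1} B
    # eigvalsh internally reduces the dense symmetric matrix to tridiagonal
    # form (a Householder reduction) before solving, mirroring the pipeline.
    return eigvalsh(W_sym)
```

Stage B could then maximize the concentrated log-likelihood over ρ with a golden section search (e.g. scipy.optimize.golden on the negated objective), and Stage C is the embedded least squares fit.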

Serial Response Times (sec)
Stage A is the bottleneck; Stages B and C contribute very little to the total response time.

Problem Definition
- Given: a sequential solution procedure, the "Serial Dense Matrix Approach", for one-dimensional geo-spaces
- Find: a parallel formulation of the Serial Dense Matrix Approach for multi-dimensional geo-spaces
- Constraints: ε ~ N(0, σ²I) IID; a reasonably efficient parallel implementation; the parallel platform; the size of W (large vs. small, dense vs. sparse)
- Objective: portable and scalable software

Our Approach – Parallel Spatial Auto-Regression
- Function vs. data partitioning:
  - Function partitioning: each processor works on the same data with different instructions
  - Data partitioning (applied here): each processor works on different data with the same instructions (see the sketch below)
- Implementation platform: Fortran with the MPI and OpenMP APIs; no machine-specific compiler directives, for portability and to help software development and technology transfer
- Other performance tuning: static terms are computed once
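As a hedged illustration of the data-partitioning idea (the paper's implementation is Fortran 77 with MPI and OpenMP; mpi4py here is only a stand-in), every rank runs the same instructions on its own subset of rows, and the ranks synchronize through a reduction:

```python
# Illustrative only: "same instructions, different data". Each MPI rank takes
# a round-robin share of the rows, applies the same kernel, and the partial
# results are combined with a reduction (a synchronization point).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 16
W = np.ones((n, n)) / n                     # placeholder data, same on all ranks
my_rows = range(rank, n, size)              # round-robin partition, chunk size 1
local = sum(W[i].sum() for i in my_rows)    # stand-in for the per-row kernel
total = comm.allreduce(local, op=MPI.SUM)   # all ranks synchronize here
if rank == 0:
    print("combined result:", total)
```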

Data Partitioning at a Smaller Scale
- W is 16-by-16 and is partitioned across 4 processors; the chunk size can be set by the user
- Contiguous vs. round-robin with a chunk size; the resulting per-processor workloads are P1: 40 vs. 58, P2: 36 vs. 42, P3: 32 vs. 26, P4: 28 vs. 10 (see the sketch below)
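The slide's numbers can be reproduced under the assumption (made here, not stated on the slide) that the work for row i of the 16-by-16 matrix shrinks linearly, work(i) = n − i for i = 0..n−1, as in Householder-style reductions; this hedged sketch then compares the two partitionings:

```python
# Compare contiguous vs. round-robin row partitioning for a triangular
# workload: work for row i is n - i, an assumption chosen to match the
# slide's totals (58+42+26+10 = 40+36+32+28 = 136 = 16+15+...+1).
def contiguous(n, p):
    block = n // p
    return [list(range(k * block, (k + 1) * block)) for k in range(p)]

def round_robin(n, p, chunk):
    parts = [[] for _ in range(p)]
    for start in range(0, n, chunk):
        parts[(start // chunk) % p].extend(range(start, min(start + chunk, n)))
    return parts

n, p = 16, 4
work = lambda rows: sum(n - i for i in rows)
print([work(r) for r in contiguous(n, p)])      # [58, 42, 26, 10]: imbalanced
print([work(r) for r in round_robin(n, p, 1)])  # [40, 36, 32, 28]: more even
```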

Data Partitioning & Synchronization
- Stage A: contiguous for rectangular loops & round-robin with chunk size 4
- Stage B: contiguous
- Stage C: contiguous
- The arrows between the stages (A: compute eigenvalues → B: golden section search / calculate ML function → C: least squares) are synchronization points for the parallel solution; there are synchronization points within the stages as well

A Parallel Formulation of The Spatial Auto-Regression Model for Mining Large Geo-spatial Datasets13 Experimental Design

A Parallel Formulation of The Spatial Auto-Regression Model for Mining Large Geo-spatial Datasets14 Experimental Results – Effect of Load Balancing

Experimental Results – Effect of Problem Size

Experimental Results – Effect of Chunk Size
There is a critical chunk size at which the speedup reaches its maximum. This value is higher for dynamic scheduling, to compensate for the scheduling overhead. At the critical chunk size, the workload is distributed most evenly across the processors.

Experimental Results – Effect of # of Processors

Summary
- Developed a parallel formulation of the spatial auto-regression model
- Estimates the maximum likelihood on regular square tessellations: 1-D and 2-D planar surface partitionings for location-prediction problems
- Used dense eigenvalue computation and hybrid parallel programming (MPI + OpenMP)

Future Work
1. Understand the reasons for inefficiencies: an algebraic cost model for speedup measurements on different architectures
2. Fine-tune the implemented parallel formulation; consider alternate parallel formulations
3. Parallelize other serial solutions using sparse-matrix techniques: Chebyshev polynomial approximation, Markov Chain Monte Carlo estimation

Acknowledgments & Final Word
- Army High Performance Computing Research Center (AHPCRC)
- Minnesota Supercomputing Institute (MSI)
- Digital Technology Center (DTC)
- Spatial Database Group members
- ARCTiC Labs Group members
- Dr. Sanjay Chawla, Dr. Kelley Pace, Dr. James LeSage
THANK YOU VERY MUCH. Questions?