The AASPI Software Computational Environment
Tim Kwiatkowski
Welcome Consortium Members
December 9, 2011

Overview
- Hardware: clusters; multiprocessor / multi-core
- Software computational environment: compilers, libraries, graphics
- Software design
- Directory layout
- The future

Hardware − Clusters
The AASPI software was originally designed to run on Unix/Linux clusters using MPI (Message Passing Interface).
- Large granularity: no need for expensive interconnects; Gigabit Ethernet is sufficient.
- Depending on the size of the cluster, it can be difficult to administer.

Hardware − Multiprocessor / Multi-core
Newer multi-core processors have become available.
- Currently no explicit multi-threading; instead we run MPI using "loopback" communication on a single machine.
- Simpler to administer.
- Can be grown into a cluster.
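To make the coarse-grained MPI style concrete, here is a minimal Fortran 90 sketch (hypothetical code, not part of AASPI): each rank works on its own block of traces and only a single reduction crosses the network, which is why Gigabit Ethernet suffices, and why the same binary also runs on a single multi-core workstation over loopback, e.g. with "mpirun -np 8 ./block_demo".

! Minimal coarse-grained MPI sketch (hypothetical, not AASPI code):
! each rank independently processes its own contiguous block of traces,
! so communication is limited to startup and one final reduction.
program block_demo
  use mpi
  implicit none
  integer, parameter :: ntraces = 10000
  integer :: ierr, rank, nprocs, first, last, i
  real    :: local_sum, global_sum

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! Divide the trace range into nearly equal blocks, one per rank.
  first = rank * ntraces / nprocs + 1
  last  = (rank + 1) * ntraces / nprocs

  local_sum = 0.0
  do i = first, last
     ! Stand-in for per-trace work (filtering, attribute computation, ...)
     local_sum = local_sum + real(i)
  end do

  ! Single collective at the end; a commodity network is more than adequate.
  call MPI_Reduce(local_sum, global_sum, 1, MPI_REAL, MPI_SUM, 0, &
                  MPI_COMM_WORLD, ierr)
  if (rank == 0) print *, 'Processed', ntraces, 'traces; checksum =', global_sum
  call MPI_Finalize(ierr)
end program block_demo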

Hardware − Our Current Resources
Older resources:
- diamond – Sun Enterprise 450
- fluorite – dual-CPU 2.4 GHz Xeon, 5.2 TB storage (offline)
Newer resources:
- Opal – dual quad-core 3.0 GHz Xeon, 16 GB RAM, 15 TB storage
- Ruby – quad quad-core 1.6 GHz Xeon, 32 GB RAM, 11 TB storage
- Jade – dual quad-core 2.8 GHz Xeon, 48 GB RAM, 5 TB storage
- Hematite – dual quad-core 3.46 GHz Xeon, 48 GB RAM, 5 TB storage
- Tripolite – dual six-core 3.46 GHz Xeon, 48 GB RAM, 5 TB storage
- Corundum – dual quad-core 2.33 GHz Xeon, 32 GB RAM, 10 TB storage (file server)
- 22 Windows XP / Windows 7 64-bit PCs/workstations

Tape Reading Ability − Our Current Resources
Older resources: 8 mm Exabyte, DLT-8000, IBM 3590B
Newer resources: LTO-4

Hardware − Our Current Resources – Cluster Resources
OSCER (Oklahoma Supercomputing Center for Education & Research), as a whole:
- 536 user-accessible nodes, 8,800 GB aggregate RAM, 100 TB usable fast scratch storage, GFlop peak, GFlop sustained
Our own dedicated OSCER nodes / storage:
- Dual quad-core compute nodes (two clock speeds, three of each), 16 GB RAM
- Storage node: dual quad-core 2.33 GHz, 16 GB RAM, 18 TB disk storage
Muntu:
- 1 management node, 1 head node, 14 compute nodes
- Each node: dual-processor 3.06 GHz, 4 GB RAM; total disk storage ~2 TB

Hardware Recommendations
What type of hardware do I need to run the AASPI software? The short answer: it depends.
Entry-level suggestion:
- Dual socket with dual-, quad-, or six-core CPUs, 2.5 GHz or faster
- 2 GB RAM per core
- More than 2 TB of disk capacity

Software Environment − Operating System
As shipped, the AASPI software is pre-compiled. It should work on most Red Hat 4 Release 4 and higher installations.
Some needed packages: blas, lapack, libf2c, bzip2-libs, zlib, the X11 packages required to run the GUI, Mesa-libGL, and Mesa-libGLU.

Software Environment − 64 vs. 32 bit
- The majority of the AASPI code has been converted to the aaspi_io framework, which remains compatible with SEP files; SEPlib had limited us to 32-bit code.
- We still compile most of our code as 32-bit, but we have begun to release both 32-bit and 64-bit binaries.
- Certain codes, such as the SOM code, require more memory, and 32-bit codes are limited to 2 GB per process.
- However, we are still using the SEP utilities for display.
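A small Fortran sketch (hypothetical, not AASPI code) illustrates why memory-hungry programs such as the SOM code push us toward 64-bit binaries: an allocation larger than roughly 2 GB normally fails, or is refused by the runtime, in a 32-bit build, while the same source compiled 64-bit on a machine with enough memory succeeds.

! Hypothetical sketch: probe whether a ~3.2 GB work array can be allocated.
! A 32-bit build normally cannot (per-process limit near 2 GB); a 64-bit
! build on a machine with sufficient RAM/swap can.
program alloc_probe
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  integer(kind=8), parameter :: nwords = 400_8 * 1000_8 * 1000_8  ! ~3.2 GB of doubles
  real(dp), allocatable :: work(:)
  integer :: stat

  allocate(work(nwords), stat=stat)
  if (stat /= 0) then
     print *, 'Allocation of ~3.2 GB failed (stat =', stat, ') - likely a 32-bit build.'
  else
     print *, 'Allocation succeeded - 64-bit address space available.'
     work = 0.0_dp   ! touch the memory so the allocation is real
     deallocate(work)
  end if
end program alloc_probe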

Software Environment − Compilers
We have chosen to pre-compile the AASPI software to make your life easier. However, if you are compiling on your own…
- Required: a good Fortran 90 compiler, such as the Portland Group Fortran compiler or the Intel Fortran 90/95 compiler. We use the Intel Fortran compiler.
- Required: a good C/C++ compiler. GCC is fine.
- Required: patience! Most of the compiling issues come from the third-party packages.

Software Environment − Libraries
The software depends on several external libraries:
- Seismic Unix (Center for Wave Phenomena, Colorado School of Mines)
- SEPlib (Stanford Exploration Project): SEPlib utilities are primarily used for display; most of the AASPI code no longer requires SEPlib (or SU), and SEG-Y import/export uses aaspi_io, which requires neither.
- OpenMPI (we have used MPICH in the past)
- FFTW (mostly version 2 at present, migrating to version 3)
- LAPACK & BLAS (the Intel Math Kernel Library could be used as a substitute)
- The FOX Toolkit (GUI and seismic data display)
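As an illustration of the FFTW dependency, here is a minimal Fortran sketch (hypothetical, not AASPI code) of the version-3 legacy Fortran interface, the migration target mentioned above; version 2 uses differently named routines but the same plan / execute / destroy pattern. It is built by linking against -lfftw3.

! Hypothetical sketch of the FFTW 3 legacy Fortran interface
! (plan -> execute -> destroy).
program fftw3_demo
  implicit none
  include 'fftw3.f'                     ! constants installed with FFTW 3
  integer, parameter :: dp = kind(1.0d0)
  integer, parameter :: n = 1024
  integer(kind=8) :: plan               ! FFTW plans are held in an 8-byte integer
  complex(dp) :: in(n), out(n)
  real(dp), parameter :: pi = 3.141592653589793d0
  integer :: i

  ! Fill the input with a simple 4-cycle test signal.
  do i = 1, n
     in(i) = cmplx(cos(2.0d0 * pi * 4.0d0 * (i - 1) / n), 0.0d0, kind=dp)
  end do

  call dfftw_plan_dft_1d(plan, n, in, out, FFTW_FORWARD, FFTW_ESTIMATE)
  call dfftw_execute(plan)
  call dfftw_destroy_plan(plan)

  print *, 'Peak magnitude in bin:', maxloc(abs(out))
end program fftw3_demo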

Software Environment − Graphics
We now have a GUI, based on the FOX Toolkit; on Linux this means it is X Windows based. How do we use it? Some solutions:
- Use a desktop Linux workstation
- Use a Mac
- ThinAnywhere
- VNC
- Hummingbird Exceed
- Xming
- Cygwin

Software Design − Practices/Goals
- Use modern programming languages: Fortran 90/95, C/C++
- Modular design: maximize code re-use; use Fortran 90/95 modules/interfaces; use C++ classes/template programming
- Libraries: organize processes/functions into logical, reusable libraries
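As a sketch of the module/interface style listed above (hypothetical names, not actual AASPI code), a Fortran 90 generic interface lets one public name resolve to single- or double-precision implementations, which is one common way modules promote code re-use.

! Hypothetical sketch of Fortran 90 modules and generic interfaces:
! one public name (rms_amplitude) resolves to the right precision at
! compile time, so calling code is reusable across data types.
module trace_utils
  implicit none
  private
  public :: rms_amplitude

  interface rms_amplitude
     module procedure rms_amplitude_sp, rms_amplitude_dp
  end interface

contains

  function rms_amplitude_sp(trace) result(rms)
    real(kind=4), intent(in) :: trace(:)
    real(kind=4) :: rms
    rms = sqrt(sum(trace**2) / max(size(trace), 1))
  end function rms_amplitude_sp

  function rms_amplitude_dp(trace) result(rms)
    real(kind=8), intent(in) :: trace(:)
    real(kind=8) :: rms
    rms = sqrt(sum(trace**2) / max(size(trace), 1))
  end function rms_amplitude_dp

end module trace_utils

program design_demo
  use trace_utils
  implicit none
  real(kind=4) :: t4(5) = (/ 1.0, 2.0, 3.0, 4.0, 5.0 /)
  real(kind=8) :: t8(5) = (/ 1.0d0, 2.0d0, 3.0d0, 4.0d0, 5.0d0 /)
  print *, 'single precision RMS:', rms_amplitude(t4)
  print *, 'double precision RMS:', rms_amplitude(t8)
end program design_demo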

Software Layout
Directory tree under the AASPI root:
- bin – precompiled binaries, 32-bit
- bin64 – precompiled binaries, 64-bit
- ext_lib – non-AASPI package compiled libraries
- ext_rpm – non-AASPI RPMs
- ext_src – non-AASPI packages, source
- include – AASPI include files (along with others)
- include64 – AASPI include files (64-bit Fortran module files)
- lib – AASPI libraries & other shared libraries
- lib64 – AASPI libraries, 64-bit
- man – AASPI man pages
- src – AASPI source code
- scripts – scripts: program wrappers and utilities

The Future
- Replacing the rest of the SEPlib dependencies: we hope to develop some display tools of our own as we move away from the SEP framework. At this point most of our code has been converted to our own API, but we are still shaking out bugs introduced by the conversion.
- MS Windows? Perhaps… All of our core code should be multiplatform, and MPI is available on Windows platforms via cluster services.

Future Computing − GPU Research (Still on our Radar)
- We are experimenting with the newest generation of equipment: GPUs (Graphics Processing Units).
- We had been working with CUDA (Compute Unified Device Architecture, from NVIDIA); OpenCL looks like a more promising road for code portability.
- The software development target for GPU processing will most likely be the desktop PC, as plug-ins for Petrel. However, OSCER does have a GPU cluster, which we have not yet tested.
- Certain codes may lend themselves more naturally to GPU processing than others.

AASPI Software Computational Environment
Tim Kwiatkowski
Thank you! Questions?