CCS Overview
Rene Salmon, Center for Computational Science

Introduction: What is CCS?
Founded June 2001. Goals:
- Establish new collaborations
- Provide infrastructure to exchange ideas
- Foster interdisciplinary research
- Support computational science research
- Provide high-end workstations and HPC hardware and software

Software
- Visualization: Tecplot, AVS
- Compilers: SGI, Absoft
- Math libraries: IMSL, BLAS
- Finite element modeling: ABAQUS, PATRAN
- Molecular dynamics: NAMD, Gaussian, Amber, VMD
- Matlab, Mathematica

High Performance Computing
Two architectures, joined by an interconnect: shared memory (SGI) and distributed memory (Linux cluster).

Multiprocessor Machines
Shared memory vs. distributed memory.

SGI (shared memory):
- 4 compute nodes, 32 CPUs
- 700 MHz MIPS R16000 processors
- 8 GB RAM; peak memory bandwidth 3.2 GB/s
- NUMAlink interconnect, 1.6 GB/s each direction
- 1 TB SGI storage array

Linux cluster (distributed memory):
- 34 nodes, 68 CPUs
- 2.4 GHz AMD Opteron processors
- 68 GB RAM; memory bandwidth 12.8 GB/s
- Gigabit Ethernet interconnect, 85 MB/s
- 1 TB storage array

Multiprocessor Machines
Shared memory: single OS; easier to program; inter-process communication with OpenMP.
Distributed memory: multiple OS instances; harder to program; inter-process communication with MPI.

Multiprocessor Machines
SGI: high cost; complex hardware; support contract; proprietary software (IRIX, compilers).
Linux cluster: low cost; commodity parts; community-driven support; open-source software (Linux, compilers).

Parallel Programming
OpenMP (Open specifications for Multiprocessing):
- Library and compiler directives
- Shared memory
- Thread based
- Process synchronization

MPI (Message Passing Interface):
- Libraries
- Distributed memory
- Process based
- Process synchronization
- Master/slave mode

Processes and Threads
Threads share a single address space and can access one another's variables, saving the time and memory that interprocess communication would otherwise cost.

OpenMP

program foobar
  implicit none
  integer, parameter :: n = 100000
  real :: x(n), z(n), a, b
  integer :: i
  a = 2.0; b = 1.0; x = 1.0
  ! The directive splits the loop iterations across threads
  !$omp parallel do
  do i = 1, n
     z(i) = a*x(i) + b
  end do
  !$omp end parallel do
end program foobar
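
To run, compile with an OpenMP-aware flag (for example -mp with the SGI MIPSpro compilers or -fopenmp with GNU gfortran; the right flag depends on the compiler) and set the thread count with the OMP_NUM_THREADS environment variable.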

MPI

program foobar
  use mpi
  implicit none
  integer, parameter :: SIZE_X = 100000
  real :: x(SIZE_X), z(SIZE_X), a, b
  real, allocatable :: x_local(:), z_local(:)
  integer :: myid, numprocs, ierr, i, j, n, data_chunk
  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)
  a = 2.0; b = 1.0; x = 1.0
  ! Each rank works on its own contiguous chunk of x
  ! (assumes SIZE_X is divisible by numprocs)
  data_chunk = SIZE_X/numprocs
  j = 1 + myid*data_chunk
  n = j + (data_chunk - 1)
  allocate(x_local(data_chunk), z_local(data_chunk))
  x_local = x(j:n)
  do i = 1, data_chunk
     z_local(i) = a*x_local(i) + b
  end do
  ! Collect every rank's partial result into z on rank 0
  call MPI_GATHER(z_local, data_chunk, MPI_REAL, z, data_chunk, MPI_REAL, 0, MPI_COMM_WORLD, ierr)
  call MPI_FINALIZE(ierr)
end program foobar
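
Built with an MPI compiler wrapper and launched through the MPI runtime, e.g. mpif90 foobar.f90 followed by mpirun -np 4 ./a.out (wrapper and launcher names vary between MPI implementations). The sketch assumes SIZE_X divides evenly by the number of processes.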

Queuing System: PBSPro
- Resource manager
- Schedules/decides when jobs run
- Allocates resources to jobs
- Full featured: supports preemption and priorities
- Supports parallel and single-CPU jobs
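
For illustration, a minimal PBS batch script might look like the sketch below; the queue name q1, the node and time requests, and the mpirun line are assumptions, not CCS documentation.

#!/bin/sh
# Request 4 nodes with 2 processors each, for at most 1 hour (illustrative numbers)
#PBS -l nodes=4:ppn=2
#PBS -l walltime=01:00:00
# Job name and queue (queue name q1 is hypothetical)
#PBS -N foobar
#PBS -q q1
# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR
mpirun -np 8 ./foobar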

CCS Queuing System
Q1: Lowest priority. Access for the entire Tulane community, for research purposes only.
Q2: Contribute intellectually to the leadership of CCS: giving (or arranging) seminars, serving on CCS committees.
Q3: Financial support from individual grants: personnel, computer/software purchases, computer/software maintenance.
Q4: Highest priority. Faculty and students with CCS-funded projects.

Grid Computing: Typical Workflow
1. Log in to the server
2. Compile
3. Move or prepare data
4. Create a job script and submit it to the queue
5. Monitor status
6. Get results
7. Move data
8. Visualization
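
With PBSPro, steps 4 and 5 map onto the standard commands: qsub job.pbs submits the script, qstat shows its status, and qdel removes it from the queue (the script name job.pbs is just an example).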

Grids Nationally
National Lambda Rail (NLR): nationwide optical fiber infrastructure.
Open Science Grid: a DOE and NSF roadmap to join U.S. labs and universities into a single, managed grid. Goal: build a national grid infrastructure for the benefit of scientific applications.

LONI: Louisiana Optical Network Initiative
In March 2004 the Louisiana Board of Regents, Tulane, and LSU secured NLR membership, and the state allocated $40 million to create and maintain LONI.
What is LONI?
- A statewide optical network
- Interconnects universities and colleges
- Takes advantage of NLR access: 40 Gbps, roughly 1000 times faster

LONI: Louisiana Optical Network Initiative
LONI members:
- Tulane University and Tulane HSC
- LSU and the LSU Medical Centers in Shreveport and New Orleans
- Louisiana Tech University
- University of Louisiana at Lafayette
- Southern University
- University of New Orleans

LONI: Louisiana Optical Network Initiative
LONI provides NLR access, enabling:
- High-quality, high-definition videoconferencing
- High-speed access to data
- Remote visualization
- Remote instrumentation
- High performance computing
- Collaborative research projects and grants
- Attracting better research faculty
- Increased potential of receiving national and international grant funding

LONI: Louisiana Optical Network Initiative
- End of summer 2005: $500,000 high performance computers, all connected via LONI
- Tulane pilot grid
- SURA test bed
- Experience with grid research

Accessing Resources
- Go to the website: resource request form
- Access local CCS and national grid resources