CCS Overview
Rene Salmon
Center for Computational Science
Introduction
What is CCS? Established June 2001.
- Establish new collaborations
- Infrastructure to exchange ideas
- Interdisciplinary research
- Computational science research
- High-end workstations
- HPC hardware and software
Software
- Visualization: Tecplot, AVS
- Compilers: SGI, Absoft
- Math libraries: IMSL, BLAS
- Finite element modeling: ABAQUS, PATRAN
- Molecular dynamics: NAMD, Gaussian, Amber, VMD
- Matlab, Mathematica
High Performance Computing
- Shared memory: SGI
- Distributed memory: Linux cluster
Both designs join their processors through an interconnect.
Multiprocessor Machines
Two designs: shared memory and distributed memory.
SGI:
- 4 compute nodes, 32 CPUs
- 700 MHz R16000 MIPS processors
- 8 GB RAM
- Memory bandwidth: 3.2 GB/s peak
- NUMAlink interconnect: 1.6 GB/s in each direction
- 1 TB SGI storage array

Linux Cluster:
- 34 nodes, 68 CPUs
- 2.4 GHz AMD Opteron processors
- 68 GB RAM
- Memory bandwidth: 12.8 GB/s
- Gigabit Ethernet interconnect: 85 MB/s
- 1 TB storage array
Multiprocessor Machines
Shared memory:
- Single OS
- Easier to program
- Inter-process communication with OpenMP
Distributed memory:
- Multiple OSes
- Harder to program
- Inter-process communication with MPI
Multiprocessor Machines
Shared memory (SGI):
- High cost
- Complex hardware
- Support contract
- Proprietary software: Irix, compilers
Distributed memory (Linux cluster):
- Low cost
- Commodity parts
- Community-driven support
- Open-source software: Linux, compilers
Parallel Programming
OpenMP (Open specifications for Multi-Processing):
- Library and compiler directives
- Shared memory
- Thread based
- Process synchronization
MPI (Message Passing Interface):
- Libraries
- Distributed memory
- Process based
- Process synchronization
- Master/slave model
Processes vs. Threads
Threads share a single address space, so they can access one another's variables directly. This saves the time and memory that separate processes must spend on interprocess communication.
OpenMP

program foobar
   ! ... declare and initialize a, b, n, x(n), z(n) ...
   !$omp parallel do
   do i = 1, n
      z(i) = a*x(i) + b
   end do
   !$omp end parallel do
end program foobar
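A compile-and-run sketch for the example above (the file name is hypothetical, and the flag is compiler-specific: SGI's MIPSpro f90 uses -mp, while other compilers use flags such as -fopenmp):

   export OMP_NUM_THREADS=8   # assumed thread count; use up to the number of CPUs
   f90 -mp foobar.f90 -o foobar
   ./foobar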
MPI

program foobar
   use mpi
   ! ... declare and initialize a, b, x, z, x_local, z_local, SIZE_X, etc. ...
   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)
   ! each process works on its own chunk of the array
   data_chunk = SIZE_X/numprocs
   j = 1 + myid*data_chunk
   n = j + (data_chunk - 1)
   x_local = x(j:n)
   do i = 1, data_chunk
      z_local(i) = a*x_local(i) + b
   end do
   ! collect the partial results onto the root process
   call MPI_GATHER(z_local, data_chunk, MPI_REAL, z, data_chunk, MPI_REAL, 0, MPI_COMM_WORLD, ierr)
   call MPI_FINALIZE(ierr)
end program foobar
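A matching sketch for the MPI version (mpif90 and mpirun are the common MPICH-style compiler wrapper and launcher; the process count is an arbitrary example):

   mpif90 foobar.f90 -o foobar
   mpirun -np 4 ./foobar   # start 4 MPI processes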
Queuing System: PBS Pro
- Resource manager
- Schedules/decides when jobs run
- Allocates resources to jobs
- Full featured: supports preemption and priorities
- Supports parallel and single-CPU jobs
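To show how a job reaches PBS, here is a minimal job script sketch (the job name, queue name, and resource requests are assumptions; the CCS documentation gives the real values):

   #!/bin/sh
   #PBS -N foobar              # job name
   #PBS -q q1                  # queue to submit to (hypothetical queue name)
   #PBS -l nodes=4:ppn=2       # request 4 nodes, 2 processors per node
   #PBS -l walltime=01:00:00   # maximum run time
   #PBS -j oe                  # merge stdout and stderr into one file
   cd $PBS_O_WORKDIR           # start in the directory the job was submitted from
   mpirun -np 8 ./foobar

The script is submitted with qsub and monitored with qstat.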
CCS Queuing System
Q1: Lowest priority
- Access for the entire Tulane community, for research purposes only
Q2: Contribute intellectually to the leadership of CCS
- Giving (or arranging) seminars
- Serving on CCS committees
Q3: Financial support from individual grants
- Personnel
- Computer/software purchases
- Computer/software maintenance
Q4: Highest priority
- Faculty and students with CCS-funded projects
Grid Computing
Typical workflow (a sample session is sketched below):
1. Log in to the server
2. Compile
3. Move or prepare data
4. Create a job script and submit it to the queue
5. Monitor status
6. Get results
7. Move data
8. Visualize the results
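A hypothetical terminal session walking through these steps (the hostname, user name, and file names are invented for illustration):

   ssh user@hpc.ccs.tulane.edu     # 1. log in to the server
   mpif90 foobar.f90 -o foobar     # 2. compile
   scp user@desktop:input.dat .    # 3. move or prepare data
   qsub job.pbs                    # 4. create and submit the job script to the queue
   qstat -u user                   # 5. monitor status
   scp output.dat user@desktop:    # 6-7. get results and move data back for visualization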
Grid Computing
Grids Nationally
National Lambda Rail (NLR):
- Nationwide optical fiber infrastructure
Open Science Grid (DOE and NSF):
- Roadmap: join U.S. labs and universities into a single, managed grid
- Goal: build a national grid infrastructure for the benefit of scientific applications
LONI: Louisiana Optical Network Initiative
- March 2004: the Louisiana Board of Regents, Tulane, and LSU secured NLR membership
- The state allocated $40 million to create and maintain LONI
What is LONI?
- A statewide optical network inter-connecting universities and colleges
- Takes advantage of NLR access
- 40 Gbps, roughly 1000 times faster than existing connections
LONI: Louisiana Optical Network Initiative
LONI members:
- Tulane University, Tulane HSC
- LSU, LSU Medical Centers in Shreveport and New Orleans
- Louisiana Tech University
- University of Louisiana at Lafayette
- Southern University
- University of New Orleans
LONI: Louisiana Optical Network Initiative
LONI provides NLR access, enabling:
- High-quality, high-definition videoconferencing
- High-speed access to data
- Remote visualization
- Remote instrumentation
- High performance computing
- Collaborative research projects and grants
- Attracting better research faculty
- Increased potential for receiving national and international grant funding
LONI: Louisiana Optical Network Initiative
- End of summer 2005: a $500,000 high-performance computer, all connected via LONI
- Tulane pilot grid
- SURA test bed
- Experience with grid research
Accessing Resources
- Go to the website: http://www.ccs.tulane.edu
- Fill out the resource request form
- Access local CCS and national grid resources