FY 2004 Allocations Francesca Verdier NERSC User Services NERSC User Group Meeting 05/29/03.

FY 2004 ERCAP Timeline
Thu June 5: ERCAP 2004 opens; NERSC sends out the call for proposals.
Wed July 23 (midnight): Submission deadline for ERCAP requests.
July 31 – Aug 20: Reviews take place.
Mon Sept 15: Award letters e-mailed to PIs.

What's New for FY 2004? – IBM SP Allocation Units
In FY04, SP accounting will use raw SP processor hours rather than MPP hours. FY04 requests and awards will use units that are a factor of 2.5 smaller than last year's. We expect to allocate 44 million SP hours.
Maximum startup award: 12,000 → 20,000 SP hours (30,000 → 50,000 MPP hours).
Minimum production awards remain at 20,000 SP hours (50,000 MPP hours).
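Because the change above is a fixed factor of 2.5 between last year's MPP hours and raw SP processor hours, the conversion is simple arithmetic. The following is a minimal Python sketch of it (the constant and function names are illustrative, not part of ERCAP or any NERSC tool):

# 1 raw SP processor hour corresponds to 2.5 of last year's MPP hours.
MPP_PER_SP_HOUR = 2.5

def mpp_to_sp_hours(mpp_hours):
    """Convert an FY03-style MPP-hour figure to FY04 raw SP processor hours."""
    return mpp_hours / MPP_PER_SP_HOUR

def sp_to_mpp_hours(sp_hours):
    """Convert FY04 raw SP processor hours back to MPP hours."""
    return sp_hours * MPP_PER_SP_HOUR

# Example: the 50,000 MPP-hour minimum production award is 20,000 SP hours.
assert mpp_to_sp_hours(50000) == 20000
assert sp_to_mpp_hours(20000) == 50000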

What's New for FY 2004 – Multi-year Awards and SRUs
PIs requesting more than 400,000 raw hours (1 million MPP hours) per year will be able to request resources for up to three years. For such requests, DOE will decide how many years will actually be awarded.
The way Storage Resource Units (SRUs) are now computed results in SRU charges that are, on average, about 13.9 percent lower than they were at the start of last year. We expect to allocate 8.5 million SRUs.
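To make the two figures above concrete, here is a small Python sketch (the names are illustrative, and applying the 13.9 percent reduction uniformly is my simplification; actual charges depend on the new SRU formula):

MULTI_YEAR_THRESHOLD_SP_HOURS = 400000   # per year; equal to 1 million MPP hours
AVERAGE_SRU_REDUCTION = 0.139            # SRU charges are ~13.9% lower on average

def may_request_multi_year(sp_hours_per_year):
    """Only requests above the threshold may ask for up to three years."""
    return sp_hours_per_year > MULTI_YEAR_THRESHOLD_SP_HOURS

def estimated_new_sru_charge(old_sru_charge):
    """Rough estimate: applies the average reduction to last year's charge."""
    return old_sru_charge * (1.0 - AVERAGE_SRU_REDUCTION)

print(may_request_multi_year(500000))      # True
print(estimated_new_sru_charge(10000))     # 8610.0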

What's New for FY 2004 – User List Certification

ERCAP Code Performance Questions
The code performance questions on the Request Form have been modified to be more specific about the information that is collected and the platforms the questions pertain to.
Seaborg code performance and scalability – for each of your major codes provide:
– the gigaflops achieved over a range of node counts, including the largest number of nodes the code can effectively use as well as node counts typically used in production
– the aggregate high-water memory (in gigabytes) used by that code for that number of nodes
– the average wall time (1 to 48 hours) and expected number of runs
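The figures above translate directly into an SP-hour estimate for a code. A minimal Python sketch of that arithmetic (it assumes Seaborg's 16-processor nodes and uses illustrative names; it is not an ERCAP tool):

PROCESSORS_PER_NODE = 16  # Seaborg's IBM SP nodes are 16-way SMPs

def sp_processor_hours(nodes, wall_hours, runs):
    """Raw SP processor hours consumed by a set of runs at one node count."""
    return nodes * PROCESSORS_PER_NODE * wall_hours * runs

def sustained_gigaflops(total_flops, wall_seconds):
    """Sustained GFLOP/s for one run, from a counted flop total."""
    return total_flops / wall_seconds / 1e9

# Example: 100 production runs on 64 nodes at 8 wall-clock hours each
# consume 64 * 16 * 8 * 100 = 819,200 SP processor hours.
print(sp_processor_hours(nodes=64, wall_hours=8, runs=100))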

ERCAP Code Performance Questions (cont.)

ERCAP Code Performance Questions (cont.)
– If you do not typically run your codes on 64 or more nodes, state the reasons. Are there any bottlenecks that NERSC can help overcome?
– If your parallel communication bandwidth or latency requirements are not met, please describe your requirements. Also state the volume and rate of your inter-node messages.
– If current network bandwidth to/from NERSC does not meet your needs, describe your network requirements.
– If you have I/O requirements that aren't being adequately met at NERSC, describe the I/O rates you are currently achieving and those you would like to achieve.
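One rough way to put a number on "the I/O rates you are currently achieving" is to time a large sequential write and report the effective bandwidth. The Python sketch below is an illustrative single-process probe (the path, size, and chunking are assumptions of mine, not a NERSC-prescribed benchmark), and real parallel application I/O can behave quite differently:

import os
import time

def measured_write_rate_mb_s(path, total_mb=1024, chunk_mb=8):
    """Time a large sequential write and return the effective rate in MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())          # include the time to push data to disk
    elapsed = time.time() - start
    os.remove(path)
    return total_mb / elapsed

print("%.1f MB/s" % measured_write_rate_mb_s("/tmp/io_probe.dat"))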

FY 2004 Review Process
The CORP will review the following requests this year:
– All DOE Base requests for >= 400,000 SP hours or >= 100,000 SRUs.
– DOE Base projects requesting more than 150% of what they received in FY03.
– Projects for which the PI requests a review.
SciDAC requests will not be reviewed.
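Read as a decision rule, the criteria above could be sketched in Python as follows (the field names and the precedence I give the rules are my reading of the slide, not an official NERSC policy statement):

def needs_corp_review(program, sp_hours, srus, fy03_sp_hours, pi_requested_review):
    """Paraphrase of the FY 2004 CORP review criteria listed above."""
    if program == "SciDAC":
        return False                               # SciDAC requests are not reviewed
    if pi_requested_review:
        return True                                # the PI asked for a review
    if program == "DOE Base":
        if sp_hours >= 400000 or srus >= 100000:
            return True                            # large DOE Base request
        if fy03_sp_hours > 0 and sp_hours > 1.5 * fy03_sp_hours:
            return True                            # more than 150% of the FY03 award
    return False

print(needs_corp_review("DOE Base", 450000, 0, 200000, False))   # True
print(needs_corp_review("SciDAC", 450000, 0, 200000, False))     # False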