SAN DIEGO SUPERCOMPUTER CENTER Accounting & Allocation Subhashini Sivagnanam SDSC Special Thanks to Dave Hart.

Presentation transcript:

SAN DIEGO SUPERCOMPUTER CENTER Accounting & Allocation Subhashini Sivagnanam SDSC Special Thanks to Dave Hart

SAN DIEGO SUPERCOMPUTER CENTER Accounting
Charging of accounts:
➔ SUs (Service Units, or hours of compute time)
➔ p655 (8-way) nodes are charged as follows: SUs = P x Wallclock_Hours x Num_Nodes x 8
➔ p690 (32-way) nodes are charged as follows: SUs = P x Wallclock_Hours x Num_Processors
   P = Priority (normal = 1; high = 2; express = 1.8)
➔ On dsdirect: Number of SUs charged = P x 32 x Wallclock_Hours x MAX(Np/32, M/Mmax)
   Np = number of processors, M = memory used by the job, Mmax = maximum memory available on the node (256 GB)
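For concreteness, the following is a minimal, unofficial Python sketch of how these charging rules combine. It is not an SDSC-provided tool; the function names are illustrative, and the priority multipliers and 256 GB memory ceiling are taken directly from the slide above.

# Illustrative sketch only -- not an official SDSC accounting tool.
# Priority multipliers from the slide: normal = 1, high = 2, express = 1.8.
PRIORITY = {"normal": 1.0, "high": 2.0, "express": 1.8}

DSDIRECT_MAX_MEM_GB = 256  # max memory available on a dsdirect node (per the slide)

def sus_p655(wallclock_hours, num_nodes, priority="normal"):
    """p655 (8-way) nodes: SUs = P x Wallclock_Hours x Num_Nodes x 8."""
    return PRIORITY[priority] * wallclock_hours * num_nodes * 8

def sus_p690(wallclock_hours, num_processors, priority="normal"):
    """p690 (32-way) nodes: SUs = P x Wallclock_Hours x Num_Processors."""
    return PRIORITY[priority] * wallclock_hours * num_processors

def sus_dsdirect(wallclock_hours, num_processors, mem_used_gb, priority="normal"):
    """dsdirect: SUs = P x 32 x Wallclock_Hours x MAX(Np/32, M/Mmax)."""
    frac = max(num_processors / 32, mem_used_gb / DSDIRECT_MAX_MEM_GB)
    return PRIORITY[priority] * 32 * wallclock_hours * frac

# Example: a 4-node, 6-hour p655 job at normal priority costs 6 x 4 x 8 = 192 SUs.
print(sus_p655(wallclock_hours=6, num_nodes=4))  # 192.0
# Example: a 2-hour dsdirect job using 8 processors and 200 GB of memory
# is charged on its memory fraction (200/256) rather than its processor fraction.
print(sus_dsdirect(2, 8, 200))  # 32 x 2 x 0.78125 = 50.0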

SAN DIEGO SUPERCOMPUTER CENTER Checking your allocation balance
➔ To determine the allocation usage for a single user: % reslist -u username
➔ To determine the allocation usage for all users under a given account: % reslist -a grp000
➔ To determine the allocation usage for jobs run within a particular time period: % reslist -j -u username -a grp000 --begindate=mm-dd-yyyy --enddate=mm-dd-yyyy
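If you want to run these checks from a script (for example, a periodic balance report), a small Python wrapper like the sketch below could invoke the same commands. This assumes reslist is on your PATH; the login "jdoe", the account "grp000", and the dates are hypothetical placeholders.

import subprocess

# Replace "jdoe" and "grp000" with your own login and account name.
subprocess.run(["reslist", "-u", "jdoe"], check=True)
subprocess.run(["reslist", "-a", "grp000"], check=True)

# Usage for jobs run within a particular time period (dates are mm-dd-yyyy).
subprocess.run(
    ["reslist", "-j", "-u", "jdoe", "-a", "grp000",
     "--begindate=01-01-2007", "--enddate=03-31-2007"],
    check=True,
)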

SAN DIEGO SUPERCOMPUTER CENTER Types of Allocations
➔ UC Academic Associates: a special program for UC campuses. You can request a starter account in just a few minutes.
➔ Development Allocation Committee (DAC): awards up to 10,000 CPU-hours or 1 TB of disk. All you need is an abstract and a CV.
➔ Larger allocations are awarded through merit review of proposals by a panel of computational scientists.

SAN DIEGO SUPERCOMPUTER CENTER Medium and large allocations
➔ MRAC: requests of 10,000 to 200,000 SUs, reviewed quarterly.
➔ LRAC: requests of more than 200,000 SUs, reviewed twice per year.
➔ Requests can span all NSF-supported resource providers.
➔ Multi-year requests and awards are possible.
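As a rough illustration of how a request's size maps onto a review track, here is a short sketch using the thresholds above and the 10,000 CPU-hour DAC ceiling from the previous slide; the function name and labels are illustrative, not official policy text.

def review_track(requested_sus):
    """Map a requested SU amount to the review track described on these slides."""
    if requested_sus <= 10_000:
        return "DAC (development allocation)"
    elif requested_sus <= 200_000:
        return "MRAC (reviewed quarterly)"
    else:
        return "LRAC (reviewed twice per year)"

print(review_track(5_000))    # DAC (development allocation)
print(review_track(150_000))  # MRAC (reviewed quarterly)
print(review_track(500_000))  # LRAC (reviewed twice per year)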

SAN DIEGO SUPERCOMPUTER CENTER New: Storage Allocations
SDSC is now making disk storage and database resources available via the merit-review process.
➔ SDSC Collections Disk Space: more than 200 TB of network-accessible disk for shared data collections.
➔ TeraGrid GPFS-WAN: ~200 TB parallel file system attached to several TeraGrid compute systems; a portion is available for long-term storage allocations.
➔ SDSC Database: dedicated disk and hardware for high-performance databases (Oracle, DB2, MySQL).

SAN DIEGO SUPERCOMPUTER CENTER You Also Get… Quality User Support
➔ 24/7 Operations
➔ Help Desk: phone and web, M-F, 9 a.m. - 5 p.m.
➔ Training
➔ Documentation

SAN DIEGO SUPERCOMPUTER CENTER And all this will cost you… …absolutely nothing. $0, plus the time to write your proposal.

SAN DIEGO SUPERCOMPUTER CENTER Questions? allocations