National Energy Research Scientific Computing Center (NERSC)
HEPiX PDSF Site Report
Cary Whitney, NERSC Center Division, LBNL
October 11, 2005

Changes
- Moved from LSF to SGE; now running 6.0 release 4 (a submission sketch follows this slide).
- Starting to look at SL4; what has others' experience been?
- Shane Canon, the PDSF Lead, has accepted a job at Oak Ridge; Cary Whitney is taking over the lead.
- CHOS is still in use and Shane is still supporting it; a version for the 2.6 kernel is in the works.
- On the ESnet Bay Area MAN with a 10Gb connection.
- On ScienceNet with a 10Gb connection.
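As a minimal sketch of the new workflow, assuming only that qsub is on the PATH, the Python snippet below builds and submits an SGE 6.0 batch job; the job name and wall-clock limit are hypothetical, not PDSF defaults.

    #!/usr/bin/env python
    """Sketch: build and submit a batch job to SGE 6.0 via qsub."""
    import subprocess
    import tempfile

    # Lines beginning with '#$' are read by qsub as submit options.
    script = "\n".join([
        "#!/bin/sh",
        "#$ -N pdsf_test",      # job name (hypothetical)
        "#$ -cwd",              # run in the submission directory
        "#$ -j y",              # merge stdout and stderr
        "#$ -l h_rt=00:10:00",  # hard wall-clock limit
        'echo "Running on $(hostname)"',
    ]) + "\n"

    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script)
        path = f.name

    # On success qsub prints the assigned job id to stdout.
    print(subprocess.check_output(["qsub", path]).decode().strip())

For users coming from LSF, bsub and bjobs map onto qsub and qstat, and most directives translate one for one.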

Changes (continued)
- USB serial consoles.
- sFlow network monitoring (a collector sketch follows this slide).
- Jumbo frames on the network.
- One-time passwords.
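The sFlow monitoring works by having the switches export UDP datagrams to a collector host. Purely as an illustration of what the collector sees, and not the actual PDSF tooling, here is a minimal sketch that listens on the standard sFlow port and decodes only the fixed v5 datagram header.

    #!/usr/bin/env python
    """Sketch: bare-bones sFlow v5 collector; decodes the fixed header only."""
    import socket
    import struct

    SFLOW_PORT = 6343  # standard sFlow collector port

    def parse_header(data):
        """Decode the fixed sFlow v5 datagram header (big-endian fields)."""
        version, addr_type = struct.unpack_from("!II", data, 0)
        if addr_type == 1:  # 1 = IPv4 agent address
            agent = socket.inet_ntoa(data[8:12])
            offset = 12
        else:               # 2 = IPv6 agent address
            agent = socket.inet_ntop(socket.AF_INET6, data[8:24])
            offset = 24
        # sub-agent id, sequence number, agent uptime (ms), sample count
        _sub, seq, _uptime, nsamples = struct.unpack_from("!IIII", data, offset)
        return version, agent, seq, nsamples

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", SFLOW_PORT))
    while True:
        data, _addr = sock.recvfrom(65535)
        version, agent, seq, nsamples = parse_header(data)
        print("sFlow v%d from %s: seq %d, %d samples" % (version, agent, seq, nsamples))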

Filesystems
- Installed Lustre 1.2.4 in two volumes.
- Just started moving to GPFS; one volume currently (a mount-check sketch follows this slide).
- TsiaLun, the center-wide filesystem, also runs GPFS, so there is synergy there.
- NERSC, SDSC, and the HPSS consortium are working on HSM support in GPFS.
- Positives: aggregate bandwidth is a plus; no NFS.
- Negatives: GPFS: cost, though the LSF costs were comparable; Lustre: still a little green.
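With two Lustre volumes and a GPFS volume in production, one simple health check is to confirm that every expected mount is present in the kernel mount table. The sketch below reads /proc/mounts; the mount points are hypothetical placeholders, not PDSF's real paths.

    #!/usr/bin/env python
    """Sketch: verify expected Lustre/GPFS mounts against /proc/mounts."""

    EXPECTED = {                  # hypothetical mount points
        "/lustre/vol1": "lustre",
        "/lustre/vol2": "lustre",
        "/gpfs/vol1": "gpfs",
    }

    def mounted(path="/proc/mounts"):
        """Return {mount_point: fs_type} from the kernel's mount table."""
        table = {}
        with open(path) as f:
            for line in f:
                _dev, mount_point, fs_type = line.split()[:3]
                table[mount_point] = fs_type
        return table

    table = mounted()
    for mount_point, fs_type in sorted(EXPECTED.items()):
        ok = table.get(mount_point) == fs_type
        print("%-14s %-6s %s" % (mount_point, fs_type, "OK" if ok else "MISSING"))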

New Systems
- Jacquard: accepted. 360 dual-Opteron nodes, InfiniBand-connected (Mellanox), PBSPro, DDN storage, 30TB GPFS filesystem (local and TsiaLun).
- DaVinci: 32-CPU Altix with a GPFS mount from TsiaLun.

Fun Stuff
- Power outages:
  - Unplanned.
  - Planned.
  - Planned again: City of Oakland.
  - Planned: support for the coming system (0.5MW extra).
  - Planned again: for the system to come (approaching the capacity of the feed, ~5MW).
- No planned unplanned power outages at this time!