Jefferson Lab Site Report
Sandy Philpott
Thomas Jefferson National Accelerator Facility
12000 Jefferson Ave., Newport News, Virginia USA 23606
Spring 2006 HEPiX – CASPUR

Contents
Computing
Storage
–Online
–Offline
Network
Infrastructure
Grid
Other projects

Computing
Linux – RedHat EL4, 64-bit environment
3 machines for testing Physics code (see the platform-check sketch below):
–Intel EM64T
–Intel Pentium D 810 dual core
–AMD Opteron dual core
Solaris 10
–SPARC platform support planned
–Small number of x86 machines – support discussed… not planned
Strategy: continue with at least 2 platforms
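When qualifying physics code across several 64-bit platforms, it helps to confirm which CPU a job actually ran on. A minimal sketch in Python (not a JLab tool; it simply reads /proc/cpuinfo on a Linux test node):

```python
#!/usr/bin/env python
# Report CPU vendor/model and 64-bit capability on a Linux node,
# e.g. when validating physics code on EM64T, Pentium D, and Opteron.
import platform

def cpu_summary():
    info = {"arch": platform.machine()}   # e.g. 'x86_64' on a 64-bit kernel
    with open("/proc/cpuinfo") as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = [s.strip() for s in line.split(":", 1)]
            if key == "vendor_id":
                info["vendor"] = value    # GenuineIntel / AuthenticAMD
            elif key == "model name":
                info["model"] = value
            elif key == "flags":
                # The 'lm' (long mode) flag marks a 64-bit-capable CPU.
                info["64bit_capable"] = "lm" in value.split()
    return info

if __name__ == "__main__":
    for k, v in sorted(cpu_summary().items()):
        print("%s: %s" % (k, v))
```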

Online Storage
Panasas
–Work areas
–Adding an 8TB shelf to our existing 5 5TB shelves
StorageTek B280 systems (30TB)
–Continuing in production for cache areas
–NFS file services
–Reliable, stable
StorageTek FlexLine FLX680 demo returned
–2 instances of data loss in testing
Dell/EMC AX100s
–Newest cache file systems
–Reliable, stable – EL4, ext3
–3 3TB systems in production, adding 7 more
(A rough capacity tally follows below.)
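For scale, a back-of-the-envelope tally of the nominal capacities named on this slide (a sketch, not an official figure; usable capacity after RAID and file-system overhead is lower):

```python
# Nominal (vendor) TB for the online storage pools listed above.
pools = {
    "Panasas (5 x 5TB shelves + new 8TB shelf)": 5 * 5 + 8,  # 33 TB
    "StorageTek B280 cache": 30,                             # 30 TB
    "Dell/EMC AX100 (3 x 3TB in production)": 3 * 3,         #  9 TB
    "Dell/EMC AX100 (7 x 3TB being added)": 7 * 3,           # 21 TB
}
for name, tb in sorted(pools.items()):
    print("%-45s %3d TB" % (name, tb))
print("%-45s %3d TB" % ("Total (nominal)", sum(pools.values())))  # 93 TB
```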

Offline Storage
2 StorageTek Powderhorn silos
Just over 2PB capacity on 10,000+ tapes; 1.5PB stored now
Tape rewrites underway to reuse media
–9940A -> 9940B format: reuse 5000 tapes (save big $$! – quantified below)
Wait for 2nd-generation Titanium drives in 2007/8, in a new SL8500 silo…
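The savings are easy to sanity-check: the 9940B drive writes 200 GB native to the same physical cartridge the 9940A wrote 60 GB to, so rewriting frees capacity without buying new media. A rough sketch (the per-cartridge price is an assumption, not a quoted figure):

```python
# 9940A -> 9940B media reuse, back of the envelope.
TAPES          = 5000
CAP_9940A_GB   = 60     # 9940A native capacity per cartridge
CAP_9940B_GB   = 200    # 9940B native capacity on the same media
PRICE_PER_TAPE = 100.0  # assumed ballpark cost of one new cartridge, USD

added_tb = TAPES * (CAP_9940B_GB - CAP_9940A_GB) / 1000.0
print("Extra capacity from rewriting: %.0f TB" % added_tb)          # 700 TB
print("New-media cost avoided: ~$%.0f" % (TAPES * PRICE_PER_TAPE))  # ~$500000
```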

Network
Upgrading LAN and WAN to 10GigE
–LAN: Foundry BigIron RX-8 – mostly a smooth transition, but some problems with trunking and code upgrades
–WAN: OC-192 (~10 Gb/s) expected to be installed in April

Infrastructure
New Data Center is operational! 11,000 square feet
Home to:
–SciComp batch compute farm for Physics analysis
–HPC InfiniBand cluster for LQCD
No generator backup; UPS only
–Need IPMI for quick shutdown and subsequent startup (see the sketch below)
–Keep core services, file servers, etc. in the original Computer Center
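On a UPS-only floor, shutdown has to be scripted so every node halts before the batteries drain. A hypothetical sketch of the kind of IPMI-driven mass power-off the slide alludes to, using ipmitool against each node's BMC (hostnames, credentials, and file paths are invented for illustration):

```python
#!/usr/bin/env python
# On a UPS alarm, ask every farm node for an orderly power-off via IPMI.
import subprocess

NODES_FILE = "/etc/farm/nodes.txt"   # hypothetical: one BMC hostname per line
IPMI_USER  = "admin"                 # hypothetical credentials

def ipmi_power(bmc_host, action):
    """Run ipmitool against one node's BMC; action is 'soft', 'off', or 'on'."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", bmc_host,
           "-U", IPMI_USER, "-f", "/etc/farm/ipmi_pass",  # password file
           "chassis", "power", action]
    return subprocess.call(cmd)

if __name__ == "__main__":
    with open(NODES_FILE) as f:
        for node in (line.strip() for line in f):
            if node:
                # 'soft' requests an ACPI shutdown so the OS halts cleanly.
                ipmi_power(node, "soft")
```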

Grid
PPDG (Particle Physics Data Grid) collaboration ends
SRM (Storage Resource Manager) development effort expected to continue (we are participants in the SRM SciDAC2 proposal)
Other grid efforts will be in conjunction with OSG (Open Science Grid):
–Planning OSG VDT installation in the coming month
–Investigate VOMS/GUMS
–Understand job submission – Auger/LSF, PBS (a submission sketch follows below)
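A hypothetical sketch of the grid-to-batch hand-off being studied: wrap a payload command in a PBS script and submit it with qsub, roughly what an OSG gatekeeper must do in front of a PBS (or Auger/LSF) farm. Queue name and resource strings are illustrative, not JLab's:

```python
#!/usr/bin/env python
# Build a PBS job script around a payload command and submit it via qsub.
import subprocess

PBS_TEMPLATE = """#!/bin/sh
#PBS -N {name}
#PBS -q {queue}
#PBS -l nodes=1,walltime={walltime}
cd $PBS_O_WORKDIR
{command}
"""

def submit(name, command, queue="production", walltime="04:00:00"):
    script = PBS_TEMPLATE.format(name=name, queue=queue,
                                 walltime=walltime, command=command)
    # qsub reads the job script from stdin and prints the job id.
    proc = subprocess.Popen(["qsub"], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate(script.encode())
    return out.decode().strip()

if __name__ == "__main__":
    print(submit("osg-test", "echo hello from the farm"))
```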

Other Projects
Wiki web environment
Subversion code management
NICE Windows Admin from CERN
Enira evaluation for Cybersecurity
PROOF prototype for Physics analysis (see the sketch below)
–Parallel ROOT Facility
–“Interactive” ROOT
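What the PROOF prototype looks like from the analyst's side, as a minimal PyROOT sketch (the master hostname, tree name, file URLs, and selector are all hypothetical, not the JLab setup):

```python
# Open a PROOF master and process a chain of ntuples in parallel.
import ROOT

proof = ROOT.TProof.Open("proof-master.jlab.example")  # hypothetical master
chain = ROOT.TChain("events")                          # hypothetical tree name
chain.Add("root://se.example/physics/run*.root")       # hypothetical files
chain.SetProof()                # route Process() through the PROOF workers
chain.Process("MySelector.C+")  # hypothetical TSelector, compiled with ACLiC
```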