Computing Infrastructure – Arthur Kreymer – 2010-03-18

Computing Infrastructure – Arthur Kreymer – Slide 1: INFRASTRUCTURE
● Power status in FCC (UPS1)
● Bluearc disk purchase – coming soon
● Planned downtimes – none !
● Minos Cluster moving to minos50-53
● Bluearc performance – cpn working well, 1.3 Million served
● Parrot – is it dead yet ?
● Nucomp – integrated IF support
● GPCF – the New FNALU ?

Computing Infrastructure – Arthur Kreymer – Slide 2: SUMMARY
● Young Minos issues:
  – Minos50-54 disk – condor and /local/scratch ?
  – Lend minos54 to NoVA pending GPCF ?
  – XOC planning DocDB 7022
● Shut down minos03-24 next week, use minos50-53
  – Xrootd, cal, off mon, build (test on minos27/50)
● Bluearc disk expansion (99 TB for MNS/MNV/NV) in purchasing
● Clearing daikon07 STAGE files, up to 10 TB
● Shall the Farm run directly from our Bluearc base releases ?
● Link cp1 to cpn next week
● Remove SSH keys from CVS next week
● Parrot status – pushing up daisies ?
● GPCF out for bids

Computing Infrastructure – Arthur Kreymer – Slide 3: FCC POWER
● Power outages in FCC due to breaker trips Feb 9 and 17.
  – FCC has less power capacity than formerly believed, and this is not easily fixed
  – New Minos Cluster nodes are installed in GCC
  – Limited to minos01/03/05/07/12/13/16-24
  – These remaining Minos Cluster nodes should be turned off ASAP
  – Minos Servers can remain in FCC on UPS

Computing Infrastructure – Arthur Kreymer – Slide 4: GRID
● We continue to reach peaks of 2000 running jobs
  – still want 5000, after fixing Bluearc issues
● Still using a wrapper for condor_rm, condor_hold (a sketch of the idea follows this slide)
  – To prevent SAZ authentication overload
  – The Condor 7.4 upgrade will fix this internally, soon
● Will change cp1 to be a symlink to cpn around 22 March
  – cpn has taken out over 1.3 Million locks so far
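The condor_rm / condor_hold wrapper itself is not shown in these slides. Below is a minimal Python sketch of the general idea only: batch the job IDs and pause between calls so that mass removals do not overload SAZ authentication. The batch size, pause length, and script name are illustrative assumptions, not the real Minos wrapper.

#!/usr/bin/env python
"""Hypothetical sketch of a rate-limited wrapper around condor_rm/condor_hold.

Illustrates batching job IDs so repeated calls do not overload SAZ
authentication; not the actual Minos production wrapper.
"""
import subprocess
import sys
import time

BATCH_SIZE = 50      # assumed: jobs handled per condor_rm/condor_hold call
PAUSE_SECONDS = 10   # assumed: pause between batches

def throttled(command, job_ids):
    """Call `command` (condor_rm or condor_hold) on job_ids in small batches."""
    for start in range(0, len(job_ids), BATCH_SIZE):
        batch = job_ids[start:start + BATCH_SIZE]
        subprocess.check_call([command] + batch)
        if start + BATCH_SIZE < len(job_ids):
            time.sleep(PAUSE_SECONDS)

if __name__ == "__main__":
    # usage (hypothetical): throttle_condor.py condor_rm 1234.0 1235.0 ...
    throttled(sys.argv[1], sys.argv[2:])

In practice such a wrapper simply sits in front of the real Condor commands; per the slide, the Condor 7.4 upgrade is expected to make it unnecessary.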

Computing Infrastructure – Arthur Kreymer – Slide 5: Bluearc
● Continuing performance tests (a sketch of the cpn lock idea follows this slide)
● See monitoring plots, such as –
● /minos/scratch is now /minos/app, for consistency with Fermigrid
  – /minos/scratch remains available for compatibility for the present
● Purchasing 99 TB for IF ( MNS/MNV/NOVA )
  – Requisition went to purchasing Friday March 19.
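cpn, the throttled copy command credited on the title and GRID slides with over 1.3 million locks, is not reproduced in the slides. The sketch below only illustrates the lock-pool idea, assuming concurrent Bluearc copies are limited by a fixed set of shared lock files; the lock directory, slot count, and function names are assumptions, not the real cpn implementation.

#!/usr/bin/env python
"""Illustrative sketch of a cpn-style throttled copy (not the real cpn).

Assumption: concurrency against the Bluearc server is limited by acquiring
one of a fixed number of lock files before running the actual copy.
"""
import fcntl
import os
import shutil
import sys
import time

LOCK_DIR = "/minos/locks"   # hypothetical shared lock directory
MAX_SLOTS = 20              # assumed limit on concurrent copies

def copy_with_lock(src, dst):
    """Copy src to dst only after winning one of MAX_SLOTS lock files."""
    while True:
        for slot in range(MAX_SLOTS):
            path = os.path.join(LOCK_DIR, "slot.%d" % slot)
            fd = os.open(path, os.O_CREAT | os.O_RDWR)
            try:
                fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            except IOError:
                os.close(fd)
                continue                 # slot busy, try the next one
            try:
                shutil.copy2(src, dst)   # the actual Bluearc copy
            finally:
                fcntl.flock(fd, fcntl.LOCK_UN)
                os.close(fd)
            return
        time.sleep(30)                   # all slots busy: wait and retry

if __name__ == "__main__":
    copy_with_lock(sys.argv[1], sys.argv[2])

The point of such a throttle is that the file server sees at most a fixed number of simultaneous bulk copies, regardless of how many grid jobs are running.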

Computing Infrastructure – Arthur Kreymer – Slide 6: PARROT
● PARROT should no longer be needed at Fermilab
● All our code has been rsync'd to /grid/farmiapp/minos/...
● Production analysis is being done sans-parrot.
● We will soon start building releases in Bluearc, and not in AFS.
● We should start running Farm processing from Bluearc
  – This eliminates separate Farm builds.
● minos_jobsub should default to sans-parrot ( -pOFF ) – a sketch follows this slide
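How minos_jobsub will switch its default is not spelled out on the slide. As one possible shape, a thin front end could add -pOFF (the option quoted above) whenever the caller has not chosen a parrot setting; the argument handling and script name below are assumptions.

#!/usr/bin/env python
"""Hypothetical front end making sans-parrot ( -pOFF ) the minos_jobsub default.

Only a sketch: the real minos_jobsub option handling is not shown in the slides.
"""
import subprocess
import sys

def main(argv):
    args = list(argv)
    # If the caller did not pick a parrot setting, default to parrot off.
    if not any(a.startswith("-p") for a in args):
        args.insert(0, "-pOFF")
    return subprocess.call(["minos_jobsub"] + args)

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))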

Computing Infrastructure – Arthur Kreymer – Slide 7: NUCOMP
● The NUCOMP coordination group is a year old. Happy Birthday !
● They would like input on OSG Grid computing needs and experience.
● XOC is a NUCOMP-related project, see DocDB 7022
● GPCF – hardware is being ordered
● We are working on revised MOU's for all experiments now.
● We are still working on approvals to hire a new Associate Scientist.
● See

Computing Infrastructure – Arthur Kreymer – Slide 8: GPCF
● Initial configuration for GPCF
  – 8 interactive hosts, 26 batch hosts
  – 30 TB SAN disk
  – Interactive systems will be virtual machines for flexibility
● Bids are coming in now
  – Hardware to arrive within 2 months
● Nova is the primary customer
● We will ask for at least some virtual machines for testing
  – 32 and 64 bit, SLF 5, etc.

Computing Infrastructure – Arthur Kreymer – Slide 9: Minos Cluster migration
● New nodes are minos50 through minos54
  – Each roughly 8x the capacity of an old node
  – 5 nodes replacing 24, roughly a 2x capacity increase
● Lend one node to NOVA short term ( 2 months ) ?
  – They have only one node, but have 5+ on order ( GPCF ? )
  – The PO is out for bids, delivery should be soon.
● Deployment plan :
  – Interactive tests NOW on minos50 – ENJOY !
  – We reserve the right to reformat local scratch disk for a few days
  – Start the condor batch pool next week.
  – Shut down most old nodes, copying scratch to new nodes
  – Hold the old hardware offline for about a month, 'in case'

Computing Infrastructure – Arthur Kreymer – Slide 10: Minos Cluster migration
● Luke Corwin and the Core Team have verified 32-bit builds on 64-bit hosts (an illustrative check follows this slide)
  – Validated a single production ntuple
  – Robert has heroically patched all releases to do the right thing
● Services on the old nodes:
  – NIS will move to minos50 and some of the FCC servers
  – Beam logger on 05 – no longer in use
  – xrootd on 07 – moving to minos27 now ( minos-xrootd ? )
  – offline monitor on 07 ( avva ) – still in use ?
  – build on 11 – obsolete
  – overlay on 12/13 – obsolete ( is calibration moved to 27 ? )
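The 32-bit build verification was done by Luke Corwin and the Core Team; their procedure is not described beyond the slide. As a small illustration only, the sketch below reads the ELF header of a built binary to confirm it really is a 32-bit executable even though it was produced on a 64-bit host; the script name and example path are hypothetical.

#!/usr/bin/env python
"""Sketch: confirm that a binary built on a 64-bit host is a 32-bit ELF.

Illustrative only; not the Core Team's actual validation procedure.
"""
import sys

def elf_class(path):
    """Return 32 or 64 for an ELF file, based on the EI_CLASS header byte."""
    with open(path, "rb") as f:
        header = f.read(5)
    if header[:4] != b"\x7fELF":
        raise ValueError("%s is not an ELF file" % path)
    return {1: 32, 2: 64}[ord(header[4:5])]

if __name__ == "__main__":
    # usage (hypothetical path): python check_elf.py /path/to/binary
    print("%s is a %d-bit ELF binary" % (sys.argv[1], elf_class(sys.argv[1])))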