Fermilab Site Report - Spring 2011 HEPiX
Keith Chadwick, Grid & Cloud Computing Department Head, Fermilab
02-May-2011
Work supported by the U.S. Department of Energy under contract No. DE-AC02-07CH11359

Acknowledgements
I am reporting on work being done by the many talented, hardworking and dedicated people at Fermilab. The vast majority of the information in this presentation has been shamelessly copied from the Fermilab Computing Division's contributions to the Feb 2011 DOE Scientific Computing review. Please credit those individuals where credit is due and blame me for any mistakes or misunderstandings.

Outline
Fermilab Program
–The Energy Frontier
–The Intensity Frontier
–The Cosmic Frontier
Fermilab Computing Infrastructure
–FermiGrid
–Storage
–HPC
–Computing Facilities
–Significant Infrastructure Events (since the Fall 2010 HEPiX)
Windows
Scientific Linux
FermiCloud
ITIL
Structure
Summary

Fermilab Program

Fermilab Program - Three Frontiers

Fermilab - The Energy Frontier
[Timeline: now (we are here) - Tevatron and LHC; ~2016 - LHC Upgrades; beyond - ILC, CLIC or Muon Collider (ILC??).]

Tevatron Run II (CDF and D0)
Computing has been key to the enormous scientific output from these experiments; it has grown in importance during the 25 years of the Tevatron and will continue to do so throughout the LHC era.
–The availability of computing resources (dedicated and shared) has allowed quick and timely processing of the data and Monte Carlo simulations, and has supported hundreds of Tevatron users in completing their complex analyses.
CDF and D0 physics analysis will continue well past the end of Tevatron operations.
–We have a commitment to maintain the Tevatron data for 10 years.
–This means we need to examine data preservation techniques so that the stored datasets remain usable.

Computing required for accelerator-based experiments
Reconstruct raw data -> physics summary data
Analyze reconstructed data
Create the MC data needed for analysis
Reprocess data, skim and regroup processed data
Store data and distribute it to collaborators worldwide
Software tools & services and expert help at times (e.g. detector simulation, generators, code performance)
Long-term curation of data and preservation of analysis capabilities after the experiment ends

Run II Computing Plans
Provision for continuation of some Run II production processing and Monte Carlo production capability
–Reprocessing efforts in 2011/2012 aimed at the Higgs
–Monte Carlo production at the current rate through mid-2013
Provision for analysis computing capability for at least 5 years
–Modernization of code and hardware needs to be done now to reduce support later
–Push for 2012 conferences for many results
–No large drop in computing requirements through this period
Curation of the data: > 10 years
–Some support for continuing analyses
Continued support for
–Code management and science software infrastructure
–Data handling for production (+MC) and analysis operations

Tevatron – looking ahead
The collaborations expect the publication rate to remain stable for several years.
Analysis activity: expect > 100 students and postdocs actively doing analysis in each experiment over the next several years. Expect this number to be much smaller in 2015, though data analysis will still be ongoing.
[Charts: D0 and CDF publications per year.]

CMS Offline and Computing
Fermilab is the host institution for the U.S. CMS Research Program.
Fermilab is a hub for CMS Offline and Computing:
–41 Senior Scientists (6 in CD)
–14 Research Associates
–Leadership roles in many areas of CMS Offline and Computing: frameworks, simulations, data quality monitoring, workload management and data management, data operations, integration and user support.
Fermilab hosts the U.S. CMS Tier-1 facility.
–Largest of the 7 CMS Tier-1 centers
The Fermilab Remote Operations Center allows US physicists to participate in monitoring shifts for CMS.

CMS Tier 1 at Fermilab
The CMS Tier-1 facility at Fermilab and the experienced team who operate it enable CMS to reprocess data quickly and to distribute the data reliably to the user community around the world.
More than 2 PB of data has been transferred to the Fermilab Tier-1.

CMS Physics Analysis at Fermilab
The U.S. CMS analysis facility at Fermilab has the highest number of CMS analysis jobs - typically ~120k-200k jobs/week.
The analysis job success rate is also very high – typically a >85% success rate!
Fermilab is an excellent place to work on CMS physics analysis.
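
The scale quoted above is easier to appreciate as absolute numbers. A minimal back-of-the-envelope sketch, using only the figures on this slide (120k-200k jobs/week, ~85% success) and nothing facility-specific:

```python
# Rough weekly job counts for the CMS analysis facility, using only the
# figures quoted on the slide (120k-200k jobs/week, ~85% success rate).

def weekly_job_breakdown(jobs_per_week: int, success_rate: float = 0.85):
    """Return (successful, failed) job counts for one week."""
    successful = round(jobs_per_week * success_rate)
    return successful, jobs_per_week - successful

for jobs in (120_000, 200_000):
    ok, failed = weekly_job_breakdown(jobs)
    print(f"{jobs:>7,} jobs/week -> ~{ok:,} succeed, ~{failed:,} fail "
          f"(~{failed / 7:,.0f} failures/day to investigate)")
```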

Preparing for the future
[Timeline graphic: Energy Frontier (Tevatron, LHC, LHC Upgrades, Lepton Collider) and Intensity Frontier (120 GeV and 8 GeV neutrino programs via the Booster and NuMI, muon-to-electron conversion, Muon (g-2), rare processes, nuclear EDM, Project X, Neutrino Factory) over time, with the supporting computing activities: MC and data production and analysis; detector simulations and core software; accelerator modeling, physics studies and frameworks.]

Fermilab - The Intensity Frontier
[Timeline from now to ~2016 and beyond: MINOS, MiniBooNE, MINERvA, SeaQuest; then NOvA, MicroBooNE, Muon g-2; then LBNE, Mu2e; then Project X + LBNE (muon, kaon, nuclear, …), Neutrino Factory?]

Fermilab Program Planning

Intensity Frontier program needs
Many experiments are in many different phases of development/operations.
Projections for central disk storage and grid farm usage come from requests from the experiments. We meet monthly with the experiments to plan, review and discuss progress.
Application support for databases and simulations (GENIE, GEANT4) has been requested.
[Charts: projected CPU (cores) and disk (TB) usage; 1 PB marked on the disk scale.]

Future Fermilab accelerator-based program
Intensity Frontier
–Mu2e
–LBNE (Water Cherenkov and Liquid Argon)
–Possible muon g-2 experiment at Fermilab
–Possible future Project X experiments (some of these experiments will be data intensive):
  Next-generation Mu2e and g-2
  Kaon experiments
  Other neutrino experiments
Energy Frontier
–Lepton collider physics studies, including R&D for a Muon Collider
Accelerator component and beam dynamics modeling
–Lepton colliders, neutrino factory, Project X

Fermilab – The Cosmic Frontier
[Timeline (DM = Dark Matter, DE = Dark Energy): now - DM ~10 kg (COUPP, CDMS), DE (SDSS, P. Auger); next - DM ~100 kg, DE (DES, P. Auger, Holometer?); toward 2022 - DM ~1 ton, DE (LSST, BigBOSS, LSST??).]

Astrophysics at Fermilab
Fermilab was a member of SDSS I and SDSS II.
Fermilab hosts the 91 TB primary archive; 32,000 queries and 1 TB of data are served per day to the general public.
Through 2010: 3400 refereed papers with SDSS in the title or abstract, with >35,000 citations. Over half of the publications are from outside the collaboration. Highest-impact observatory in 2002, 2003 and 2006.
DES follows this; Fermilab will host the secondary archive.
Fermilab recently joined LSST.

Computing for Astrophysics
Sloan Digital Sky Survey
–Data taking completed in 2008
–Commitment: operate the data archive through 2013
–Archive: 22 nodes + 65 TB disk + Enstore tape
Dark Energy Survey
–Commitments: simulations (200 slots), a copy of the DES archive, analysis computing (~200 cores needed), local data reprocessing (800 slots)
Pierre Auger Observatory
–Commitments: data mirror (5 TB/yr), hosting the calibration database, simulations (100 slots)
Cryogenic Dark Matter Search (SuperCDMS)
–Commitment: data processing and storage (800 slots, 64 TB/yr)
COUPP
–Commitment: data processing and storage (100 slots, 2 TB/yr)

Experimental Astrophysics - Future
JDEM/WFIRST:
–Steve Kent – Deputy Project Scientist
–Erik Gottschalk – Science Operations Center Lead
–The JDEM effort is transitioning to other programs
LSST:
–Fermilab has just recently joined
–No commitments yet. Ideas: a) Data Access Center (DAC) - science analysis; b) Dark Energy Science Center
–The Grid & Cloud Computing Department ran a pilot project to demonstrate the automated production of LSST detector simulation data using Open Science Grid resources.

DES Analysis Computing at Fermilab
Fermilab plans to host a copy of the DES Science Archive. This consists of two pieces:
–A copy of the Science database
–A copy of the relevant image data on disk and tape
This copy serves a number of different roles:
–Acts as a backup for the primary NCSA archive, enabling collaboration access to the data when the primary is unavailable
–Handles queries by the collaboration, thus supplementing the resources at NCSA
–Enables Fermilab scientists to effectively exploit the DES data for science analysis
To support the science analysis of the Fermilab scientists, DES will need a modest amount of computing (of order 24 nodes). This is similar to what was supported for the SDSS project.

CDMS/MINOS/MINERvA
A fire in the Soudan Mine access shaft was detected on Thursday 17-Mar-2011. Specially trained first responders dealt with the fire and it was declared "99% out" on Sunday 20-Mar-2011.
–The fire appears to have started in the timbers that line the access shaft between the 23rd and 25th levels of the mine.
Water pumping stations on mine levels 12 and 22 were recommissioned on Saturday 26-Mar-2011.
First inspections of the underground detectors on level 27 were on Wednesday 30-Mar-2011.
Restoration of the infrastructure is underway.
–Limited power has been restored to the experimental halls.
–Limited networking has been restored (the fiber optic cable from the surface to the detector level will likely have to be replaced).
–Cleanup of the fire-fighting foam residue and drying out is underway.
Unclear when data taking will resume.

Fermilab Computing Infrastructure

FermiGrid – CPU resources for the user community
[Architecture diagram: users and external grids (Open Science Grid, TeraGrid, WLCG, NDGF) submit work through the FermiGrid Site Gateway, backed by FermiGrid authentication/authorization, infrastructure, and monitoring/accounting services, onto the cluster pools: GP 3284 slots, CMS 7485 slots, CDF 5600 slots, and D0.]

Measured FermiGrid Availability*
(* excluding building or network failures)

Service               Availability   Downtime
VOMS-HA               100%           0 m
GUMS-HA               100%           0 m
SAZ-HA (gatekeeper)   100%           0 m
Squid-HA              100%           0 m
MyProxy-HA            99.943%        299.0 m
ReSS-HA               99.959%        215.4 m
Gratia-HP             99.955%        233.3 m
Database-HA           99.963%        192.6 m
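
The downtime column is consistent with the availability column if the measurement window is roughly one calendar year; the window length is my assumption, not stated on the slide. A minimal sketch of the conversion:

```python
# Convert a measured availability percentage into downtime minutes,
# assuming a one-year measurement window (525,600 minutes). The window
# length is an assumption; with it, the table above is reproduced to
# within a few minutes.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability_percent: float,
                     window_minutes: int = MINUTES_PER_YEAR) -> float:
    """Minutes of downtime implied by an availability percentage."""
    return (1.0 - availability_percent / 100.0) * window_minutes

for service, avail in [("MyProxy-HA", 99.943), ("ReSS-HA", 99.959),
                       ("Gratia-HP", 99.955), ("Database-HA", 99.963)]:
    print(f"{service:12s} {avail:7.3f}% -> ~{downtime_minutes(avail):5.1f} min/yr")
```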

FNAL CPU – core count for science

FermiGrid – Past Year Slot Usage
[Chart: slot usage over the past year by CDF, CMS, D0, other Fermilab, and opportunistic use; ~23k slots total.]

HEP – a Data Intensive Science
Large datasets are now a hallmark of nearly all our scientific programs – including those involving theory and simulations.
The Fermilab Computing Division provides central, shared data movement and storage services for ALL lab scientific programs.
–Some resources are dedicated (e.g. disks).
–Some resources are paid for by specific lab scientific programs, which have "guaranteed use" of them (e.g. a number of tape drives of a specific type).
–Some resources are shared across a broad set of scientific programs (e.g. public disk cache).

Data Movement and Storage Services
Enstore: tape libraries and storage system for primary and archival data storage
–Developed at Fermilab (+ a part in common with dCache)
–Also used at PIC (Spain)
–Continuous migration to new tape technologies
–Must scale to requests for many thousands of files simultaneously
–26 Petabytes on tape
dCache: tape-backed and non-tape-backed disk cache for data processing (Fermilab-DESY-NDGF-and-others collaboration)
–Scales for current usage – concerns for future demands
–Investigating the Lustre file system as a future dCache replacement: a broader-than-HEP open source product
Space has been identified for two additional SL8500 robots in FCC.
–One of these robots has been ordered and delivery is expected in May 2011.
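
To put the 26 PB Enstore figure in perspective, a rough cartridge-count estimate; the per-cartridge capacities below are generic native capacities for tape media of that era, not numbers taken from this presentation:

```python
# Rough cartridge-count estimate for a ~26 PB tape archive. The cartridge
# capacities are assumed typical native capacities of the period, not
# figures from the presentation; the actual media mix and compression differ.

ARCHIVE_PB = 26
assumed_native_capacity_tb = {"LTO-4": 0.8, "T10000B": 1.0, "T10000C": 5.0}

for media, cap_tb in assumed_native_capacity_tb.items():
    cartridges = ARCHIVE_PB * 1000 / cap_tb
    print(f"{media:8s} (~{cap_tb} TB/cartridge): ~{cartridges:,.0f} cartridges")
```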

Data Storage at Fermilab - Tape

HPC
Fermilab operates 5 HPC clusters:
–The "D/s", "J/Psi" and "Kaon" clusters for lattice QCD calculations,
–The "Cosmology" cluster for cosmological calculations,
–The "Wilson" cluster for accelerator modeling.
In addition to the above production clusters we also have a small GPU cluster (not shown).
[Photos: the D/s, J/Psi, Kaon, Cosmology and Wilson clusters.]

HPC – Price/Performance
The price/performance of clusters installed at Fermilab for lattice QCD has fallen steadily over the last six years, with a halving time of around 1.5 years, as shown by the solid line in the graph at right. Product roadmaps provided by vendors of system components make clear that this trend is likely to continue for the next several years.
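
A minimal sketch of the projection this trend implies; the 1.5-year halving time is the number quoted above, while the normalization and time points are arbitrary:

```python
# Project lattice-QCD cluster price/performance forward, assuming the
# ~1.5-year halving time quoted on the slide continues to hold.
# The starting value of 1.0 is an arbitrary normalization ("today").

HALVING_TIME_YEARS = 1.5

def relative_cost_per_performance(years_from_now: float) -> float:
    """Cost per unit of performance relative to today, under exponential improvement."""
    return 2.0 ** (-years_from_now / HALVING_TIME_YEARS)

for years in (0.0, 1.5, 3.0, 4.5, 6.0):
    print(f"t = {years:3.1f} yr: cost/performance ~ "
          f"{relative_cost_per_performance(years):.2f} x today")
```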

COMPUTING FACILITIES

Fermilab Computing Facilities
Lattice Computing Center (LCC)
–High Performance Computing (HPC)
–Accelerator simulation, cosmology nodes
–No UPS
Feynman Computing Center (FCC)
–High-availability services: networking, computer security, BSS, web, databases, core site services
–Tape robotic storage
–UPS & standby power generation
–ARRA project: upgrade cooling and add an HA computing room - completed
Grid Computing Center (GCC)
–High-density computational computing: CMS, Run II, GP batch worker nodes
–Lattice HPC nodes
–Tape robotic storage
–UPS & taps for portable generators

New FCC3 Computer Room

Computer Room Availability
>99.75% in FCC and GCC (server and CPU rooms) over many years.
Better than the expectation based on the building infrastructure.
Excellent performance.

Facility Upgrades

Significant Infrastructure Events
November 2010:
–An overheating distribution breaker found during an infrared survey of electrical distribution panels required a ~8 hour shutdown of the FCC computer rooms to replace.
–During the startup of the systems following the distribution breaker replacement, the upstream 1400A breaker tripped.
–This is the same 1400A breaker that had tripped twice in February.
–The decision was made to replace the 1400A breaker - a ~16 hour shutdown.
January 2011:
–FCC3 Computer Room commissioned.
February 2011:
–Spanning tree issues caused at least two site-wide network outages (each lasting on the order of minutes).
–A failing supervisor module in a Cisco 6509 switch repeatedly caused loss of connectivity for several production services to the remainder of the site network over a five-day period.
–Various mitigations were attempted, but eventually a "whole switch" replacement was performed over a ~8 hour period.

Microsoft Windows

Windows Version                 Inventory
Windows NT 4.0                          3
Windows 2000 Professional               9
Windows 2000 Server                     2
Windows Server (other versions)         ?
Windows XP Professional              2046
Windows XP Tablet PC Edition            7
Windows Vista Business                 22
Windows Vista Enterprise               18
Windows Vista Ultimate                  1
Windows 7 Enterprise                  897
Windows 7 Professional                  1
Total                                3295

Scientific Linux
Scientific Linux 3 was deprecated on 10-Oct-2010!
–We are down to four SLF3 systems to migrate.
Scientific Linux 4 will be deprecated on 12-Feb-2012!
–Migration off SL(F)4 → current Scientific Linux releases.
See Troy's presentation later this week.

Apple / Android
The Computing Division is extending its support for Apple desktops and laptops to include support for Apple iOS based devices (iPhone/iPod/iPad).
We will also be deploying support for Android devices.

Progress on FermiCloud
FermiCloud has largely completed the Phase 1 project goals:
–The selected cloud computing management framework was OpenNebula.
–The storage performance investigation work is still ongoing.
We are currently working on Phase 2 of the FermiCloud project:
–We have developed and deployed an X509 AuthZ/AuthN framework for OpenNebula.
–We have deployed a small number of "low impact" production services.
–We are working on extending FermiCloud to host other production services.
–Other items in Phase 2 include accounting, Infiniband/HPC, idle VM detection and backfill.
We are beginning the planning for Phase 3 of the FermiCloud project:
–This will likely include work to support FermiCloud-HA (High Availability).
–We will incorporate the "lessons learned" from the FermiGrid-HA2 deployment.
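
For context only (not taken from the presentation): OpenNebula front ends are driven through an XML-RPC API, so routine interaction with a deployment like FermiCloud might look roughly like the sketch below. The endpoint and credentials are made-up placeholders, and the plain "user:password" session string is exactly the kind of thing the X509 AuthZ/AuthN work mentioned above is meant to replace.

```python
# Minimal sketch of querying an OpenNebula front end over its XML-RPC API.
# The endpoint and credentials are hypothetical placeholders; a deployment
# using X509 authentication would not pass a plain "user:password" session.
import xmlrpc.client

ENDPOINT = "http://opennebula.example.org:2633/RPC2"   # hypothetical front end
SESSION = "oneadmin:password"                          # placeholder credentials

server = xmlrpc.client.ServerProxy(ENDPOINT)

# one.vmpool.info(session, filter_flag, range_start, range_end, vm_state)
# filter_flag = -2 -> all VMs visible to this user; -1 -> no range/state restriction
reply = server.one.vmpool.info(SESSION, -2, -1, -1, -1)
success, payload = reply[0], reply[1]
if success:
    print("VM pool XML (truncated):", payload[:200])
else:
    print("OpenNebula error:", payload)
```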

ITIL
Fermilab has recently made the decision to replace Remedy with Service-Now.
–The target go-live date is 1-Aug-2011.
Anurag Chadha joined the Office of Enterprise Architecture and Configuration Management in April 2011 (a replacement for Don Petravick, who left in June 2010).

Structure
In January 2011, the Fermilab Directorate announced that the Computing Division will soon become the Computing Sector.
The sector will be led by the Laboratory CIO (Vicky White).
Within the sector, there will be two divisions instead of four quadrants.
Current division management will begin interviewing internal candidates to head these two divisions in the near future.

Summary
Fermilab has a strong, effective scientific computing program that serves a very broad range of scientific programs.
–We plan to continue to develop and use common solutions, working collaboratively with the broad community.
–Lots of new opportunities for the future.
We have developed well-founded strategies for meeting the challenges of the current and future scientific program:
–Tevatron Run II → CMS and the Intensity Frontier
–SDSS → DES → LSST