
Computing Strategy
Victoria White, Associate Lab Director for Computing and CIO
Fermilab Institutional Review, June 6-9, 2011

Outline
- Introduction
- Experiment Computing
- Sharing strategies
- Computing for Theory and Simulation Science
- Conclusion

INTRODUCTION

Scientific Computing strategy
- Provide computing, software tools and expertise to all parts of the Fermilab scientific program, including theory simulations (Lattice QCD and Cosmology) and accelerator modeling.
- Work closely with each scientific program, as collaborators (where a scientist from computing is involved) and as valued customers.
- Create a coherent Scientific Computing program from the many parts and many funding sources, encouraging sharing of facilities, common approaches and re-use of software wherever possible.

Core Computing – a strong base
Scientific Computing relies on Core Computing services and Computing Facility infrastructure:
- Core networking and network services
- Computer rooms, power and cooling
- Email, videoconferencing, web servers
- Document databases, Indico, calendaring
- Service desk
- Monitoring and alerts
- Logistics
- Desktop support (Windows and Mac)
- Printer support
- Computer security
- ... and more
All of the above is provided through overheads.

Scientific Computing Review
- A multi-lab review of all scientific computing (except DAQ and online) was held on February 8, 2011.
- In-depth review of how scientific computing works and how it is funded at each lab.
- Review materials: Id=3973 (password available on request).
- This review will not cover computing in depth.

EXPERIMENT COMPUTING STRATEGIES

CMS Tier 1 at Fermilab
The CMS Tier-1 facility at Fermilab and the experienced team who operate it enable CMS to reprocess data quickly and to distribute the data reliably to the user community around the world.
Fermilab also operates:
- LHC Physics Center (LPC)
- Remote Operations Center
- U.S. CMS Analysis Facility

CMS Offline and Computing
Fermilab is a hub for CMS Offline and Computing:
- Ian Fisk is the CMS Computing Coordinator.
- Liz Sexton-Kennedy is Deputy Offline Coordinator.
- Patricia McBride is Deputy Computing Coordinator.
- Leadership roles in many areas of CMS Offline and Computing: Frameworks, Simulations, Data Quality Monitoring, Workload Management and Data Management, Data Operations, Integration and User Support.
The Fermilab Remote Operations Center allows US physicists to participate in monitoring shifts for CMS.

Computing Strategy for CMS
- Continue to evolve the CMS Tier 1 center at Fermilab to meet US obligations to CMS and provide the highest level of availability and functionality for the $.
- Continue to ensure that the LHC Physics Center and the US CMS physics community are well supported by the Tier 3 (LPC CAF) at Fermilab.
- Plan for evolution of the computing, software and data access models as the experiment matures; this requires R&D and development:
  - Ever higher bandwidth networks
  - Data on demand
  - Frameworks for multi-core

Run II Computing Strategy
- Production processing and Monte Carlo production capability after the end of data taking:
  - Reprocessing efforts in 2011/2012 aimed at the Higgs
  - Monte Carlo production at the current rate through mid
- Analysis computing capability for at least 5 years, but diminishing after the end of 2012:
  - Push for 2012 conferences for many results; no large drop in computing requirements through this period
- Continued support for up to 5 years for:
  - Code management and science software infrastructure
  - Data handling for production (+MC) and Analysis Operations
- Curation of the data: > 10 years, with possibly some support for continuing analyses

Tevatron – looking ahead
CDF and D0 expect the publication rate to remain stable for several years.
Analysis activity:
- Expect > 100 (students + postdocs) actively doing analysis in each experiment through
- Expect this number to be much smaller in 2015, though data analysis will still be on-going.
[Charts: D0 and CDF publications each year]

“Data Preservation” for Tevatron data
- Data will be stored and migrated to new tape technologies for ~10 years.
  - Eventually 16 PB of data will seem modest.
- If we want to maintain the ability to reprocess and do analysis on the data, there is a lot of work to be done to keep the entire environment viable: code, access to databases, libraries, I/O routines, operating systems, documentation, and more (a small illustration follows below).
- If there is a goal to provide “open data” that scientists outside of CDF and DZero could use, there is even more work to do.
- The 4th Data Preservation Workshop was held at Fermilab in May.
- Not just a Tevatron issue.
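As a small illustration of what keeping the environment viable involves, the sketch below records basic environment facts alongside a dataset so an analysis could later be reconstructed. The field names, dataset label and output are hypothetical; this is not an actual CDF/D0 preservation tool.

```python
"""Minimal sketch: record the software environment alongside an archived dataset.

All field names and the dataset label are hypothetical illustrations, not any
actual CDF/D0 or Fermilab data-preservation tooling.
"""
import json
import platform
import subprocess
import time

def environment_snapshot(dataset_id):
    """Collect basic environment facts needed to re-run an analysis later."""
    return {
        "dataset_id": dataset_id,                # hypothetical dataset label
        "captured_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "os": platform.platform(),               # kernel / distribution string
        "python": platform.python_version(),
        # Record installed packages so the exact library versions can be rebuilt.
        "packages": subprocess.run(
            ["pip", "list", "--format=freeze"],
            capture_output=True, text=True, check=False
        ).stdout.splitlines(),
    }

if __name__ == "__main__":
    print(json.dumps(environment_snapshot("example-run2-dataset"), indent=2))
```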

Intensity Frontier program needs
Many experiments in many different phases of development/operations: MINOS, MiniBooNE, SciBooNE, MINERvA, NOvA, MicroBooNE, ArgoNeuT, Mu2e, g-2, LBNE, and Project X era experiments.
[Chart: CPU (cores) and disk (TB) requirements per experiment; 1 PB scale]

Intensity Frontier strategies
- NuComp forum to encourage planning and common approaches where possible.
- A shared analysis facility where we can quickly and flexibly allocate computing to experiments.
- Continue to work to "grid enable" the simulation and processing software.
  - Good success with MINOS, MINERvA and Mu2e.
- All experiments use shared storage services, for data and local disk, so we can allocate resources when needed.
- Hired two associate scientists in the past year and reassigned another scientist.

Cosmic Frontier experiments
- Continue to curate data for SDSS.
- Support data and processing for Auger, CDMS and COUPP.
- Will maintain an archive copy of the DES data and provide modest analysis facilities for Fermilab DES scientists.
  - Data management is an NCSA (NSF) responsibility.
  - We have the capability to provide computing should this become necessary.
- DES uses Open Science Grid resources opportunistically.
- Future initiatives are still in the planning stages.

SHARING STRATEGIES

Sharing via the Grid – FermiGrid
[Diagram: user login & job submission flows through the FermiGrid site gateway, backed by FermiGrid authentication/authorization, monitoring/accounting and infrastructure services, to the cluster pools (Grid Farm 3284 slots, CMS 7485 slots, CDF 5600 slots, D0 slots), with external connections to the Open Science Grid, TeraGrid, WLCG and NDGF.]
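As an illustration of the job-submission path through such a gateway, here is a minimal HTCondor-style submission sketch driven from Python. The executable name, file names and job count are assumptions for illustration; real FermiGrid submissions go through the site gateway and experiment-specific tooling rather than this exact script.

```python
"""Minimal sketch of a high-throughput batch submission in HTCondor style.

The executable, file names, and job count are illustrative only.
"""
import subprocess
import textwrap

SUBMIT_FILE = "simulate.sub"   # hypothetical submit-description file name

# A basic HTCondor submit description: run the same executable 100 times,
# one queued job per process, with separate stdout/stderr per job.
submit_description = textwrap.dedent("""\
    executable = run_simulation.sh
    arguments  = $(Process)
    output     = sim_$(Process).out
    error      = sim_$(Process).err
    log        = simulation.log
    queue 100
""")

with open(SUBMIT_FILE, "w") as f:
    f.write(submit_description)

# Hand the description to the batch system with the standard condor_submit CLI.
subprocess.run(["condor_submit", SUBMIT_FILE], check=True)
```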

Open Science Grid (OSG)
The Open Science Grid (OSG) advances science through open distributed computing. The OSG is a multi-disciplinary partnership to federate local, regional, community and national cyberinfrastructures to meet the needs of research and academic communities at all scales.
- Total of 95 sites; ½ million jobs a day; 1 million CPU-hours/day; 1 million files transferred/day (a quick consistency check follows below).
- It is cost effective, it promotes collaboration, and it is working!
- The US contribution and partnership with the LHC Computing Grid is provided through OSG for CMS and ATLAS.
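A quick back-of-the-envelope check using only the figures quoted above:

```python
# Back-of-the-envelope check on the quoted OSG throughput figures.
sites = 95
jobs_per_day = 500_000
cpu_hours_per_day = 1_000_000
files_per_day = 1_000_000

print(cpu_hours_per_day / jobs_per_day)   # ~2 CPU-hours per job on average
print(jobs_per_day / sites)               # ~5,300 jobs per site per day
print(files_per_day / jobs_per_day)       # ~2 files transferred per job
```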

FNAL CPU – core count for science

Data Storage at Fermilab - Tape

COMPUTING FOR THEORY AND SIMULATION SCIENCE

High Performance (parallel) Computing is needed for:
- Lattice Gauge Theory calculations (LQCD)
- Accelerator modeling tools and simulations
- Computational Cosmology
[Figure: dark energy and matter, cosmic gas, galaxies; simulations connect fundamentals with observables]

Computing for the Scientific Program
Applications | Type of Computing | Computing Facilities
Experiment (detector simulation, event simulation, event processing, data analysis, DAQ software triggers) | High throughput and small-scale parallel (<= number of cores on a CPU) | Fermilab campus grid (FermiGrid), Open Science Grid (OSG), World Wide LHC Computing Grid (WLCG), dedicated clusters, FermiCloud
Computational science (accelerator modeling, Lattice Quantum ChromoDynamics (LQCD), cosmological simulation) | Large-scale parallel (High Performance Computing) | Local "mid-range" HPC clusters; leadership-class machines: NERSC, ANL, ORNL, NCSA, etc.
Data acquisition and event triggers | Custom computing | Custom, programmable logic, DSPs, embedded processors
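To make the first row's distinction concrete, here is a minimal sketch of the "high throughput and small-scale parallel" pattern, where parallelism is capped at the cores of one CPU. The process_event function is a hypothetical stand-in for real event processing; large-scale parallel HPC codes (LQCD, cosmology) would instead use MPI across many nodes.

```python
"""Sketch of "small-scale parallel" work bounded by the cores on one node."""
from multiprocessing import Pool, cpu_count

def process_event(event_id):
    # Placeholder for per-event work (simulation, reconstruction, analysis).
    return event_id, sum(i * i for i in range(10_000))

if __name__ == "__main__":
    events = range(1_000)
    # Parallelism capped at the number of cores on this CPU, matching the
    # "<= number of cores on a CPU" characterization in the table above.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(process_event, events)
    print(f"processed {len(results)} events on {cpu_count()} cores")
```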

Strategies for Simulation Science Computing
- Lattice QCD is the poster child:
  - Coherent, inclusive US QCD collaboration (USQCD), led by Paul Mackenzie, Fermilab; this allocates HPC resources.
  - LQCD Computing Project (HEP and NP funding); Bill Boroski, Fermilab, is the Project Manager.
  - SciDAC II project to develop the software infrastructure.
- Accelerator modeling:
  - Multi-institutional tools project ComPASS; Panagiotis Spentzouris, Fermilab, is the PI.
  - Also accelerator-project-specific modeling efforts.
- Computational Cosmology:
  - Computational Cosmology Collaboration (C3) for mid-range computing for astrophysics and cosmology.
  - Taskforce (Fermilab, ANL, U of Chicago) to develop strategy.

Software – collaborative efforts
- ComPASS – Accelerator Modeling Tools project
- Lattice QCD project and USQCD Collaboration
- Open Science Grid – many aspects and some sub-projects such as Grid security and workload management
- Grid and Data Management tools
- Advanced Wide Area Network projects
- dCache collaboration
- Enstore collaboration
- Scientific Linux (with CERN)
- GEANT4 core development/validation (with the GEANT4 collaboration)
- ROOT development & support (with CERN)
- Cosmological Computing
- Data Preservation initiative (global HEP)

Experiment/Project Lifecycle and funding
- Early period (R&D, simulations, LOI, proposals): shared services.
- Mature phase (construction, operations, analysis): shared services plus experiment- or project-specific services.
- Final data-taking and beyond (final analysis, data preservation and access): shared services plus project-specific services.

Budget/resource allocation
- There is always upward pressure for computing:
  - More disk and more CPU lead to faster results and greater flexibility.
  - More help with software & operations is always requested.
- Within a fixed budget each experiment can usually optimize between tape drives, tapes, disk, CPU and servers, assuming basic shared services are provided.
- With so many experiments in so many different stages, we intend to convene a "Scientific Computing Portfolio Management Team" to examine the needs and computing models of the different Fermilab-based experiments and help in allocating the finite dollars to optimize scientific output.

Conclusion
- We have a coherent and evolving scientific computing program that emphasizes sharing of resources, re-use of code and tools, and requirements planning.
- Embedded scientists with deep involvement are also a key strategy for success.
- Fermilab takes on leadership roles in computing in many areas.
- We support projects and experiments at all stages of their lifecycle, but if we want to truly preserve access to Tevatron data long term, much more work is needed.

EXTRA SLIDES

CMS Tier 1 at Fermilab
The CMS Tier-1 facility at Fermilab and the experienced team who operate it enable CMS to reprocess data quickly and to distribute the data reliably to the user community around the world.

CMS Computing Model (Ken Bloom, UNL; DOE/NSF Review of LHC Operations, March 8, 2011)
- T1 centers play a prominent role as they share the custodial storage of raw & reconstructed data.
- Jobs follow data access location.
- The US T1 at Fermilab is the largest in CMS and the only T1 in the Americas.
- T3 centers come into the picture.
- Network capabilities of the whole computing system are very important.
[Diagram: 1 Tier 0 at CERN (data recording, primary reconstruction, partial reprocessing, first archive copy of the raw data, cold); 7 Tier 1 centers (USA, UK, Italy, France, Germany, Spain, Taiwan) share raw & reconstructed data (FEVT) for custodial storage, each site has full AOD data, perform data reprocessing, analysis tasks (skimming), data serving to Tier-2 centers for analysis, and archive simulation from Tier-2; Tier 2 centers do Monte Carlo production and serve as primary analysis facilities.]

CMS Physics Analysis at Fermilab
- The U.S. CMS analysis facility at Fermilab has the highest number of CMS analysis jobs: ~200k/week.
- Fermilab is an excellent place to work on CMS physics analysis.
[Chart: analysis jobs at CMS T2/T3 sites in a week]

Any Data, Anywhere, Any time: Early Demonstrator
ROOT I/O and Xrootd demonstrator to support the CMS Tier-3s and interactive use; Nebraska, Fermilab and UCSD working together.
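As a hedged illustration of the access pattern the demonstrator targets, the PyROOT sketch below opens a file by its Xrootd URL instead of copying it locally. The redirector hostname and file path are placeholders, not the demonstrator's actual endpoints.

```python
"""Minimal sketch of ROOT I/O over Xrootd ("any data, anywhere, any time").

Requires a ROOT installation with PyROOT; the URL below is a placeholder.
"""
import ROOT  # PyROOT, distributed with ROOT

# Opening a root:// URL makes ROOT stream the file over Xrootd on demand,
# so the job does not need a local copy of the data.
url = "root://xrootd-redirector.example.org//store/user/example/events.root"
f = ROOT.TFile.Open(url)

if f and not f.IsZombie():
    f.ls()        # list the contents fetched from the remote file
    f.Close()
else:
    print("could not open remote file:", url)
```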

Data on tape - total
[Chart: total data on tape by experiment, including "Other Experiments"]

Data lives a long time (and is migrated to new media many times)
[Chart: experiment data volumes and media migrations over time; legend: L = legacy tape, $ = contributes funding]
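Repeated media migrations only preserve the science if each new copy can be verified bit-for-bit against the old one. The sketch below shows that kind of checksum comparison in outline; the file paths are hypothetical, and this is not Enstore's actual migration machinery.

```python
"""Sketch of checksum verification during a tape/media migration.

Paths are hypothetical; this illustrates only the bit-level check such a
migration relies on, not any actual Fermilab tooling.
"""
import hashlib

def file_checksum(path, chunk_size=8 * 1024 * 1024):
    """Stream the file in chunks so multi-GB files need not fit in memory."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = "/pnfs/old_media/run2/file0001.raw"   # hypothetical source copy
migrated = "/pnfs/new_media/run2/file0001.raw"   # hypothetical migrated copy

if file_checksum(original) == file_checksum(migrated):
    print("checksums match: migrated copy is identical")
else:
    print("MISMATCH: do not retire the original tape")
```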

Disk Storage Services
- Large cache storage for D0, CDF, CMS (1, 1, 7 PB)
- BlueArc storage area network (1.3 PB)
- Lustre (distributed parallel I/O, used on Lattice QCD and Cosmology clusters and by CMS in test)
- AFS – legacy system

FermiCloud: Virtualization likely a key component for long-term analysis
- The FermiCloud project is a private cloud facility built to provide a testbed and a production facility for cloud services.
- A private cloud: on-site access only for registered Fermilab users.
  - Can be evolved into a hybrid cloud with connections to Magellan, Amazon or other cloud providers in the future.
- Unique use case for cloud: on the public production network, integrated with the rest of the infrastructure.

Data Preservation and long-term analysis: general considerations
- Physics case
- Models
- Governance
- Technologies

Fermilab Computing Facilities
Lattice Computing Center (LCC):
- High Performance Computing (HPC)
- Accelerator simulation and cosmology nodes
- No UPS
Feynman Computing Center (FCC):
- High availability services, e.g. core network, email, etc.
- Tape robotic storage (slot libraries)
- UPS & standby power generation
- ARRA project: upgrade cooling and add HA computing room - completed
Grid Computing Center (GCC):
- High density computational computing
- CMS, Run II, Grid Farm batch worker nodes
- Lattice HPC nodes
- Tape robotic storage (slot libraries)
- UPS & taps for portable generators
- EPA Energy Star award 2010

Facilities: more than just space, power and cooling – continuous planning
ARRA funded a new high-availability computer room in the Feynman Computing Center.

Reliable high speed networking is key

Chicago Metropolitan Area Network

Lattice Gauge Theory: significant HPC computing at Fermilab
- Fermilab is a leading participant in the US lattice gauge theory computational program funded by the Dept of Energy (OHEP, ONP, and OASCR).
- The program is overseen by the USQCD Collaboration (almost all lattice gauge theorists in the US).
  - USQCD's PI is Paul Mackenzie of Fermilab.
- Its purpose is to develop software and hardware infrastructure in the US for lattice gauge theory calculations.
  - Software grant through the DOE SciDAC program of ~$2.3M/year.
  - Hardware and operations funded by the LQCD Computing Project at ~$3.6M/year.

Computational Cosmology
- Fermilab scientist N. Gnedin and collaborators are experts in the area of computational cosmology.
- Fermilab hosts a small cluster (1224 cores) that is in use by the astrophysics community at Fermilab and at the KICP, University of Chicago.
- Fermilab is part of the Computational Cosmology Collaboration (C3) for mid-range computing for astrophysics and cosmology.
- Cosmological surveys require many medium-sized simulations:
  - Medium-scale clusters are used for code development.
  - Medium-scale machines are used for analysis of simulations performed at leadership-class facilities.
- A Computational Cosmology Task Force with Argonne, Fermilab and the University of Chicago was started recently.

DES Analysis Computing at Fermilab
Fermilab plans to host a copy of the DES Science Archive. This consists of two pieces:
- A copy of the Science database
- A copy of the relevant image data on disk and tape
This copy serves a number of different roles:
- Acts as a backup for the primary NCSA archive, enabling collaboration access to the data when the primary is unavailable
- Handles queries by the collaboration, thus supplementing the resources at NCSA (a toy query sketch follows below)
- Enables Fermilab scientists to effectively exploit the DES data for science analysis
To support the science analysis of the Fermilab scientists, DES will need a modest amount of computing (of order 24 nodes). This is similar to what was supported for the SDSS project.
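A toy sketch of that query role against a local relational database: the table name, columns and choice of SQLite are assumptions for illustration only and do not reflect the actual DES Science database schema or the technology hosted at NCSA and Fermilab.

```python
"""Toy sketch of querying a local copy of a science database.

Schema, table, and use of SQLite are illustrative assumptions only.
"""
import sqlite3

conn = sqlite3.connect("des_science_copy.db")   # hypothetical local copy
cur = conn.cursor()

# Hypothetical object catalog: sky position and one magnitude per object.
cur.execute("""
    CREATE TABLE IF NOT EXISTS objects (
        object_id INTEGER PRIMARY KEY,
        ra REAL, dec REAL, mag_i REAL
    )
""")

# The kind of selection a collaborator might run against the local copy
# instead of loading the primary archive.
cur.execute(
    "SELECT object_id, ra, dec FROM objects WHERE mag_i < ? AND dec BETWEEN ? AND ?",
    (22.0, -60.0, -40.0),
)
print(cur.fetchall())
conn.close()
```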

Accelerator modeling tools at Fermilab
Fermilab is leading the ComPASS Project:
- Community Petascale Project for Accelerator Science and Simulation, a multi-institutional collaboration of computational accelerator physicists.
- Panagiotis Spentzouris from Fermilab is PI for ComPASS.
- ComPASS goals are to develop High Performance Computing (HPC) accelerator modeling tools:
  - Multi-physics, multi-scale for beam dynamics ("virtual accelerator")
  - Thermal, mechanical, and electromagnetic ("virtual prototyping")
- Development and support of CHEF, a general framework for single-particle dynamics developed at Fermilab.