CMS Computing at FNAL
Hans Wenzel, Fermilab
CDF CAF meeting, October 18th-19th, 2001
 Introduction
 CMS: What's on the floor, and how we got there.
 Near term plans.

Introduction
 CMS is an experiment under construction for the LHC; first data are expected in 2006.
 Until then, computing at FNAL provides:
 Monte Carlo production (Trigger + Physics TDR) in a distributed environment.
 Hosting and serving the data.
 A computing and development platform for physicists (code, disk, ...).

Our Web sites
 Monitoring page: links to tools and scripts.
 Department web site.
The CMS department
 A new department; most people were hired over the last 18 months.
 The same is true for the hardware, so I will give a historical overview.

[Timeline diagram: hardware acquired in Spring, Summer, and Fall 2000. SERVERS: GALLO, WONDER, VELVEETA, CMSUN1, plus the desktop PCs. WORKERS: popcrn01-popcrn40.]

Hardware and software used in production
 Two Dell servers with four Pentium CPUs: GALLO is the I/O node of the farm and VELVEETA is the Objectivity database host. Each has > 200 GB of RAID attached.
 40 dual-CPU nodes (popcrn01-popcrn40). For digis production, 10 are used as pile-up servers, leaving the other 30 (60 CPUs) as worker nodes. Currently we have a sample of minimum bias events.
 STK robot (Enstore) --> tapes are made by the devil (pure evil).
 FBSNG batch system --> can only recommend.

Think about production! The classic way is easy to do: a Linux farm with a batch system. It is very robust because the processes don't care about each other; if a node or a process dies, you simply run the job again.
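As an aside, this "classic" model can be illustrated with a minimal Python sketch: every job is independent, and a job that dies is simply run again. The script name, job list, and retry count below are hypothetical; real production submits to the FBSNG batch system rather than running processes directly.

```python
import subprocess

# Hypothetical list of independent production jobs; in real production these
# would be FBSNG batch submissions, not direct subprocess calls.
JOBS = [["./run_cmsim.sh", f"--run={i}"] for i in range(1, 11)]

MAX_RETRIES = 3  # assumed retry policy


def run_job(cmd, retries=MAX_RETRIES):
    """Run one independent job; if the node or the process dies, run it again."""
    for attempt in range(1, retries + 1):
        if subprocess.run(cmd).returncode == 0:
            return True
        print(f"job {cmd} failed (attempt {attempt}), resubmitting")
    return False


if __name__ == "__main__":
    for cmd in JOBS:
        run_job(cmd)
```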

CMS Major Challenge: Pile-up
 Events are big (a raw event is 2 MB).
 Detector digitization has to take multiple crossings into account:
 At L = 10^34 cm^-2 s^-1: 17 minimum bias events/crossing.
 Calorimetry needs -5 to +3 crossings.
 Muon DT ought to have more crossings.
 Tracker loopers can persist for many crossings.
 Typically need information from ~200 minimum bias events for each signal event.
 Studies at different luminosities imply different pileup.
 Therefore it is not sensible to include pileup in the simulation; it belongs in the digitization (the front end of reconstruction).
 Track finding in a very complex environment:
 High magnetic field and ~1 radiation length of tracker material:
 Lots of bremsstrahlung for the electrons.
 TK-ECAL matching is non-trivial.

Digitisation and Pileup
 High luminosity -> 17 minimum bias events in one bunch crossing.
 Overlay crossings -5 to +3.
 ~200 min. bias events for 1 signal event.
 "Recycle" min. bias events.

Pileup
 The solution is to sample a finite number of min. bias events pseudo-randomly (see the sketch below).
 Problems can arise when a single min. bias event would by itself trigger the detector: you would get that trigger many, many times.
 Filter the minimum bias events, but remember to take the removed events into account.
 Must sample from the full range of min. bias events to ensure patterns do not repeat too often.
 If you needed 1 million signal events, you would need 200 million minimum bias events. That is impossible with current CPU, storage, etc.
 200 min. bias events = 70 MB --> a massive data movement problem.
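To make the "recycle" idea concrete, here is a minimal Python sketch of pseudo-random pileup sampling under the numbers above (17 min. bias events per crossing, crossings -5 to +3, drawn from a finite stored pool). The pool size, the seeding, and the event representation are assumptions for illustration, not the actual CMS digitization code.

```python
import random

MB_POOL_SIZE = 100_000       # assumed size of the stored min. bias sample
EVENTS_PER_CROSSING = 17     # ~17 min. bias events per crossing at high luminosity
CROSSINGS = range(-5, 4)     # calorimetry window: crossings -5 to +3

# The pool stands in for the min. bias events served by the pile-up servers;
# here each event is just an integer identifier.
mb_pool = list(range(MB_POOL_SIZE))


def pileup_for_signal_event(rng):
    """Pseudo-randomly pick recycled min. bias events for each crossing."""
    return {crossing: rng.sample(mb_pool, EVENTS_PER_CROSSING)
            for crossing in CROSSINGS}


if __name__ == "__main__":
    rng = random.Random(12345)   # in practice the seed would differ per signal event
    overlay = pileup_for_signal_event(rng)
    total = sum(len(v) for v in overlay.values())
    print(f"overlaid {total} min. bias events over {len(overlay)} crossings")
```

Sampling from the full pool for every crossing is what keeps pileup patterns from repeating too often; with too small a pool the same combinations recur and a single pathological min. bias event can fire the trigger again and again.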

[Diagram, current setup: AMS servers run on popcrn31-popcrn40 (pile-up servers), GALLO (data server), and VELVEETA (DB host). Workers popcrn01-popcrn30 read OOHit and pile-up data and write OODigis. Everyone talks to everyone.]

User access to the data in Objectivity

User access to FNAL Objectivity data
[Diagram: Enstore STKEN silo; cmsun1 with a 250 GB SCSI-160 RAID; AMS server with the AMS/Enstore interface; network users in Wisconsin, CERN, FNAL, and Texas.]
 Objects: > 10 TB.
 No disk cache (yet).
 The host redirection protocol allows us to add more servers --> scaling + load balancing (sketched below).
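The host-redirection idea can be sketched as follows: clients contact a well-known redirector, which replies with the address of the least-loaded data server, so scaling out is just a matter of registering another server. This is a hypothetical illustration in Python, not the actual Objectivity/AMS protocol; the host names are only examples.

```python
from dataclasses import dataclass, field


@dataclass
class DataServer:
    host: str
    active_clients: int = 0


@dataclass
class Redirector:
    servers: list = field(default_factory=list)

    def register(self, host):
        """Adding capacity only requires registering another server."""
        self.servers.append(DataServer(host))

    def redirect(self):
        """Point the next client at the currently least-loaded server."""
        server = min(self.servers, key=lambda s: s.active_clients)
        server.active_clients += 1
        return server.host


if __name__ == "__main__":
    r = Redirector()
    for host in ["cmsun1.fnal.gov", "gallo.fnal.gov"]:   # hypothetical hosts
        r.register(host)
    print([r.redirect() for _ in range(4)])
```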

Upgrades in 2001

Current Networking

Upgraded Networking

Fall 2001 Production User Test

Near term plan
 Testing and benchmarking of RAID arrays.
 Build up a user analysis cluster (see the sketch after this list):
 load balancing
 cheap, expendable
 easy central login
 transparent access to disks
 make plenty of local disks globally available
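One way to read "make plenty of local disks globally available" is a lightweight catalog that maps each dataset to the node and local path holding it, so any node in the analysis cluster can locate data sitting on another node's disk. The sketch below is purely illustrative: the node names, paths, and the catalog itself are assumptions, and the real cluster might instead rely on a network file system or a dedicated disk-farm product.

```python
from typing import NamedTuple, Optional


class Replica(NamedTuple):
    node: str
    path: str


# Hypothetical catalog: dataset name -> replicas on worker-local disks.
CATALOG = {
    "minbias_sample": [Replica("fry1", "/data1/minbias"),
                       Replica("fry5", "/data2/minbias")],
    "oohits_signal":  [Replica("fry3", "/data1/oohits")],
}


def locate(dataset: str, prefer_node: Optional[str] = None) -> Replica:
    """Return a replica, preferring one on the requesting node if present."""
    replicas = CATALOG[dataset]
    for r in replicas:
        if r.node == prefer_node:
            return r          # local disk: no network copy needed
    return replicas[0]        # otherwise read remotely from the first holder


if __name__ == "__main__":
    print(locate("minbias_sample", prefer_node="fry5"))
```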

Hans Wenzel CDF CAF meeting October 18 th -19 th FRY1 FRY5 FRY7 FRY3 FRY2 FRY6 FRY4 FRY8 SWITCH BIGMAC GigaBit 100MB RAID 250 GB SCSI 160 Proposed CMS User Cluster OS: Mosix? Fbsng batch system? Disk farm?