US-CMS User Facilities
Vivian O’Dell
US CMS Physics Meeting, May 18, 2001

User Facility Hardware

Tier 1:
- CMSUN1 (User Federation host)
  - MHz processors with ~1 TB RAID
- Wonder (user machine)
  - MHz CPU Linux machine with ¼ TB RAID
- Production farm
  - Gallo, Velveeta: MHz CPU Linux servers with ¼ TB RAID each
  - 40 dual-CPU 750 MHz Linux farm nodes

CMS Cluster

Servers: GALLO, WONDER, VELVEETA, CMSUN1
Workers: popcrn01 - popcrn40
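The cluster diagram above only lists node names; as a minimal sketch (not an official tool), the Python below writes the same layout down in a form a monitoring or batch-configuration script could consume. Only the server and worker names come from the slide; the fnal.gov domain suffix is an assumption for illustration.

```python
# Sketch: enumerate the CMS cluster nodes shown on the slide.
# Server and worker names are from the slide; the domain suffix is assumed.

SERVERS = ["GALLO", "WONDER", "VELVEETA", "CMSUN1"]       # data/user servers
WORKERS = [f"popcrn{i:02d}" for i in range(1, 41)]        # popcrn01 .. popcrn40

def host_list(domain="fnal.gov"):
    """Return fully qualified host names (the domain is an assumption)."""
    return [f"{name.lower()}.{domain}" for name in SERVERS + WORKERS]

if __name__ == "__main__":
    for host in host_list():
        print(host)
```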

Prototype Tier 2 Status

1. Caltech/UCSD
   - Hardware at each site:
     - 20 dual 800 MHz PIIIs, 0.5 GB RAM
     - Dual 1 GHz CPU data server, 2 GB RAM
     - 2 x 0.5 TB fast (Winchester) RAID (70 MB/s sequential)
     - CMS software installed; ooHit and ooDigi tested.
   - Plans to buy another 20 duals this year at each site.
2. University of Florida
   - 72 computational nodes:
     - Dual 1 GHz PIII
     - 512 MB PC133 SDRAM
     - 76 GB IBM IDE disks
   - Sun dual Fiber Channel RAID array, 660 GB (raw)
     - Connected to a Sun data server
   - Not yet delivered. Performance numbers to follow.
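As a back-of-the-envelope check on the Florida numbers above, the short Python sketch below totals the CPUs and raw disk quoted on the slide. It assumes one 76 GB IDE disk per node, which the slide does not actually state.

```python
# Rough capacity tally for the Florida prototype Tier 2 (numbers from the slide).
# Assumes one 76 GB IDE disk per node; the slide does not give the per-node count.

NODES = 72
CPUS_PER_NODE = 2          # dual 1 GHz PIII
DISK_PER_NODE_GB = 76      # IBM IDE disk (assumed one per node)
RAID_GB = 660              # Sun Fiber Channel RAID array, raw

total_cpus = NODES * CPUS_PER_NODE
local_disk_tb = NODES * DISK_PER_NODE_GB / 1000.0
raid_tb = RAID_GB / 1000.0

print(f"CPUs:       {total_cpus}")                 # 144
print(f"Local disk: ~{local_disk_tb:.1f} TB raw")  # ~5.5 TB
print(f"RAID array: ~{raid_tb:.2f} TB raw")        # ~0.66 TB
```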

Tier 2 Hardware Status (Caltech)

UF Current (“Physics”) Tasks

Full digitization of the JPG fall Monte Carlo sample
- Fermilab, Caltech, and UCSD are working on this.
- Fermilab is hosting the User Federation (currently 1.7 TB).
- The full sample (pileup/no pileup) should be processed in ~1-2 weeks(?).
  - Of course, things are not optimally smooth.
- For up-to-date information see:
- The full JPG sample will be hosted at Fermilab.

User Federation Support
- The contents of the federation and how to access it are at the above URL. We keep this up to date with production.

JPG NTUPLE Production at Fermilab
- Yujun Wu and Pal Hidas are generating the JPG NTUPLE from the FNAL User Federation. They are updating the information linked to the JPG web page.

Near Term Plans

Continue user support
- Hosting User Federations. Currently hosting the JPG federation with a combination of disk and tape (the AMS server to Enstore connection is working). Would like feedback.
  - Host the MPG group User Federation at FNAL?
- Continue JPG ntuple production, hosting, and archiving.
  - Would welcome better technology here. Café is starting to address this problem.
- Code distribution support.

Start spring production using more “grid aware” tools
- More efficient use of CPU at the prototype T2s.

Continue commissioning the 2nd prototype T2 center

Strategy for dealing with the new Fermilab computer security
- Means “kerberizing” all CMS computing.
  - Impact on users! (A minimal ticket-check sketch follows this slide.)
- Organize another CMS software tutorial this summer(?)
  - Coinciding with kerberizing the CMS machines.
  - Need to come up with a good time. Latter half of August, before CHEP01?
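For users, kerberization mostly means holding a valid Kerberos ticket before touching the CMS machines. The sketch below is an illustration only, not an official FNAL tool: it calls the standard MIT Kerberos `klist -s` check and reminds the user to run `kinit` if no ticket is found. The FNAL.GOV realm in the reminder is an assumption here.

```python
# Minimal sketch (not an official FNAL tool): check for a valid Kerberos ticket
# before trying to reach kerberized CMS machines. Uses standard MIT Kerberos
# utilities; the FNAL.GOV realm named in the hint is an assumption.
import subprocess
import sys

def have_ticket():
    """`klist -s` exits 0 only if a valid, unexpired ticket cache exists."""
    return subprocess.run(["klist", "-s"]).returncode == 0

if __name__ == "__main__":
    if have_ticket():
        print("Valid Kerberos ticket found.")
    else:
        print("No valid ticket: run  kinit <user>@FNAL.GOV  first.")
        sys.exit(1)
```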

T1 Hardware Strategy

What we are doing
- Digitization of the JPG fall production with the Tier 2 sites.
- New MC (spring) production with the Tier 2 sites.
- Hosting the JPG User Federation at FNAL.
  - For fall production, this implies ~4 TB of storage (e.g. ~1 TB on disk, 3 TB on tape).
- Hosting the MPG User Federation at FNAL?
  - For fall production, this implies another ~4 TB of storage (~1 TB disk, 3 TB tape); a rough tally of the combined need is sketched after this list.
- Also hosting the User Federation from spring production, AOD or even NTUPLE, for users.
- Objectivity testing/R&D in data hosting.

What we need
- Efficient use of CPU at the Tier 2 sites, so we don't need additional CPU for production.
- Fast, efficient, transparent storage for hosting the user federation.
  - Mixture of disk and tape.
- R&D on efficient RAID/disk/Objectivity matching.
  - This will also serve as input to the RC simulation.
- Build and operate R&D systems for analysis clusters.
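The storage bullets above quote a per-federation split of roughly 1 TB disk and 3 TB tape for a fall-production federation. The sketch below simply scales that split by the number of federations hosted; whether the MPG federation is actually hosted is still an open question on the slide, so the two-federation case is an assumption for illustration.

```python
# Back-of-the-envelope storage tally for hosting user federations at the Tier 1,
# using the per-federation split quoted on the slide (~1 TB disk, ~3 TB tape).

PER_FEDERATION_TB = {"disk": 1.0, "tape": 3.0}

def storage_needed(n_federations):
    """Total disk/tape (TB) for n federations with the quoted split."""
    return {tier: tb * n_federations for tier, tb in PER_FEDERATION_TB.items()}

print(storage_needed(1))  # JPG only:              {'disk': 1.0, 'tape': 3.0}
print(storage_needed(2))  # JPG + MPG (if hosted): {'disk': 2.0, 'tape': 6.0}
```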

Hardware Plans FY01

We have defined the T1 hardware strategy for FY2001, roughly consistent with the project plan and with concurrence from the ASCB.
- Start a user analysis cluster at Tier 1. This will also be an R&D cluster for “data intensive” computing.
- Upgrade networking for the CMS cluster.
- Production User Federation hosting for physics groups (more disk/tape storage).
- Test and R&D systems to continue the path towards a full prototype T1 center. We are focusing this year on data server R&D systems.

We have started writing requisitions and plan to acquire most hardware over the next 2-3 months.

FY01 Hardware Acquisition Overview

Funding Proposal for 2001

Some costs may be overestimated, but we may also need to augment our farm CPU.

Summary

The user facility has a dual mission:
- Supporting users
  - Mostly successful (I think).
  - Open to comments, critiques, and requests!
- Hardware/software R&D
  - We will be concentrating on this more over the next year.
  - This will be done in tandem with the T2 centers and international CMS.

We have developed a hardware strategy taking these two missions into account.

We now have 2 prototype Tier 2 centers:
- Caltech/UCSD have come online.
- The University of Florida is in the process of installing and commissioning hardware.