US-CMS User Facilities. Vivian O’Dell, US CMS Physics Meeting, May 18, 2001.




Slide 1: US-CMS User Facilities. Vivian O’Dell, US CMS Physics Meeting, May 18, 2001

Slide 2: User Facility Hardware
Tier 1:
- CMSUN1 (User Federation host): 8 x 400 MHz processors with ~1 TB RAID
- Wonder (user machine): 4 x 500 MHz CPU Linux machine with ¼ TB RAID
- Production farm:
  - Gallo, Velveeta: 4 x 500 MHz CPU Linux servers with ¼ TB RAID each
  - 40 dual-CPU 750 MHz Linux farm nodes
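The inventory above can be tallied into an aggregate capacity figure. A minimal sketch, using only the CPU counts and clock speeds as transcribed on the slide (the machine labels in the dictionary are just for readability):

```python
# Back-of-envelope tally of the Tier 1 CPUs listed on this slide.
# Counts and clock speeds are taken from the slide as transcribed.
machines = {
    "CMSUN1 (federation host)": (8, 400),   # (CPUs, MHz)
    "Wonder (user machine)":    (4, 500),
    "Gallo + Velveeta":         (8, 500),   # two 4-CPU servers
    "farm nodes":               (80, 750),  # 40 dual-CPU nodes
}
total_cpus = sum(n for n, _ in machines.values())
total_mhz = sum(n * mhz for n, mhz in machines.values())
print(f"{total_cpus} CPUs, ~{total_mhz / 1000:.0f} GHz aggregate")
```

By this count the Tier 1 facility at the time of the talk amounts to roughly 100 CPUs, dominated by the 40 dual-CPU farm nodes.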

Slide 3: CMS Cluster (diagram). Servers: GALLO, WONDER, VELVEETA, CMSUN1; workers: popcrn01-popcrn40.

Slide 4: Prototype Tier 2 Status
1. Caltech/UCSD
   Hardware at each site:
   - 20 dual 800 MHz PIIIs, 0.5 GB RAM
   - Dual 1 GHz CPU data server, 2 GB RAM
   - 2 x 0.5 TB fast (Winchester) RAID (70 MB/s sequential)
   - CMS software installed; ooHit and ooDigi tested
   Plans to buy another 20 duals this year at each site.
   See http://pcbunn.cacr.caltech.edu/Tier2/Tier2_Overall_JJB.htm
2. University of Florida
   - 72 computational nodes: dual 1 GHz PIII, 512 MB PC133 SDRAM, 76 GB IBM IDE disks
   - Sun dual Fibre Channel RAID array, 660 GB (raw), connected to a Sun data server
   - Not yet delivered; performance numbers to follow.
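The quoted 70 MB/s sequential rate puts a floor on how long a full scan of one of these RAID volumes takes. A quick estimate, assuming decimal units (1 TB = 10^12 bytes) and sustained sequential throughput, neither of which the slide states explicitly:

```python
# Rough sequential-scan time for one 0.5 TB RAID volume at the quoted
# 70 MB/s. Decimal units and a sustained rate are assumptions.
volume_bytes = 0.5e12          # one 0.5 TB volume
rate = 70e6                    # bytes/s, sequential, as quoted
hours = volume_bytes / rate / 3600
print(f"~{hours:.1f} h to scan 0.5 TB at 70 MB/s")
```

So a full pass over one volume is on the order of two hours, which is the kind of number that matters when digitization jobs stream the whole sample.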

Slide 5: Tier 2 Hardware Status (Caltech)

Slide 6: UF Current ("Physics") Tasks
Full digitization of the JPG fall Monte Carlo sample:
- Fermilab, Caltech, and UCSD are working on this
- Fermilab is hosting the user federation (currently 1.7 TB)
- The full sample (pileup and no-pileup) should be processed in ~1-2 weeks(?); of course things are not optimally smooth
- For up-to-date information see http://computing.fnal.gov/cms/Monitor/cms_production.html
- The full JPG sample will be hosted at Fermilab
User federation support:
- The contents of the federation and how to access it are documented at the URL above; we keep this up to date with production.
JPG ntuple production at Fermilab:
- Yujun Wu and Pal Hidas are generating the JPG ntuple from the FNAL user federation and updating the information linked from the JPG web page.

Slide 7: Near-Term Plans
Continue user support:
- Host user federations. Currently hosting the JPG federation with a combination of disk and tape (the AMS server-Enstore connection is working). Feedback welcome.
- Host the MPG group user federation at FNAL?
- Continue JPG ntuple production, hosting, and archiving. Better technology would be welcome here; Café is starting to address this problem.
- Code distribution support.
Start spring production using more "grid-aware" tools:
- More efficient use of CPU at the prototype T2s.
Continue commissioning the second prototype T2 center.
Develop a strategy for the new Fermilab computer security policy:
- It means "kerberizing" all CMS computing, with impact on users!
- Organize another CMS software tutorial this summer(?), coinciding with kerberizing the CMS machines. Need to come up with a good time; the latter half of August, before CHEP01?

Slide 8: T1 Hardware Strategy
What we are doing:
- Digitization of the JPG fall production with Tier 2 sites
- New MC (spring) production with Tier 2 sites
- Hosting the JPG user federation at FNAL: for fall production this implies ~4 TB of storage (e.g. ~1 TB on disk, 3 TB on tape)
- Hosting the MPG user federation at FNAL? For fall production this implies another ~4 TB (~1 TB disk, 3 TB tape)
- Also hosting the user federation from spring production, AOD or even ntuples for users
- Objectivity testing and R&D in data hosting
What we need:
- Efficient use of CPU at Tier 2 sites, so we don't need additional CPU for production
- Fast, efficient, transparent storage (a mixture of disk and tape) for hosting user federations
- R&D on efficiently matching RAID/disk to Objectivity; this will also serve as input to the RC simulation
- To build and operate R&D systems for analysis clusters
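The storage figures above imply a simple budget if both federations end up hosted at FNAL. A small sketch of that arithmetic; note the MPG hosting is still an open question on the slide, so the doubled total is an assumption, not a commitment:

```python
# Storage budget implied by the slide: each hosted federation needs
# ~4 TB for fall production, split roughly 1 TB disk / 3 TB tape.
# Hosting MPG alongside JPG (still a question mark) doubles this.
per_federation = {"disk_tb": 1, "tape_tb": 3}
federations = ["JPG", "MPG (proposed)"]
disk = per_federation["disk_tb"] * len(federations)
tape = per_federation["tape_tb"] * len(federations)
print(f"{disk} TB disk + {tape} TB tape = {disk + tape} TB total")
```

That ~2 TB disk / ~6 TB tape split is what makes the "fast, efficient, transparent" disk/tape mixture listed under "What we need" the binding requirement.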

Slide 9: Hardware Plans FY01
We have defined the T1 hardware strategy for FY2001, roughly consistent with the project plan and with concurrence from the ASCB:
- Start a user analysis cluster at Tier 1; this will also be an R&D cluster for "data-intensive" computing
- Upgrade networking for the CMS cluster
- Production user-federation hosting for physics groups (more disk/tape storage)
- Test and R&D systems to continue on the path toward a full prototype T1 center; this year we are focusing on data-server R&D systems
We have started writing requisitions and plan to acquire most hardware over the next 2-3 months.

Slide 10: FY01 Hardware Acquisition Overview

Slide 11: Funding Proposal for 2001
Some costs may be overestimated, but we may also need to augment our farm CPU.

Slide 12: Summary
The user facility has a dual mission:
- Supporting users: mostly successful (I think); open to comments, critiques, and requests!
- Hardware/software R&D: we will concentrate on this more over the next year, in tandem with the T2 centers and international CMS.
We have developed a hardware strategy taking these two missions into account.
We now have two prototype Tier 2 centers:
- Caltech/UCSD has come online
- The University of Florida is installing and commissioning hardware

