
Hans Wenzel, CMS week, CERN, September 2002
"Facility for muon analysis at FNAL"
Hans Wenzel, Fermilab
I. What is available at FNAL right now
II. What will be available after the upgrade
III. How to access files in mass storage (dCache, Enstore)
IV. How to use the batch system
V. Near term plan

Introduction
Computing at the CMS Tier 1 center at FNAL provides:
I. Monte Carlo production (Trigger and Physics TDR) in a distributed environment
II. Hosting and serving the data, mass storage
III. A computing and development platform for physicists (resources, code, disk, help, tutorials, ...)
IV. Evaluation of new hardware and software solutions
V. Active development

Our Web sites
I. Monitoring page, links to tools and scripts
II. Department web site:
III. The batch system:
IV. The dCache system:

Obtaining a CMS account at FNAL
I. Click on the "CMS Account" button, which will guide you through the process:
II. Step 1: Get a valid Fermilab ID
III. Step 2: Get an fnalu account and a CMS account
IV. Step 3: Get a Kerberos principal and a CryptoCard
V. Step 4: Send me mail to create an account on the CMS cluster, and read "Information for first-time CMS account users"

Help
 Mailing lists:
 Mailing list archives:
 Web pages:

What's available for the user at FNAL
I. We are currently setting up and evaluating the best solution; the current situation is far from ideal. Some annoyance is also caused by the software distribution being based on an outdated Linux version.
II. Linux servers: wonder, burrito, whopper; NFS cross-mounted /data disks (DAS); FBSNG batch system (8 CPUs) attached to whopper. Contact me to get your Kerberos principal matched to the special batch principal.
III. cmsun1: 8-way Sun SMP machine

[Diagram: current facility layout. Machines shown: BIGMAC (R&D, 1 TB), FRY, CMSUN1, WHOPPER, WONDER, BURITO, GALLO, VELVEETA, RAMEN, CHALUPA, CHOCOLAT, SNICKERS, POPCRN, GYOZA, grouped into production cluster, US-CMS testbed, user analysis and batch; Enstore with 15 drives; local disks of 250 GB, 1 TB, 750 GB and 250 GB; network uplinks to ESNET (OC3) and MREN (OC3) through a Cisco switch.]

 IBM servers
 CMSUN1
 Dell servers
 Chocolat
 Snickers
 Chimichanga
 Chalupa
 Winchester RAID

Hans Wenzel CMS week, CERN September 2002 " Popcorns (MC production) " frys(user) " gyoza(test)

Integrating the Desktop
I. Besides central computing, make use of the powerful PCs running Linux (plenty of disk, CPU, freedom, ...)
II. We created a CMS desktop workgroup containing everything you need to run CMS software on your PC (AFS, ...). The CMS software is kept up to date in AFS. You can create your own Objectivity database.

Upgrades this year
 40 more farm nodes --> 3x computing power
 >20 nodes for user computing
 But we will move away from dedicated farms and instead dynamically assign nodes as necessary
 2-5 nodes for web servers, disk cache master nodes, etc.
 8-12 disk servers to become part of dCache
 New fast disk system (Zambeel)
 Faster, higher-capacity tape drives (STK 9940B)
 Better connectivity between CMS computing and, e.g., mass storage

[Diagram: facility layout after the upgrade. Production cluster of 80 dual nodes (RAMEN, POPCRN, GYOZA), US-CMS testbed, user analysis with dCache (>14 TB), FRY nodes, BIGMAC R&D (1 TB), Enstore with 17 drives, 250 GB local disk, Cisco 6509 switch with uplinks to ESNET (OC12) and MREN (OC3).]


" Enstore " STKEN " Silo " Snickers " RAID 1TB " IDE " AMD server " AMD/Enstore interface " User access to FNAL (Jets/Met, " Muons coming) Objectivity data: " Network " Users in: " Wisconsin " CERN " FNAL " Texas " Objects " > 10 TB " Now also working " with disk cache

User Federations and FNAL
 oduction.html

Access to mass storage at FNAL (dCache and Enstore)
 We have two cooperating systems, Enstore and dCache.
 Enstore: network-attached tape, optimized for sequential access
 dCache: disk farm allowing random access
 Both systems use the same name space and can be accessed via the pnfs pseudo filesystem, e.g. ls /pnfs/cms lists the files.
 To access Enstore directly, do: setup -q stken encp; then encp /pnfs/cms/production/Projects/.....
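For concreteness, a minimal sketch of pulling one file straight from Enstore with encp, using only the commands above; the file name under /pnfs/cms is a placeholder, not a real path:

setup -q stken encp                        # UPS: select the encp client for the STKEN tape library
ls /pnfs/cms                               # browse the pnfs name space; tape-backed files appear as ordinary entries
encp /pnfs/cms/some/example/file.root .    # placeholder path: copy the file from tape to the current directory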

Access to mass storage at FNAL (dCache and Enstore), continued
I. To access dCache, do: setup dcap; then dccp /pnfs/cms/production/Projects/.....
II. The preferred way to access files in mass storage is dCache.
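The same transfer through dCache, again with a placeholder path, would look like this sketch; dccp goes through the disk pools, so repeated reads avoid the tape system:

setup dcap                                 # UPS: dCache access protocol client (provides dccp)
dccp /pnfs/cms/some/example/file.root .    # placeholder path: same name space, but served via the disk cache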

What do we expect from dCache?
 Making a multi-terabyte server farm look like one coherent and homogeneous storage system.
 Rate adaptation between the application and the tertiary storage resources.
 Optimized usage of expensive tape robot systems and drives through coordinated read and write requests. Use the dccp command instead of encp!
 No explicit staging is necessary to access the data (but prestaging is possible and in some cases desirable).
 The data access method is the same regardless of where the data resides.

What do we expect from dCache? (continued)
 High-performance, fault-tolerant transport protocol between applications and data servers.
 Fault tolerant: no specialized servers whose crash can cause severe downtime.
 Can be accessed directly from your application (e.g. the ROOT TDCacheFile class).
 An AMS-dCache server has been developed by replacing the POSIX I/O with the dCache library, but we found the AMS server to be highly unstable; it is not clear whether we will continue.

[Diagram: random access, sequential access and file transfer paths from production and personal analysis into the disk cache (CMS-specific) and Enstore, the hierarchical storage manager (CD/ISD).]
The current system consists of 5 x 1.2 TB (Linux) read pools and one Sun server with a 1/4 TB RAID array as the write pool. We have two additional servers for R&D and funding for more (>5).

First results with the dCache system
No optimization yet; the default configuration will get an upgraded kernel, the XFS filesystem and maybe double Gbit connectivity. The tests include all overhead. The average file size is ~1 GByte and the reads are equally distributed over all read pools.
[Table: number of concurrent reads (40 farm nodes) versus aggregate throughput in MByte/sec, sustained over hours.]

How to access data from ROOT
 Setup, e.g. on Linux:
setenv ROOTSYS /afs/fnal.gov/files/code/cms/ROOT/ /i386_linux22/gcc
setenv LD_LIBRARY_PATH ${ROOTSYS}/lib:$LD_LIBRARY_PATH
setenv PATH ${ROOTSYS}/bin:$PATH
 Example program:
#include "TROOT.h"
#include "TRint.h"
#include "TFile.h"
#include "TDCacheFile.h"

int main(int argc, char **argv)
{
  static TROOT exclusivefit("main", "B lifetime fitting");
  static TRint app("app", &argc, argv, NULL, 0);
  // Writing into dCache:
  // TDCacheFile hfile("/pnfs/cms/wenzel/hsimple_dcache.root", "CREATE", "Demo ROOT file with histograms", 0);
  // Reading through the dcap door:
  TFile *hfile = new TDCacheFile("dcap://stkendca3a.fnal.gov:24125/pnfs/fnal.gov/usr/cms/wenzel/hsimple_dcache.root",
                                 "READ", "Demo ROOT file with histograms", 0);
  hfile->ls();
  hfile->Print();
  return 0;
}
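As a usage note, the example above could be built along the following lines, assuming the ROOT installation selected by ROOTSYS was compiled with dCache support; the source file name and the -lDCache library name are assumptions, not taken from the slides:

g++ -o read_dcache read_dcache.cxx `root-config --cflags --libs` -lDCache   # hypothetical file and library names
./read_dcache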

Using the batch system
 The batch system we use is FBSNG, which has been developed especially for farms.
 It runs on whopper and fry 5-7.
 But it is a farm batch system, so it takes getting used to, and Kerberos principals are necessary if you want to do anything useful. So contact me before trying to use the system.

FBSNG (cont.)
 Setup FBSNG
 Create a job description file (test.jdf):
SECTION main
EXEC=/afs/fnal.gov/files/home/room2/cmsprod/wenzel/test.csh
QUEUE=CMS
STDOUT=/data/fbs-logs/
STDERR=/data/fbs-logs/

The job script (test.csh) referenced above:
#!/usr/local/bin/tcsh -f
echo $FBS_SCRATCH
cd $FBS_SCRATCH
source /usr/local/etc/setups.csh
setenv PATH /bin:/usr/bin:/usr/local/bin:/usr/krb5/bin:/usr/afsws/bin
/bin/date
/bin/cat > stupid.file << "EOF"
!
! this is just a stupid example file
!
"EOF"
/bin/ls
/bin/pwd
cp ./stupid.file /data/fbs-logs

FBSNG (cont.)
 fbs submit test.jdf
 fbs status
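Putting the last three slides together, a minimal session might look like the sketch below; it uses only commands and paths already shown, and assumes the job description file is saved as test.jdf:

fbs submit test.jdf        # submit the job described in test.jdf
fbs status                 # check the state of the job in the CMS queue
ls /data/fbs-logs          # stdout, stderr and the copied file end up here, per the JDF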

Near term plan (user)
I. Hardware is coming (late, tomorrow?), which needs to be installed; the farm nodes will then go through a one-month acceptance period.
II. Make the user batch system easy to use: a GUI to create jobs from templates and to submit them. This is basically done.
III. Make the farm usable for interactive use, similar to the lxplus cluster at CERN. We are currently investigating two solutions for load-balanced login: LVS and FBSNG. We also need a solution for home areas and shared data areas; here we will investigate systems from Zambeel and Panasas.