The CMS Beijing Site: Status and Application


The CMS Beijing Site: Status and Application
Xiaomei Zhang
CMS IHEP Group Meeting, March 14, 2008

Outline
- Introduction to the new T3 site
  - Resource management
  - CMSSW management
  - User space management
- The status of the T2 site
  - Data management
  - Job management
- CMS IHEP Twiki and Indico
- CMS IHEP grid operation

CMS IHEP Grid Env
T3:
- Serves only the local CMS group
- The CMS UI can submit both to the grid and to the local farm
- NFS storage system; PBS worker nodes
T2:
- Shared in the grid; a complete grid system is needed (VOBox, BDII, Computing Element)
- Supports official MC production and part of the analysis
- dCache storage system (complicated, with SRM) supports fast, large-volume data transfers from remote servers
[Slide diagram: local T3 (CMS User Interface, PBS, NFS, worker nodes) and T2 (PhEDEx VOBox, BDII, Resource Broker, Computing Element, dCache Storage Element with SRM, PBS worker nodes), each connected to the remote grid environment at 1 Gb/s.]

New T3 site(1)
Resources:
- 1 UI, 5 worker nodes (40 cores): each node has 2 quad-core Xeon 5345 CPUs and 16 GB RAM
- For comparison, the T2 has 64 CPU cores
Real architecture:
- Login servers (user login cluster): cmsui01.ihep.ac.cn, lxplus.ihep.ac.cn, lxslc.ihep.ac.cn
- PBS server: public batch server, shared with other farms
- CMS worker nodes: cws012.ihep.ac.cn ~ cws016.ihep.ac.cn
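Since the T3 shares a PBS server with other farms, work reaches the CMS worker nodes through a batch script. The sketch below is a minimal, hypothetical example (the queue name "cms" and the payload are assumptions, not taken from the slides); submitted with qsub, the `#PBS` lines act as directives, while a plain shell run treats them as comments.

```shell
#!/bin/sh
# Minimal PBS batch script sketch for the shared T3 PBS server.
# The queue name and resource request below are hypothetical examples.
#PBS -N cms-t3-test       # job name
#PBS -q cms               # hypothetical CMS queue on the shared server
#PBS -l nodes=1:ppn=1     # one core on one worker node (cws012..cws016)

# Payload: a real job would set up CMSSW here and run cmsRun;
# this sketch just records and prints a message so it is self-contained.
MSG="CMS T3 job payload ran"
echo "$MSG"
```

Submitted from a login server with `qsub job.sh`; `qstat` shows the job state.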

New T3 site(2)
CMSSW management:
- Official version, shared with the T2: /opt/exp_soft/cms/
- Not reasonable to have another official installation on the T3
User storage space management:
- Login home: /afs/ihep.ac.cn/users/…, less than 1 GB, not big enough to run your work
- Working directory: /work, much bigger, 20 TB total (?)

New T3 site(3)
Usage:
- Large analysis datasets are downloaded by PhEDEx to the SE
  - One way: copy from the SE to local NFS disk using SE tools
  - Another way: add another pool node and pool in the SE for local access, mounted on the local T3 nodes just like a local disk
  - Warning: data in the SE is not backed up
- Small analysis data filtered by CRAB jobs (less than 200 MB) can be copied to the local UI directly
- Large production data from CRAB jobs submitted to outside sites should be copied to the SE, then downloaded to local NFS disk
  - Copying it directly to the local UI would hurt the RB, and downloading it directly to local NFS disk is not possible
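The "copy SE to local NFS disk using SE tools" path above boils down to one transfer command. The sketch below is a dry-run illustration only: the SRM host, pnfs path, and NFS directory are hypothetical placeholders, and the srmcp command line is echoed rather than executed; on a real T3 node one would run the assembled command itself.

```shell
#!/bin/sh
# Dry-run sketch of staging a file from the dCache SE to local NFS disk.
# Host and paths are hypothetical placeholders, not real IHEP values.
SRM_HOST="srm.example.ihep.ac.cn"                      # hypothetical SRM endpoint
DATASET_PATH="/pnfs/example/cms/dataset/file001.root"  # hypothetical SE file
NFS_DEST="/work/cms/staged"                            # hypothetical NFS target dir

# Assemble the srmcp command an SE-tool invocation would use,
# and echo it instead of executing it.
CMD="srmcp srm://${SRM_HOST}:8443${DATASET_PATH} file://${NFS_DEST}/file001.root"
echo "$CMD"
```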

The status of T2(1)
DDT08:
- New metric: download link 20 MB/s, upload link 10 MB/s
CCRC08:
- dCache upgrade: 1.7 -> 1.8, SRMv1 -> SRMv2
- VOBox: SLC3 -> SLC4
- PhEDEx: 2_5_2 -> 2_6_2
- Reached the specified transfer rate with IN2P3
The status of T2_Beijing:
- Got 5 links: two with FNAL, two with IN2P3, one with CERN
- Reached our goal; the rate is higher than we expected
- With a better SE, the rate should be a little higher

The status of T2(2)
CMSSW installation procedure:
- https://twiki.cern.ch/twiki/bin/view/CMS/CMSSWdeployment
Job running:
- CMS SAM tests: OK --- test basic functions and grid sites
- JobRobot tests: OK --- CRAB job tests
- MC production tests: OK
  - Computing resources ready now (100 cores)
  - Additional CMSSW_1_4_9 needed
  - Ask for a certain number of MC jobs to come in
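The CRAB job tests above are driven by a crab.cfg. A minimal CRAB2-era sketch is shown below; every value is a hypothetical placeholder (dataset path, pset name, storage element), and it is configured to copy output to a storage element rather than return it through the broker, matching the guidance on the T3 usage slide.

```ini
[CRAB]
jobtype   = cmssw
scheduler = glite          ; grid submission via gLite

[CMSSW]
datasetpath            = /PlaceholderDataset/Example/RECO   ; hypothetical dataset
pset                   = analysis.cfg                       ; hypothetical CMSSW config
total_number_of_events = 1000
events_per_job         = 500

[USER]
return_data     = 0        ; do not ship output back through the broker
copy_data       = 1        ; stage output out to a storage element instead
storage_element = se.example.ihep.ac.cn                     ; hypothetical SE
user_remote_dir = crab_output                               ; hypothetical directory
```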

CMS IHEP Twiki and Indico
Documentation servers established by CC:
- twiki.ihep.ac.cn: documents, guides, links
- indico.ihep.ac.cn: register meeting schedules and upload presentation reports
A special space for IHEP CMS, a good place for communication on CMS issues:
- https://twiki.ihep.ac.cn/twiki/bin/view/CMS/WebHome
Usage:
- Easy to use; the user guide is on the first page
- Everyone can add their own information, including documents, guides, and useful links
- Make the Twiki a home for CMS physics
- Directory trees (discussion)

CMS IHEP grid operation
- Xiaomei Zhang is responsible for all aspects of CMS computing
- YAN Xiaofei and Chen Liangchen help with system installation, configuration, and management
- Regular computing meeting once every two weeks

Questions
- Anything missing?
- Any applications needed for future grid use?