The CMS Beijing Tier 2: Status and Application. Xiaomei Zhang, CMS IHEP Group Meeting, December 28, 2007

Outline
Site overview: CPU resources, software, storage status
PhEDEx data transfer status
Job submission status
Tutorial:
– How to use the new storage element and access SE data
– How to request data in PhEDEx
– How to run jobs at T2_Beijing
Questions

Site overview: CPU and software
CPU resources
– LCG service nodes: 2 quad-core Xeon 5345 CPUs and 16 GB RAM, replacing 2 Xeon 5130 CPUs and 4 GB RAM
– Worker nodes: 14 WNs (2 Xeon 3.2 GHz CPUs, 2 GB RAM); 12 new WNs added (2 quad-core Xeon 5345 CPUs, 16 GB RAM)
– CMS VO servers: PhEDEx/VOBOX server; Squid server (for accessing calibration data from the CERN FroNTier server, to be added in the future)
Software
– LCG middleware recently upgraded to gLite 3.1
– Worker node OS upgraded to SLC 4.5
– CMSSW 1.6.1, 1.6.2 and 1.6.3 have been installed and tested; more versions will be installed and tested by the CMS central software managers after Christmas. An lcg-ManageVOTag bug in gLite 3.1 caused the delay; Christoph Wissing has been contacted for further installation (a sketch of how to list the published CMSSW tags follows below).
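As a quick way to check which CMSSW versions a site advertises, one option is the tag-management tool mentioned above (a rough sketch: the option names may differ between gLite releases, and the CE hostname is a placeholder, not taken from these slides):
  # List the VO-cms-CMSSW_* tags published for the CMS VO on a computing element
  lcg-ManageVOTag -host <CE hostname> -vo cms --list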

Site overview: storage
Storage
– Disk-only dCache system, greatly improved recently
Past:
– Head node and pool node on a single server: IBM x346 (2 Xeon 3.2 GHz CPUs, 4 GB RAM)
– One pool with 1.1 TB
– Caused much trouble for PhEDEx data transfers and normal use of the SE
Now:
– 5 servers (2 quad-core Xeon 5345 CPUs, 16 GB RAM each) for the new SE: admin, pnfs, srm, gridftp (cmspn01), gridftp (atlaspn01)
– 5 pools with 10 TB of disk added for each experiment (10 TB for CMS, 10 TB for ATLAS)
– Much better performance in the current PhEDEx transfers (a sketch of how to check the published SE space is given below)
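To check the space the new SE publishes to the grid information system, one option (a sketch assuming a gLite UI with the top-level BDII configured; not a site-specific recipe) is:
  # List the storage elements visible to the CMS VO with their available/used space,
  # then keep only the IHEP entry
  lcg-infosites --vo cms se | grep ihep.ac.cn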

PhEDEx data transfer status
Plan
– Four links, with FNAL and IN2P3
– Reach 10-20 MB/s per link, 1 TB/day
– Limits: network bandwidth ~80 MB/s; SE server performance ~200 input/output streams
– All nodes are shared with the ATLAS experiment, so performance will be challenged when ATLAS also runs large-volume transfers; this still needs to be tested in the future
Past vs. current
– Past: only one link could be kept up at a time; rates rarely reached 15 MB/s, typically 4-8 MB/s
– Current: two links (upload and download) with FNAL are easy to maintain; rates reach 32 MB/s, typically 10-30 MB/s, 1.7 TB/day
– Testing of all four links will continue

Download link with FNAL: reached 32 MB/s (transfer rate plot)

Upload link with FNAL: reached 18 MB/s (transfer rate plot)

Transfer volume this week: reached more than 1.7 TB per day (transfer volume plot)

Past PhEDEx status
Despite the bad situation, we still transferred 30 (10) TB from FNAL (IN2P3).
We also made a very good start this month on the upload links with FNAL and IN2P3.

Job submission
Status
– Most jobs are submitted to other sites; few jobs are submitted to T2_Beijing
Reasons
– No data available at T2_Beijing
– CMSSW versions not complete
– CPU resources not very good
– Support services not good
Plan
– Transfer more data of interest from FNAL or IN2P3 to the local SE
– Keep close contact with Christoph Wissing for CMSSW installation
– Support analysis, MC production and reconstruction
To discuss: should MC production from the central data operation group be supported?

How to use the new storage element
The SE name is srm.ihep.ac.cn; the pnfs directory is unchanged: /pnfs/ihep.ac.cn/data/cms
Monitoring page:
[dCache system diagram: SRM/GridFTP door, GridFTP doors cmspn01 and atlaspn01, pool manager (seadmin), pnfs server (sepnfs), and pools 1-5, accessed via srmcp, GridFTP and dcap]

SE usage
Browse:
– edg-gridftp-ls gsiftp://srm.ihep.ac.cn/pnfs/ihep.ac.cn/data/cms
Transfer:
– LCG data management tools (LFN available) (get/put): lcg-cr (upload to SE), lcg-cp (download from SE); a usage sketch follows after this slide
– srmcp (get/put):
  srmcp file:////home/zhangxm/testfile srm://srm.ihep.ac.cn:8443/srm/managerv1?SFN=/pnfs/ihep.ac.cn/data/cms/test/testfile
– GridFTP (get/put):
  globus-url-copy file:////home/zhangxm/testfile gsiftp://srm.ihep.ac.cn/pnfs/ihep.ac.cn/data/cms/test/testfile
– dccp (get only, when pnfs is not mounted on the UI):
  dccp dcap://srm.ihep.ac.cn:22125/pnfs/ihep.ac.cn/data/cms/testfile /home/zhangxm/testfile
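Since lcg-cr and lcg-cp are only named above, here is a minimal sketch of their use (the LFN path and the .copy file name are placeholders for illustration, not values from the slides):
  # Upload a local file to the IHEP SE and register it in the file catalogue under an LFN
  lcg-cr --vo cms -d srm.ihep.ac.cn -l lfn:/grid/cms/test/testfile file:/home/zhangxm/testfile
  # Copy the file back from the grid to local disk using the same LFN
  lcg-cp --vo cms lfn:/grid/cms/test/testfile file:/home/zhangxm/testfile.copy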

SE usage: POSIX-like access
– CMSSW access
  PSet configuration: PoolSource { … fileNames = "/store/…/xx.root" }
  CMS_PATH=/opt/exp_soft/cms
  TFC (storage.xml) and site-local-config.xml
– CRAB job access
  Fill in the dataset name from DBS in crab.cfg
  PSet configuration: PoolSource { … fileNames = "file:*.root" }
  CRAB accesses the SE automatically (a crab.cfg sketch follows below)
– Local access (to be discussed with our system administrator)
  pnfs mounted on the UI, so /pnfs/ihep.ac.cn/data/cms looks like a local disk
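A minimal crab.cfg sketch for the CRAB access mode described above (CRAB 2 style parameters; the dataset path, PSet file name and event counts are placeholders, not values from the slides):
  [CRAB]
  jobtype   = cmssw
  scheduler = glite

  [CMSSW]
  # placeholder dataset name, as found in DBS data discovery
  datasetpath            = /SomePrimaryDataset/SomeProcessing/RECO
  # your CMSSW configuration (old-style .cfg in the 1.6.x releases)
  pset                   = myanalysis.cfg
  total_number_of_events = 1000
  events_per_job         = 500

  [USER]
  return_data = 1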

How to subscribe to data in PhEDEx
Search dataset names under Data -> Subscribe
Create a request under Requests -> Create Request
– Fill in the dataset name
– Select T2_Beijing as the destination
– Submit
View request status under Requests -> View/Manage Requests
– Send an email to the data manager after submission
– The data manager will check the request and approve or disapprove it

How to run jobs at T2_Beijing
The CMSSW installation directory on lcg003: /opt/exp_soft/cms/
Compile and run against a single CMSSW version (upgrade in time?)
Three ways are supported (matching the ways of accessing data from the SE); a setup sketch is given after this list:
– Run on the UI, POSIX-like read via dcap, output to the UI
– Run on the UI, local-like read/write to the SE (when pnfs is mounted)
– Run on a WN, POSIX-like read via dcap or transfer via lcg-cr, output to the SE (done automatically by CRAB jobs)
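A rough sketch of the first mode (running interactively on the UI and reading input through dcap); the installation area /opt/exp_soft/cms and the dcap door come from these slides, while the release version, PSet file and input file name are placeholders:
  # Set up the CMS software environment from the shared installation area on lcg003
  source /opt/exp_soft/cms/cmsset_default.sh
  # Create a project area for one of the installed releases and enter its runtime environment
  scramv1 project CMSSW CMSSW_1_6_3
  cd CMSSW_1_6_3/src
  eval `scramv1 runtime -sh`
  # Run a job whose PoolSource reads the input via dcap, e.g.
  #   fileNames = { "dcap://srm.ihep.ac.cn:22125/pnfs/ihep.ac.cn/data/cms/test/testfile.root" }
  cmsRun myanalysis.cfg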

Questions
Data transfer?
Job submission?
Should things be announced through the mailing lists?