Tier-3 TIFR
Makrand Siddhabhatti, DHEP, TIFR Mumbai
INDIA-CMS Meeting, BARC, July 28, 2011


Functionality The node (ui.indiacms.res.in) is equipped with the following functionality:
* A standard UI package (gLite 3.2)
* CRAB (ver. 2_7_9)
* Access to the centrally installed CMSSW of the Tier-2
* Local job submission (Condor ver. 7_6_0)
* Linked with dedicated storage (~20 TB)
* ROOT (ver )

Opening ROOT Files from T2_IN_TIFR Go to your working directory, then:
source /opt/exp_soft/cms/cmsset_default.csh
ln -sf $LCG_LOCATION/lib64/libdpm.so libshift.so.2.1
export LD_PRELOAD=${GLOBUS_LOCATION}/lib/libglobus_gssapi_gsi_gcc64dbgpthr.so
export SCRAM_ARCH=slc5_amd64_gcc434
cmsenv
export LD_LIBRARY_PATH=${PWD}:$LD_LIBRARY_PATH
voms-proxy-init -voms cms
root -b
root rfio:/dpm/indiacms.res.in/home/cms/store/mc/Winter10/ZccToLL_M-40_PtC1-15_TuneZ2_7TeV-madgraph-pythia6/AODSIM/E7TeV_ProbDist_2010Data_BX156_START39_V8-v1/0014/98EAF36D-171D-E011-94B D476F8.root

Accessing ROOT Files in T2_IN_TIFR from CMSSW Go to your working directory, then:
cmsenv
voms-proxy-init -voms cms
setenv LD_PRELOAD ${GLOBUS_LOCATION}/lib/libglobus_gssapi_gsi_gcc64dbgpthr.so
Then in the CMSSW configuration:
process.source = cms.Source("PoolSource",
    fileNames = cms.untracked.vstring(
        'rfio:/dpm/indiacms.res.in/home/cms/store/mc/Winter10/ZccToLL_M-40_PtC1-15_TuneZ2_7TeV-madgraph-pythia6/AODSIM/E7TeV_ProbDist_2010Data_BX156_START39_V8-v1/0014/98EAF36D-171D-E011-94B D476F8.root'
        #'/store/mc/Winter10/ZccToLL_M-40_PtC1-15_TuneZ2_7TeV-madgraph-pythia6/AODSIM/E7TeV_ProbDist_2010Data_BX156_START39_V8-v1/0014/98EAF36D-171D-E011-94B D476F8.root'
    )
)
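The commented '/store/...' line above is the same file written as a CMS logical file name (LFN); the rfio path is just that LFN with the local DPM namespace prefix prepended. A minimal sketch of the mapping (the helper name lfn_to_rfio is made up for illustration):

```shell
# Hypothetical helper: turn a CMS LFN into the rfio path used at
# T2_IN_TIFR, assuming the /dpm/indiacms.res.in/home/cms prefix
# shown on the slide above.
lfn_to_rfio() {
    echo "rfio:/dpm/indiacms.res.in/home/cms$1"
}

lfn_to_rfio /store/mc/Winter10/sample/AODSIM/file.root
# prints rfio:/dpm/indiacms.res.in/home/cms/store/mc/Winter10/sample/AODSIM/file.root
```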

Local Job Submission: Condor * A Condor scheduler is currently deployed on the Tier-3 machine so that local users can execute jobs locally. * CMSSW batch job submission in which the Tier-2 data storage is accessed directly from the Tier-3 has been tested.

A Sample Condor Job Submission Script
universe = vanilla
Executable = test_batch_condor.sh
Log = 2011_2.log
Output = 2011_2.out
Error = 2011_2.error
Notify_user =
Queue
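For repeated submissions, a submit description like the one above can be generated from a small template. The following is only an illustrative sketch; the helper name make_submit and its arguments are not part of the Tier-3 setup:

```shell
# Hypothetical generator for a vanilla-universe submit description
# like the sample above; $1 is the executable, $2 a tag used to
# name the log/output/error files.
make_submit() {
    exe=$1
    tag=$2
    cat <<EOF
universe = vanilla
Executable = $exe
Log = $tag.log
Output = $tag.out
Error = $tag.error
Queue
EOF
}

make_submit test_batch_condor.sh 2011_2 > 2011_2.sub
# then: condor_submit 2011_2.sub
```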

test_batch_condor.sh
#!/bin/sh -f
source /home/makrand/.bashrc
export DPM_HOST=se01.indiacms.res.in
export DPNS_HOST=se01.indiacms.res.in
export GLOBUS_LOCATION=/opt/globus
cd /home/makrand/CMSSW_4_2_2/src/Test/TestCrab/test_condor
export SCRAM_ARCH=slc5_amd64_gcc434
source /opt/exp_soft/cms/cmsset_default.sh
export PATH=/opt/exp_soft/cms/common:/opt/external/usr/bin:/bin:/usr/bin
eval `scramv1 runtime -sh`
export LD_PRELOAD=/opt/globus/lib/libglobus_gssapi_gsi_gcc64dbgpthr.so
cmsRun -p testsummer11_422_cfg.py

Condor Utility Commands
condor_submit : To submit a job.
condor_q : To check the status of jobs.
condor_status : To get the status of the cluster.
condor_rm : To remove a submitted job.
condor_history : To get the history of jobs.
condor_run : To run shell commands as Condor jobs.

Environment for CRAB Job Submission CRAB is a tool to submit CMSSW jobs to the Grid, including jobs over stored data. Go to your working directory, then:
cd /home/makrand/CMSSW_x_y_z/src/Test/CrabTest
source /opt/glite/etc/profile.d/grid-env.csh
source /home/CRAB_2_7_9_pre1/crab.csh
voms-proxy-init -voms cms
cmsenv
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/external/usr/lib
crab -create -submit all -cfg crab.cfg
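The crab.cfg referenced above is not reproduced on the slide. A minimal sketch in the CRAB 2 configuration style might look as follows; the dataset path, pset name, and splitting numbers are placeholders, not values from the Tier-3 setup:

```ini
; Illustrative crab.cfg sketch (placeholder values)
[CRAB]
jobtype = cmssw
scheduler = glite

[CMSSW]
datasetpath = /Primary/Processed-Version/AODSIM
pset = testsummer11_422_cfg.py
total_number_of_events = -1
events_per_job = 1000

[USER]
return_data = 1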

Conclusion * Please let us know if you have any problem while using these facilities at the Tier-3 (ui.indiacms.res.in). * Please send us a mail if you need an account on the Tier-3.