M. Schott (CERN) Page 1 CERN Group Tutorials CAT Tier-3 Tutorial October 2009

M. Schott (CERN) Page 2 CAT Tier-3 Computing Resources
Interactive nodes:
-5 machines with 8 CPU cores and 16 GB total memory, with access to AFS and Castor, for interactive analysis work.
-These are accessed via LSF using the atlasinter queue.
Batch queues:
-Two dedicated batch queues, atlascatshort (1 hour) and atlascatlong (10 hours), with a certain number of dedicated LSF batch job slots.
Castor disk pool:
-A 40 TB disk pool atlt3 for storing DPDs, ntuples etc. used in CAT analysis. No tape backup.
AFS:
-Scratch disk space allocated to CAT team members.
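To check that you can actually see these resources, the standard LSF commands can be used. The snippet below is only an illustrative sketch; the queue names are the ones quoted above, everything else is generic LSF:
# list the CAT queues and their current job counts
bqueues atlascatshort atlascatlong atlasinter
# show the detailed configuration of a queue, e.g. its run-time limit
bqueues -l atlascatshort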

M. Schott (CERN) Page 3 Data Organization
CAT Tier-3 resources
-Only for CERN people
-Access via rfio
-Two locations:
-Group-Space (5 TB per group): /castor/cern.ch/grid/atlas/atlt3/ with the following groups: compperf, higgs, simulation, sm, susy, top
-Scratch-Space: /castor/cern.ch/grid/atlas/atlt3/scratch/
-nsmkdir /castor/cern.ch/grid/atlas/atlt3/scratch/
-nschmod 750 /castor/cern.ch/grid/atlas/atlt3/scratch/
-Setting Environment:
export STAGE_HOST=castoratlas
export STAGE_SVCCLASS=atlt3
CERN User Disk
-For all users
-Access via rfio, xrootd
-Setting Environment (this defines the Castor disk pool to use; the directory itself is only "fake"):
export RFIO_USE_CASTOR_V2=YES
export STAGE_HOST=castoratlast3
export STAGE_SVCCLASS=atlascernuserdisk
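As an aside, a personal scratch directory is created with the name-server commands quoted above; in the sketch below <username> is a placeholder for your own login name:
# create a personal directory under the CAT scratch space and restrict access (<username> is a placeholder)
nsmkdir /castor/cern.ch/grid/atlas/atlt3/scratch/<username>
nschmod 750 /castor/cern.ch/grid/atlas/atlt3/scratch/<username>
# check that it is there
rfdir /castor/cern.ch/grid/atlas/atlt3/scratch/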

M. Schott (CERN) Page 4 Tutorial – Putting and Retrieving Files
Log in to lxplus and set up Athena:
cmt co PhysicsAnalysis/AnalysisCommon/PerformanceAnalysis
cd PhysicsAnalysis/AnalysisCommon/PerformanceAnalysis/PerformanceAnalysis-r198000/cmt
cmt config
source setup.sh
gmake
cd /tmp/
Now we have checked out our analysis algorithm. Next we set the variables for accessing the usual CERN user disk:
export RFIO_USE_CASTOR_V2=YES
export STAGE_HOST=castoratlast3
export STAGE_SVCCLASS=atlascernuserdisk
And copy some files to our Castor user disk:
rfcp /castor/cern.ch/user/d/ddmusr03/STEP09/mc PythiaZee_Mll20to60_1Lepton.merge.AOD.e379_s462_r635_t53_tid059207/AOD _ pool.root.1 ./
rfcp AOD _00001.pool.root.1 /castor/cern.ch/user/ / /Tutorial
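To verify the copy you can list the target directory and, if the CASTOR stager client tools are available, query whether the file is staged on disk; the paths below are placeholders, not the exact names used above:
# list your Castor user directory (replace the placeholders with your own path)
rfdir /castor/cern.ch/user/<initial>/<username>/Tutorial
# optionally ask the stager whether the file is on disk in the selected service class
stager_qry -M /castor/cern.ch/user/<initial>/<username>/Tutorial/<file>.pool.root.1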

M. Schott (CERN) Page 5 Tutorial – Putting and Retrieving Files
Now we copy the same file to the CAT scratch disk. First we have to set the environment variables:
export RFIO_USE_CASTOR_V2=YES
export STAGE_HOST=castoratlas
export STAGE_SVCCLASS=atlt3
(Actually the first two lines are not needed at this point any more, but we leave them in for completeness.)
rfcp AOD _00001.pool.root.1 /castor/cern.ch/grid/atlas/atlt3/scratch/ /Tutorial/
Checking the content can be done with the usual rfdir command, e.g.
rfdir /castor/cern.ch/grid/atlas/atlt3/scratch/ /Tutorial/
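Since the tutorial switches between the two disk pools several times, it can be convenient to wrap the two sets of exports in small shell functions; the function names use_userdisk and use_catscratch below are made up for this sketch:
# hypothetical helpers, e.g. for your ~/.bashrc
use_userdisk() {
    export RFIO_USE_CASTOR_V2=YES
    export STAGE_HOST=castoratlast3
    export STAGE_SVCCLASS=atlascernuserdisk
}
use_catscratch() {
    export RFIO_USE_CASTOR_V2=YES
    export STAGE_HOST=castoratlas
    export STAGE_SVCCLASS=atlt3
}
Calling use_catscratch before working with the CAT scratch disk and use_userdisk before working with the CERN user disk then reproduces the settings above.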

M. Schott (CERN) Page 6 Submitting jobs (1)
Our queues are atlascatshort and atlascatlong and can be seen via
bqueues
To access the interactive machines just type
bsub -Is -q atlasinter zsh
We simply exit with quit.
Now we want to submit a job to our Tier-3 queues. We go to our example code, e.g.
cd PhysicsAnalysis/AnalysisCommon/PerformanceAnalysis/PerformanceAnalysis-r198000/cat
Here we edit the file runAthena.sh, which should automatically set up the Athena environment and then start an Athena job. Remember that when a job is sent to a queue, it is started in a scratch directory which is deleted after the job finishes.
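The tutorial does not reproduce runAthena.sh itself, so the following is only a minimal sketch of what such a script might contain, assuming the usual cmt-based Athena 15.5.1 setup from the earlier slides; the release tag, the paths and the Castor variables have to be adapted to your own configuration and to the disk the input data sits on:
# runAthena.sh -- sketch only, sourced by the batch job with the job options file as first argument
# set up the Athena release (assuming the standard ~/cmthome configuration)
source ~/cmthome/setup.sh -tag=15.5.1
# set up the checked-out analysis package
cd ~/scratch0/Athena/15.5.1/PhysicsAnalysis/AnalysisCommon/PerformanceAnalysis/PerformanceAnalysis-r198000/cmt
source setup.sh
# make sure the Castor environment is also defined on the batch node (values depend on which disk the input is on)
export RFIO_USE_CASTOR_V2=YES
export STAGE_HOST=castoratlast3
export STAGE_SVCCLASS=atlascernuserdisk
# run the job options passed on the command line
athena.py "$1"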

M. Schott (CERN) Page 7 Submitting jobs (2)
Having changed runAthena.sh, we can submit the job via
bsub -q atlascatlong source runAthena.sh ~/scratch0/Athena/15.5.1/PhysicsAnalysis/AnalysisCommon/PerformanceAnalysis/PerformanceAnalysis-r198000/cat/runPerformanceAnalysis.py
To see the status of our job, we simply type
bjobs
Now we can play around with the different access modes, i.e. accessing a file via rfio or xrootd. For that we simply change the prefix of the file in the InputCollection of runPerformanceAnalysis.py:
-xrootd is accessed via root://castoratlast3/
-rfio is accessed via rfio://
Keep in mind that you might have to initialize the environment variables in the batch job! The performance of rfio versus xrootd can be compared by looking at the PerformanceResults.log file, which is produced when the job has finished.
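While the job is running it can be inspected with the usual LSF commands; the job ID below is just a placeholder taken from the bjobs output:
# detailed status of a particular job (12345 is a placeholder job ID)
bjobs -l 12345
# peek at the stdout of the running job
bpeek 12345
# once the job has finished, inspect the timing summary
less PerformanceResults.log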

M. Schott (CERN) Page 8 Submitting jobs (3)
You should observe that xrootd is much faster than rfio, but we cannot use xrootd on our Tier-3 scratch disks... which brings us to Max Baak's famous FileStager.