Architecture and ATLAS Western Tier 2. Wei Yang, ATLAS Western Tier 2 User Forum meeting, SLAC, April 6-7, 2009.


What do we have, ATLAS specific
- Xrootd storage
  - 275 usable TB on 10 Sun X4500 "thumpers", formatted with ZFS and running Xrootd
  - ZFS is a high-performance, multiple-parity software RAID that detects silent data corruption
  - Good for read-dominated data access; optimized for analysis activities
  - See Andy Hanushevsky's talk about how Xrootd works
- LSF batch system (a small usage sketch follows this slide)
  - ~600 cores purchased by Western Tier 2, Red Hat Linux 4, 64-bit, 2 GB/core
  - Neal Adam will talk about LSF batch system usage
- Interactive user login machine: atlint01.slac.stanford.edu
  - 8-core Intel 3 GHz, 8 GB RAM, Red Hat Linux 4, 32-bit
  - Xrootd File System (XrootdFS), for ease of use (a POSIX filesystem interface)
  - DQ2 client tools (dq2-ls/get/put, etc.)
- Special, temporary hardware resource needs can be arranged
  - Out of SLAC's general computing resource pool
  - Relatively easy to set up for local users
  - Hard for users coming from the Grid
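As a concrete illustration of using the LSF batch system listed above, a minimal session might look like the sketch below. The queue name "atlas" and the job script name are assumptions, not the actual WT2 configuration; see Neal Adam's talk for the real queue names.

  $ bqueues                                       # list the queues available to you
  $ bsub -q atlas -o job.%J.log ./run_athena.sh   # submit a job script; %J expands to the LSF job ID
  $ bjobs                                         # check the status of your jobs
  $ bkill 12345                                   # kill a job by its ID if needed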

What do we have, cont'd
- Open Science Grid middleware + selected LHC Computing Grid (LCG) tools (a short usage sketch follows this slide)
  - Computing Element, Storage Element, monitoring, accounting, client tools, etc.
  - LCG Utils
  - LCG File Catalog (LFC) service, LFC client tools / Python API
- ATLAS DDM (Distributed Data Management) site services and DQ2 client
  - DQ2 Tier 2 site service
  - DQ2 client tools
- Info on the Web
  - WT2 web site:
  - SLAC ATLAS experiment web page:
  - ATLAS HyperNews forums: look for "Grid Jobs at SLAC" and "Non-Grid jobs at SLAC"
- Additional resources
  - ATLAS releases in SLAC AFS group space
  - SLAC ATLAS group NFS server
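To give a flavor of the Grid client tools named above, the following sketch shows a typical sequence for obtaining an ATLAS VOMS proxy and browsing an LFC catalog. The LFC host is a placeholder and the /grid/atlas path is only the common ATLAS convention; neither value comes from this talk.

  $ voms-proxy-init -voms atlas         # obtain a Grid proxy with the ATLAS VO extension
  $ voms-proxy-info -all                # inspect the proxy lifetime and VOMS attributes
  $ export LFC_HOST=lfc.example.org     # placeholder; use the LFC host your cloud assigns
  $ lfc-ls -l /grid/atlas               # browse the LFC namespace (example path)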

WT2 Architecture (user's view): ~600 cores, 275 TB usable. The original slide is a diagram showing the OSG middleware/proxies and ATLAS DQ2 services, the LSF master and worker nodes (RHEL 4, 64-bit), NFS servers for Grid-user home directories and local user space, Xrootd servers on Sun X4500 behind an Xrootd redirector (head node), Cisco switches with 1, 4 and 10 Gbit links, and the user interactive machine (XrootdFS, dq2 client), all behind the SLAC firewall with a connection to the Internet Free Zone.

ATLAS DQ2 sites hosted by SLAC
- Each ATLAS Tier 1/2 site has multiple DQ2 sites; ATLAS uses DQ2 site names/types (space tokens) for data categorization
  - SLACXRD_DATADISK, _MCDISK: "real" data and MC data, long term (months)
  - SLACXRD_USERDISK, _GROUPDISK: short term, mainly used by Panda user jobs
  - SLACXRD_PRODDISK: Panda production jobs, very short term
  - SLACXRD: the initial SLAC DQ2 site; no longer used by ATLAS central DDM
- Data/datasets in the SLACXRD_* sites are available to users
  - Treat DQ2 datasets in these sites as read-only
  - They are stored in Xrootd space; deleting them may create a disaster
  - SLACXRD_PRODDISK: data come and go, don't depend on them
- Use the DQ2 client tools (dq2-ls) to check dataset availability (see the sketch after this slide)
  - Which datasets are available at the SLACXRD_* sites, and where they are in storage
  - Which sites have a dataset you are interested in
- To request datasets to be moved to SLACXRD_*: post your request to HyperNews; we will try to help and discuss possibilities
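A minimal sketch of the dataset checks described above. The dataset name is hypothetical and the exact option letters can vary between DQ2 client versions, so treat this as illustrative rather than a verified recipe.

  $ dq2-ls -r user09.SomeUser.mydataset/                      # list which sites hold replicas of the dataset
  $ dq2-ls -f user09.SomeUser.mydataset/                      # list the files that make up the dataset
  $ dq2-get -s SLACXRD_USERDISK user09.SomeUser.mydataset/    # fetch the dataset from the SLAC site to the local directory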

Interactive Machine, what can you do
- atlint01.slac.stanford.edu is the current interactive machine
- Run interactive jobs
  - Debug Athena jobs
  - Run ROOT analysis:
    $ root -b -l
    root [0] TXNetFile f("root://atl-xrdr//atlas/xrootd/file.root")
- Submit batch jobs
- Organize your own Xrootd data via XrootdFS
  - Xrootd space is mounted under the Unix file system tree, like NFS disks:
    /xrootd/atlas/atlasdatadisk
    /xrootd/atlas/atlasmcdisk
    ...
  - Supports most Unix commands: "cd", "ls", "rm", "cp", "cat", "find", etc.
- Transfer individual data files between CERN and SLAC using bbcp (see the sketch after this slide)
- Use the DQ2 client tools: dq2-ls, dq2-get, dq2-put
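Below is a small sketch of the XrootdFS and bbcp usage mentioned above. The remote user name and file paths are placeholders; only the /xrootd mount points and the use of bbcp come from the slide itself.

  $ ls /xrootd/atlas/atlasdatadisk                              # XrootdFS behaves like an ordinary mounted filesystem
  $ cp /xrootd/atlas/atlasdatadisk/some_file.root /tmp/         # plain Unix tools work on the mounted Xrootd space
  $ bbcp -P 2 myuser@lxplus.cern.ch:/path/to/file.root /tmp/    # pull one file from CERN; -P 2 prints progress every 2 seconds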

SLAC Computing in General
- Obtain a SLAC computing account
  - SLAC computing is centrally managed by Scientific Computing and Computing Services
  - ATLAS users: please follow the instructions at http://wt2.slac.stanford.edu to get an account
- AFS
  - Home directories of SLAC Unix computing accounts are located in AFS
  - AFS access permissions are controlled by ACLs (access control lists) and AFS tokens (see the sketch after this slide):
    fs listacl $HOME/dir
    Make sure you have an AFS token: /usr/local/bin/qtoken, /usr/local/bin/kinit
  - AFS is good for long-term storage of documents and code
  - AFS is NOT GOOD for data; use NFS or Xrootd instead
  - The AFS space unit is the "AFS volume", mounted anywhere under /afs/slac.stanford.edu:
    fs listquota $HOME/dir
  - To request a larger AFS volume, or a new AFS volume:
- Unix interactive machines
  - For general use: rhel4-32.slac.stanford.edu, rhel4-64, noric ... for any Unix
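A short sketch of the AFS checks mentioned above. The directory and user name are placeholders; only fs listacl, fs listquota, qtoken and kinit appear in the slide, and the fs setacl line is an extra example of standard AFS usage.

  $ /usr/local/bin/kinit                    # obtain a Kerberos ticket / AFS token
  $ /usr/local/bin/qtoken                   # verify that you hold a valid AFS token
  $ fs listacl   $HOME/mydir                # show the access control list of a directory
  $ fs listquota $HOME/mydir                # show the quota of the AFS volume containing it
  $ fs setacl    $HOME/mydir someuser rl    # grant "someuser" read+lookup access (standard AFS, shown as an example)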

SLAC Computing in General, cont'd
- Grid computing
  - After registering your personal certificate with ATLAS, you will be assigned a SLAC Grid account
    - This is NOT your SLAC local Unix account
    - Currently only dq2-get and dq2-put use this Grid account
    - It is not activated automatically; contact unix-admin if you cannot use dq2-get/put with the SLACXRD_* sites
  - FYI, the OSG CE is osgserv01.slac.stanford.edu and the SE (SRM) is osgserv04.slac.stanford.edu (see the sketch below)
- Windows accounts: SLAC recommends Microsoft Exchange. See:
- Computing security:
- General help:
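As a final sketch, one way to check that your Grid identity works against the SLAC storage element named above. The port number and storage path are assumptions (the path mirrors the /xrootd/atlas mount points shown earlier); adjust them to whatever the site actually publishes.

  $ voms-proxy-init -voms atlas
  $ lcg-ls -b -D srmv2 "srm://osgserv04.slac.stanford.edu:8443/srm/v2/server?SFN=/xrootd/atlas/atlasdatadisk"   # list the top of the DATADISK area via SRM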