Support in setting up a non-grid Atlas Tier 3
Doug Benjamin, Duke University

Philosophy behind our assembly sequence for non-grid Tier 3s
– Instructions are broken up into sequences that should each take no longer than 1-2 hours
– Testing is performed after each assembly sequence to ensure things are correct before proceeding
– Check lists will be written
– The interactive cluster is set up and tested before the batch cluster
– Xrootd storage is tested before grid storage

Assembly sequence
Initial steps prior to setup/installation
– Collect the required public network addresses
– Determine the private network names
Hardware preparation
– On machines with a RAID card, configure the storage at the BIOS level
– Prepare kickstart files for OS installation (a minimal sketch follows)
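The exact kickstart contents depend on the local OS mirror, disk layout, and network plan; the fragment below is only a minimal sketch of the kind of file to prepare for an unattended node install. The mirror URL, hostnames, addresses, and password hash are placeholders, not values from these instructions.

    # Hypothetical minimal kickstart file for an unattended node install;
    # the mirror URL, IP addresses, and password hash are placeholders.
    cat > /srv/kickstart/worker.ks <<'EOF'
    install
    url --url=http://head01.t3.local/install/os/x86_64
    lang en_US.UTF-8
    keyboard us
    rootpw --iscrypted PUT_CRYPTED_HASH_HERE
    network --device eth0 --bootproto static --ip 192.168.1.101 --netmask 255.255.255.0 --gateway 192.168.1.1 --nameserver 192.168.1.1
    firewall --enabled --ssh
    clearpart --all --initlabel
    autopart
    reboot
    %packages
    @base
    %end
    EOF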

Head node installation
– Head node installation via kickstart
– Private network configuration
– Virtual machine installation for the Puppet master
– VM installation for the LDAP server
– Configuration of the following services on the head node (a dnsmasq sketch follows):
  – Dnsmasq (provides DNS services to machines on the private network)
  – Firewall (for the public interface)
  – Local user accounts
  – NFSv4 mounts for configuration files and ATLASLocalRootBase
  – Ganglia client
  – Squid proxy cache
  – Condor software
  – Creation of the atlasadmin account in LDAP
– Test the head node configuration
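As an illustration of the dnsmasq piece, the snippet below is a minimal sketch assuming the private network sits on interface eth1 with the made-up domain t3.local and that host names come from /etc/hosts; none of these values are prescribed by the instructions.

    # Hypothetical dnsmasq stanza for the head node's private interface;
    # the interface name and domain are assumptions for illustration.
    cat >> /etc/dnsmasq.conf <<'EOF'
    interface=eth1         # answer queries only on the private-network interface
    domain=t3.local        # private domain name
    expand-hosts           # qualify names read from /etc/hosts with the domain
    EOF
    service dnsmasq restart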

NFS node setup
– Kickstart file installation of the OS and configuration of the disks
– Private network configuration (including NAT) and testing
– Installation of local accounts
– Creation of NFSv4 exported file systems (a sketch follows)
– Installation of the Ganglia server and web server
– Installation of LDAP client authentication
– Firewall configuration
– Installation/configuration of the Condor config files
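For the NFSv4 exports, something along the following lines would apply; the export paths and the 192.168.1.0/24 private subnet are assumptions for illustration, and details such as an NFSv4 pseudo-root are omitted.

    # Hypothetical NFSv4 exports for the configuration area and ATLASLocalRootBase;
    # paths and the private subnet are placeholders.
    cat >> /etc/exports <<'EOF'
    /export/config   192.168.1.0/24(rw,sync,no_root_squash)
    /export/atlas    192.168.1.0/24(rw,sync)
    EOF
    exportfs -ra             # re-export everything listed in /etc/exports
    showmount -e localhost   # quick check of what is being exported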

Interactive node installation
– Node installation via kickstart file
– Private network configuration
– Local accounts
– Addition of the extra libraries needed for Atlas software
– Installation of manageTier3SW by the atlasadmin account on NFS-mounted disks
– Installation of CernVM-FS for Atlas software and the conditions DB (a configuration sketch follows)
– Simple Atlas test analysis jobs (to be written)
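The CernVM-FS client is driven by a small local configuration file; the fragment below is a sketch assuming the Squid proxy on the head node listens on port 3128 at the hypothetical hostname head01.mydomain.edu, and the cache size is an arbitrary example value.

    # Hypothetical CernVM-FS client configuration for the ATLAS software and conditions repositories;
    # the proxy hostname/port and cache quota are assumptions.
    cat > /etc/cvmfs/default.local <<'EOF'
    CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch
    CVMFS_HTTP_PROXY="http://head01.mydomain.edu:3128"
    CVMFS_QUOTA_LIMIT=20000      # local cache limit in MB
    EOF
    cvmfs_config setup           # set up the fuse/autofs pieces
    cvmfs_config probe           # verify that the repositories mount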

Batch node installation
– Installation via kickstart files initially
– Static IP address on the private network
– Condor software installation and configuration (a worker-node sketch follows)
– Installation of the required additional libraries for Atlas software
– CernVM-FS installation/configuration
– Condor system testing
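A batch (worker) node typically runs only the Condor master and startd and points at the head node; the local configuration below is a sketch with an assumed head-node name and private domain (head01.t3.local, t3.local), not the project's actual settings.

    # Hypothetical Condor local configuration for a batch node;
    # the central-manager hostname and domain are placeholders.
    cat > /etc/condor/condor_config.local <<'EOF'
    CONDOR_HOST = head01.t3.local     # collector/negotiator run on the head node
    DAEMON_LIST = MASTER, STARTD      # execute-only worker node
    ALLOW_WRITE = *.t3.local          # allow hosts on the private network
    UID_DOMAIN = t3.local
    FILESYSTEM_DOMAIN = t3.local
    EOF
    service condor restart
    condor_status                     # the new node should appear in the pool shortly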

Xrootd storage system
– The Xrootd storage system is installed on the head node and batch nodes using manageTier3SW and the VDT configuration scripts
– Test the xrootd system with simple copies and read-back (example commands follow)
– Simple Condor jobs processing data stored within the Xrootd system
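A read-back test can be as simple as copying a file in through the redirector and reading it back; the redirector hostname and storage path below are assumptions for illustration.

    # Hypothetical smoke test of the xrootd storage via the redirector on the head node
    xrdcp /tmp/testfile root://head01.mydomain.edu//atlas/local/test/testfile
    xrdcp root://head01.mydomain.edu//atlas/local/test/testfile /tmp/testfile.back
    diff /tmp/testfile /tmp/testfile.back && echo "xrootd read-back OK"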

Where to find details
– Tier 3 configuration wiki, currently hosted at ANL

US Atlas Tier 3 support
– 0.5 FTE of support from US Atlas funds (some support from an NSF MRI grant for the next year)
– Rely heavily on community support across the Atlas Collaboration
– Atlas-wide support mailing list: all Tier 3 site admins are strongly encouraged to join; this is how we can make community support work
– Usage examples will be put into the Atlas Analysis workbook and in the Atlas Tier 3 wiki
– Ultimate location for instructions – the Atlas Tier 3 wiki:

Conclusions
– Instructions for setting up a non-grid Tier 3 are taking shape
– Will rely on community support to augment the 0.5 FTE of paid support for Tier 3s within US Atlas
– Your help is greatly appreciated and welcomed