THE UNIVERSITY OF MELBOURNE Melbourne, Australia ATLAS Tier 2 Site Status Report Marco La Rosa

Presentation transcript:

THE UNIVERSITY OF MELBOURNE Melbourne, Australia ATLAS Tier 2 Site Status Report Marco La Rosa

THE UNIVERSITY OF MELBOURNE Architecture
- 39 P4 2.4 GHz CPUs (30 x 1 GB RAM, 9 x 512 MB RAM)
- Management node: NFS (home directories, CA certificates; see the exports sketch below), NIS, NAT, Torque and Maui-3.2.6p16 (S. Traylen), Ganglia
- Compute Element
- DPM SE (1.7 TB, head node and disk server together in one unit)
- R-GMA Mon node
[Architecture diagram: WN, Mgt, CE, SE and Mon nodes on a private LAN (/24), with hostnames under .hpc.unimelb.edu.au / .unimelb.edu.au]
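The NFS role of the management node typically amounts to exporting the home directories and the CA certificate directory to the other nodes. A minimal /etc/exports sketch, assuming the standard grid-security path and an illustrative private subnet (neither is stated on the slide):

    # /etc/exports on the management node (subnet and options are assumptions)
    /home                            192.168.1.0/24(rw,sync,no_root_squash)
    /etc/grid-security/certificates  192.168.1.0/24(ro,sync)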

THE UNIVERSITY OF MELBOURNE Virtual Organisations
- ATLAS and BIOMED VOs supported.
- Fairshare-based scheduling.
- Torque node properties identify which nodes can run ATLAS jobs (those with 1 GB RAM); a sketch of this wiring follows below.
- maui.cfg:
    ----snip----
    GROUPCFG[atlas]    FSTARGET=5   PRIORITY=10
    GROUPCFG[atlasprd] FSTARGET=100 PRIORITY=1000
    GROUPCFG[atlassgm] FSTARGET=10  PRIORITY=10000 MAXPROC=2
    ----snip----
- ATLAS: 16,000 jobs, hours, 63.43%
- BIOMED: 761 jobs, hours, 94.79%
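A sketch of how the node-property mechanism above might be wired up in Torque, assuming hypothetical worker-node names and an "atlas" queue (neither appears on the slide):

    # /var/spool/torque/server_priv/nodes -- only the 1 GB nodes carry the "atlas" property
    wn01 np=1 atlas
    wn02 np=1 atlas
    wn31 np=1

    # qmgr command tying the (assumed) atlas queue to nodes with that property
    set queue atlas resources_default.neednodes = atlas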

THE UNIVERSITY OF MELBOURNE Networks
- Need to sustain > 160 Mbps from Academia Sinica (.tw) to the University of Melbourne (.au)
- Currently, max throughput ~90 Mbps (iperf; see the measurement sketch below); ~15 MB/s recently via FTS (Min Tsai)
- Taiwan to Singapore: limit reached
- East coast: 10 Gbps
- RTT (west): ~140 ms; RTT (east): ~250 ms
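As a rough illustration of how the iperf figure above could be reproduced, a single-stream and a multi-stream test might look like the following; the hostname is a placeholder and the exact options used at the site are not stated:

    # on the receiving end (e.g. Melbourne)
    iperf -s

    # on the sending end (e.g. Academia Sinica): 60-second test,
    # first a single stream, then 4 parallel streams
    iperf -c receiver.example.org -t 60
    iperf -c receiver.example.org -t 60 -P 4

With a ~250 ms RTT, a single TCP stream needs a window of roughly 160 Mbps x 0.25 s, about 5 MB, to reach the target rate, which is why TCP window tuning or parallel streams matter on this path.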

THE UNIVERSITY OF MELBOURNE ATLAS
Installed Software (a query sketch follows below):
- VO-atlas-tier-T2
- VO-atlas-cloud-TW
- VO-atlas-release
- VO-atlas-offline
- VO-atlas-offline
- VO-atlas-production
- VO-atlas-production
Open questions:
- How much disk space is required for ATLAS software?
- What is the ATLAS policy for software removal?
- Where do I find the results of Athena validation tests?
- How do I find out what's happening with ATLAS production?
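One way to check which of these tags a site actually publishes is to query the information system. A hedged sketch using the lcg-info client; the option syntax may differ between middleware releases, and the CE name pattern is a placeholder:

    # list CEs that publish a given ATLAS software tag
    lcg-info --vo atlas --list-ce --query 'Tag=VO-atlas-tier-T2'

    # list the tags published by a single (hypothetical) CE
    lcg-info --vo atlas --list-ce --query 'CE=*unimelb*' --attrs Tag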

THE UNIVERSITY OF MELBOURNE General Issues
- Better integration between the CIC portal and GOCDB:
  - Why do I need to set a scheduled downtime in both?
  - GOCDB time is out by one hour for the Melbourne, Australia timezone (we are currently in daylight saving).
  - CIC portal times are shown in local time; are they auto-converted to UTC?
- ATLAS users and parallel-threaded globus-url-copy: silently rewrite the request, or return an error message informing them of the problem? (A transfer sketch follows below.)
- ATLAS jobs upload data from the site SE to elsewhere – why?
- Job efficiency – at what point can jobs be killed?
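For reference, the parallel-threaded transfers mentioned above are requested on the client side. A transfer along the lines of the sketch below (URLs are hypothetical) asks for several TCP streams, which the site SE may not accept:

    # request 4 parallel streams and a 2 MB TCP buffer (URLs are placeholders)
    globus-url-copy -p 4 -tcp-bs 2097152 \
        gsiftp://se.hpc.unimelb.edu.au/dpm/unimelb.edu.au/home/atlas/file.root \
        gsiftp://remote-se.example.org/atlas/file.root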

THE UNIVERSITY OF MELBOURNE General Issues (cont.)
- Java and Australian CA certificates: the root CA certificate is 4096 bits (see the check below).
- On the client, in /usr/java/j2sdk1.4.2_12/jre/lib/security/java.security:
    security.provider.1=sun.security.provider.Sun
    security.provider.2=org.bouncycastle.jce.provider.BouncyCastleProvider
    security.provider.3=com.sun.net.ssl.internal.ssl.Provider
    security.provider.4=com.sun.rsajca.Provider
    security.provider.5=com.sun.crypto.provider.SunJCE
    security.provider.6=sun.security.jgss.SunProvider
- Perhaps the Grid can upgrade to Java 1.5?
- rgmasc, SFT, edg-mkgridmap... what else?
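A quick way to confirm the key length that triggers the problem is to inspect the root CA certificate directly. A minimal check, assuming the usual certificate directory (the file name is a placeholder):

    # print the public-key size of the root CA certificate
    openssl x509 -in /etc/grid-security/certificates/<root-ca>.pem -noout -text | grep -i "Public.Key"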

THE UNIVERSITY OF MELBOURNE Looking Forward
- Funding secured for the next 5 years.
- Discussions with the University and APAC about support, hosting and contributions (people, CPU, disk).
- Tentative plan:
  - minor purchase March '07
  - major purchase June '07

THE UNIVERSITY OF MELBOURNE Thank you Marco La Rosa