ATLAS Installation System 2 (status update)
Alessandro De Salvo – 28 May 2013

Problematic resources [1]

- Infrastructure problems
  - ANALY_AM-04-YERPHI, ANALY_GLASGOW, ANALY_GLASGOW_XROOTD, RAL-LCG2_SL6
    - [pilot] (failed to remove file) Using grid catalog type: UNKNOWN
    - No news yet
  - ANALY_BU_ATLAS_Tier2 (all jobs!)
    - [pilot] Pilot has decided to kill looping job
  - ANALY_DUKE_CLOUD
    - /sw-mgr: line 3059: /home/osg/app/atlas_app/atlas_rel/software/17.3.4/cmtsite/setup.sh: No space left on device
  - ANALY_NECTAR, Australia-NECTAR (many jobs)
    - [dispatcher] lost heartbeat
  - CERN_RELEASE
    - [pilot] cp failed with output: ec = 256, output = [i] Setting up xrootd version: test/3.2.4/x86_64-slc5-gcc41-opt [i] CPU Arch: x86_64 [i] OS Type: slc5 [i] GCC Version: 4.1 [i] Set up the xrootd paths: /afs/cern.ch/project/xrootd/software/test/3.2
  - ru-PNPI
    - [pilot] Job killed by signal 15: Signal handler has set job result to FAILED, ec =
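The ANALY_DUKE_CLOUD failure above is a plain "No space left on device" hit while sourcing a release setup script. A minimal sketch of a pre-install guard, assuming a hypothetical `check_space` helper and an illustrative 2 GB threshold (neither is part of the actual installation system):

```shell
# Sketch only: verify free space in a software area before starting an
# installation, so disk-full errors surface before setup.sh is sourced.
check_space() {
    dir="$1"       # software area to test, e.g. the atlas_app release dir
    need_kb="$2"   # required free space, in kilobytes
    # df -Pk gives portable, 1024-byte-unit output; field 4 is free space
    free_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
    [ -n "$free_kb" ] && [ "$free_kb" -ge "$need_kb" ]
}

# Example: require ~2 GB before attempting a release installation
if check_space /tmp $((2 * 1024 * 1024)); then
    echo "enough space, proceeding with installation"
else
    echo "not enough space, aborting" >&2
fi
```

Running such a check in the wrapper would turn a mid-install setup.sh failure into a clean, site-attributable error.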

Problematic resources [2]

- Other problems
  - BNL_ATLAS_2
    - Pacman installation (for KV validation) failed with
    - Copy [copy $PACMAN_LOCATION/htmls/sky.gif -> htmls] has failed.
  - BNL_ATLAS_RCF
    - NFS site
  - BNL_CLOUD
    - Jobs activated, never running
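Error summaries like the two slides above are easier to compile from a dump of per-job error strings. A small illustrative tally, where the file name and its one-error-per-line format are assumptions rather than anything produced by the installation system:

```shell
# Sketch only: count how often each error category occurs in a text dump
# of job errors, most frequent first (sample lines echo the slides above).
cat > /tmp/pilot_errors.txt <<'EOF'
[pilot] Pilot has decided to kill looping job
[dispatcher] lost heartbeat
[pilot] Job killed by signal 15
[dispatcher] lost heartbeat
EOF

# sort groups identical lines, uniq -c counts each group,
# sort -rn orders the counts from most to least frequent
sort /tmp/pilot_errors.txt | uniq -c | sort -rn
```

With real pilot logs, the top of this list points directly at the sites or error classes worth chasing first.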