Software framework and batch computing Jochen Markert

Shutdown of /lustre + LSF batch farm

- The shutdown of the old batch farm is scheduled for the 14th of December!
- Please back up important data to tape. Data still in use should be copied to /hera/hades/user.
- Clean up beforehand and do not copy everything at full blast. Take into account that this procedure takes time! Starting on the 12th of December is not a good strategy.
- Users analysing old beam times should be aware that from this date on the old software packages are no longer available (/misc/hadessoftware is not visible from ikarus and prometheus, which run GridEngine).
- If really strongly required, the old hydra-8-21 would have to be back-ported to 64 bit and Squeeze64. This requires a bit of effort.

Diskspace /lustre

- Still 118 TB in use!

Diskspace /hera

420 TB used in total:
- hld : 137 TB
- dst : 184 TB
- sim :  70 TB
- user:  31 TB

Resources

- All new software (hydra2 + hgeant2 + more) is installed in /cvmfs/hades.gsi.de/install
- All official parameter files are located in /cvmfs/hades.gsi.de/param
- All packages are installed in dependence of the corresponding ROOT version (current version)
- Each package has its own environment script: /cvmfs/hades.gsi.de/install/<ROOT version>/hydra2-2.8/defall.sh (see the setup sketch below)
- The software installations are visible from all squeeze64 desktops, the pro.hpc.gsi.de cluster and the new batch farm prometheus/hera
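A minimal setup sketch of how such an environment script is typically sourced before compiling or running anything; the ROOT version directory is a placeholder, and the printed variable names are assumptions rather than a documented list:

    # Source the environment script of the package you want to use
    # (replace <ROOT version> with the currently installed version).
    . /cvmfs/hades.gsi.de/install/<ROOT version>/hydra2-2.8/defall.sh

    # Quick check that the environment is set (variable names assumed):
    echo "ROOTSYS = $ROOTSYS"
    echo "HADDIR  = $HADDIR"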

New batch farm

Installation of the HADES software:
- The software is built on lxbuild02.gsi.de and stored locally under /cvmfs/hades.gsi.de
- After installation the software has to be published to enable user access to it
- The publish command basically runs an rsync to the CVMFS server
- From the server the software is distributed to all hosts and is seen as /cvmfs/hades.gsi.de

(Slide diagram: lxbuild02.gsi.de publishes /cvmfs/hades.gsi.de to the CVMFS server, which distributes it to the batch nodes lxb320.gsi.de, lxb321.gsi.de, lxb322.gsi.de, lxb323.gsi.de, lxb324.gsi.de.)
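As a rough illustration of that publish step (not the actual HADES admin procedure; the server name and target path are pure assumptions):

    # Conceptual sketch: synchronise the locally built tree to the CVMFS
    # server, which then distributes it to all client hosts as
    # /cvmfs/hades.gsi.de. Hostname and server-side path are assumptions.
    rsync -a --delete /cvmfs/hades.gsi.de/ cvmfs-server.gsi.de:/srv/cvmfs/hades.gsi.de/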

How to compute

- Each user performing analysis should have their own account at GSI (please fill in the prepared form...)
- User data should be stored at /hera/hades/user/username (please use the Linux username)
- Log in to the pro.hpc.gsi.de cluster:
  - This cluster has /lustre + /hera mounted (do your data transfer there)
  - It is supposed to be used for daily work and submission of batch jobs
  - This cluster is not directly reachable from outside GSI
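A minimal sketch of these first steps; the user name and the /lustre source path are placeholders, not prescribed locations:

    # Log in (pro.hpc.gsi.de is only reachable from inside GSI).
    ssh username@pro.hpc.gsi.de

    # Create the personal directory on /hera (use the Linux user name) and
    # copy the data still needed from the old /lustre file system.
    mkdir -p /hera/hades/user/username
    rsync -a /lustre/hades/user/username/keep/ /hera/hades/user/username/keep/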

How to run on the batch farm

- Submit jobs from pro.hpc.gsi.de
- Jobs running on the batch farm should only use software from /cvmfs/hades.gsi.de and data from /hera/hades; user home dirs, /misc/hadessoftware etc. are not supported and will crash your jobs
- Start batch computing from the script examples in SVN (svn checkout ... GE); scripts for PLUTO, UrQMD, HGeant, DSTs and sim DSTs are provided
- Compile and run local tests on pro.hpc.gsi.de; send massively parallel jobs to the farm only after the tests (see the workflow sketch below)
- Standard users can run up to 400 jobs in parallel
- Merge your histogram files using hadd or hadd.pl (parallel hadd by Jan)
- Avoid tons of small files on /hera, they will slow down the performance (merge them or zip ROOT files using hzip)
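A possible workflow sketch, assuming the GE examples were already checked out from SVN; the test macro and output directory names are illustrative only:

    # 1) Set up the environment and run a short local test on pro.hpc.gsi.de.
    . /cvmfs/hades.gsi.de/install/<ROOT version>/hydra2-2.8/defall.sh
    root -l -b -q analysisLoop.C        # hypothetical test macro

    # 2) Only after the local test works, submit the massively parallel jobs
    #    with the GE scripts from SVN (up to 400 jobs for standard users).

    # 3) Merge the per-job histogram files so /hera does not fill up with
    #    many small files (hadd is the standard ROOT merger; hadd.pl is the
    #    parallel wrapper mentioned above).
    hadd -f merged_hists.root out/job_*.root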

Example loop

(The slide shows the code of an example analysis loop; the code itself is not contained in the transcript.)

Example loop batch script

(The slide shows the corresponding batch script; the code itself is not contained in the transcript.)

- The user is supposed to work in his home dir; the current working dir is synchronized to the submission dir on /hera/hades/user...
- Works with file lists -> flexible
- Allows combining input files into one job on the fly -> better efficiency on the batch farm
- Enhanced batch farm debugging output is provided by log files

A sketch of such a script follows below.
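This is a hypothetical sketch only, illustrating the points above (file-list input, several files per job, log file output); it is not the official GE script from SVN, and all variable, path and macro names are assumptions:

    #!/bin/bash
    # Process all input files listed in the file list passed as argument 1.
    FILELIST=$1
    WORKDIR=/hera/hades/user/$USER/jobs/$JOB_ID    # assumed layout; $JOB_ID is set by GridEngine
    LOGFILE=$WORKDIR/job_$JOB_ID.log

    mkdir -p "$WORKDIR"
    . /cvmfs/hades.gsi.de/install/<ROOT version>/hydra2-2.8/defall.sh

    {
        while read -r infile; do
            # Combining input files into one job: loop over the list entries.
            root -l -b -q "analysisLoop.C(\"$infile\")"   # hypothetical macro
        done < "$FILELIST"
    } > "$LOGFILE" 2>&1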

Documentation

- hydra2 online documentation (ROOT + doxygen)
- Batch farm
- Data storage
- Monitoring
- Software

Installation Procedure

The installation procedure installs on 32- or 64-bit systems from one tar.gz file (150 MB of source code, needs some time to compile; located at /misc/hadessoftware/etch32).

From the tarball:
- gsl
- ROOT
- Cernlib
- Garfield
- all admin scripts
- all environment scripts
- UrQMD
- UrQMD converter

From SVN:
- Hydra2
- HGeant2
- hzip
- hadd.pl
- Pluto

The ORACLE client has to be installed separately (from a tar file or the full installer).
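A heavily hedged sketch of those two sources; the archive name and the SVN repository URL are assumptions, since the slide does not spell them out:

    # 1) Externals, admin/environment scripts, UrQMD etc. come from the single
    #    tarball (~150 MB of source) located at /misc/hadessoftware/etch32.
    tar -xzf /misc/hadessoftware/etch32/<hades-software>.tar.gz

    # 2) Hydra2, HGeant2, hzip, hadd.pl and Pluto are checked out from SVN
    #    (repository URL not given on the slide).
    svn checkout <hades-svn-repository>/hydra2/trunk hydra2

    # 3) The ORACLE client is installed separately (own tar file or full installer).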

New stuff to come

On analysis macro level:
- Event mixing framework by Szymon
- Multiple scattering minimizing for leptons by Wolfgang (matching on the RICH mirror + global vertex use)
- Add util functions for vertex + secondary vertex calculations + transformations to libParticle.so (stuff contained in Alex's macros, for example)

On DST level:
- Second iteration of cluster finder + kickplane corrections
- Enable full switch to the Kalman filter
- Additional data objects for close pair rejection (to be developed)
- Backtracking MDC -> RICH for ring finding