Tier1 Grid from the user's point of view: the urgency of standards
Dr James Cunha Werner
BaBar UK Grid Meeting

Users' requirements
PhD students with 3-year scholarships.
Researchers on fixed-term contracts.
Researchers with deadlines and competition.
THEY NEED AN OPERATIONAL AND RELIABLE ENVIRONMENT TO DO THEIR WORK.

The service provided by RAL for BaBar Grid UK
Months to install LCG properly.
Months to develop an initialisation script.
Lack of adequate procedures -> poor service.
USERS LOOK FOR OTHER RESOURCES: SLAC, GridKa, etc.
The result: users' time is wasted and resources sit idle.

Grid at BaBar: Elba meeting

TauUser reprocessing: an opportunity lost!

Jenny's request
Date: Mon, 4 Apr :58: (BST)
From: Jenny Williams
To: James Werner
Subject: TauUser for CM2

ok, it works. Requirements for running with analysis-24:
Beta V
BetaMiniUser V
BetaPid V
…

Operational problems at RAL
Date: Mon, 4 Apr :58:
From: Steve Traylen
To: jamwer
Cc: Chris Brew
Subject: Re: [BABARGRID-UK] Jobs in Waiting forever...

On Mon, Apr 04, 2005 at 10:11:30AM or thereabouts, jamwer wrote:
> Dear colleagues,
> Last week I submitted one dataset (26 jobs) to bohr and the jobs
> were waiting for 4 days. I killed all of them and submitted again from my
> farm bfb... and they are still waiting.
> Submission was fine:
>
> JOB SUBMIT OUTCOME
> The job has been successfully submitted to the Network Server.
> Use edg-job-status command to check job current status. Your job
> identifier (edg_jobId) is:
>

Chris, James - I should add that it is only lcgrb01.gridpp.rl.ac.uk that appears to have this problem. There are no reports from other RBs of them going into this state. I'll keep you updated as I get news. Looking at other RBs that support BaBar, there are also grid008g.cnaf.infn.it and egee-rb-01.cnaf.infn.it. It would be good to break their RBs as well; CNAF has the expertise locally to fix this kind of thing.
Steve
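
For readers unfamiliar with the EDG workload management commands mentioned in this exchange, the user-side workflow would have looked roughly like the sketch below. This is only an illustration: the JDL file name and the job-identifier file are placeholders, not the actual files used at the time.

  # Submit the jobs, collecting the returned identifiers in a file.
  edg-job-submit --vo babar -o jobids.txt tau11-run4.jdl    # placeholder JDL name

  # Check their status; in this case they sat in "Waiting" for days.
  edg-job-status -i jobids.txt

  # Cancel jobs stuck behind a broken resource broker before resubmitting elsewhere.
  edg-job-cancel -i jobids.txt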

RAL operational again
Date: Fri, 6 May :25:
From: Steve Traylen
To: Babar Grid UK
Cc: James Werner
Subject: lcgrb01 looks to be okay now.

Hi James and others. lcgrb01.gridpp.rl.ac.uk, the RB at RAL that was having problems, now looks to be okay. It was okay before I went away two weeks ago and still appears to be. The fault looked to be a bad interaction between globus and nscd. Please feel free to use lcgrb01 and, as normal, post questions to

Initialisation script
From :
Sent : 17 February :00:07
To :
Subject : Re: VO-based environment settings

Dear Artem,
Your question is very important if we want to establish a worldwide grid. The LCG grid software defines the environment variable VO_BABAR_SW_DIR to point to the configuration directory where initialisation scripts, tars, etc. are stored. At Manchester we defined the script $VO_BABAR_SW_DIR/babar-grid-setup-env.sh to initialise $BFROOT, $BFARCH, ... and to call all the scripts from hepix (group_siteSpecs.conf.sh, group_aliases.sh, group_sys.conf.sh, and bashrc). If you do not have the release installed, a tar should then be untarred to provide the necessary infrastructure. We do not use this ourselves, because our BaBar software is installed on AFS. The next step is to set 00_FD_BOOT to your latest version of the conditions and configuration database. At this point you will be able to run BetaMiniApp without any problem, on any computer in the world that follows this elementary standard. I am running Tau11 in parallel on 26 computers from different farms, which allows me to analyse more than 1 million events per hour. For more information, see
Best regards,
James
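
To make the description above concrete, here is a minimal sketch of what such a site setup script could contain. It is only an illustration: the AFS path, architecture tag, tarball name and hepix script location are assumptions, not the actual Manchester script.

  #!/bin/sh
  # Hypothetical sketch of $VO_BABAR_SW_DIR/babar-grid-setup-env.sh
  # (all paths and names below are assumed for illustration).

  export BFROOT=/afs/hep.man.ac.uk/babar        # assumed software root (AFS at Manchester)
  export BFARCH=Linux24SL3_i386_gcc323          # assumed architecture tag

  # If no release is installed locally, unpack the tarball shipped in the VO area.
  if [ ! -d "$BFROOT/dist/releases" ]; then
      mkdir -p "$BFROOT"
      tar -xzf "$VO_BABAR_SW_DIR/babar-release.tar.gz" -C "$BFROOT"   # assumed tarball name
  fi

  # Call the hepix group scripts named in the mail (location assumed).
  for s in group_siteSpecs.conf.sh group_aliases.sh group_sys.conf.sh bashrc; do
      [ -r "$BFROOT/hepix/$s" ] && . "$BFROOT/hepix/$s"
  done

  # Finally, 00_FD_BOOT would be pointed at the latest conditions and
  # configuration database (site-specific, so omitted in this sketch).

A script of this shape would be sourced by the job, so the exported variables persist for BetaMiniApp.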

From :
Sent : 17 February :41:40
To :
Subject : RE: VO-based environment settings

Hi,
As someone who sits on both sides of this fence (site admin and grid application developer/user), James's solution is, I think, the only practical one and the one I've been pushing. …

Date: Mon, 9 May :59: (BST)
From: jamwer
To:
Subject: [BABARGRID-UK] Grid needs standards

Would you please write a script for analysis-24, called
$VO_BABAR_SW_DIR/babar-grid-setup-env-analysis-24.sh
which initialises the whole BaBar environment and 00_FD_BOOT. The commands users would have to run after running your script will be:

local=`pwd`
cd /afs/rl.ac.uk/bfactory/dist/releases/analysis-24
srtpath analysis-24 $BFARCH
cd $local
ln -s $BFROOT/dist/releases/analysis-24 PARENT
edg-rm --vo babar cp lfn:jamwer_bfb.tier2.hep.man.ac.uk_BetaMiniApp_16 file:///tmp/BetaMiniApp
chmod 777 /tmp/BetaMiniApp
/tmp/BetaMiniApp JobTau11-Run4-OnPeak-R14-1.tcl
rm /tmp/BetaMiniApp

I am trying to run using the same parameters I had in the batch system and it is not working. We need a standard way to initialise the environment if we want to allow users onto the grid at any site. Let me know when you have the job done, or if you have a better way to do it.
Best regards,
James
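
For illustration only, the requested per-release script could look something like the sketch below, built from the commands quoted in the mail. The RAL release path comes from the mail; everything else (script structure, variable names, the treatment of 00_FD_BOOT) is an assumption and was never an agreed standard.

  #!/bin/sh
  # Hypothetical $VO_BABAR_SW_DIR/babar-grid-setup-env-analysis-24.sh
  # Intended to be sourced by the job so the environment persists.

  BABAR_RELEASE=analysis-24
  RELEASE_DIR=/afs/rl.ac.uk/bfactory/dist/releases/$BABAR_RELEASE   # RAL path from the mail

  # Set up the release environment, preserving the job's working directory.
  workdir=`pwd`
  cd "$RELEASE_DIR" && srtpath $BABAR_RELEASE $BFARCH
  cd "$workdir"

  # Create the PARENT link BaBar jobs expect, as in the quoted commands.
  [ -e PARENT ] || ln -s "$BFROOT/dist/releases/$BABAR_RELEASE" PARENT

  # Point 00_FD_BOOT at the site's current conditions and configuration
  # database (site-specific, so left out of this sketch).

With such a script in place, the user's job would reduce to sourcing it and then fetching and running BetaMiniApp, regardless of which site it lands on.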

Date: Tue, 10 May :51:
To: jamwer
Cc:
Subject: RE: [BABARGRID-UK] Grid needs standards

Hi James,
I've not dealt with this because I'm away at the HEPiX Workshop at the moment, and it will need some discussion before it's implemented. The script you suggest is very highly tailored to your specific needs and will have to be much more generalised before it can go into use. Also, as you say in the subject line, "Grid needs standards", but those standards need to be agreed and useful for many people. I suggest you report this as a suggestion to the main BaBarGrid list, where we can discuss it and find a general solution which will work for more situations than just yours. …

Publishing site resources/releases
> GlueHEPSup = BaBar, Atlas, ...              <= supported experiment software
> GlueOS = RH7.2, RH7.3 or SL3 ...            <= operating system
> GlueAplic = BetaMiniApp, Moose, ...         <= available applications
> GlueReleases = , d, etc                     <= releases available
> GlueCondDB = local, AMS, xrootd, ...        <= conditions & configuration DB
> GlueBackgroundDB = local, AMS, xrootd, ...  <= background DB
> GlueBbk = local, xrootd, ...                <= experimental data

With attributes like these we would be able to search for the configuration we want to run the software under and optimise resources: I could see how many jobs are queued and decide the best strategy. For a heavy job (taking days) the data can be accessed remotely through xrootd, so GlueBbk=xrootd would be used. A quick program test would use GlueBbk=local, and only a few sites would be able to run it. The query would return the list of CE names where the release is available.
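
By way of comparison, installed software is already advertised today through Glue runtime-environment tags, which can be queried from a BDII with a standard LDAP search; attributes like the ones proposed above would be queried in the same way. A sketch only, assuming a top-level BDII host and a VO-babar-analysis-24 tag (both illustrative):

  # Find compute elements advertising a BaBar analysis-24 software tag
  # (BDII host and tag name are assumptions for illustration).
  ldapsearch -x -LLL -H ldap://lcg-bdii.cern.ch:2170 -b o=grid \
    '(GlueHostApplicationSoftwareRunTimeEnvironment=VO-babar-analysis-24)' \
    GlueSubClusterUniqueID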