CMS farm Mons - Alain Romeyer - 15/06/2004 (presentation transcript)

Alain Romeyer - 15/06/2004 (slide 1)
CMS farm Mons

Final goal: to be included in the GRID CMS framework and involved in the CMS data processing scheme.
Starting point: follow the UCL CMS farm installation.
Means:
- install all the CMS simulation and reconstruction software
- make it available on all the computers of the farm -> NFS
- distribute the jobs in a "clever" way -> CONDOR
- store the produced events -> RAID system disk space
Near future: connection to the GRID -> Globus
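As an illustration of the CONDOR-based job distribution, a minimal submit sketch in Python; the software path /opt/cms and the wrapper script run_oscar.sh are assumptions for the example, not the actual Mons layout, and with the software area shared over NFS the job can read it directly:

#!/usr/bin/env python
# Sketch: write a vanilla-universe Condor submit description for a CMS
# simulation job whose executable lives on the NFS-shared software area,
# then hand it to condor_submit. Paths and script name are hypothetical.
import subprocess

SUBMIT_FILE = "cms_job.sub"

SUBMIT_DESCRIPTION = """\
universe   = vanilla
executable = /opt/cms/bin/run_oscar.sh
arguments  = --events 1000
output     = cms_job.out
error      = cms_job.err
log        = cms_job.log
queue
"""

def submit():
    # Write the description file and submit it to the local Condor pool.
    with open(SUBMIT_FILE, "w") as f:
        f.write(SUBMIT_DESCRIPTION)
    subprocess.call(["condor_submit", SUBMIT_FILE])

if __name__ == "__main__":
    submit()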

Alain Romeyer - 15/06/2004 (slide 2)
Cluster architecture

[Diagram: the server/router cms01.umh.ac.be (static IP XXX) connects the outer world (100 Mb/s) to the internal 1 Gb/s farm network of compute nodes (cms...); RAID disk (2.4 TB); OS: Redhat CERN.]

Alain Romeyer - 15/06/2004 (slide 3)
Status

Current status: 3 PCs (P4 2.4 GHz processor - 2 GB memory)
-> 1 server
-> 2 for job execution

01/04: OS installation on each PC -> OK
01-02/04: CMS software environment -> OK (CMS84: OSCAR_2_4_6 - ORCA_7_6_1)
          Programs are distributed via NFS -> OK
03/04: Job submission and management -> OK (CONDOR 6_6_2)
04-05/04: Event production -> OK
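Since the software is shared over NFS rather than installed per node, a quick consistency check can confirm that every worker sees the software area. A sketch, assuming illustrative node names (cms02, cms03) and an illustrative mount point /opt/cms:

#!/usr/bin/env python
# Sketch: verify over ssh that the NFS-mounted CMS software directories
# are visible on each worker node. Node names and paths are assumptions.
import subprocess

NODES = ["cms02", "cms03"]
SOFTWARE_DIRS = ["/opt/cms/OSCAR_2_4_6", "/opt/cms/ORCA_7_6_1"]

def check_node(node):
    """Return True if every software directory exists on `node`."""
    for d in SOFTWARE_DIRS:
        if subprocess.call(["ssh", node, "test", "-d", d]) != 0:
            print("%s: missing %s" % (node, d))
            return False
    print("%s: all CMS software directories mounted" % node)
    return True

if __name__ == "__main__":
    ok = all([check_node(n) for n in NODES])
    print("NFS check %s" % ("passed" if ok else "FAILED"))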

Alain Romeyer - 15/06/2004 (slide 4)
Production tests

Event production for CMS:
- Pile-up events ( evts - ~1 min/evt) -> 17-hour jobs
- SUSY events ( evts - ~6 min/evt) -> 4.5-day jobs
- Mixing of pile-up and SUSY events + digitisation (detector response) ( evts - ~25 s/evt) -> 7-hour jobs

The farm seems operational and stable over ~1 week.
Cacti monitoring tool: works only partially... don't know why!
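The job lengths follow directly from events per job times time per event (the event counts themselves did not survive in the transcript). A small helper, with an assumed 1000 events per job, reproduces the quoted scale:

#!/usr/bin/env python
# Sketch: job wall-clock time = events per job x time per event.
# Only the per-event times come from the slide; the 1000 events per job
# is an assumed, illustrative count.

def job_hours(n_events, seconds_per_event):
    """Wall-clock time of one job, in hours."""
    return n_events * seconds_per_event / 3600.0

if __name__ == "__main__":
    print("pile-up      : %.1f h"    % job_hours(1000, 60))          # slide quotes 17-hour jobs
    print("SUSY         : %.1f days" % (job_hours(1000, 360) / 24))  # slide quotes 4.5-day jobs
    print("digitisation : %.1f h"    % job_hours(1000, 25))          # slide quotes 7-hour jobs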

Alain Romeyer - 15/06/2004 (slide 5)
Remaining issues

CONDOR configuration / firewall:
- The CONDOR daemons wait for job submissions on eth1 (the local network), but we want to submit jobs from the public network (eth0).
  [Diagram: cms01 with eth0 on the public network and eth1 on the local network, running the CONDOR daemons]
  -> playing with iptables rules?
- How to send mail from cms02/03 to the UMH mail server?
  -> IP masquerading on cms01 (the router)... see the sketch below.
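What the masquerading on cms01 could look like, written as a small Python wrapper around standard iptables commands; the interface roles follow the slide (eth0 public, eth1 local), everything else is an assumption rather than the configuration actually deployed:

#!/usr/bin/env python
# Sketch: NAT the local farm network (eth1) behind cms01's public
# interface (eth0) so that cms02/cms03 can reach the UMH mail server.
# Illustrative only.
import subprocess

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

def enable_masquerading():
    # Let the kernel forward packets between the two interfaces.
    run(["sysctl", "-w", "net.ipv4.ip_forward=1"])
    # Rewrite the source address of outgoing packets to cms01's public IP.
    run(["iptables", "-t", "nat", "-A", "POSTROUTING", "-o", "eth0", "-j", "MASQUERADE"])
    # Forward farm traffic out, and let replies to established connections back in.
    run(["iptables", "-A", "FORWARD", "-i", "eth1", "-o", "eth0", "-j", "ACCEPT"])
    run(["iptables", "-A", "FORWARD", "-i", "eth0", "-o", "eth1",
         "-m", "state", "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"])

if __name__ == "__main__":
    enable_masquerading()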

Alain Romeyer - 15/06/2004 (slide 6)
Before moving to GRID...

Installation of required dependencies:

    Package   Version    Status
    OS        RH 7.3     OK
    Gcc       2.96       OK
    Java      1_4_2_04   OK
    Ant       1.6.1      OK
    Junit     3.8.1      NOT WORKING
    Bison     1.35       OK
    Jakarta   4.1.30     OK
    MySQL                OK

Do we need particular things in terms of:
- QoS
- Security
- ...?
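The command-line entries of the table can be cross-checked from each tool's own version flag. A sketch of such a check (JUnit and Jakarta are left out because they have no simple version command; this is an illustration, not part of the original setup):

#!/usr/bin/env python
# Sketch: print the version reported by each command-line tool from the
# dependency table. Illustrative only.
import subprocess

CHECKS = [
    ("gcc",   ["gcc", "--version"]),
    ("java",  ["java", "-version"]),    # java prints its version on stderr
    ("ant",   ["ant", "-version"]),
    ("bison", ["bison", "--version"]),
    ("mysql", ["mysql", "--version"]),
]

if __name__ == "__main__":
    for name, cmd in CHECKS:
        print("=== %s ===" % name)
        try:
            subprocess.call(cmd)
        except OSError:
            print("%s: not installed or not in PATH" % name)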