IFIN-HH LHCb GRID Activities (16.10.06) Eduard Pauna, Radu Stoica

Activities of the Romanian group in LHCb. The LHCb Romanian group is mainly involved in the Calorimeter project, with three types of contributions:
– Physicists participated in tests, simulation and reconstruction of events. ECAL: commissioning of the photomultipliers and implementation of a monitoring system for testing them. HCAL: installation of the LED + HV system and testing of the connections of the PMT fibres to the LHCb detector.
– Technicians were involved in the construction and installation of various supports and electronics racks.
– Software contributions: simulation of the encoding/decoding of data received from the TELL1 boards for HCAL and ECAL; the Gaudi Viewer project.

GRID Activities. Installation of the LHCb GRID cluster at IFIN-HH started around the middle of the year. First step: start with a small number of computers (6 machines) and develop the necessary infrastructure. Second step: acquisition of a larger number of computers.

Hardware configuration. The current hardware configuration includes:
– 3 P4 2.8 GHz, 2 GB RAM
– 3 dual-processor 3 GHz Xeons, 2 GB RAM, 80 GB HDD (worker nodes)
– 15 dual-processor 3 GHz Xeon LV, 2 GB RAM, 80 GB HDD (worker nodes)
– 2 Pentium D 3 GHz, 2 GB RAM (CE, SE)
– PCI IDE controller for 8 additional HDDs, for a total of 5 TB of storage (SE)
– UPS

Software configuration. SLC is installed on all machines; we are ready to move to SLC4 when LHCb switches to the new OS. From the GRID middleware point of view, the standard software packages corresponding to the typical CE, SE, WN and UI configurations are installed. The PBS batch system is used. Additional services ease the management of the cluster: a gateway (firewall, NAT for the worker nodes), a DHCP + TFTP server, and DNS.
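
To give a concrete, hedged illustration of the batch-system side, the sketch below shows how a dedicated PBS/Torque queue for LHCb jobs could be created with qmgr; the queue name and the resource limits are assumptions for illustration, not the actual site settings.

# Hypothetical example: create and enable an execution queue for LHCb jobs
qmgr -c "create queue lhcb"
qmgr -c "set queue lhcb queue_type = Execution"
qmgr -c "set queue lhcb resources_max.cput = 48:00:00"      # CPU-time limit (assumed value)
qmgr -c "set queue lhcb resources_max.walltime = 72:00:00"  # wall-clock limit (assumed value)
qmgr -c "set queue lhcb acl_group_enable = True"            # restrict the queue to the lhcb group
qmgr -c "set queue lhcb acl_groups = lhcb"
qmgr -c "set queue lhcb enabled = True"
qmgr -c "set queue lhcb started = True"
qmgr -c "set server scheduling = True"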

Network configuration. [Diagram: the Internet link goes through the gateway (GW) to an internal switch connecting the CE, SE, WN and UI nodes.]
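
A minimal sketch of the NAT setup such a gateway could use is shown below; the interface names (eth0 public, eth1 internal cluster LAN) are assumptions, not the site's actual firewall configuration.

# Hypothetical gateway NAT rules (eth0 = public link, eth1 = internal cluster LAN)
echo 1 > /proc/sys/net/ipv4/ip_forward                # enable IP forwarding on the gateway
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE  # masquerade outgoing traffic from the WNs
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT         # allow cluster -> Internet
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT  # allow replies back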

Node installation procedure. Configure the necessary services (DNS, DHCP). Every machine is installed and configured over the network (DHCP + TFTP to load a PXE pre-boot environment, then RPM packages installed from a local server via HTTP). The GRID middleware is installed in the usual way: /opt/glite/yaim/scripts/install_node config_file
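
For illustration, a DHCP configuration fragment of the kind that enables such PXE network installs might look like the sketch below; the private subnet, addresses and boot-file name are assumed values, not the site's real configuration.

# Hypothetical PXE section appended to /etc/dhcpd.conf on the install server
cat >> /etc/dhcpd.conf <<'EOF'
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;      # addresses handed out to the nodes
  option routers 192.168.1.1;             # the gateway machine
  option domain-name-servers 192.168.1.1; # local DNS server
  next-server 192.168.1.1;                # TFTP server holding the PXE boot image
  filename "pxelinux.0";                  # PXE bootloader fetched over TFTP
}
EOF
service dhcpd restart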

Site status. The site has a GRID certificate and its users are members of the DTEAM and LHCb VOs. The site has just been registered and testing has started (RO-11-NIPNE). It has begun receiving test jobs and its status can be monitored online. We have just submitted our request for certification, and the Site Functional Tests should run in the following days. With the help of the LHCb team at CERN we will try to run our first LHCb jobs in the coming days. The LHCb applications have been successfully installed on our machines.
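
As a rough sketch of the kind of test job such a site receives, the lines below create a minimal JDL and submit it with LCG-2 era UI commands; the CE hostname ce01.nipne.ro and the lhcb queue are hypothetical placeholders, not the registered endpoint.

# Hypothetical test job: report which worker node it ran on
cat > test.jdl <<'EOF'
Executable    = "/bin/hostname";
StdOutput     = "std.out";
StdError      = "std.err";
OutputSandbox = {"std.out", "std.err"};
# Pin the job to an assumed RO-11-NIPNE CE and its LHCb PBS queue
Requirements  = other.GlueCEUniqueID == "ce01.nipne.ro:2119/jobmanager-lcgpbs-lhcb";
EOF
edg-job-submit -o jobids.txt test.jdl   # submit from the UI, keep the job ID
edg-job-status -i jobids.txt            # follow the job status
edg-job-get-output -i jobids.txt        # retrieve std.out / std.err when done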

Plans. Mr. Eduard Pauna is going to be fully responsible for the management of the LCG site (I will be based at CERN for the next couple of years). A new hardware acquisition is scheduled in a couple of months: ~10 machines are expected to be bought (>50 CPU target). Optimizing the network link (hoping for a higher-speed line). Optimizing the storage, if it proves to be a bottleneck, by buying a better controller for the HDDs. Small optimizations of the LHCb software.