“ALEXANDRU IOAN CUZA” UNIVERSITY OF IAŞI
Digital Communications Department
Status of RO-16-UAIC Grid site in 2013
System manager: Pînzaru Ciprian
2013

About RO-16-UAIC
The Alexandru Ioan Cuza University of Iaşi is the oldest higher education institution in Romania. With a large student body and 800 academic staff, the university enjoys high prestige at national and international level and cooperates with over 250 universities worldwide.
This work was supported by the CONDEGRID project.

INFRASTRUCTURE
- 3 Gigabit Ethernet switches, each with 48 Gigabit ports and two Ten Gigabit Ethernet ports
- 50 servers used as WN, each with 8 cores (2.66 GHz), 16 GB RAM, 2 Gigabit interfaces and 160 GB of local disk
- 1 server used for NFS, DHCP and DNS, with 8 cores (2.66 GHz), 24 GB RAM and 4 x 160 GB disks
- 1 storage unit with 8 TB in RAID 6, used for the ATLAS software area
- 1 storage server used for the SE, with 16 cores (2.4 GHz), 24 GB RAM (1333 MHz), 6 Gigabit interfaces and 80 TB in RAID 6
- 1 storage unit with 50 TB in RAID 6, also used for the SE
- 2 servers that will be used as a back-up virtualization system for CREAM, BDII, UI and SQUID, each with 12 cores (2.66 GHz), 32 GB RAM and 2 Ten Gigabit Ethernet interfaces
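The farm-level totals quoted later in this report (400 WN cores, 130 TB of SE storage) follow directly from the inventory above; a minimal Python sketch of the arithmetic, using only the figures in the list:

    # Aggregate capacity of the RO-16-UAIC inventory listed above
    wn_servers = 50            # worker-node servers
    cores_per_wn = 8           # 2.66 GHz cores per WN
    ram_per_wn_gb = 16         # RAM per WN

    total_cores = wn_servers * cores_per_wn        # 400 cores
    total_ram_gb = wn_servers * ram_per_wn_gb      # 800 GB across the farm
    se_storage_tb = 80 + 50                        # two SE units in RAID 6: 130 TB

    print(f"WN cores: {total_cores}")
    print(f"WN RAM:   {total_ram_gb} GB")
    print(f"SE disk:  {se_storage_tb} TB")

The 400-core and 130 TB results match the starting figures given on the "Future plans" slide.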

Network infrastructure
[Slide shows the network topology diagram of the site.]

INFRASTRUCTURE - electric
- One UPS has an output power of 54 kW and the other two 18 kW each, for a combined 54 + 2 x 18 = 90 kW (the UPS batteries were replaced this year).
- One generator with an output power of 100 kW, which covers the full 90 kW UPS load.

RO-16-UAIC - software
Virtual Organizations: atlas, ops, dteam
Operating system: Scientific Linux (64-bit) on the WN and on the CREAM, UI, APEL, BDII and SE machines
Middleware: EMI v2 for WN, APEL, UI, BDII, CREAM and SE
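On EMI-2 era services, the supported VOs were typically declared through YAIM. A minimal, hypothetical site-info.def excerpt showing how the three VOs above would be enabled; the host names, queue layout and software path are illustrative placeholders, not the site's actual values:

    # site-info.def fragment (YAIM) - all values below are illustrative
    SITE_NAME="RO-16-UAIC"
    VOS="atlas ops dteam"                  # VOs supported by the site
    QUEUES="atlas ops dteam"               # one possible queue layout
    CE_HOST="cream.example.uaic.ro"        # hypothetical CREAM CE host
    SE_LIST="se.example.uaic.ro"           # hypothetical storage element
    BDII_HOST="bdii.example.uaic.ro"       # hypothetical site BDII
    VO_ATLAS_SW_DIR="/opt/exp_soft/atlas"  # NFS-shared ATLAS software area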

MONITORING AND RESULTS
[Accounting plot: normalized CPU time delivered by the site, in kSI2K-hours]
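kSI2K-hours are normalized CPU time: the raw CPU hours of a job scaled by the SPECint2000 rating (in units of 1000, hence the "k") of the core that ran it, which is the quantity APEL accounting publishes. A minimal sketch of the conversion, assuming an illustrative per-core rating of 2.0 kSI2K rather than the site's measured benchmark:

    # Convert raw CPU time to normalized kSI2K-hours (APEL-style accounting)
    def ksi2k_hours(cpu_seconds: float, core_rating_ksi2k: float) -> float:
        """Normalized CPU time = CPU hours x per-core kSI2K rating."""
        return (cpu_seconds / 3600.0) * core_rating_ksi2k

    # Example: a job using 12 CPU-hours on a core rated at 2.0 kSI2K
    print(ksi2k_hours(12 * 3600, 2.0))   # -> 24.0 kSI2K-hours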

MONITORING AND RESULTS
[Plot: number of jobs run at the site in the last 12 hours]

LHCONE in Romania - RoEduNet
- Configured as a separate VRF on the RoEduNet network; it has peered with the GEANT VRF since Nov 2012
- Connected sites:
  RO-16-UAIC /25 (located in Iaşi), since 15 Nov 2012
  RO-14-ITIM /27 (located in Cluj), since 27 Nov 2012
- Sites to be connected soon:
  RO-07-IFIN and RO-11-IFIN ( /24, located in Bucharest-Magurele)

Future plans for RO-16-UAIC
- Increase the storage capacity for ATLAS data analysis in the near future, from 130 TB to 160 TB
- Increase the number of WN cores from 400 to 500
Acknowledgement: This work was supported by the CONDEGRID project: National contribution to the development of the LCG computing grid for elementary particle physics.