PIC port d’informació científica Luis Diaz (PIC) Database services at PIC: review and plans

PIC port d’informació científica 2 Index
Global view
Databases hosted
Structure, general information
– Servers
– Redundancy
– Storage
Detail of each cluster
Plans

PIC port d’informació científica 3 Global view

PIC port d’informació científica 4 DB HOSTED
Oracle DBs, with Streams replication status (see the check sketched below):
– ATLAS: Yes
– LHCb/LFC: Yes
– LFC-ATLAS: No (see plans)
– FTS: No
All these DBs are 10g R2 ( ), 64-bit RACs with ASM.
MySQL DB:
– LHCbdb (DIRAC). PIC manages the following subcategories:
  LHCb DIRAC web site
  LHCb DIRAC Monitoring, Accounting and Logging system
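Where Streams is enabled (ATLAS and LHCb/LFC above), its state can be checked from the Streams data dictionary views; a minimal sketch for 10g, assuming an account with SELECT privilege on these views:

  -- Status and last error of the Streams capture and apply processes
  SELECT capture_name, status, error_message FROM dba_capture;
  SELECT apply_name,   status, error_message FROM dba_apply;

A status other than ENABLED, or a non-empty error message, is presumably what the Nagios streams check mentioned on the Common details slide alerts on.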

PIC port d’informació científica 5 STRUCTURE/GENERAL INFO
Servers
– All the nodes hosting Oracle databases are identical:
  Linux RedHat, 64-bit
  Fujitsu Siemens Primergy Advanced Blade Ecosystem BX600
  8 GB RAM, Quad-Core Intel CPU 1.6 GHz
– DIRAC MySQL:
  Scientific Linux CERN SLC release 4.6 (Beryllium)
  Dell 1850, Quad-Core Intel(R) Xeon(TM) CPU 3.20 GHz, 4 GB RAM
– Repartition:
  3 blades for 3D ATLAS
  2 blades for 3D LHCb
  2 blades for FTS
  2 blades for LFC
  1 blade for DIRAC

PIC port d’informació científica 6 STRUCTURE/GENERAL INFO
Redundancy (bonding, multipath, RAC)
Blade: 1st level of redundancy
– Bonding (3 LAN cards)
– Performance increase
FC switch: 2nd level
– Both switches are interconnected
– All the nodes are connected to both switches
Storage: 3rd level
– 2 storage device controllers
– Inside the storage device: last level (see next slide)
RAC adds instance-level redundancy on top of these layers; see the check sketched below.
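At the RAC layer, a quick way to confirm that every node of a cluster is serving its instance is the cluster-wide instance view; a minimal sketch, assuming a DBA account on one of the RACs:

  -- One row per running instance across the cluster; STATUS should be OPEN on all nodes
  SELECT inst_id, instance_name, host_name, status FROM gv$instance;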

PIC port d’informació científica 7 STRUCTURE/GENERAL INFO
Storage
– Provider: NETAPP
1st: Production device
– 5 arrays of 14 disks, 300 GB each
– 3 spare disks, 2 parity disks and 2 double-parity disks
– Total space for data: 12.4 TB
2nd: Backup device
– 1 array of 14 disks, 750 GB SATA II
– 1 array of 7 disks, 750 GB SATA II
– 2 spare disks, 2 parity disks and 2 double-parity disks
– Total space for data: 9 TB
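Since the Oracle clusters run ASM on top of this NetApp storage, the usable capacity can be cross-checked from the database side; a minimal sketch using the standard ASM views:

  -- Total and free space per ASM disk group, in GB
  SELECT name, type, total_mb/1024 AS total_gb, free_mb/1024 AS free_gb
    FROM v$asm_diskgroup;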

PIC port d’informació científica 8 STRUCTURE/GENERAL INFO
Storage
– Performance tests with ORION:
  ./orion_linux_em64t -run advanced -write 0 -matrix basic -duration 120 -testname mytest -num_disks 50 -cache_size 10000
– Output:
  Small IO size: 8 KB
  Large IO size: 1024 KB
  IO Types: Small Random IOs, Large Random IOs
  Simulated Array Type: CONCAT
  Write: 0%
  Cache Size: MB
  Duration for each Data Point: 120 seconds
  Small Columns: 0
  Large Columns: 0, 1, 2, 3, 4, 5, 6, 7, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96
  Total Data Points: 51
  Name: /dev/dm-8 Size:
  Name: /dev/dm-3 Size:
  Name: /dev/dm-9 Size:
  Name: /dev/dm-4 Size:
  4 FILEs found.
  Maximum Large MBPS @ Small=0 and Large=96
  Maximum Small IOPS @ Small=228 and Large=0
  Minimum Small Latency @ Small=6 and Large=0

PIC port d’informació científica 9 STRUCTURE/GENERAL INFO

PIC port d’informació científica 10 3D Databases
ATLAS
– Service description: metadata of the ATLAS experiment conditions DB.
– Access requirements: fully open.
LFC-LHCb
– Service description: metadata of the LHCb experiment conditions DB.
– Access requirements (not fully open like ATLAS):
  Open from PIC to all other Tier-1s
  Open for CERN: OEM monitoring

PIC port d’informació científica 11 Local Databases
LFC-ATLAS
– Service description: catalogue/index of files used by the ATLAS experiment.
– Access requirements: fully open.
FTS
– Service description: transfer log and history, a service given to PIC's VOs for LHC; FTS monitoring (web interface).
– Access requirements: only open to 6 internal frontend servers.

PIC port d’informació científica 12 Common details
ALL DBs
– Availability requirement: 24/7
– Scheduled jobs: RMAN backup jobs
– Monitoring:
  Nagios checks:
  – RAC check (includes instance, listener and VIP checks), block corruption check and Streams check
  OEM:
  – All the standard checks
  Ganglia:
  – DB growth: sum(bytes) from dba_segments where owner like '%ATLAS/LHC%' (see the full query sketched below)
  – Sessions (and all the standard Ganglia checks: memory, CPU load, network load, etc.)
  Cern Monitor ( )
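A runnable form of the growth probe above might look like this; a minimal sketch, assuming the monitoring account can read dba_segments and that the '%ATLAS%' / '%LHC%' patterns match the local schema names:

  -- Per-schema space usage feeding the Ganglia DB-growth metric
  SELECT owner, SUM(bytes)/1024/1024/1024 AS gb
    FROM dba_segments
   WHERE owner LIKE '%ATLAS%' OR owner LIKE '%LHC%'
   GROUP BY owner
   ORDER BY gb DESC;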

PIC port d’informació científica 13 Plans 1 of 2
Hardware plans
– During the period 2009/2010, we do not plan to upgrade any cluster.
– We will wait for the first year of data taking at the accelerator. If, as expected, the clusters give good results, no change will be made; but at the same time, we will watch the CPU load on the ATLAS cluster and add CPUs if we see they are needed.
Configuration plans
– For security reasons, we will change the ATLAS listener port from 1521 to another one.
– Implement the backup/restore procedure with the NETAPP storage device, as soon as it works.
– LFC-ATLAS Streams: we plan to test Streams internally. Then we will be able to decide whether to implement it in production.

PIC port d’informació científica 14 Plans 2 of 2
Miscellaneous
– Periodical tasks:
  Restore tests: from the backup files, we move all files to another cluster and recover from scratch.
  – The procedure to move files off an ASM volume uses DBMS_FILE_TRANSFER (see the sketch below).
  Performance reports: using data from OEM, Cern Monitor, AWR and Ganglia.
– Stress tests: mainly with ATLAS (see Xavi Espinal's talk).
– Oracle VM: we have one Oracle VM installed, with FTS preproduction on it. We are testing this solution; the main objective is to reduce the electrical cost of having many servers connected.
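For the file-move step of the restore tests, the call could look like this; a minimal sketch, assuming directory objects SRC_DIR and DST_DIR have been created (one of them pointing into the ASM volume) and using an illustrative datafile name:

  -- Copy a datafile out of an ASM volume with DBMS_FILE_TRANSFER
  -- SRC_DIR, DST_DIR and users01.dbf are hypothetical placeholders
  BEGIN
    DBMS_FILE_TRANSFER.COPY_FILE(
      source_directory_object      => 'SRC_DIR',
      source_file_name             => 'users01.dbf',
      destination_directory_object => 'DST_DIR',
      destination_file_name        => 'users01.dbf');
  END;
  /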

PIC port d’informació científica 15 THANKS FOR YOUR ATTENTION