3D Testing and Monitoring
Lee Lueking, LCG 3D Meeting, Sept. 15, 2005

Slide 2: Outline
Overview
Schedule and milestones (CMS side)
Work plan (with the 3D side)

Slide 3: Software Stack
The EDM EventSetup ensures that the correct non-event data is accessed and available to the user application.
POOL-ORA (Object-Relational Access) is used to map C++ objects to a relational schema.
A POOL-RAL/FroNTier-Oracle plug-in enables a middle-tier proxy/caching service for read-only access; a working example exists.
The central DB is Oracle, but other technologies (MySQL, SQLite, ...) can be used for testing.
[Diagram: User Application -> EDM Framework -> EDM EventSetup -> POOL-ORA (optional POOL-RAL plug-in) -> FroNTier cache -> Database]
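To make the plug-in switch concrete, below is a minimal sketch of read-only access through the relational abstraction layer, written against the later CORAL ConnectionService API (the 2005-era POOL-RAL interfaces differed); the frontier:// connection string and host are hypothetical.

```cpp
// Minimal sketch: the connection string alone selects the back end
// (oracle://, mysql://, sqlite_file:, or frontier:// for the caching
// proxy); the application code does not change. Host/schema hypothetical.
#include "RelationalAccess/AccessMode.h"
#include "RelationalAccess/ConnectionService.h"
#include "RelationalAccess/ISessionProxy.h"
#include "RelationalAccess/ITransaction.h"
#include <memory>

int main() {
  coral::ConnectionService connSvc;
  // Swap this string for an oracle:// URL in production or
  // "sqlite_file:conditions.db" for a local test (names hypothetical).
  std::unique_ptr<coral::ISessionProxy> session(
      connSvc.connect("frontier://frontier.example.org:8000/Frontier",
                      coral::ReadOnly));
  session->transaction().start(true);  // read-only transaction
  // ... read conditions payloads via session->nominalSchema() ...
  session->transaction().commit();
  return 0;
}
```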

Slide 4: N-Tier Deployment
Redundant Tomcat + Squid servers are deployed at Tier 0 (the "launchpad"). Squids are deployed at Tier 1 and Tier N sites and will be an "edge service" for Grid computing centers.
Configuration includes (a sketch follows below):
- Access Control List (ACL)
- Cache management (memory and disk)
- Inter-cache sharing (if desired)
Applications needing access to conditions data are configured with a list of servers and proxies, for example (server URLs truncated in the source):
- export FRONTIER_SERVER1= 00/Frontier
- export FRONTIER_SERVER2= 00/Frontier
- export FRONTIER_PROXY1= domain.net:10000
[Diagram: database behind Tomcat(s) and Squid(s) in the Tier 0 FroNTier launchpad, with Squids at Tier 1 and Tier N]
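As referenced in the configuration list above, a hedged sketch of the corresponding Squid directives; the network range, cache sizes, and sibling host are hypothetical, not values from the actual deployment.

```
# Access Control List: serve only the local site (hypothetical range)
acl local_site src 192.0.2.0/24
http_access allow local_site
http_access deny all

# Cache management: memory and disk (sizes hypothetical)
cache_mem 256 MB
cache_dir ufs /var/spool/squid 10240 16 256

# Optional inter-cache sharing with a sibling Squid at another site
cache_peer squid.tier2.example.org sibling 3128 3130
```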

Slide 5: Fall 05 Testing Plan
1. Load estimated data for about one year's running (~500 GB).
2. Exercise various loading and access patterns to simulate stable operations and worst-case scenarios, using a limited distributed deployment (e.g. CERN, FNAL, and the HLT at CERN).
3. Deploy Squid servers to Tier 1 and Tier 2 centers and repeat the testing under various loads and access patterns. We are looking for sites interested in participating.

Slide 6: Integration Schedule
Oct. 1: Central DB configured and loaded for the DB test activity.
Oct. 15: Small-scale CERN + FNAL test of the distributed DB.
Nov. 1-30: Large-scale test of the distributed DB.
Dec. 20: DB test evaluation report (milestone).

Slide 7: Application Object Diagram (Radovan Chytracek)
[Diagram: IMonitoring and IMonitoringService interfaces alongside IRelationalService and IRelationalSession; FrontierAccess, MySQLAccess, and OracleAccess CORAL plug-ins; CSV, DB, and XML reporters; ApMon]
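The diagram pairs one monitoring interface with interchangeable reporter back ends. A hypothetical C++ reconstruction of that pattern, with simplified names that are not the actual CORAL headers:

```cpp
#include <string>

// One abstract reporting interface; the application codes against this.
struct IMonitoringReporter {
  virtual ~IMonitoringReporter() {}
  virtual void report(const std::string& key, const std::string& value) = 0;
};

// Interchangeable back ends, as in the diagram (CSV/DB/XML/ApMon).
class CsvReporter : public IMonitoringReporter {
public:
  void report(const std::string& key, const std::string& value) {
    // append "key,value" to a CSV file ...
  }
};

class ApMonReporter : public IMonitoringReporter {
public:
  void report(const std::string& key, const std::string& value) {
    // forward as a UDP datagram to a MonALISA service via ApMon ...
  }
};
```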

Slide 8: MonALISA ApMon
ApMon is a set of flexible APIs that any application can use to send monitoring information to MonALISA services. The monitoring data is sent as UDP datagrams to one or more hosts running MonALISA services. Applications can periodically report any type of information the user wants to collect, monitor, or use in the MonALISA framework to trigger alarms or activate decision agents. ApMon implementations are provided for five programming languages: C, C++, Java, Perl, and Python.
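The transport choice is the key point: monitoring is fire-and-forget UDP, so the instrumented application never blocks on the collector. Below is a bare-sockets C++ sketch of that idea only; real ApMon additionally XDR-encodes the parameters, and the collector address here is hypothetical (8884 is ApMon's customary port).

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>
#include <string>

// Send one monitoring payload as a UDP datagram; errors are ignored
// because monitoring must never take the application down.
void sendMonitoringDatagram(const std::string& payload) {
  int fd = socket(AF_INET, SOCK_DGRAM, 0);
  if (fd < 0) return;
  sockaddr_in dest;
  std::memset(&dest, 0, sizeof(dest));
  dest.sin_family = AF_INET;
  dest.sin_port = htons(8884);                       // ApMon's usual port
  inet_pton(AF_INET, "192.0.2.10", &dest.sin_addr);  // hypothetical collector
  sendto(fd, payload.data(), payload.size(), 0,
         reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
  close(fd);
}
```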

Slide 9: Next Steps (with LCG 3D)
Monitoring:
- Prepare the CORAL client monitoring framework Radovan described a few weeks ago.
- Prepare UDP communication with a MonALISA-based message collector.
- Prepare the information strings to monitor: who is the requester? where is the request coming from? what is the request? when was the request made? how long did it take to reply? how large was the reply? and possibly, what intermediate routing was requested? (A sketch of such a record follows this slide.)
Testing (repository, FroNTier plug-in, etc.):
- Continue testing with actual CMS POOL data objects. The existing DB and FroNTier server at FNAL can be used to start; we need to be able to build with CMS software.
- Start using the 3D test infrastructure at CERN; we can start with devdb10 and the existing 3D FroNTier server.
- Set up a CMS POOL repository for stress and access-pattern testing; we need to move to a test DB server for a while to better understand the loading needs.
- Populate the repository and test.
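As flagged in the monitoring bullet above, a hypothetical sketch of a record carrying those fields; the names are illustrative, not taken from the CORAL monitoring code.

```cpp
#include <cstdint>
#include <string>

// One record per monitored request, mirroring the questions on the slide.
struct RequestMonitorRecord {
  std::string requester;      // who is the requester?
  std::string origin;         // where is the request coming from?
  std::string request;        // what is the request?
  std::int64_t timestampSec;  // when was the request made? (epoch seconds)
  double durationSec;         // how long did it take to reply?
  std::uint64_t replyBytes;   // how large was the reply?
  std::string routing;        // intermediate routing, if recorded
};
```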

The End