Jean-Yves Nief, CC-IN2P3. KEK-CCIN2P3 meeting on Grids, September 11th – 12th, 2006.

Overview.

3 SRB servers:
– 1 Sun V440, 1 Sun V480 (UltraSPARC III), 1 Sun V20z (AMD Opteron).
– OS: Solaris 9 and 10.
– Total disk space: ~8 TB.
– HPSS driver (non-DCE): using HPSS 5.1.
MCAT:
– Oracle 10g.
Environment with multiple OSes for clients or other SRB servers:
– Linux: RedHat, Scientific Linux, Debian.
– Solaris.
– Windows.
– Mac OS.
Interfaces:
– Scommands invoked from the shell (scripts based on them); see the sketch below.
– Java APIs.
– Perl APIs.
– Web interface mySRB.
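As an illustration of the Scommands interface, a minimal sketch of a shell session; the collection paths here are hypothetical, and the session assumes an already-configured SRB client environment (~/.srb/.MdasEnv and ~/.srb/.MdasAuth):

```sh
# Minimal sketch of an Scommands session (assumes ~/.srb/.MdasEnv and
# ~/.srb/.MdasAuth are already set up for the local zone).
Sinit                                         # open a session with the SRB/MCAT
Smkdir /home/babar.lyon/run42                 # create a collection (path is hypothetical)
Sput datafile.root /home/babar.lyon/run42/    # register a local file into the SRB
Sls /home/babar.lyon/run42                    # list the collection contents
Sget /home/babar.lyon/run42/datafile.root .   # retrieve a copy locally
Sexit                                         # close the session
```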

Who is using SRB at CC-IN2P3? (items in green on the original slide = pre-production)

High Energy Physics:
– BaBar (SLAC, Stanford).
– CMOS (International Linear Collider R&D).
– Calice (International Linear Collider R&D).
Astroparticle:
– Edelweiss (Modane, France).
– Pierre Auger Observatory (Argentina).
Astrophysics:
– SuperNovae Factory (Hawaii).
Biomedical applications:
– Neuroscience research.
– Mammography project.
– Cardiology research.

BaBar, SLAC & CC-IN2P3.

BaBar: High Energy Physics experiment located close to Stanford (California).
SLAC and CC-IN2P3 were the first sites opened to the BaBar collaborators for data analysis.
Both initially held complete copies of the data (Objectivity); now only SLAC holds a complete copy.
Natural candidates for testing and deployment of grid middleware.
Data should be available within 24–48 hours.
SRB: chosen to distribute hundreds of TBs of data.

SRB BaBar architecture.

[Diagram: two SRB zones (SLAC + Lyon), each with its own SRB servers and MCAT; HPSS mass storage behind the SRB at both SLAC (Stanford, CA) and CC-IN2P3 (Lyon); data flows between the sites in three numbered steps (1)–(3).]

Extra details (BaBar).

Hardware:
– Sun servers (Solaris 5.8, 5.9, 5.10): NetraT, V240, V440, V480, V20z.
Software:
– Oracle 10g for the SLAC and Lyon MCATs.
MCAT synchronization: only users and physical resources are synchronized.
The contents of the two MCATs are compared to decide which data to transfer (a sketch of such a loop follows below).
Steps (1), (2), (3) are multithreaded under client control: very little latency.
Advantage:
– An external client can pick up data from SLAC or Lyon without interacting with the other site.
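A hedged sketch of what such a compare-and-transfer loop can look like when driven from the shell. The collection paths are invented, the parsing of Sls output is simplified (it assumes one bare file name per line), and the real BaBar procedure compares the MCAT contents directly and runs the steps multithreaded:

```sh
#!/bin/sh
# Illustrative only: compare the listing of a collection in the SLAC
# zone with the one in the Lyon zone and fetch whatever is missing.
# Paths are hypothetical; real scripts compare MCAT contents directly.
SRC=/home/babar.slac/prod      # collection in the SLAC zone
DST=/home/babar.lyon/prod      # collection in the Lyon zone

Sls "$SRC" | sort > slac.list
Sls "$DST" | sort > lyon.list

# Files present at SLAC but not yet at Lyon:
comm -23 slac.list lyon.list | while read f; do
    Sget "$SRC/$f" /tmp/"$f"   # pull from the SLAC zone
    Sput /tmp/"$f" "$DST/"     # register in the Lyon zone
    rm -f /tmp/"$f"
done
```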

Overall assessment for BaBar.

A lot of time saved for developing applications, thanks to the SRB.
Transparent access to data:
– Very useful in a hybrid environment (disk, tape).
– Easy to scale the service (adding/removing servers on the fly).
– Client applications do not depend on changes of physical data locations.
Fully automated procedure on both sides.
Easy for SLAC to recover corrupted data.
300 TB (530,000 files) shipped to Lyon.
Up to 3 TB/day from tape to tape (minimum latency); going up to 5 TB/day now.

[Chart: ESnet traffic, April 2004 — top site-to-site flows (Fermilab → CERN, SLAC → INFN Padova, Fermilab → U. Chicago, CEBAF → IN2P3, U. Toronto → Fermilab, Helmholtz-Karlsruhe → SLAC, etc.); SLAC (US) → IN2P3 (FR) reached 1 Terabyte/day with one server on each side.]

CMOS, Calice: ILC.

[Diagram: CMOS — data flows from IReS (Strasbourg) into the SRB at CC-IN2P3 (2 TB disk cache) and on to HPSS/Lyon; 5 to 10 TB/year. Calice — data flows from user PCs into the SRB at CC-IN2P3 and on to HPSS/Lyon; 2 to 5 TB/year.]

SuperNovae Factory.

Telescope data stored into the SRB, processed in Lyon (almost online); a sketch of such an ingest loop follows below.
Collaborative tool + backup (files exchanged between French and US users).
[Diagram: Hawaii telescope → SRB at CC-IN2P3 (a few GB/day) → HPSS/Lyon, with a projected mirror to SRB/HPSS at NERSC, Berkeley.]
SRB needed for the "online"!
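A minimal sketch of a near-online ingest loop of this kind, assuming the nightly data arrive as FITS files in a local spool directory; all directory and collection names are hypothetical:

```sh
#!/bin/sh
# Hypothetical near-online ingest: push every new FITS file into the
# SRB, then drop the local copy once it is safely registered.
SPOOL=/data/snfactory/incoming
COLL=/home/snovae.lyon/raw/$(date +%Y%m%d)

Smkdir "$COLL" 2>/dev/null     # ignore "already exists" errors
for f in "$SPOOL"/*.fits; do
    [ -e "$f" ] || continue    # skip if the glob matched nothing
    if Sput "$f" "$COLL/"; then
        rm -f "$f"             # keep only the SRB copy once registered
    fi
done
```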

Neuroscience research.

[Diagram: DICOM acquisition chain — Siemens MAGNETOM Sonata Maestro Class 1.5 T MRI scanner → Siemens Celsius console (Xeon, Windows NT) → DICOM export to a Dell PowerEdge 800 PC → transfer via FTP, file sharing, etc.]

Neuroscience research (II).

Goal: make SRB invisible to the end user.
More than 500,000 files registered.
Data pushed from the Lyon and Strasbourg hospitals:
– Automated procedure including anonymization (see the sketch below).
Now interfaced within the MATLAB environment.
~1.5 FTE for 6 months…
Next step:
– Ever-growing community (a few TB/year).
Goal:
– Join the BIRN network (US biomedical network).
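A hedged sketch of the hospital-side push: anonymization followed by registration in the SRB. "anonymize_dicom" stands in for whatever anonymization tool the real procedure uses, and all paths and collection names are hypothetical:

```sh
#!/bin/sh
# Sketch of an anonymize-then-register pipeline. "anonymize_dicom" is
# a placeholder for the actual anonymization tool; paths are invented.
for dcm in /export/dicom/*.dcm; do
    [ -e "$dcm" ] || continue
    anonymize_dicom "$dcm" /tmp/anon.dcm        # strip patient identity
    Sput /tmp/anon.dcm /home/neuro.lyon/incoming/
    rm -f /tmp/anon.dcm
done
```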

Mammography.

Database of X-ray pictures (Florida) stored into the SRB:
– Reference pictures of various types of breast cancers.
To analyze an X-ray picture of a breast:
– Submit a job in the EGEE framework.
– Compare with the pictures in the reference database, picked up from the SRB (see the sketch below).
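A sketch of what the job-side logic could look like, under the assumption that the job fetches the reference pictures with Scommands before running the comparison; "compare_mammo" and the collection name are hypothetical placeholders, and the Sls parsing is simplified:

```sh
#!/bin/sh
# Illustrative EGEE job payload: pull the reference mammograms from the
# SRB, then compare the input image ($1) against them. "compare_mammo"
# and the collection path are invented for the example.
REFS=/home/mammo.lyon/reference
mkdir -p refs
for f in $(Sls "$REFS"); do    # assumes one bare file name per line
    Sget "$REFS/$f" refs/
done
compare_mammo --input "$1" --reference-dir refs/ > result.txt
```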

Cardiology.

PACS (hospital-internal information system): invisible from the outside world.
Being interfaced with the SRB at the hospital, using the SRB/DICOM driver (thanks to CR4I, Italy!).
PACS data published into the SRB are anonymized on the fly.
Possibility to exchange data in a secure way.
[Diagram: PACS at the Lyon hospital → SRB → CC-IN2P3.]
Deployed, but needs more testing.

GGF Data Grid Interoperability Demonstration.

Goals:
– Demonstrate federation of 14 SRB data grids (shared name spaces).
– Demonstrate authentication, authorization, shared collections, remote data access.
– CC-IN2P3 is part of it.
Organizers: Erwin Laure, Reagan Moore, Arun Jagatheesan, Sheau-Yen Chen.

GGF Data Grid Interoperability Demonstration (II).

A few tests with KEK, RAL, IB (UK + New Zealand).

Summary.

Lightweight administration for the entire system.
Fully automated monitoring of the system health (a sketch of such a probe follows below).
For each project:
– Training of the project's administrator(s).
– Proposing the architecture.
– User support and "consulting" on SRB.
Different projects = different needs: various aspects of SRB are used.
Over 1 million files in some catalogs very soon.
More projects coming to SRB:
– Auger: CC-IN2P3 as Tier 0, import from Argentina, real data and simulation distribution.
– MegaStar project (Eros, astro): usage of the HDF5 driver?
– BioEmergence.
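A minimal sketch of an automated health probe of this kind, e.g. run periodically from cron; the probe collection and the alert address are hypothetical:

```sh
#!/bin/sh
# Cron-style health check: run a trivial MCAT query and alert the
# admins if it fails. Collection and mail address are invented.
if ! Sls /home/srbAdmin >/dev/null 2>&1; then
    echo "SRB/MCAT probe failed on $(hostname) at $(date)" \
        | mail -s "SRB health check FAILED" srb-admin@cc.in2p3.fr
fi
```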

What's next?

Monitoring tools of the SRB systems are needed for the users (like what Adil Hasan and Roger Downing did for CCLRC).
Build, with Adil, some kind of European forum on SRB:
– Contacts already exist in Italy, the Netherlands, Germany.
– Gather everybody's experience with SRB.
– Pool the tools and scripts developed.
– Adil will host the first meeting in the UK.
– Big party in his new apartment: everybody welcome!
SRB-DSI.

Involvement in iRODS.

Many possibilities, some examples:
– Interface with MSS: HPSS driver. Improvement of the compound resources (rules for migration, etc.). Mixing compound and logical resources. Containers (see CCLRC).
– Optimization of the transfer protocol on long-distance networks with respect to SRB (?).
– Database performance (RCAT, DAI).
– Improvement of data encryption services.
– Web interface (PHP?).