LOFAR Information System on the GRID
A.N. Belikov

LOFAR Long Term Archive
- Prototypes: EGEE, Astro-WISE
- Requirements on data storage
- Tiers
- Astro-WISE adaptation
- Problem: optimal data placement

LOFAR: Science

LOFAR: Network
- 10 Gbps optical network
- single data processing center for raw data (CIT, Groningen)

LOFAR: Data Processing
- Blue Gene/L and a Linux cluster
- centralized data processing for raw data
- long-term data storage

LOFAR: LTA Requirements
- 20 PB over 5 years (rough ingest rate worked out in the sketch below)
- decentralized data storage
- decentralized data reprocessing
- international partners
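
A back-of-the-envelope check on what 20 PB over 5 years implies for sustained ingest; the figures follow directly from the numbers on this slide, assuming decimal units:

```python
# Back-of-the-envelope for the "20 PB over 5 years" requirement.
# Assumes decimal units (1 PB = 10**15 bytes).
PB = 10**15

total_bytes = 20 * PB
days = 5 * 365.25
seconds = days * 24 * 3600

rate_gbps = total_bytes * 8 / seconds / 1e9   # sustained ingest, Gbit/s
tb_per_day = total_bytes / days / 1e12        # average daily volume, TB

print(f"~{rate_gbps:.1f} Gbit/s sustained")    # ~1.0 Gbit/s
print(f"~{tb_per_day:.0f} TB/day on average")  # ~11 TB/day
```

At roughly 1 Gbit/s sustained, the average rate sits comfortably within the 10 Gbps optical network from the earlier slide, although peaks and reprocessing traffic would add to it.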

LHC & EGEE
- 15 PB of data per year
- grid computing
- EGEE: ~250 sites, >45,000 CPUs
- EGEE-III

EGEE: Tiers Approach
- hierarchical model: Tier 0 (CERN) distributes data to national Tier 1 centers, which in turn serve regional Tier 2 sites

Astro-WISE: Grid

Astro-WISE: dataservers (1)

Astro-WISE: dataservers (2)
- Astro-WISE / LoWISE dataserver
- data retrieved by URI:

  Volume   Time
  0.5 TB   7 sec
  3.5 TB   20 sec
  1 PB     8 h
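
Since objects are retrieved by URI, a client needs little more than an HTTP GET. A minimal sketch, assuming a hypothetical dataserver endpoint and object name (real endpoints are deployment-specific):

```python
# Minimal URI-based retrieval from an Astro-WISE-style dataserver.
# The endpoint and object name below are hypothetical placeholders.
import shutil
import urllib.request

DATASERVER = "http://dataserver.example.org"  # hypothetical endpoint

def retrieve(object_name: str, local_path: str) -> None:
    """Fetch a stored object by its URI and stream it to a local file."""
    uri = f"{DATASERVER}/{object_name}"
    with urllib.request.urlopen(uri) as response, open(local_path, "wb") as out:
        shutil.copyfileobj(response, out)  # chunked copy, no full in-memory read

retrieve("lofar_raw_example.fits", "lofar_raw_example.fits")
```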

LoWISE: URI storage

LoWISE: Tiers
- Astro-WISE-like interfaces
- user access (from project manager to anonymous)
- Tier 1 as the core element

LoWISE: Data Storage
- distributed file systems: GPFS (IBM), Lustre (Sun)
- dCache
- CASTOR

LOFAR-WISE: Data Storage
- initial configuration: distributed FS & EGEE storage
- Oracle for metadata
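
The split is: metadata in Oracle, bytes on a distributed FS or EGEE storage. A sketch of the lookup step, with the connect string, table, and column names all hypothetical:

```python
# Resolve a data object's storage URI from the Oracle metadata database.
# Connect string, table, and column names are hypothetical.
import cx_Oracle

def resolve_uri(object_id: str) -> str:
    """Return where the bytes live (e.g. an srm:// URL or a GPFS path)."""
    conn = cx_Oracle.connect("lta_reader/secret@metadata-db.example.org/LTA")
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT storage_uri FROM data_object WHERE object_id = :oid",
            oid=object_id,
        )
        row = cur.fetchone()
        if row is None:
            raise KeyError(f"unknown object: {object_id}")
        return row[0]
    finally:
        conn.close()
```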

Problem: Users
- users: observer, project manager
- privileged users: operators
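
One way to make these classes concrete is a role-to-permission map; the role names follow this slide and the earlier "from project manager to anonymous" range, while the permission sets are invented for illustration:

```python
# Role-based access sketch for the user classes on this slide.
# Role names come from the slides; the permission sets are assumptions.
PERMISSIONS = {
    "anonymous":       {"read_public"},
    "observer":        {"read_public", "read_project"},
    "project_manager": {"read_public", "read_project", "grant_access"},
    "operator":        {"read_public", "read_project", "grant_access",
                        "move_data", "reprocess"},  # privileged user
}

def allowed(role: str, action: str) -> bool:
    """True if the given role may perform the given action."""
    return action in PERMISSIONS.get(role, set())

assert allowed("operator", "move_data")
assert not allowed("observer", "grant_access")
```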

Problem: CE & SE (computing and storage elements)
- geographical location
- file transfer
- user access
- research-group driven

Problem: Access Policy

Problem: Optimal Data Placement
- priorities
- groups
- pool of queries
- TCF
- monitor system

Task
- create and implement a system for optimal data placement
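
Such a system has to weigh the signals named on the previous slide: group priorities, query-pool demand, and the capacity reported by the monitor system. A toy scoring sketch, with all weights and site figures invented for illustration:

```python
# Toy data-placement scorer combining signals from the previous slide:
# group priority, query-pool demand, and monitored free capacity.
# Weights and site figures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_tb: float          # free capacity reported by the monitor system
    queries_per_day: float  # demand on this dataset from the query pool

def placement_score(site: Site, size_tb: float, group_priority: float) -> float:
    """Higher is better; sites without room are excluded outright."""
    if site.free_tb < size_tb:
        return float("-inf")
    # Favour sites where the data is in demand, weighted by group priority,
    # with a small bonus for spare capacity.
    return group_priority * site.queries_per_day + 0.1 * site.free_tb

sites = [
    Site("Tier1-Groningen", free_tb=500, queries_per_day=120),
    Site("Tier2-Partner", free_tb=80, queries_per_day=300),
]
best = max(sites, key=lambda s: placement_score(s, size_tb=50, group_priority=2.0))
print(best.name)  # Tier2-Partner: high demand outweighs Tier 1's spare space
```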