ESA EO Data Preservation System: CBA Infrastructure and its future

Presentation transcript:

ESA EO Data Preservation System: CBA Infrastructure and its future
LTDP+ Team

Data Preservation System

Preservation Element Front End (PE-FE)
Data flow between live missions and the Data Preservation System via a common front-end.
[Diagram: the Preservation Element Front-End links live missions (Cryosat-2, SMOS, AEOLUS, TPM) to the Master Archive and the Cold Back-up Archive, with ingestion reports feeding the Data Information System and support for data reprocessing.]

Master Archive - Data Archive Service (DAS)
Implements the ESA Master Archive through a dedicated service awarded to industry through an open ITT:
- Contract kicked off in August 2016
- 4.5 TB of consolidated EO data
- Two copies of the data held > 200 km apart
- Connection to the ESA EO WAN
- Checks of the data at all stages, from ingestion to delivery
- Scalability required for growth of contents
- Management of the data information (metadata) in fully redundant databases
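The slides do not describe how the end-to-end data checks are implemented; below is a minimal, hypothetical sketch of one common approach, fixity verification by comparing checksums of the two geographically separated copies. The mount points and function names are assumptions, not part of the actual DAS.

```python
import hashlib
from pathlib import Path

def sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 checksum of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_copies(primary_root: Path, secondary_root: Path) -> list[str]:
    """Return the relative paths whose second copy is missing or has a different checksum."""
    mismatches = []
    for primary_file in primary_root.rglob("*"):
        if not primary_file.is_file():
            continue
        rel = primary_file.relative_to(primary_root)
        secondary_file = secondary_root / rel
        if not secondary_file.is_file() or sha256(primary_file) != sha256(secondary_file):
            mismatches.append(str(rel))
    return mismatches

# Hypothetical mount points for the two copies held more than 200 km apart:
# print(verify_copies(Path("/das/copy_a"), Path("/das/copy_b")))
```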

DAS Consortium
The industrial consortium consists of ACRI-ST (FR), adwäisEO (LU) and KSAT (NO), with the following distribution of roles:
- ACRI-ST: data archiving (and delivery) service provider, prime contractor
- adwäisEO: data archiving (and delivery) operations provider, with its leading-edge IT and connected, secured infrastructure in Luxembourg
- KSAT: data knowledge provider and overall ingestion validation

DAS Data flow

Cold Back-up Archive (CBA)
- Implemented in May 2014
- Unique data & unconsolidated data
- PAC and station phase-out project data
- Inter-directorate disaster recovery archive
- Second copy of the Master Inventory
- Live mission support to PDGSs: Odin, Proba-1, SCISAT
- Two copies of the data held at ESA premises in Frascati
- ESA internal LAN
- Main tape library capable of storing 24 petabytes
- Live mission support: Swarm (started in mid 2014), Cryosat-2 (started in March 2015), SMOS (started in May 2016), AEOLUS (supported at day zero)
- EO-SIP, SAFE and native format archiver

CBA Figures
- Average write per day: 13 TB
- Peak read per day: 10 TB
- Amount of data (Jan 2019): 7.95 petabytes
- Number of files: 129,500,000
[Table: per-mission holdings, flagged as Unconsolidated, Consolidated, Native or EO-SIP format and whether the mission is live, for ERS-1/2, Envisat, Cryosat-2, SMOS, SWARM, AEOLUS, Goce, Adeos, ALOS, GOSAT, Ikonos-2, IRS-P3, JERS-1, Kompsat-1, Landsat-1,8, MOS-1,1b, Nimbus, NOAA-7,19, Oceansat-2, ODIN, PROBA-1, QuickSCAT, Rapideye, Scisat, Seasat, SeaStar, SPOT1-2, TerraSAR, WorldView-1,3 and Windsat.]
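A rough, illustrative calculation from the figures above (not part of the presentation) gives the average file size and the daily write volume relative to the total holdings:

```python
PB = 10**15   # decimal petabyte
TB = 10**12   # decimal terabyte

total_bytes = 7.95 * PB          # holdings as of January 2019
n_files = 129_500_000
daily_write = 13 * TB            # average write per day

avg_file_size_mb = total_bytes / n_files / 10**6
daily_growth_pct = daily_write / total_bytes * 100

print(f"average file size: ~{avg_file_size_mb:.0f} MB")      # ~61 MB
print(f"daily write: ~{daily_growth_pct:.2f}% of holdings")   # ~0.16%
```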

CBA Main Tape Library
- STK SL8500, 3000 slots
- 8 T10000D drives (252 MB/s)
- 8 robotic arms
- Fully redundant
- 16 Gb/s Fibre Channel connection
- SSD-based disk cache
- Oracle HSM driven
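For context, an illustrative back-of-the-envelope estimate (not from the slide) of the aggregate drive bandwidth and the streaming time needed for the 13 TB daily average write:

```python
drives = 8
drive_rate_mb_s = 252                      # per-drive rate quoted on the slide
daily_write_bytes = 13 * 10**12            # average daily write from the CBA figures

aggregate_gb_s = drives * drive_rate_mb_s / 1000
streaming_hours = daily_write_bytes / (drives * drive_rate_mb_s * 10**6) / 3600

print(f"aggregate drive bandwidth: ~{aggregate_gb_s:.1f} GB/s")   # ~2.0 GB/s
print(f"streaming time for 13 TB: ~{streaming_hours:.1f} h")      # ~1.8 h
```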

CBA Ingestion Tool
- Bulk ingestion from local or remote locations
- Automated ingestion from inbox baskets
- EO-SIP and SAFE validation
- Metadata extraction
- Reporting: ingestion confirmation reports, Data Information System population
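The slide lists the ingestion tool's functions without implementation details; the sketch below outlines what such an automated inbox-basket loop could look like. All paths, the package-name check and the report format are hypothetical placeholders, not the actual ESA tool.

```python
import hashlib
import json
import re
import time
from pathlib import Path

INBOX = Path("/cba/inbox")           # hypothetical inbox basket
PROCESSED = Path("/cba/processed")   # hypothetical destination for handled packages
REPORTS = Path("/cba/reports")       # hypothetical ingestion-report directory

# Simplistic stand-in for the real EO-SIP/SAFE validation rules.
PACKAGE_NAME = re.compile(r"^[A-Z0-9_]+\.(ZIP|zip)$")

def ingest(package: Path) -> dict:
    """Validate one package, compute its checksum and return an ingestion record."""
    if not PACKAGE_NAME.match(package.name):
        return {"package": package.name, "status": "REJECTED", "reason": "name fails EO-SIP-like check"}
    md5 = hashlib.md5(package.read_bytes()).hexdigest()   # fine for a sketch; chunk for large packages
    return {"package": package.name, "size_bytes": package.stat().st_size,
            "md5": md5, "status": "INGESTED"}

def run_once() -> None:
    """Scan the inbox, ingest new packages and write one confirmation report per package."""
    for d in (PROCESSED, REPORTS):
        d.mkdir(parents=True, exist_ok=True)
    for package in sorted(INBOX.glob("*")):
        if not package.is_file():
            continue
        record = ingest(package)
        (REPORTS / f"{package.stem}_ingestion_report.json").write_text(json.dumps(record, indent=2))
        # at this point the record would also be pushed to the Data Information System
        package.rename(PROCESSED / package.name)

if __name__ == "__main__":
    while True:              # simple polling loop for automated ingestion
        run_once()
        time.sleep(60)
```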

CBA Hardware

LTDP Working Group #37 – DLR Presentation

Cold Back-up Archive HW Enhancements (timeline 2014 to 2021)
- Migrated technology and data from T10KB to T10KD
- Increased number of drives to 8
- Implemented 10 Gb network
- Upgraded the FC environment from 8 Gb/s to 16 Gb/s
- Upgraded 1st tier from mechanical HDDs to SSDs
- Optimized 1st tier: dedicated LUN for iNodes
- Implemented disaster recovery
- Implementation of business continuity (1 h)
- Introduction of 2nd tier (sort of)
- Migration to LTO-9
- Introduction of an object storage node (replace DR library?)

Dedicated LUN for iNodes
[Diagram: SAM-FS master configuration file layouts mapping the HSM file system across RAID LUNs (LUN1 to LUNn), with the iNodes placed on a dedicated RAID-1 LUN.]
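As an illustration of the layout the diagram describes, the following sketch prints what an Oracle HSM / SAM-QFS master configuration file (mcf) with a dedicated metadata LUN might look like; the equipment ordinals and device paths are hypothetical.

```python
# Illustrative only: in an Oracle HSM (SAM-QFS) mcf, metadata ("mm") devices are
# declared separately from data ("mr") devices, so the iNodes of the HSM file
# system can live on a dedicated RAID-1 SSD LUN.
# Columns: Equipment Identifier, Equipment Ordinal, Type, Family Set, State.
MCF_ROWS = [
    ("hsmfs1",            10, "ma", "hsmfs1", "on"),   # the HSM file system itself
    ("/dev/dsk/ssd_lun0", 11, "mm", "hsmfs1", "on"),   # metadata / iNodes on the dedicated RAID-1 SSD LUN
    ("/dev/dsk/lun1",     12, "mr", "hsmfs1", "on"),   # data LUN 1
    ("/dev/dsk/lun2",     13, "mr", "hsmfs1", "on"),   # data LUN 2 ... LUN n
]

def render_mcf(rows) -> str:
    """Format the rows as the whitespace-separated columns an mcf file uses."""
    return "\n".join(f"{ident:<22}{eq:<4}{typ:<5}{fam:<9}{state}"
                     for ident, eq, typ, fam, state in rows)

print(render_mcf(MCF_ROWS))
```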

Large Tape User Group Outcome
- Technology progress per se is good, but obstacles lie ahead (NAND, HDD)
- Key computing markets are in the hands of very few companies
- Price/performance advances are slowing down
- HDDs remain key storage for the foreseeable future; SSDs are not cost-effective for capacity
- Tape development has to be watched closely
- There will be NO relevant new storage technologies on the market in the coming few years (e.g. DNA storage)
- A holistic view of the storage architecture is needed: a careful combination of SSD, HDD and tape to optimize pure storage needs and high-throughput I/O