DISTRIBUTED DATA FLOW WEB-SERVICES FOR ACCESSING AND PROCESSING OF BIG DATA SETS IN EARTH SCIENCES
A.A. Poyda 1, M.N. Zhizhin 1, D.P. Medvedev 2, D.Y. Mishin 3
1 NRC "Kurchatov Institute", Moscow, Russia
2 Geophysical Center RAS, Moscow, Russia
3 Johns Hopkins University, Baltimore, USA
The Big Data problem in Earth sciences
Big Data problems in Earth sciences
– Storage problem: data volumes force remote storage, so remote access is required.
– Data request problem: requests for big data blocks can time out or run out of memory.
– Data processing problem: processing big data volumes may lead to disk swapping, with a dramatic drop in performance.
Optimization of data access and processing is required.
Data model for Earth sciences
Vis5D time-space-parameter animation (illustration)
Data access and processing optimizations in Earth sciences
– Data access parallelization
– Migration to data-flow / block-stream data access
– Data store optimization
– Migration to distributed data-flow processing
Data access parallelization: OpenStack Swift
Fault-tolerant, distributed object (blob) storage with continuity support:
– Works as a data container
– Supports fault tolerance and data replication
– Data backup
– Scalability
– RESTful, S3-like interface
– Supports user authorization and authentication (swauth, keystone)
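As a minimal sketch of this kind of parallel access, the snippet below authenticates against a v1-style Swift auth service (swauth/tempauth) and fetches several objects concurrently over the RESTful interface. The endpoint URL, credentials, container and object names are hypothetical placeholders, not taken from the slides.

```python
# Minimal sketch: parallel object reads from an OpenStack Swift endpoint.
# AUTH_URL, the account/user credentials, the "climate" container and the
# "chunk-XXX" object names are all hypothetical.
from concurrent.futures import ThreadPoolExecutor

import requests

AUTH_URL = "http://swift.example.org/auth/v1.0"   # hypothetical endpoint


def authenticate(user, key):
    """Obtain the storage URL and auth token from the Swift auth service."""
    resp = requests.get(AUTH_URL, headers={"X-Auth-User": user, "X-Auth-Key": key})
    resp.raise_for_status()
    return resp.headers["X-Storage-Url"], resp.headers["X-Auth-Token"]


def fetch_object(storage_url, token, container, name):
    """Download one object; Swift serves each object over a plain HTTP GET."""
    url = f"{storage_url}/{container}/{name}"
    resp = requests.get(url, headers={"X-Auth-Token": token})
    resp.raise_for_status()
    return name, resp.content


if __name__ == "__main__":
    storage_url, token = authenticate("account:user", "secret")
    names = [f"chunk-{i:03d}" for i in range(16)]
    # Data access parallelization: issue independent GETs concurrently.
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(fetch_object, storage_url, token, "climate", n)
                   for n in names]
        blobs = dict(f.result() for f in futures)
    print(sum(len(b) for b in blobs.values()), "bytes fetched")
```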
OpenStack Swift performance
Data-flow / block-stream data access
Scientific data arrays
– Arrays are widely used in environmental sciences to store modelling results, satellite observations, raster maps, etc.
– Datasets can be quite large, up to several terabytes.
– Most data are stored as file collections, either in proprietary formats or in widely adopted formats such as netCDF, GRIB, and HDF5.
– File access can be problematic:
  – Scientists need to know about too many file formats.
  – Usually files must be completely downloaded before they can be used.
  – Thousands of files can be processed in one data request, while only a small portion of their contents appears in the result set.
– Currently available database solutions do not offer convenient array storage capabilities.
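The "small portion of the contents" point is easy to illustrate with subsetted reads. Below is a minimal sketch using the netCDF4 Python package; the file name and variable name are hypothetical.

```python
# Minimal sketch: read only the needed hyperslab from a netCDF file instead of
# loading the whole array. File and variable names are hypothetical.
from netCDF4 import Dataset

with Dataset("era40_subset.nc") as ds:          # hypothetical file
    temp = ds.variables["air_temperature"]      # hypothetical variable
    # Only this slice (24 time steps, a small lat/lon window) is read from
    # disk; the rest of the possibly multi-gigabyte variable is untouched.
    window = temp[0:24, 10:20, 30:40]
    print(window.shape, window.dtype)
```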
Data store optimization: cloud-based Active Storage for multidimensional arrays
Active Storage is a new approach to database design for storing multidimensional numeric arrays such as space and terrestrial weather data archives and large-scale images. Its special features are:
– Universal architecture capable of storing different data types in one system.
– Efficient index creation for large data volumes (tens to hundreds of TB).
– Basic data transformations can run directly on the storage nodes (arithmetic operations, statistical operations, linear convolution).
– Metadata integrated with the data.
– Data (and computations) can be distributed automatically over several compute nodes.
– Can be used in a Grid infrastructure through OGSA-DAI services.
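The slides do not show the Active Storage API itself, so the following is only a conceptual sketch of the "transformations on storage nodes" idea: each node reduces its local chunks and ships small partial results to a coordinator, instead of shipping the raw arrays. The node layout and the in-process "RPC" here are stand-ins.

```python
# Conceptual sketch of the Active Storage idea: each storage node aggregates
# its local chunks, and only small partial results travel to the coordinator.
# The node list, chunk layout and call mechanism are hypothetical stand-ins.
import numpy as np


def node_partial_sum(local_chunks):
    """Runs on a storage node: reduce local data to two scalars."""
    total = sum(float(np.sum(c)) for c in local_chunks)
    count = sum(c.size for c in local_chunks)
    return total, count


def global_mean(partials):
    """Runs on the coordinator: combine per-node partial results."""
    total = sum(t for t, _ in partials)
    count = sum(n for _, n in partials)
    return total / count


if __name__ == "__main__":
    # Stand-in for three storage nodes, each holding a few array chunks.
    nodes = [[np.random.rand(64, 64) for _ in range(4)] for _ in range(3)]
    partials = [node_partial_sum(chunks) for chunks in nodes]
    print("mean over all chunks:", global_mean(partials))
```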
Splitting an array into chunks
– We store chunks in BLOB fields of a database table (columns chunk_key and chunk).
– Chunks do not need to be the same size.
[Figure: chunked vs. non-chunked array layouts, comparing the number of disk seeks (1, 4, 8) needed to read the same sub-array; example chunk table with chunk_key values 0-3]
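A minimal sketch of such a chunk table is shown below, with SQLite standing in for the actual storage backend; the 2x2 chunking of a small NumPy array is only illustrative.

```python
# Minimal sketch of the chunk table described above: chunks of a NumPy array
# stored as BLOBs keyed by chunk_key. SQLite stands in for the real backend.
import sqlite3

import numpy as np

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chunks (chunk_key INTEGER PRIMARY KEY, chunk BLOB)")

array = np.arange(16, dtype=np.float64).reshape(4, 4)

# Split into 2x2 chunks; each chunk is stored independently, and chunks do not
# need to be the same size in general.
key = 0
for i in range(0, 4, 2):
    for j in range(0, 4, 2):
        block = np.ascontiguousarray(array[i:i + 2, j:j + 2])
        conn.execute("INSERT INTO chunks VALUES (?, ?)", (key, block.tobytes()))
        key += 1

# Reading one chunk touches one row (one "seek"), not the whole array.
raw = conn.execute("SELECT chunk FROM chunks WHERE chunk_key = ?", (2,)).fetchone()[0]
print(np.frombuffer(raw, dtype=np.float64).reshape(2, 2))
```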
ActiveStorage performance
Request number | Request shape (time × latitude × longitude)
1 | 8 × 64 × 128
2 | 32 × 32 × 64
3 | 128 × 16 × 32
4 | 512 × 8 × 16
5 | 2048 × 4 × 8
6 | 8192 × 2 × 4
7 | 32768 × 1 × 2
(All seven request shapes cover the same total of 65,536 values.)
Distributed data-flow processing
Problems in organizing distributed data-flow processing:
– Data communication between activities;
– Load balancing and parallelization management;
– Fault tolerance and error handling;
– Activity management.
Several distributed data-flow processing frameworks already exist: Yahoo S4, Twitter Storm, Taverna, Kepler, OGSA-DAI.
Twitter Storm
Wind speed calculation workflow example
Wind speed calculation:
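A minimal sketch of this calculation, assuming the workflow derives speed from the zonal (u) and meridional (v) wind components via the standard formula speed = sqrt(u² + v²); the block source below is only a stand-in for the upstream data-request activity, and the block shapes are assumptions rather than values from the slide.

```python
# Sketch of the wind speed processing step, assuming speed = sqrt(u^2 + v^2)
# over (u, v) wind component blocks arriving as a block stream.
import numpy as np


def block_source(n_blocks=4, shape=(8, 64, 128)):
    """Stand-in for the upstream block-stream: yields paired (u, v) blocks."""
    for _ in range(n_blocks):
        yield np.random.randn(*shape), np.random.randn(*shape)


def wind_speed(u, v):
    """Per-block processing activity: element-wise wind speed."""
    return np.hypot(u, v)  # sqrt(u**2 + v**2), numerically safe


if __name__ == "__main__":
    for u, v in block_source():
        speed = wind_speed(u, v)
        print("block mean speed:", float(speed.mean()))
```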
Dependence of data-flow processing time on data volume
Dependence of data-flow processing time on the number of nodes
Problems not solved by these frameworks
– Automatic partitioning of the source data space.
– Flooding and synchronization management when data flows are merged.
– Data-flow routing when processing activities run in parallel and data flows are merged.
Current work
– A Twitter Storm data-request block-stream activity supporting block geometry and array priority-direction properties, and automatic partitioning of the source data space (a partitioning sketch follows below).
– A Twitter Storm data-processing activity supporting automatic data-flow merging, a generalized array-processing language, and flooding management.
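The Storm activities themselves are not shown in the slides; the following is a hypothetical sketch of what automatic partitioning of the source data space could look like, emitting request blocks of a fixed geometry and walking one "priority" axis (e.g. time) in the outermost loop.

```python
# Hypothetical sketch: partition an array's index space into request blocks of
# a given geometry, iterating the chosen priority axis first so downstream
# activities receive blocks ordered along that direction.
from itertools import product


def partition(shape, block, priority_axis=0):
    """Yield tuples of slices covering `shape` in blocks of size `block`."""
    # Block start offsets along every axis.
    starts = [range(0, dim, step) for dim, step in zip(shape, block)]
    # Put the priority axis in the outermost loop.
    order = [priority_axis] + [a for a in range(len(shape)) if a != priority_axis]
    for combo in product(*(starts[a] for a in order)):
        offsets = [0] * len(shape)
        for axis, off in zip(order, combo):
            offsets[axis] = off
        yield tuple(slice(o, min(o + b, d))
                    for o, b, d in zip(offsets, block, shape))


if __name__ == "__main__":
    # A (time, lat, lon) index space split into 8 x 64 x 128 request blocks.
    for slices in partition(shape=(32, 128, 256), block=(8, 64, 128)):
        print(slices)
```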
Results
A framework has been developed with the following features:
– cloud storage with data replication and accelerated access;
– designed for large multidimensional data arrays;
– flexible request shapes;
– flow-based system for data access and processing;
– high scalability.
Applications
– High-resolution 3D models of the Earth based on a large number of observations.
– Climate modeling and analysis tasks.
– Multispectral satellite and geological imagery processing.