1 Data Evolution: 101

2 Parallel Filesystems vs Object Stores (Amazon S3, CIFS, NFS)

3 Summary: Parallel Filesystem vs Object Stores
– Client access method: parallel filesystem – native RDMA POSIX client (POSIX reads and writes, MPI-IO API); object store – PUT, GET, DELETE (plus gateways). See the sketch below.
– Object types allowed: parallel filesystem – files and directories; object store – objects with a defined structure (data, metadata, data protection policy, checksum)
– IO types supported: parallel filesystem – parallel IO from single clients and parallel IO from multiple clients; object store – single-stream IO from very large numbers of clients
– Geographical distribution: parallel filesystem – usually local with some limited WAN capability; object store – global
– Filesystem metadata management: parallel filesystem – usually inodes with pointers to data blocks and pointers to pointers; object store – no inodes; an object ID is returned to the client which encodes the object's location in a hash
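The two access models in the first row can be contrasted with a minimal sketch. The mount path and the S3 bucket/key names below are hypothetical placeholders, and the S3 calls assume boto3 is installed and credentials are already configured; this only illustrates the byte-granular POSIX model versus whole-object PUT/GET/DELETE.

```python
import boto3

# --- POSIX-style access (any mounted parallel filesystem path) ---
with open("/mnt/pfs/results/run42.dat", "wb") as f:
    f.write(b"simulation output")            # in-place, byte-granular writes
with open("/mnt/pfs/results/run42.dat", "rb") as f:
    f.seek(4)                                # POSIX allows partial, seekable reads
    chunk = f.read(10)

# --- Object-store access (S3-style PUT / GET / DELETE on whole objects) ---
s3 = boto3.client("s3")
s3.put_object(Bucket="example-bucket", Key="results/run42.dat",
              Body=b"simulation output")     # the object is written as a whole
obj = s3.get_object(Bucket="example-bucket", Key="results/run42.dat")
data = obj["Body"].read()                    # typically read back as a whole
s3.delete_object(Bucket="example-bucket", Key="results/run42.dat")
```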

4 WOS Testing Limits (WOS 2.5)
– Maximum # of unique objects: 256 billion
– Maximum total cluster capacity (using 4 TB HDDs): 30.72 PB (see the check below)
– Maximum aggregate R/W performance (SAS HDDs): 8M object reads; 2M object writes
– Latency (4 KB read; 2-way replication, SAS HDD): 9 ms (flash is even better)
– Latency (4 KB write; 2-way replication, SAS HDD): 25 ms (flash is even better)
– Maximum object size; minimum object size: 5 TB; 512 B
– Maximum number of nodes per cluster: 256 (2 nodes per WOS6000)
– Maximum metadata space per ObjectID: 64 MB
– Maximum # of storage zones: 64
– Maximum # of replication/protection policies: 64
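As a quick sanity check, the capacity and node-count limits above are mutually consistent; the drives-per-enclosure figure below is inferred from the table, not stated on the slide.

```python
# Consistency check of the 30.72 PB figure against the node limit.
nodes = 256                    # maximum nodes per cluster
enclosures = nodes // 2        # 2 nodes per WOS6000 enclosure
drives = 30.72e15 / 4e12       # drives implied by 30.72 PB of 4 TB HDDs
print(drives)                  # 7680.0 drives in total
print(drives / enclosures)     # 60.0 drives per enclosure (inferred)
```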

5 Extents-based Traditional File System Approach
Ease of use is limited at scale:
– FSCK challenges
– Inode data structures (see the sketch below)
– Fragmentation issues
– Filesystem expansion is tricky
– No native understanding of the distribution of data
– RAID management and hot-spare management
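A rough back-of-the-envelope sketch shows why per-block inode metadata and fsck become painful at scale. The block and pointer sizes are illustrative assumptions, not tied to any particular filesystem.

```python
# Bytes of block-pointer metadata needed just to map one file,
# assuming 4 KiB data blocks and 8-byte block pointers (illustrative only).
BLOCK = 4 * 1024
PTR = 8

def pointer_overhead(file_bytes):
    data_blocks = -(-file_bytes // BLOCK)   # ceiling division
    return data_blocks * PTR

tib = 1024 ** 4
print(pointer_overhead(1 * tib) / 1024 ** 2)   # 2048.0 MiB of pointers for a 1 TiB file
# All of this structure must be kept consistent and walked by fsck after a failure.
```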

6 WOS (Web Object Scaler)
– Not POSIX-based
– Not RAID-based
– No spare drives
– No inode references, no FAT, no extent lists (see the sketch below)
– No more running fsck
– No more volume management
– Not based on a single-site/box architecture
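The "no inode references" point follows from the hash-encoded object ID mentioned on slide 3: the client can locate an object from its OID alone. The toy below illustrates the general hash-placement idea only; the node names and hashing scheme are hypothetical and are not DDN's actual algorithm.

```python
import hashlib
import uuid

NODES = [f"wos-node-{i:03d}" for i in range(8)]   # hypothetical cluster members

def locate(oid: str) -> str:
    """Map an OID straight to a node: no inode, FAT, or extent lookup needed."""
    h = int(hashlib.sha256(oid.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

def put(data: bytes) -> str:
    oid = uuid.uuid4().hex            # cluster-assigned object ID
    # ... data would be written to the node chosen by locate(oid) ...
    return oid                        # OID handed back to the client

oid = put(b"payload")
print(oid, "->", locate(oid))
```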

7 DDN | WOS Software Stack
Stack components: ObjectAssure™ erasure coding, replication engine, WOS policy engine, de-clustered data management & fast rebuild, self-healing object storage clustering, latency-aware access manager, WOS Core (peer-to-peer object storage), WOS cluster management utility, user-defined metadata, multi-tenancy layer, connectors (NFS & CIFS, GRIDScaler HSM, EXAScaler HSM, Android/iOS & S3), WOS API (C++, Java, Python, PHP, HTTP), HTTP CDN caching.
Key properties:
– API-based: integrate applications and devices more robustly
– Policy driven: manage truly via policy, rather than micromanaging multiple layers of traditional filesystems
– Global, peer-to-peer: distribute data across 100s of sites in one namespace
– Self-healing: intelligent data management recovers from failures rapidly and autonomously
– Data protection: replicate and/or erasure code (see the comparison below)
– Small files, large files, streaming files: low seek times to get data; WOS caching servers for massive streaming data
– Distributed objects with object ID management
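The trade-off between the two protection schemes in the stack (replication vs erasure coding) comes down to raw-capacity overhead for a given failure tolerance. The k/m split below is a hypothetical policy chosen for illustration, not a documented WOS default.

```python
# Raw bytes stored per user byte under each protection scheme.
def replication_overhead(copies: int) -> float:
    return float(copies)               # every byte stored `copies` times

def erasure_overhead(k: int, m: int) -> float:
    return (k + m) / k                 # k data fragments + m parity fragments

print(replication_overhead(3))         # 3.00x capacity, survives loss of 2 copies
print(erasure_overhead(8, 2))          # 1.25x capacity, survives loss of any 2 fragments
```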

