A Workflow-Aware Storage System
Emalayan Vairavanathan, Samer Al-Kiswany, Lauro Beltrão Costa, Zhao Zhang, Daniel S. Katz, Michael Wilde, Matei Ripeanu
Workflow Example – ModFTDock
ModFTDock is a protein docking application: it predicts the structure of a larger protein complex from two known proteins.
Applications: drug design, protein interaction prediction.
Background – ModFTDock on the Argonne BG/P
[Figure: the workflow runtime engine dispatches 1.2 M docking tasks across the compute nodes; each application task has local storage, and tasks communicate through files on the backend file system (e.g., GPFS, NFS).]
File-based communication produces a large IO volume; an aggregate IO rate of 8 GBps amounts to only 51 KBps per core.
Background – Backend Storage Bottleneck
Storage is one of the main bottlenecks for workflows.
[Chart: Montage workflow on 512 BG/P cores with a GPFS backend file system; scheduling and idle time account for about 40% of the runtime. Source: Zhao et al.]
Intermediate Storage Approach
[Figure: the local storage of the compute nodes is aggregated into a shared intermediate storage layer that application tasks access through a POSIX API; the workflow runtime engine stages data in from, and out to, the backend file system (e.g., GPFS, NFS). Source: Zhao et al., MTAGS 2008.]
Research Question
How can we improve the storage performance for workflow applications?
IO Patterns in Workflow Applications (Wozniak et al., PDSW'09)
Pipeline: locality and location-aware scheduling
Broadcast: replication
Reduce: collocation and location-aware scheduling
Scatter and Gather: block-level data placement
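Several of these optimizations hinge on the runtime scheduling a consumer task on the node that already stores its input. Below is a minimal sketch of that node-selection step in C, assuming the metadata manager can report a per-chunk location list; all structures and names are hypothetical illustrations, not the system's API.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_NODES 64

/* Primary replica location of every chunk of one file
 * (hypothetical layout, filled in by the metadata manager). */
struct file_chunks {
    size_t   n_chunks;
    uint32_t node_of[];  /* n_chunks entries */
};

/* Location-aware scheduling: run the consumer task on the node
 * that stores the most chunks of its input file. */
static uint32_t pick_node(const struct file_chunks *fc, uint32_t n_nodes)
{
    uint32_t count[MAX_NODES] = {0};
    uint32_t best = 0;

    for (size_t i = 0; i < fc->n_chunks; i++)
        count[fc->node_of[i]]++;
    for (uint32_t n = 1; n < n_nodes; n++)
        if (count[n] > count[best])
            best = n;
    return best;
}

int main(void)
{
    /* Toy input: a 4-chunk file spread over nodes 2, 2, 1, 2. */
    struct file_chunks *fc = malloc(sizeof *fc + 4 * sizeof(uint32_t));
    fc->n_chunks = 4;
    fc->node_of[0] = 2; fc->node_of[1] = 2;
    fc->node_of[2] = 1; fc->node_of[3] = 2;
    printf("schedule consumer on node %u\n", pick_node(fc, 4));
    free(fc);
    return 0;
}
```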
IO Patterns in ModFTDock
A large run comprises 1.2 M dock, merge, and score task instances; average file sizes range from 100 KB to 75 MB.
Stage 1: broadcast pattern
Stage 2: reduce pattern
Stage 3: pipeline pattern
Research Question
How can we improve the storage performance for workflow applications?
Our answer: workflow-aware storage that optimizes the storage for the IO patterns.
Traditional approach: one size fits all. Our approach: file- and block-level optimizations.
Integrating with the Workflow Runtime Engine
[Figure: application tasks on the compute nodes access a shared workflow-aware storage, built from their local storage, through a POSIX API; the workflow runtime engine stages data in and out of the backend file system (e.g., GPFS, NFS).]
Hints flow in both directions: application hints down to the storage (e.g., indicating access patterns) and storage hints up to the runtime engine (e.g., location information).
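One lightweight way to carry such hints without changing the POSIX API is to piggyback them on extended attributes. A minimal sketch in C under that assumption, using the standard Linux xattr calls; the mount path and attribute names (user.wass.pattern, user.wass.location) are hypothetical illustrations, not the system's documented interface.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/xattr.h>

int main(void)
{
    const char *path = "/wass/mnt/intermediate.dat";  /* hypothetical mount */

    /* Application -> storage: declare that this file will be consumed
     * in a pipeline pattern, so the storage can keep it node-local. */
    if (setxattr(path, "user.wass.pattern", "pipeline", 8, 0) != 0)
        perror("setxattr");

    /* Storage -> runtime engine: ask where the file lives so the
     * scheduler can place the consumer task on the same node. */
    char location[256];
    ssize_t n = getxattr(path, "user.wass.location", location,
                         sizeof location - 1);
    if (n >= 0) {
        location[n] = '\0';
        printf("file stored on: %s\n", location);
    } else {
        perror("getxattr");
    }
    return 0;
}
```

The appeal of this channel is that unmodified POSIX tools still work: the hints are ordinary metadata, invisible to tasks that do not ask for them.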
Outline
Background
IO Patterns
Workflow-aware storage system: Implementation
Evaluation
Implementation: MosaStore
Files are divided into fixed-size chunks; the chunks are stored on the storage nodes.
The manager maintains a block-map for each file.
A POSIX interface provides access to the system.
[Figure: MosaStore distributed storage architecture.]
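A minimal sketch of what such per-file metadata might look like; the chunk size, replica bound, and field names are assumptions for illustration, not MosaStore's actual structures.

```c
#include <stdint.h>

#define CHUNK_SIZE   (1u << 20)  /* assumed fixed chunk size: 1 MB */
#define MAX_REPLICAS 4

/* Where the replicas of one chunk live. */
struct chunk_entry {
    uint64_t chunk_id;                    /* chunk index within the file */
    uint32_t n_replicas;
    uint32_t replica_node[MAX_REPLICAS];  /* storage-node identifiers */
};

/* The manager's block-map for one file: maps every fixed-size chunk
 * to the storage nodes that hold it. */
struct block_map {
    uint64_t file_size;          /* bytes */
    uint64_t n_chunks;           /* ceil(file_size / CHUNK_SIZE) */
    struct chunk_entry *chunks;  /* n_chunks entries */
};
```

The POSIX client then resolves a read at a given offset to chunk index offset / CHUNK_SIZE, looks that chunk up in the block-map, and fetches it from one of the listed nodes.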
Implementation: Workflow-aware Storage System
[Figure: workflow-aware storage architecture.]
Implementation: Workflow-aware Storage System
Optimized data placement for the pipeline pattern: priority to local writes and reads.
Optimized data placement for the reduce pattern: collocating files on a single storage node.
Replication mechanism optimized for the broadcast pattern: parallel replication.
Exposing file location to the workflow runtime engine.
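A minimal sketch of how a chunk allocator can branch on the declared pattern to realize these placements; the enum, parameters, and the single reduce target node are illustrative assumptions, not the system's actual code.

```c
#include <stdint.h>
#include <stdio.h>

/* Declared access pattern for the file being written
 * (assumed to arrive via an application hint). */
enum io_pattern { PAT_DEFAULT, PAT_PIPELINE, PAT_REDUCE, PAT_BROADCAST };

/* Choose the storage node for the next chunk of a file. */
static uint32_t place_chunk(enum io_pattern pat,
                            uint32_t writer_node,   /* node issuing the write */
                            uint32_t reduce_target, /* collocation node       */
                            uint32_t n_nodes,
                            uint64_t chunk_id)
{
    switch (pat) {
    case PAT_PIPELINE:
        /* Keep the chunk on the writer so the consumer task can be
         * scheduled on the same node (local write, local read). */
        return writer_node;
    case PAT_REDUCE:
        /* Send every producer's output to one node, where the reduce
         * task will later run against collocated inputs. */
        return reduce_target;
    case PAT_BROADCAST:
        /* Write one copy locally; a parallel replication step then
         * fans the file out to the nodes that will read it. */
        return writer_node;
    default:
        /* No hint: spread chunks round-robin across the nodes. */
        return (uint32_t)(chunk_id % n_nodes);
    }
}

int main(void)
{
    /* A pipeline write from node 3 stays on node 3. */
    printf("%u\n", place_chunk(PAT_PIPELINE, 3, 0, 8, 0));
    return 0;
}
```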
Evaluation – Baselines
The workflow-aware storage is compared against three baselines: MosaStore, NFS, and node-local storage.
[Figure: each system fills the shared intermediate-storage role between the compute nodes' local storage and the backend file system (e.g., GPFS, NFS), with data staged in and out.]
Evaluation – Platform
Cluster of 20 machines, each with a 4-core 2.33 GHz Intel Xeon CPU, 4 GB RAM, a 1 Gbps NIC, and RAID-1 on two 300 GB 7200 rpm SATA disks.
Backend storage: an NFS server with an Intel Xeon CPU (2.33 GHz), 8 GB RAM, a 1 Gbps NIC, and six SATA disks in a RAID-5 configuration.
The NFS server is better provisioned than the cluster nodes.
Evaluation – Benchmarks and Application
Synthetic benchmarks (pipeline, broadcast, and reduce) plus a real application with its workflow runtime engine: ModFTDock.
File sizes per workload:

Workload   Pipeline                Broadcast       Reduce
Small      100 KB, 200 KB, 10 KB   100 KB, 1 KB    10 KB, 100 KB
Medium     100 MB, 200 MB, 1 MB    100 MB, 1 MB    10 MB, 200 MB
Large      1 GB, 2 GB, 10 MB       1 GB, 10 MB     100 MB, 2 GB
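For concreteness, a minimal sketch of how the pipeline benchmark's medium workload could be structured, interpreting the three sizes as the stage-to-stage file sizes; the stage layout and file names are assumptions, not the benchmark's actual code.

```c
#include <stdio.h>
#include <stdlib.h>

/* One pipeline stage: consume the previous stage's file (if any),
 * then produce out_bytes of output. */
static void stage(const char *in, const char *out, size_t out_bytes)
{
    static char buf[1 << 16];
    size_t n;

    if (in) {
        FILE *f = fopen(in, "rb");
        if (!f) { perror(in); exit(1); }
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            ;  /* read the whole input */
        fclose(f);
    }

    FILE *f = fopen(out, "wb");
    if (!f) { perror(out); exit(1); }
    for (size_t done = 0; done < out_bytes; ) {
        size_t chunk = out_bytes - done < sizeof buf ? out_bytes - done
                                                     : sizeof buf;
        fwrite(buf, 1, chunk, f);
        done += chunk;
    }
    fclose(f);
}

int main(void)
{
    /* Medium pipeline workload: 100 MB -> 200 MB -> 1 MB. */
    stage(NULL,         "stage1.dat", 100u << 20);
    stage("stage1.dat", "stage2.dat", 200u << 20);
    stage("stage2.dat", "stage3.dat",   1u << 20);
    return 0;
}
```

Run on each baseline, the stage files land on whichever layer (local storage, intermediate storage, or backend file system) that baseline provides, which is what the runtime comparison measures.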
Synthetic Benchmark – Pipeline
Optimization exercised: locality and location-aware scheduling.
[Chart: average runtime for the medium workload.]
Synthetic Benchmark – Reduce
Optimization exercised: collocation and location-aware scheduling.
[Chart: average runtime for the medium workload.]
Synthetic Benchmark – Broadcast
Optimization exercised: replication.
[Chart: average runtime for the medium workload.]
Not everything is perfect!
[Chart: average runtime for the small workload on the pipeline, broadcast, and reduce benchmarks.]
Evaluation – ModFTDock
[Figure: the ModFTDock workflow. Chart: total application time on the three systems (WASS, MosaStore, NFS).]
Evaluation – Highlights
WASS shows considerable performance gains on all benchmarks for the medium and large workloads: up to 18x faster than NFS and up to 2x faster than MosaStore.
ModFTDock runs 20% faster on WASS than on MosaStore, and more than 2x faster than on NFS.
WASS delivers lower performance on the small workloads due to metadata overheads and manager latency.
Summary
Problem: how can we improve the storage performance for workflow applications?
Approach: a workflow-aware storage system (WASS); from backend storage to intermediate storage, with bi-directional communication through hints.
Future work: integrating more applications; large-scale evaluation.
THANK YOU
MosaStore: netsyslab.ece.ubc.ca/wiki/index.php/MosaStore
Networked Systems Laboratory: netsyslab.ece.ubc.ca