Large Scale Parallel File System and Cluster Management ICT, CAS
About ICT, CAS
Institute of Computing Technology, Chinese Academy of Sciences
The first (founded in 1958) and largest national IT research institute in China
The largest graduate school of computer science in China
Builder of most Chinese systems on the HPC TOP500 list
Focused on computing system architecture: CPU, compiler, network, grid, HPC and storage
Storage Centre of ICT
Founded in 2001
Leader: Dr. Xu Lu (from HP Labs)
Storage for scientific computing
–BWFS: parallel cluster file system
–Service on Demand system: storage-based cluster management system
Storage for business computing
–VSDS: virtual storage research project
–Backup / virtual computing……
Initial Goals……
Clusters are the future of scientific computing
Avoid the I/O bottleneck – BWFS
–Easy to extend to large scale
–In both capacity and parallel data access speed
Reduce the management work – Service on Demand system
–Easy to use and manage as the scale grows
–Higher availability in the face of hardware faults as the scale grows
Make the cluster easy to scale, use, manage and access – SUMA
The Storage Bottleneck of Clusters
NFS (Network File System)
–Most widely used in clusters to provide shared data access
–Simple and easy to use and manage
Scalability problems
–Multiple NFS servers mean multiple name spaces
–Hard to extend in capacity
–Performance does not increase with capacity
Parallel access problems
–Poor performance in I/O-intensive computing
–Weak MS Windows support
What's BWFS
Parallel network file system
–Supports multiple storage appliances (8–128) in a single name space (up to 512 TB)
–Separates data and metadata access to provide parallel access across the storage appliances
Global name space across clients on different platforms
–Fully compatible with NFS (not 100% POSIX)
–Supports data sharing between Linux and Windows clients
–Supports IA32, IA64 and x86_64 hardware platforms
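The separation of metadata and data paths described above can be sketched as follows. This is a minimal illustration, not BWFS's actual protocol or API: the `MetadataServer` and `StorageNode` classes and their methods are hypothetical. The point is that the client makes one metadata round trip, then fetches stripes from the storage appliances in parallel.

```python
# Hypothetical sketch of separated metadata/data access, BWFS-style.
from concurrent.futures import ThreadPoolExecutor

class StorageNode:
    """One storage appliance, holding stripes keyed by stripe id."""
    def __init__(self):
        self.stripes = {}
    def write(self, sid, data):
        self.stripes[sid] = data
    def read(self, sid):
        return self.stripes[sid]

class MetadataServer:
    """Maps a file name to an ordered list of (node, stripe id)."""
    def __init__(self):
        self.layout = {}
    def record(self, name, placement):
        self.layout[name] = placement
    def lookup(self, name):
        return self.layout[name]

def read_file(mds, name):
    placement = mds.lookup(name)            # single metadata round trip
    with ThreadPoolExecutor() as pool:      # stripes fetched in parallel
        parts = list(pool.map(lambda p: p[0].read(p[1]), placement))
    return b"".join(parts)
```

Because user data never passes through the metadata server, aggregate bandwidth can grow with the number of storage appliances rather than being capped by a single server.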
Centralized management
–Web-based management for the storage appliances and the storage sub-system
–Integrated client management with the Service on Demand system
Online extension
–Add storage appliances to increase capacity without stopping the application
–New data is automatically striped across all storage appliances for high performance
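Round-robin striping of the kind described above can be sketched with a simple offset mapping. The 1 MiB stripe unit here is an assumption for illustration, not BWFS's documented parameter:

```python
STRIPE_SIZE = 1 << 20  # 1 MiB stripe unit (assumed value, for illustration)

def stripe_location(offset, num_appliances, stripe_size=STRIPE_SIZE):
    """Map a file byte offset to (appliance index, offset on that appliance)."""
    stripe_index = offset // stripe_size
    appliance = stripe_index % num_appliances        # round-robin placement
    local_stripe = stripe_index // num_appliances    # nth stripe on that appliance
    local_offset = local_stripe * stripe_size + offset % stripe_size
    return appliance, local_offset
```

With this mapping, consecutive stripes land on different appliances, so a large sequential read or write spreads evenly over all of them; adding an appliance changes `num_appliances` for newly written data only, which is why the new data (rather than existing data) gets the wider striping.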
[Diagram: Data Access on NFS — meta-data and user data both travel over the same path through the NFS server]
[Diagram: Data Access on BWFS — meta-data and user data travel on separate paths]
[Charts: Bandwidth of BWFS — aggregate bandwidth (MB/s, axis 0–350) when writing and reading large files (20 GB per node, 1 MB record size) vs. number of client nodes (1, 2, 4, 8, 16), comparing 1, 2 and 4 storage nodes (SN) against NFS]
Paradigm Epos 3 (China Petrol, Xinjiang)
Paradigm Disco (China Petrol, Xinjiang)
Management Interface
Service on Demand System
Initially developed as a subsystem of BWFS to provide cluster management
Reduces management work, especially in system deployment
Increases availability against storage component failures
Enables fast scheduling in large server farms with multiple clusters
Boots the system directly from the BWFS storage appliance, with no need for local hard disks
Traditional Cluster Deployment System (deployment time: 20 mins)
Shortcoming 1: Inefficiency in Scheduling (20 mins per re-deployment)
Shortcoming 2: Inefficiency in Maintenance
Hard disk errors account for 30%–50% of all computer system errors
Shortcoming 3: Inefficiency in Capacity
A 5 GB system on a 74 GB hard disk
Disks keep getting larger, but system images are kept small to reduce deployment time
Service on Demand System
Diskless boot of the OS over TCP/IP
–Virtual SCSI disk to support Windows and Linux
–Fully compatible with applications
High-performance snapshots to support fast cloning of system images
–Copy-on-write when the system image is modified
–Online backup of system images via snapshots
Automatic takeover of failed clients
Integrated monitoring engine to support automatic scheduling or adaptive computing (still under research)
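The copy-on-write snapshot behaviour above can be sketched at the block level. This is an illustrative model, not the system's actual on-disk format: the `Image` class and its block map are hypothetical.

```python
# Hypothetical sketch of copy-on-write system-image snapshots.
class Image:
    """A block image; snapshots share unmodified blocks (copy-on-write)."""
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})   # block number -> block data

    def snapshot(self):
        # Cloning copies only the block map, never the block data itself,
        # so stamping out per-node images from a golden image is cheap.
        return Image(self.blocks)

    def write(self, n, data):
        # Rebinding this image's entry leaves every snapshot's view intact:
        # the old block is copied out (here, simply retained) on write.
        self.blocks[n] = data

    def read(self, n):
        return self.blocks.get(n, b"\0")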
[Diagram: Service on Demand System — Service 1 … Service N mapped over the network to local disks]
[Diagram: Fast Deployment and Scheduling — Paradigm and CGG images cloned into per-node snapshots backing Paradigm services, CGG services, an Email system and a Web system]
Maintenance
[Diagram: a single system image cloned into per-node system snapshots — easy to maintain]
Management UI
Thanks!