
1 Mirror File System: A Multiple Server File System
John Wong, CTO (John.Wong@TwinPeakSoft.com), Twin Peaks Software Inc.

2 Multiple Server File System
- Conventional file systems (UFS, EXT3, NFS) manage and store files on a single server and its storage devices
- A multiple server file system manages and stores files on multiple servers and their storage devices

3 Problems
A single resource is vulnerable; redundancy provides a safety net at many levels:
- Disk level => RAID
- Storage level => storage replication
- TCP/IP level => SNDR
- File system level => CFS, MFS
- System level => clustering
- Application level => database replication

4 Why MFS? Many advantages over existing technologies

5 Unix/Linux File System — [diagram: Application 1 and Application 2 in user space issue calls into UFS/EXT3 in kernel space, which reaches the on-disk data through the disk driver]

6 Network File System — [diagram: an application on a client mounts NFS; calls cross the network to the server's NFSD, which stores data through UFS/EXT3]

7 UFS | NFS — [diagram: one host runs applications over both a local UFS/EXT3 with its own data and an NFS client mount backed by a remote NFSD and UFS/EXT3 holding Data B]

8 UFS + NFS
- UFS manages data on the local server's storage devices
- NFS manages data on a remote server's storage devices
- Combining the two file systems manages data on both the local and remote servers' storage devices

9 MFS = UFS + NFS — [diagram: an active MFS server runs applications over MFS on UFS/EXT3 and mirrors writes over NFS to a passive MFS server, which keeps an identical copy on its own UFS/EXT3]

10 Building Block Approach
- MFS is a kernel loadable module
- MFS is loaded on top of UFS and NFS
- Standard VFS interface
- No change to UFS and NFS
A minimal sketch of such a loadable module follows.
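As a rough illustration of the building-block approach, a file system delivered as a Solaris loadable kernel module registers itself through the standard module and VFS framework. The following is a minimal sketch, not MFS's actual source; the symbol names (mfs_init, mfs_vfsdef, and so on) are assumptions.

    /* Hypothetical skeleton of a loadable file system module (Solaris).
     * mfs_init() is where a real stacking file system would register
     * the vfsops/vnodeops that sit on top of UFS and NFS. */
    #include <sys/types.h>
    #include <sys/modctl.h>
    #include <sys/vfs.h>

    static int
    mfs_init(int fstype, char *name)
    {
        /* register vfsops/vnodeops for this fstype here */
        return (0);
    }

    static vfsdef_t mfs_vfsdef = {
        VFSDEF_VERSION,
        "mfs",       /* file system name, as used by "mount -F mfs" */
        mfs_init,    /* called once when the fstype is registered */
        0,           /* flags */
        NULL         /* mount option table */
    };

    extern struct mod_ops mod_fsops;

    static struct modlfs mfs_modlfs = {
        &mod_fsops, "mirror file system (sketch)", &mfs_vfsdef
    };

    static struct modlinkage mfs_modlinkage = {
        MODREV_1, (void *)&mfs_modlfs, NULL
    };

    /* Standard loadable-module entry points. */
    int _init(void)  { return (mod_install(&mfs_modlinkage)); }
    int _fini(void)  { return (mod_remove(&mfs_modlinkage)); }
    int _info(struct modinfo *mip) { return (mod_info(&mfs_modlinkage, mip)); }

Because registration goes through the stock VFS switch, UFS and NFS themselves need no changes, which is the point of the building-block approach.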

11 File System Framework — [diagram, after "Solaris Internals: Core Kernel Architecture" by Jim Mauro and Richard McDougall, Prentice Hall: file-operation system calls (read(), write(), open(), close(), mkdir(), rmdir(), link(), ioctl(), creat(), lseek()) and file-system operation calls (mount(), umount(), statfs(), sync()) enter the kernel through the vnode and VFS interfaces, which dispatch to concrete file systems such as UFS, NFS, VxFS, HSFS, QFS, and PCFS, backed by disk, optical drives, and the network]

12 MFS Framework — [diagram: the slide 11 framework with an MFS vnode/VFS layer inserted above UFS and NFS; operations that reach MFS fan out to UFS (local disk) and NFS (network)]
A sketch of what such a stacked node might look like follows.
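One way to picture the MFS vnode layer: each file MFS exposes would carry references to the two vnodes it stacks over, so any operation can be routed to the local copy, the remote copy, or both. This structure is illustrative only; the type and field names are invented, not MFS internals.

    /* Hypothetical per-file node for a stacking file system like MFS:
     * one vnode facing the applications, two underlying vnodes below. */
    typedef struct mfs_node {
        struct vnode *mfs_vnode;   /* vnode MFS exposes to the VFS layer */
        struct vnode *mfs_local;   /* underlying UFS vnode (local copy)  */
        struct vnode *mfs_remote;  /* underlying NFS vnode (remote copy) */
    } mfs_node_t;

    /* Recover the private node from the vnode the kernel hands us. */
    #define VTOMFS(vp)  ((mfs_node_t *)(vp)->v_data)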

13 Transparency
- Transparent to users and applications: no recompilation or relinking needed
- Transparent to existing file structures: same pathname access
- Transparent to the underlying file systems: UFS, NFS

14 Mount Mechanism
- Conventional mount: one directory, one file system
- MFS mount: one directory, two or more file systems

15 Mount Mechanism

    # mount -F mfs host:/ndir1/ndir2 /udir1/udir2

- First mount the NFS file system on a UFS directory
- Then mount MFS on top of UFS and NFS
- The existing UFS tree structure /udir1/udir2 becomes the local copy of MFS
- The newly mounted host:/ndir1/ndir2 becomes the remote copy of MFS
- Same mount options as NFS, except no '-o hard' option

16 MFS mfsck Command

    # /usr/lib/fs/mfs/mfsck mfs_dir

- After the MFS mount succeeds, the local copy may not be identical to the remote copy
- Use mfsck (the MFS fsck) to synchronize them
- mfs_dir can be any directory under the MFS mount point
- Multiple mfsck commands can be invoked at the same time

17 READ/WRITE Vnode Operations
- All VFS/vnode operations are received by MFS
- Read-related operations (read, getattr, ...) only need to go to the local copy (UFS)
- Write-related operations (write, setattr, ...) go to both the local (UFS) and remote (NFS) copies simultaneously, using threads (see the sketch below)
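A sketch of that split, reusing the hypothetical mfs_node_t/VTOMFS from slide 12. It is simplified in two ways: the real MFS issues the remote write from a separate thread so the two writes proceed in parallel, and a full version would have to duplicate the uio's iovec array rather than just the header.

    #include <sys/vnode.h>
    #include <sys/uio.h>
    #include <sys/cred.h>

    /* Reads are served from the local (UFS) copy only. */
    static int
    mfs_read(vnode_t *vp, uio_t *uiop, int ioflag, cred_t *cr,
        caller_context_t *ct)
    {
        return (VOP_READ(VTOMFS(vp)->mfs_local, uiop, ioflag, cr, ct));
    }

    /* Writes go to both copies; shown sequentially for clarity. */
    static int
    mfs_write(vnode_t *vp, uio_t *uiop, int ioflag, cred_t *cr,
        caller_context_t *ct)
    {
        mfs_node_t *np = VTOMFS(vp);
        uio_t remote_uio = *uiop;  /* VOP_WRITE consumes the uio; keep a copy */
        int error;

        error = VOP_WRITE(np->mfs_local, uiop, ioflag, cr, ct);
        if (error == 0)
            error = VOP_WRITE(np->mfs_remote, &remote_uio, ioflag, cr, ct);
        return (error);
    }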

18 Mirroring Granularity
- Directory level: mirror any UFS directory instead of the entire UFS file system (directory A can be mirrored to server A, directory B to server B)
- Block-level update: only changed blocks are mirrored

19 MFS msync Command

    # /usr/lib/fs/mfs/msync mfs_root_dir

- A daemon that resynchronizes an MFS pair after the remote MFS partner fails
- Upon a write failure, MFS logs the name of the file whose write failed and starts a heartbeat thread to detect when the remote MFS server is back online
- Once the remote server is back online, msync uses the log to copy the missed files to it (see the sketch below)
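A conceptual sketch of the recovery path. Everything here is invented for illustration: MFS_LOG, HEARTBEAT_INTERVAL, mfs_remote_alive(), and mfs_copy_file() stand in for whatever MFS actually uses.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define MFS_LOG            "/var/mfs/failed_writes.log"  /* assumed path */
    #define HEARTBEAT_INTERVAL 5                              /* seconds, assumed */

    extern int  mfs_remote_alive(const char *host);                /* hypothetical probe */
    extern void mfs_copy_file(const char *path, const char *host); /* hypothetical resync */

    /* Wait for the remote MFS server to come back, then replay the
     * log of files whose writes failed while it was down. */
    static void
    msync_recover(const char *remote_host)
    {
        char path[4096];
        FILE *log;

        while (!mfs_remote_alive(remote_host))
            sleep(HEARTBEAT_INTERVAL);

        if ((log = fopen(MFS_LOG, "r")) == NULL)
            return;
        while (fgets(path, sizeof (path), log) != NULL) {
            path[strcspn(path, "\n")] = '\0';   /* strip newline */
            mfs_copy_file(path, remote_host);
        }
        fclose(log);
    }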

20 Active/Active Configuration — [diagram: two active MFS servers, each running applications over MFS on UFS with its own data (Data A, Data B), cross-mirroring to each other over NFS]

21 MFS Locking Mechanism
- MFS uses the UFS and NFS file record locks
- Locking is required for the active-active configuration
- Locking makes write-related vnode operations atomic
- Locking is enabled by default
- Locking is not necessary in the active-passive configuration
A sketch of the locked write path follows.
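Conceptually, in active-active mode every write-class vnode operation is bracketed by the file record lock, so the local and remote updates appear as one atomic step to both servers. In this sketch mfs_lock_record() and mfs_unlock_record() are invented stand-ins for the UFS/NFS record-lock calls, and mfs_write() is the dual-write path sketched for slide 17.

    /* Hypothetical: serialize a mirrored write under the record lock so
     * the two active servers cannot interleave updates to one file. */
    static int
    mfs_write_atomic(mfs_node_t *np, uio_t *uiop, int ioflag, cred_t *cr,
        caller_context_t *ct)
    {
        int error;

        mfs_lock_record(np);                 /* take UFS/NFS record lock */
        error = mfs_write(np->mfs_vnode, uiop, ioflag, cr, ct);
        mfs_unlock_record(np);               /* both copies now updated */
        return (error);
    }

In the active-passive configuration only one server ever writes, so this lock step can be skipped, which is why locking is unnecessary there.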

22 Real-Time and Scheduled Replication
- Real-time: replicate files in real time
- Scheduled: log file path, offset, and size, then replicate only the changed portion of each file (see the record sketch below)
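For scheduled mode the log only needs enough to locate the modified bytes later. An illustrative record layout (field names are assumptions):

    #include <sys/param.h>   /* MAXPATHLEN */
    #include <sys/types.h>

    /* Hypothetical change-log record: enough to re-read just the
     * changed region of a file and send it at the scheduled time. */
    struct mfs_chg_rec {
        char   mc_path[MAXPATHLEN];  /* file that was written   */
        off_t  mc_offset;            /* where the change starts */
        size_t mc_size;              /* bytes changed           */
    };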

23 Applications
- Online file backup
- Server file backup (active → passive)
- Server/NAS clustering (active ↔ active)

24 MFS = NTFS + CIFS — [diagram: on a Windows desktop/laptop, MFS runs over NTFS and mirrors data via CIFS to a remote server's NTFS]

25 Online File Backup — [diagram: an MFS folder on a user's desktop/laptop replicates, in real time or at scheduled times, over a LAN or WAN to an MFS folder on an ISP server]

26 Replication — [diagram: a primary email server mirrors /home and /var/spool/mail over the mirroring path to a secondary server, each side running Mirror File System, with a heartbeat between them]

27 Enterprise Clusters — [diagram: several application servers, each running Mirror File System, mirror over mirroring paths to a central Mirror File System server]

28 Advantages
- Building-block approach: builds on the existing UFS, EXT3, NFS, and CIFS infrastructures
- No metadata is replicated: the superblock, cylinder groups, and file allocation maps are not replicated
- Every file write operation is checked by the file system, preserving file consistency and integrity
- Live-file replication, not raw-data replication: the primary and backup copies are both live files

29 Advantages
- Interoperability: the two nodes can be different systems, and their storage systems can differ
- Small granularity: directory level, not the entire file system
- One-to-many or many-to-one replication

30 Advantages
- Fast replication: replication runs in the kernel file system module
- Immediate failover: no fsck or mount operation is needed
- Geographically dispersed clustering: the two nodes can be separated by hundreds of miles
- Easy to deploy and manage: only one copy of MFS, running on the primary server, is needed for replication

31 Why MFS?
- Better data protection
- Better disaster recovery
- Better RAS (reliability, availability, serviceability)
- Better scalability
- Better performance
- Better resource utilization

32 Q &amp; A — [diagram: applications writing Data A and Data B through MFS on two servers]

