Distributed File Systems


1 Distributed File Systems
Objectives: to understand Unix network file sharing.
Contents: Installing NFS; How To Get NFS Started; The /etc/exports File; Activating Modifications; The Exports File; NFS And DNS; Configuring The NFS Client; Other NFS Considerations.
Practical: to share and mount NFS file systems.
Summary: This chapter looks at NFS, Sun's Network Filesystem. It focuses primarily on setting up NFS servers and clients, and especially on the differences between BSD and SVR4 UNIX in terms of setting up NFS. Lastly it looks at the disadvantages of NFS.

2 NFS/DFS: An Overview
Unix distributed filesystems are used to:
- centralise administration of disks
- provide transparent file sharing across a network
Main systems:
- NFS: Network File System, developed by Sun Microsystems in 1984
- AFS: Andrew File System, developed by Carnegie Mellon University
- RFS: Remote File System
Unix NFS packages usually include client and server components:
- a DFS server shares local files on the network
- a DFS client mounts shared files locally
- a Unix system can be a client, server or both, depending on which commands are executed
NFS can be fast in comparison to many other DFSs: very little overhead, and simple and stable protocols, based on RPC.
NFS (the network filesystem) was developed by Sun Microsystems in 1984 and licensed to many vendors. It essentially allows a "server" to make available its filesystems/directories, typically application and home directories, to "clients" who mount from the server. The client then "sees" the data as if it were local, i.e. the mount should be transparent to the client. NFS thus saves applications from being installed on several machines. Other, less popular network filesystems are AFS (the Andrew File System) and RFS (Remote File System). On the client side, NFS is accessed through the kernel's VFS (virtual filesystem) layer.

3 General Overview of NFS
- Developed by Sun Microsystems in 1984
- Independent of operating system, network, and transport protocols
- Available on many platforms, including Linux, Windows, OS/2, MVS, VMS, AIX, HP-UX...
Restrictions of NFS (stateless, open architecture):
- Unix filesystem semantics not guaranteed
- No access to remote special files (devices, etc.)
- Restricted locking: file locking is implemented through a separate lock daemon
The industry standard is currently NFSv3, the default in RedHat, SuSE, OpenBSD, FreeBSD, Slackware, Solaris, HP-UX and Gentoo. NFS servers come as kernel NFS or user-space NFS.
NFS's openness and statelessness make maintaining Unix filesystem semantics less straightforward. Because other operating systems such as VMS or DOS can so easily be accessed via NFS, operations like creating Unix-style symbolic links cannot be supported if the remote system does not support them. Another example is that of deleting files while they are still open. A Unix program may do this to create a temporary file - the file is opened, then deleted. It is still accessible by the program, but it doesn't have a name in the filesystem; Unix doesn't free the disk blocks until the file has finally been closed. Supporting this in the NFS protocol would mean introducing state into the server; it can be supported on the client, however. A "special file" in Unix accesses a device; NFS provides no support for this over a network. (AT&T's RFS does.) File locking becomes difficult in a networked environment - we are trying to introduce state into a system that was designed to be stateless. A separate lock manager daemon does handle file locking, however.

4 Three versions of NFS available
Version 2:
- Supports files up to 4GB (most commonly 2GB)
- Requires the NFS server to successfully write data to its disks before the write request is considered successful
- Has a limit of 8KB per read or write request (1 TCP window)
Version 3 is the industry standard:
- Supports extremely large files, up to 8 exabytes
- Considers NFS server data updates successful when the data is written to the server's cache
- Negotiates the data limit per read or write request between the client and server to a mutually decided optimal value
Version 4 is coming:
- File locking and mounting are integrated in the NFS daemon and operate on a single, well-known TCP port, making network security easier
- Support for the bundling of requests from each client provides more efficient processing by the NFS server
- File locking is mandatory, whereas before it was optional
It is important to match the versions of NFS running on clients and server to help ensure the necessary compatibility to get NFS to work predictably. The NFS article on Wikipedia is a nice read; there one can find the mandatory parts of NFS plus some pros and cons. NFSv4 is coming, but support in Linux is still under development, and anyone can participate and test. It is not an easy story: it involves downloading several source trees, changing libraries, and patching and recompiling the Linux kernel for NFSv4 support. NFS Version 1 was a prototype only!

5 Important NFS Daemons
Portmap
- The primary daemon upon which all the RPC services rely
- Manages connections for applications that use the RPC specification
- Listens on TCP port 111 for the initial connection, then negotiates a range of ports, usually above 1024, for further communication
- You need to run portmap on both the NFS server and the client
nfs (rpc.nfsd)
- Starts the RPC processes needed to serve shared NFS file systems
- Listens on TCP or UDP port 2049 (the port can vary)
- The nfs daemon needs to run on the NFS server only
nfslock (rpc.lockd, rpc.statd)
- Used to allow NFS clients to lock files on the server via RPC processes
- Listens on a negotiated UDP/TCP port
- The nfslock daemon needs to run on both the NFS server and the client
netfs
- Allows RPC processes run on NFS clients to mount NFS filesystems on the server
- The netfs service needs to run on the NFS client only
NFS isn't a single program, but a suite of interrelated programs that work together to get the job done. Portmap MUST be running before the NFS server can start. In some NFS implementations nfslock & netfs are started automatically when the NFS server starts (NFSv3 on SuSE); in other Unixes it is necessary to start them all manually in order to get the server up.

6 The NFS Protocol Stack (aka VFS)
[Diagram: the client side runs mount, biod, statd and lockd; the server side runs mountd, nfsd, statd and lockd. Both sit on top of XDR (presentation layer) and RPC (session layer), over the transport, network, link & physical layers. RPC depends on portmap, which runs on both client and server.]
In terms of the protocol stack NFS is an application. XDR (eXternal Data Representation) is our presentation layer, and RPCs (Remote Procedure Calls) are our session-layer entity. On the client side, we mount from the server using the mount command. On the server side, mountd responds to the mount request and allows or disallows the mount. Data is represented in an operating-system-independent fashion, i.e. using XDR, therefore allowing many OSs to share data, including UNIX, DOS, OS/2, Netware, MVS, VMS and more. When the client wishes to communicate with the server it generates RPCs, and the server's nfsd (network file system daemons) answer the client's calls. Record locking is implemented using the lockd and statd daemons, which must be run on both the server and the client.

7 Installing kernel NFS, Linux
- Check if NFS is installed with rpm
- Check if the RPC portmap package is installed with rpm
- If not, install them; always begin with portmap
- If you are not running SuSE, install portmap, nfs-utils and nfs-server (NFS should be implemented in the kernel)
suse93:~ # rpm -qa | grep nfs
nfs-utils
yast2-nfs-client
yast2-nfs-server
# rpm -qa | grep portmap
portmap-5beta-733
# rpm -ivh
# rpm -ivh
Samba is usually the solution of choice when you want to share disk space between Linux and Windows machines; NFS is used when disks need to be shared between Linux servers. Directories on the NFS server are presented to the network as special NFS filesystems, and the remote NFS clients use the mount command to gain access to them. SuSE/RedHat Linux installs NFS by default, and by default it is activated when the system boots up. You can determine whether you have NFS installed using the rpm command; the main NFS package is called "nfs-utils". You might also see nfs-client and nfs-server yast packages for configuration, or similar RedHat tools. You will also need to have the RPC portmap package installed; you can use the rpm command to determine whether it's installed on your system. This should be the default, but since many people lack knowledge of RPC and how to secure it, they avoid RPC. If NFS and portmap are not installed, they can be added fairly easily. Most RedHat and Fedora Linux software products are available in the RPM format, and downloading and installing RPMs isn't hard; if you need a refresher, the chapter on RPMs covers how to do this in detail. You will need to make sure that the nfs-utils and portmap software RPMs are installed. When searching for the RPMs, remember that the filename usually starts with the software package name followed by a version number, like this: nfs-utils and portmap-5beta-733.

8 How To Get kernel NFS server Started
- Activate the 3 necessary services for NFS at boot: the NFS server daemon, NFS file locking, and RPC portmap
- Start the portmapper and the NFS server, which starts all dependent services
- Whatever you do, always start portmap first
- Check that the services for NFS are running with rpcinfo
- In some Unixes you need to start them separately: /etc/init.d/portmap start (or just portmap(d)), /etc/init.d/nfs start (nfs(d)), /etc/init.d/nfslock start (nfslock(d))
# insserv portmap
# insserv nfsserver
# rcportmap start
# rcnfsserver start
# rpcinfo -p localhost
[rpcinfo output: program, version, protocol and port for the portmapper, nfs, nfs_acl, status, nlockmgr and mountd, registered over both TCP and UDP]
You can also use the chkconfig command to configure NFS and RPC portmap to start at boot. You will also have to activate NFS file locking to reduce the risk of corrupted data. If your client or server is mounting NFS filesystems from /etc/fstab you can start rcnfs as well. You can also use the init scripts in the /etc/init.d directory to start/stop/restart NFS and RPC portmap after booting: # /etc/init.d/portmap start ; # /etc/init.d/nfsserver start. If the services were already running you will get an [Error] or [Fail] message. You can usually also stop and reload, as well as check status, directly from the init scripts. rpcinfo is nice but a bit too clumsy and too detailed; normally you might only check with nfsserver status. If you installed NFS from sources other than rpm you need to add portmap and nfs to /etc/init.d/rc.local in RedHat or /etc/init.d/boot.local in SuSE. If you installed NFSv4 from sources, you might need to edit the /etc/init.d/nfsserver and /etc/init.d/portmap scripts to make the new sources work.

9 How To Get NFS client Started
- Activate the 2 necessary services for NFS at boot: NFS file locking (nfslock) and RPC portmap
- Start the portmapper and NFS services with rc
- Check that the services for NFS are running with rpcinfo
- Note! There can be more services running depending on your system setup
- In some Unixes you need to start them separately: /etc/init.d/netfs start (or just netfs(d)) and /etc/init.d/nfslock start (or nfslock(d))
- Always start portmap first, then netfs, and last nfslock
# insserv portmap
# rcportmap start
# rpcinfo -p localhost
[rpcinfo output: the portmapper registered over both TCP and UDP]
You can also use the chkconfig command to configure RPC portmap to start at boot. You will also have to activate NFS file locking to reduce the risk of corrupted data. You can also use the init scripts in the /etc/init.d directory to start/stop/restart RPC portmap after booting: # /etc/init.d/portmap start. If the services were already running you will get an [Error] or [Fail] message. You can usually also stop and reload, as well as check status, directly from the init scripts. rpcinfo is nice but a bit too clumsy and too detailed; normally you might only check with rcportmap status. If you installed NFS from sources other than rpm you need to add portmap to /etc/init.d/rc.local in RedHat or /etc/init.d/boot.local in SuSE. If you installed NFSv4 from sources, you might need to edit the /etc/init.d/portmap script to make the new sources work.

10 NFS And DNS
- Check FORWARD resolution
- Check REVERSE resolution
- Both forward and reverse must match; if not, fix your DNS zone files (review netadmin chapter 3)
- Synchronized /etc/hosts files on server and client will also do
Some common error messages:
- Lookup: host resolution error ("forward lookup doesn't exist")
- Timeout: firewall port setup ("RPC: Timeout", "failed: server is down")
- Not registered: portmap is not running ("RPC: Program not registered")
# host
in-addr.arpa domain name pointer a01.my-site.com.
# host a01.my-site.com
a01.my-site.com has address
The NFS client must have a matching pair of forward and reverse DNS entries on the DNS server used by the NFS server. In other words, a DNS lookup on the NFS server for the IP address of the NFS client must return a server name that will map back to the original IP address when a DNS lookup is done on that same server name. This is a security precaution added into the nfs package that lessens the likelihood of unauthorized servers gaining access to files on the NFS server. Failure to correctly register your server IPs in DNS can result in "fake hostname" errors. Note! Most RPC services need the forward and reverse lookups to match; this also goes for many other services. To secure RPC, use the service's native mode allowing only certain IP addresses, users or domains to have access; next is to "help" the service with a firewall like iptables, ipchains, ipf, pf or tcp-wrappers.
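As a sketch of the matching pair described above - assuming the client a01.my-site.com has the example address 192.168.1.11 (the real addresses do not appear in the slide) - the forward and reverse zone records would look like:

```
; forward zone file for my-site.com (example address)
a01    IN  A    192.168.1.11

; reverse zone file for 1.168.192.in-addr.arpa (must point back to the same name)
11     IN  PTR  a01.my-site.com.
```

With both records in place, `host a01.my-site.com` and `host 192.168.1.11` each resolve to the other, which is exactly the round-trip check the NFS server performs.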

11 The NFS Server sharing directories
The exportfs command is used to share directories on the network:
- any directory can be exported
- subdirectories of an exported directory may not be exported unless they are on a different disk
- parents of an exported directory may not be exported unless they are on a different disk
- only local filesystems can be exported
Some exportfs -o sharing options:
- rw = read and write access (default)
- ro = read-only access
- sync = write when requested
- wdelay = wait for sync
- hide = don't show subdirectories that are exported by another export
- no_all_squash = remote uids & gids remain those of the client
- root_squash = remote root uid becomes anonymous
- no_root_squash = remote root equals the local root user
- squash_uids, squash_gids = the specified remote uids and gids are "squashed to" the user with identity nobody
We share the home directory in -v verbose mode; the directory is shared to host rosies only:
# exportfs -v -o rw,squash_uids=0-499,squash_gids=0-499 rosies:/home
exporting rosies:/home
The exportfs command is a general-purpose network sharing command which can be used for NFS (as here). rw and root_squash are the standard sharing options; extend this with -o squash options. If you receive this error: "exportfs: rosies has non-inet addr", it means there is trouble with DNS or the /etc/hosts file - name resolving does not work properly. Check your zone files and hosts file. When using NFS it is imperative that UIDs are consistent across all systems in the network; the squash options help a bit on the way. Later in this class we will learn about LDAP, which enables uniform UIDs/GIDs in enterprise environments.

12 More on Shared Directories
If someone is using a shared directory, you will not be able to unshare it. Check if someone is accessing RPC or using a share:
# showmount -a localhost
All mount points on server:
*, /24:/home
*:/home
*:/install/suse9.3
rosies:*
rosies:*, /24
The first red line shows that someone is using RPC against our server; the second red line shows that someone has accessed /home.
Unshare a share in -v verbose mode:
# exportfs -v -u rosies:/home
unexporting rosies:/home
Check what the server is sharing:
# exportfs -v
/home /24(rw,wdelay,root_squash)
/exports/network-install/SuSE/9.3 <world>(ro,wdelay,root_squash) /install/suse9.3
The exportfs switch -i can prevent you from disasters like unexporting all shares, or the reverse: it ignores the /etc/exports file, so that only default options and options given on the command line are used. showmount -e is also convenient for seeing what the server is exporting; showmount can also monitor remote servers: showmount -e server2.
NFS and symbolic links, limitations: you have to be careful with the use of symbolic links on exported NFS directories. If an absolute link points to a directory on the NFS server that hasn't been exported, then the NFS client won't be able to access it.
NFS and mounted image files, workaround: it is currently not possible to export mounted loopback filesystem images. The workaround is to export the image files and let the clients do the loopback mount themselves.
At the server with the ISO files: exportfs rosies:/shared
At the client: mount -t nfs -o rw server.my-site.com:/shared /mnt/nethome
mkdir /mnt/a ; mount -o loop /mnt/nethome/image.iso /mnt/a

13 The /etc/exports File, static shares
Sample exports file:
# cat /etc/exports
/data/files           *(ro,sync)
/home                  /24(rw,sync)
/data/test            *.my-site.com(rw,sync)
/data/database         /32(rw,sync)
Some options in the exports file (same as exportfs); squash changes a remote identity to a selectable local identity. Linux uses another format in /etc/exports than BSD systems.
- ro = read-only access
- rw = read and write access
- sync = write when requested
- wdelay = wait for sync
- hide = don't show subdirectories that are exported by another export
- no_all_squash = remote uids & gids remain those of the client
- root_squash = remote root uid becomes anonymous
- no_root_squash = remote root equals the local root user
- squash_uids = the specified remote uids are treated as the identity nobody
This is the main NFS configuration file and consists of two columns. The first column lists the directories you want to make available to the network. The second column has two parts: the first part lists the networks or DNS domains that can get access to the directory, and the second part lists NFS options in brackets. In the sample we have provided:
- read-only access to the /data/files directory for all networks
- read/write access to the /home directory for all servers on the /24 network
- read/write access to the /data/test directory for servers in the my-site.com DNS domain
- read/write access to the /data/database directory for a single server
More options:
- squash_uids=0-50 = squash uids 0-50 to uid nobody
- squash_gids=0-50 = squash gids 0-50 to gid nobody
- no_root_squash = opens the system too much!
- anonuid=500 = squash to uid 500
- anongid=500 = squash to gid 500
- map_static=/etc/nfs/shrike.map = a file that declares what to squash to what (remote-to-local uid and gid mappings: squash a range, or map e.g. remote uid/gid 1000 to local 1200)
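The network addresses in the sample above were lost; a filled-in version, assuming the example network 192.168.1.0/24 and the example database host 192.168.1.100 (both made-up addresses, substitute your own), could look like:

```
# /etc/exports (sketch with example addresses)
/data/files     *(ro,sync)                      # read-only for everyone
/home           192.168.1.0/24(rw,sync)         # read/write for the local /24 net
/data/test      *.my-site.com(rw,sync)          # read/write for the DNS domain
/data/database  192.168.1.100/32(rw,sync)       # read/write for a single host
```

The /32 suffix restricts that last export to exactly one address, which is the tightest access control /etc/exports offers on its own.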

14 The /etc/exports File, Squashing
Sample exports file using map_static:
# cat /etc/exports
/data/files           *(ro,sync)
/home                  /24(map_static=/etc/squash.map,rw,sync)
/data/test            *.my-site.com(rw,sync)
/data/database         /32(rw,sync)
The map_static file (=/etc/squash.map) declares the mappings; squash changes a remote identity to a selectable local identity:
# /etc/squash.map
# remote   local    comment
# uid range         squash to user nobody
# gid range         squash to group nobody
# uid range         map to another uid range
# gid range         map to another gid range
# uid               map individual user to uid 2001
# gid               map individual group to gid 2001
In many cases UserID and GroupID can become a problem, especially if you have a large network and many clients. In order to make the NFS server a bit more uniform when it comes to UID and GID we use squash options, often together with NIS and LDAP. Here we use a special static mapping file for "squashing" client uids & gids to server uids & gids. For central administration of squashing in enterprise environments we can use NIS/NIS+ or LDAP (map_nis and map_ldap); both NIS and LDAP use central databases to keep users and their attributes. Later we will look at LDAP. The exports file is a complex file and there are many more options; check the manual pages (man exports) for more information.
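The numeric ranges in the map above did not survive; a filled-in sketch, using made-up uid/gid ranges and the historical map_static format (remote range first, then the local base, with no local value meaning "squash"), might read:

```
# /etc/squash.map (sketch; all ranges are example values)
# remote        local
uid 0-99                # squash these remote uids to nobody
gid 0-99                # squash these remote gids to nobody
uid 500-1000    1500    # map remote uids 500-1000 to local 1500-2000
gid 500-1000    1500    # map remote gids 500-1000 to local 1500-2000
uid 1001        2001    # map an individual user to uid 2001
gid 1001        2001    # map an individual group to gid 2001
```

The mapping is applied per export, so different clients or networks can carry different squash.map files if their uid spaces differ.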

15 Activating Modifications in Exports File
Re-reading all entries in the /etc/exports file: when no directories have yet been exported to NFS, the "exportfs -a" command is used:
# exportfs -a
After adding share(s) to the /etc/exports file: when adding a share you can use the "exportfs -r" command to export only the new entries:
# exportfs -r
Deleting, moving or modifying a share: in this case it is best to temporarily unexport the NFS directories using the "exportfs -ua" command followed by the "exportfs -a" command:
# exportfs -ua
# exportfs -a
Temporarily export /usr/src to hosts on the net:
# exportfs /24:/usr/src -o rw
exportfs is used both to instruct rc.nfsd to re-read the /etc/exports file after changes and to make temporary shares from the command line or from within various applications. exportfs switches:
-u unexport
-a all entries in /etc/exports
-r resync/refresh exports
-v be verbose
Add a temporary share: exportfs :/usr/src
Remove a temporary share: exportfs -u :/usr/src

16 Exercise - Sharing Directories
Write down the commands to do the following:
- With one command, share /usr/share read-only for all clients in your net
- Permanently share /etc read-only for rosies and tokyo, and read/write for seoul
- List the file containing the permanent shares
- Give two commands showing what your host has shared
- Check who has mounted your shared directories
- Check who has mounted directories on rosies
- Check the server NFS status
- From the server, with one command, check that the nfs client has the portmapper running

17 The nfsstat Command
Server statistics: a large table arrives after the command is issued:
# nfsstat -s
[Server nfs v3 output: per-operation call counts and percentages for null, getattr, setattr, lookup, access, readlink, read, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, readdir, readdirplus, fsstat, fsinfo, pathconf and commit]
Client statistics:
# nfsstat -c
Server file handle statistics: usage information on the server's file handle cache, including the total number of lookups, and the number of hits and misses. The server has a limited number of file handles, and this can be tuned:
# nfsstat -o fh
[Server file handle cache output: lookup, anon, ncachedir and stale counters]
The nfsstat command provides useful error statistics. The -s option provides NFS server stats, while the -c option provides them for clients.

18 Error Thresholds For The "nfsstat" Command

19 The NFS Client side
- Ensure portmap is running; clients only need portmap to be running
- Also check that the server is up
- If portmap is not running, start it
- Show exported shares on a remote server
- Temporarily mount NFS shares on the client with default options
- umount temporarily mounted NFS shares on the client
# rpcinfo -p localhost
# rpcinfo -p
# rcportmap start
# showmount -e
Export list for :
/home *
/exports/network-install/SuSE/9.3 *
# mkdir /mnt/nethome
# mount -t nfs :/home /mnt/nethome
# umount /mnt/nethome
NFS configuration on the client requires you to start the NFS application, create a directory on which to mount the NFS server's directories that were exported via the /etc/exports file on the server, and finally mount the NFS server's directory on your local directory, or mount point. Here's how to do it all. Use the rpcinfo command to make sure the portmap daemon is running, and start it if it isn't. NFS clients don't need NFS to be running to mount remote directories, but they need to have portmap running to communicate correctly with the NFS server. You mount NFS shares into your client's filetree with the mount command. Usually you mount locally attached devices, like mount /dev/cdrom /mnt/cdrom for the cdrom, but now you need to specify: mount -o rw,soft -t nfs hostname:/path/to/share /local/mount/point. So the redirector is activated through the portmapper.

20 To see what is mounted on client side
Using the df command to show disk usage:
# df -F nfs
Filesystem k-blocks Used Available Use% Mounted on
:/install/suse % /mnt/a
The mount command is most detailed about mount options:
# mount | grep nfs
:/install/suse9.3 on /mnt/a type nfs (rw,addr= )
showmount shows all exported shares on a remote server plus all mounts from clients:
# showmount -a
All mount points on :
*, /24:/home
*:/home
*:/install/suse9.3
:*
Client nfsstat will show statistics:
# nfsstat -c
Client rpc stats: calls retrans authrefrsh
It is important to be able to monitor what foreign filesystems are attached to a client; therefore there exists a range of tools that do largely the same thing, with minor differences. nfsstat exists on both client and server, and is useful for analyzing NFS server and client behaviour in your environment.

21 mount -o <options> -t nfs
NFS clients access network-shared directories using the mount command. NFS mount -o options:
- rw/ro = read-write (default) or read-only
- hard = retry the mount operation until the server responds (default)
- soft = try the mount once and allow it to time out
- retrans, timeo = retransmission and timeout parameters for soft-mounted operations
- bg = after the first mount failure, retry the mount in the background
- intr = allow operations on the filesystem to be interrupted with kill signals
- nfsvers=n = the version of NFS the mount command should attempt to use
Use /etc/fstab to make NFS mounts permanent:
a02:/tmp  /mnt/nethome   nfs    soft,intr,nfsvers=3      0      0
Manually mounting /tmp from a02 as /mnt/nethome on the local host:
# hostname
a01
# mount -o rw,soft -t nfs a02:/tmp /mnt/nethome
The initial mount command mounts the resource from the server. The client-side mount command contacts the server's rpcbind (portmapper) to ask which port number the mount daemon (mountd) is listening on; the port number should be thought of as an application's address. Once the client's mount contacts the server's mountd, the mount is either allowed or denied, based on rights specified by the server in /etc/exports. If the mount is allowed, the server passes the client an identifier called a File Handle, which the client machine's kernel puts in its mount table. When it references the mounted structure in future it simply passes the server the File Handle to indicate what it is attempting to access. To make the mounts happen at boot time, edit /etc/fstab; mounts made on the command line like the ones in the above slide will be lost on a reboot.

22 Mount "nfs-shares" at boot on the client
Make entries in /etc/fstab:
#/etc/fstab
#Directory MountPoint Type   Options   Dump   FSCK
:/data/files  /mnt/nfs   nfs    soft,nfsvers=3     0      0
Some /etc/fstab mount options:
- auto = mount this when mount -a is used
- defaults = rw, suid, dev, exec, auto, nouser, async
- user = allow regular users to mount/umount
- sync = use synchronous I/O, most safe
- soft = skip the mount if the server is not responding
- hard = try until the server responds
- retry=minutes
- bg/fg = retry mounting in the background or foreground
Mount all unmounted: if you made changes in fstab on a live system, you can mount all unmounted filesystems with:
# mount -a
The /etc/fstab file lists all the partitions that need to be auto-mounted when the system boots. Therefore you need to edit the /etc/fstab file if you need the NFS directory to be made permanently available to users on the NFS client. In this case we're mounting the /data/files directory as an NFS-type filesystem on the /mnt/nfs mount point; the NFS server is "bigserver". Note: with mount -o (option) you can use almost all of these options from the command line as well. The client, or worse a server, can severely "hang" if you do not do a soft mount and/or bg mount of NFS filesystems!

23 Possible NFS Mount options
There are more options, look in the man pages for mount!

24 Exercise - Using mount with NFS
What command will mount /usr/share from mash4077 on the local mount point /usr/share? How do I check what filesystems are mounted locally? Make a static mount in a01 ”/mnt/nethome” of exported ”a02:/tmp” in /etc/fstab: Manually mount exported a02:/usr/share as read only on a01: How can I show what is nfs exported on the server # # # # #

25 NFS security
NFS is inherently insecure:
- NFS can be run in encrypted mode, which encrypts data over the network
- AFS is more appropriate for security-conscious sites
- User IDs must be coordinated across all platforms: UIDs, not user names, are used to control file access (use LDAP or NIS); mismatched user ids cause access and security problems
- Fortunately, root access is denied by default over NFS: root is mapped to user nobody
# mount | grep "/share"
mail:/share on /share
# id
uid=318(hawkeye) gid=318(hawkeye)
# touch /share/hawkeye
# ssh mail ls -l /share/hawkeye
-rwxr-xr-x 2 soonlee soonlee 0 Jan 11 11:21 /share/hawkeye
NFS is very useful but there is a price: UIDs must be coordinated across all platforms, otherwise users may have access to data they should not. In the above example, we used UID=318 for user hawkeye, but the same UID on the remote system was assigned to user soonlee. NFS and portmap have had a number of known security deficiencies in the past and, as a result, it is not recommended to use NFS over insecure networks. NFS doesn't encrypt data, and it is possible for root users on NFS clients to have root access to the server's filesystems.
- Exercise caution with NFS.
- Restrict its use to secure networks.
- Export only the most needed data.
- Consider using read-only exports whenever data updates aren't necessary.
- Use the root_squash option in /etc/exports (default) to reduce the risk of abuse of privileges by NFS client "root" users on the NFS server.
Testing NFS:
1. useradd -u 555 hawkeye
2. passwd hawkeye
3. id hawkeye (uid=555(hawkeye) gid=501(hawkeye))
4. ssh mail exportfs --export :/tmp
5. ssh mail useradd -g 555 soonlee
6. mkdir /mnt/nethome
7. mount -o rw -t nfs mail:/tmp /mnt/nethome
8. su hawkeye ; touch /tmp/hawkeye
9. ssh mail ls -l /mnt/hawkeye
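To avoid the mismatch shown above, the same numeric UID must belong to the same account on every host. A sketch, with made-up /etc/passwd entries, of what "coordinated UIDs" means in practice:

```
# on the NFS client
hawkeye:x:555:555:Hawkeye:/home/hawkeye:/bin/bash

# on the NFS server - same uid and gid, so ownership of NFS files agrees
hawkeye:x:555:555:Hawkeye:/home/hawkeye:/bin/bash
```

If the server instead had soonlee:x:555:..., every file hawkeye creates over NFS would appear to belong to soonlee on the server, which is exactly the failure the slide demonstrates; NIS or LDAP keep these entries synchronized automatically.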

26 NFS Hanging
- Run NFS on a reliable network
- Avoid having NFS servers that NFS-mount each other's filesystems or directories
- Always use the sync option whenever possible
- Mission-critical computers shouldn't rely on an NFS server to operate
- Don't have NFS shares in the search path
Most NFS transactions use the UDP protocol, which doesn't keep track of the state of a connection. If the remote server fails, the NFS client will sometimes not be aware of the disruption in service. If this occurs, the NFS client will wait indefinitely for the return of the server, which also forces programs relying on the same client-server relationship to wait indefinitely. It is for this reason that it's recommended to use the "soft" option in the NFS client's /etc/fstab file; this will cause NFS to report an I/O error to the calling program after a long timeout. A hung NFS connection to a directory in your search path could cause your shell to pause at that point in the search path until the NFS session is regained; NFS-mounted directories shouldn't be part of your search path.

27 NFS Hanging continued
File locking: known issues exist; test your applications carefully.
Nesting exports: NFS doesn't allow you to export directories that are subdirectories of directories that have already been exported, unless they are on different partitions.
Limiting "root" access: no_root_squash.
Restricting access to the NFS server: you can add a user named "nfsuser" on the NFS client and squash access for all other users on that client.
Use NFSv3 if possible.
NFS allows multiple clients to mount the same directory, but NFS has a history of not handling file locking well, though more recent versions are said to have rectified the problem. Test your network-based applications thoroughly before considering using NFS. NFS doesn't allow a "root" user on an NFS client to have "root" privileges on the NFS server; this can be disabled with the no_root_squash export option in the /etc/exports file. NFS doesn't provide restrictions on a per-user basis: if a user named "nfsuser" exists on the NFS client, then they will have access to all the files of a user named "nfsuser" on the NFS server. It is therefore best to use the /etc/exports file to limit access to certain trusted servers or networks. You may also want to use a firewall to protect access to the NFS server. A main communication control channel is usually created between the client and server on TCP port 111, but the data is frequently transferred on a randomly chosen TCP port negotiated between them. There are ways to limit the TCP ports used, but that is beyond the scope of this book. You may also want to eliminate any wireless networks between your NFS server and client, and it is not wise to mount an NFS share across the Internet.

28 NFS Firewall considerations
NFS uses many ports The RPC portmapper uses TCP/UDP port 111 The NFS server itself uses port 2049 mountd listens on a negotiated UDP/TCP port nlockmgr listens on a negotiated UDP/TCP port Expect that almost any TCP/UDP port above 1023 can be allocated for NFS NFS needs a STATEFUL firewall A stateful firewall can handle traffic that originates from inside a network while blocking traffic from outside SPI can demolish NFS Stateful packet inspection on cheaper routers/firewalls can misinterpret NFS traffic as DoS attacks and start dropping packets NFSSHELL This is a hacker tool that can exploit poorly secured NFS servers, written by Leendert van Doorn Use VPN and IPsec tunnels With a complex service like NFS, IPsec or some kind of VPN should be considered when it is used on untrusted networks.
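Because mountd and nlockmgr normally pick negotiated ports, firewalling NFS is much easier if those ports are pinned to fixed values. On Red Hat-style distributions this can be sketched in /etc/sysconfig/nfs; the variable names vary between distributions and the port numbers below are arbitrary examples, so treat this as an illustration rather than a recipe:

```
# /etc/sysconfig/nfs -- pin the normally negotiated ports (example values)
MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
# rpcbind (111) and nfsd (2049) already use fixed ports and need no pinning
```

With the ports fixed, the firewall only needs to open 111, 2049 and the handful of pinned ports instead of the whole >1023 range.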

29 Common NFS error messages

30 NFS Automounter for clients or servers
Automatically mount directories from the server when needed To activate automount manually and at boot Management of shares is centralized on the server Increases security and reduces lockup problems with static shares The main configuration sits in /etc/auto.master The simple format is: MOUNT-KEY MOUNT-OPTIONS LOCATION MOUNT-KEY is the local mountpoint, here /doc, /- (from root) and /home MOUNT-OPTIONS are the standard mount options previously described, here -ro LOCATION can be a direct share on a server, like server with map file auto.direct, or indirect like /etc/auto.home. A common configuration file, /etc/auto.misc, is used for floppy/CD/DVD. Centralized administration requires setting /etc/nsswitch.conf # rcautofs start # insserv autofs /doc -ro server:/usr/doc /- /etc/auto.direct /home /etc/auto.home The permanent mounting of filesystems has its disadvantages. For example, the /etc/fstab file is unique per Linux server and has to be individually edited on each. NFS client management, therefore, becomes more difficult. Also, the mount is permanent, tying up system resources even when the NFS server isn't being accessed. The automounter feature overcomes these shortcomings by allowing you to bypass the /etc/fstab file for NFS mounts, instead using an NFS-specific map file that can be distributed to multiple clients. In addition, you can use the file to specify the expected duration of the NFS mount, after which time it is unmounted automatically. However, automounter continues to report to the operating system kernel that the mount is still active. When the kernel makes an NFS file request, automounter intercepts it and mounts the remote directory on the mount point defined in the map file. The mount point directory is dynamically created by the automounter when needed; after the timeout period the remote directory is unmounted and the mount point is deleted.
Centralized administration of filesystems and users can be achieved with NIS/NIS+ and LDAP; the most popular today is LDAP, but many are still using NIS/NIS+. Filesystems mounted from servers are usually the users' /home and some other filesystem with applications. /etc/nsswitch.conf controls how. For example, as in the picture above, it reads "automount: files nis ldap"; this tells the client to first look at /etc/auto.master, then ask the NIS server, and last the LDAP server. Both NIS and LDAP servers act as centralized databases holding not only user information but also client host information and the infrastructure they live in. automount: files nis ldap

31 Direct And Indirect Map Files structure
File /etc/auto.master sets the mandatory automount config map files mounts always happen under the auto.master "mount key" Direct map file /etc/auto.direct Direct maps are used to define NFS filesystems that are mounted from different servers or that don't all start with the same prefix. Indirect map file /etc/auto.home Indirect maps define directories that can be mounted under the same mount point, such as users' home directories. /data/sales -rw server:/disk1/data/sales /sql/database -ro,soft snail:/var/mysql/database peter server:/home/peter kalle akvarius:/home/bob walker iss:/home/bunny The format of these map files is similar to that of the /etc/auto.master file, except that columns two and three have been switched. Column one lists all the directory keys that will activate the automounter feature. It is also the name of the mount point under the directory listed in the /etc/auto.master file. The second column provides all the NFS options to be used, and the third column lists the NFS servers and the filesystems that map to the keys. When the NFS client accesses a file, it refers to the keys in the /etc/auto.master file to see whether any fall within the realm of the automounter's responsibility. If one does, then automounter checks the subsidiary map file for a subdirectory mount point key. If it finds one, then automounter mounts the files for the system. Indirect Map File Example In the previous example, the /etc/auto.master file redirected all references to the /home directory to the /etc/auto.home file. This second file has entries for peter, kalle, and walker; these directories are actually mount points for directories on servers server, akvarius, and iss. Direct Map File Example The second entry in the /etc/auto.master file was specifically created to handle all references to one-of-a-kind directory prefixes. In the example the /data/sales and /sql/database are the mount points for directories on servers server and snail.
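The three-column map format described above (key, optional options column, location) can be illustrated with a small parser sketch. This is purely an illustration of the file format, not code from autofs, and parse_map_line is a hypothetical helper name:

```python
# Minimal sketch: parse automounter map lines of the form
#   KEY [ -mount-options ] SERVER:PATH
# Illustration of the map-file format only, not part of autofs.

def parse_map_line(line):
    parts = line.split()
    if len(parts) == 3 and parts[1].startswith("-"):
        key, options, location = parts
        options = options.lstrip("-")       # "-rw" -> "rw"
    elif len(parts) == 2:
        key, location = parts
        options = ""                        # options column is optional
    else:
        raise ValueError("unrecognised map line: %r" % line)
    server, _, path = location.partition(":")
    return {"key": key, "options": options, "server": server, "path": path}

print(parse_map_line("peter server:/home/peter"))
print(parse_map_line("/data/sales -rw server:/disk1/data/sales"))
```

Running it on the two example entries from the slide shows an indirect entry (key "peter") and a direct entry (key "/data/sales") being split into the same four fields.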

32 Wildcards In Map Files Wildcards In Map Files
The asterisk (*), which matches any key, and the ampersand (&), which instructs automounter to substitute the value of the key for the & character. Using the Ampersand Wildcard In /etc/auto.home the key is peter, so the ampersand wildcard is interpreted to mean peter too. This means you'll be mounting the server:/home/peter directory. Using the Asterisk Wildcard In the /etc/auto.home example below, the key is *, meaning that automounter will attempt to satisfy any attempt to enter the /home directory. But what's the value of the ampersand? It is actually assigned the value of the key that triggered the access to the /etc/auto.home file. If the access was for /home/peter, then the ampersand is interpreted to mean peter, and server:/home/peter is mounted. If access was for /home/kalle, then akvarius:/home/kalle would be mounted. peter server:/home/& SAMPLE AUTOFS FILES (live in /etc): # Sample auto.master file # This is an automounter map and it has the following format # key [ -mount-options-separated-by-comma ] location # For details of the format look at autofs(5). #/misc /etc/auto.misc --timeout=60 #/misc /etc/auto.misc #/net /etc/auto.net /home /etc/auto.home # # File: /etc/auto.home * lina:/home/& # snippet from /etc/nsswitch.conf automount: files nis * bigboy:/home/&
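The wildcard resolution described above can be sketched in a few lines. The map entry "* lina:/home/&" means "for any key, mount lina:/home/<key>"; the resolve helper below is hypothetical, a simplified model of what the automounter does internally:

```python
# Sketch of automounter wildcard resolution for an indirect map.
# Entries are (key, location) pairs; "*" matches any key and "&" is
# replaced by the key that triggered the lookup. Illustration only.

def resolve(map_entries, key):
    for map_key, location in map_entries:
        if map_key == key or map_key == "*":
            # "&" stands for the key that triggered the access
            return location.replace("&", key)
    return None  # key not covered by this map

auto_home = [("*", "lina:/home/&")]
print(resolve(auto_home, "peter"))   # -> lina:/home/peter
print(resolve(auto_home, "kalle"))   # -> lina:/home/kalle
```

An explicit entry such as ("peter", "server:/home/&") placed before the "*" line would win for /home/peter while all other users still fall through to the wildcard, mirroring how explicit keys and wildcards coexist in a real auto.home file.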

33 Other DFS Systems RFS: Remote File Sharing AFS: Andrew Filesystem
developed by AT&T to address problems with NFS stateful system supporting Unix filesystem semantics uses the same SVR4 commands as NFS, just use rfs as the file type standard in SVR4 but not found in many other systems AFS: Andrew Filesystem developed as a research project at Carnegie-Mellon University now distributed by a third party (Transarc Corporation) available for most Unix platforms and PCs running DOS, OS/2, Windows uses its own set of commands remote systems are accessed through a common interface (the /afs directory) supports local data caching and enhanced security using Kerberos fast gaining popularity in the Unix community Other distributed filesystems include AFS (the Andrew Filesystem) and RFS (remote file system). Industry pundits expected AFS to become the preferred distributed filesystem as the commercial world became more security conscious. However, this has changed in the last year: NFSv4 has gained many more features than AFS and is developing into a fast and secure network filesystem.

34 Summary Unix supports file sharing across a network
NFS is the most popular system and allows Unix to share files with other operating systems Servers share directories across the network using the share command Permanent shared drives can be configured in /etc/fstab Clients use mount to access shared drives Use mount and exportfs to inspect mounted and exported filesystems/directories As you have seen, NFS can be a very powerful tool in providing clients with access to large amounts of data, such as a database stored on a centralized server. Many of the network-attached storage products currently available on the market rely on NFS - a testament to its popularity, increasing stability, and improving security.

