Mt Xia: Technical Consulting Group

Business Continuity / Disaster Recovery / High Availability
Data Center Automation / Audit Response / Audit Compliance


This document contains configuration standards for utilizing NFS/NAS filesystems on AIX machines. These standards provide consistency across the wide variety of applications run on the AIX platform, and they provide interoperability with the requirements of HACMP.

The NAS servers can provide High Availability Network File System (NFS) services to the AIX machines across the Ethernet network. This is not to be confused with the IBM product HANFS. The NAS servers are clustered to provide High Availability, but they provide normal NFS services to the AIX machines. The NAS servers are connected to the LAN via fibre channel. The actual disk storage for the NAS-exported NFS directories is located within the Network Appliance NAS server machines.


NFS server considerations

Because the NFS protocol is designed to be operating system independent, the connection between the server and the client is stateless. Statelessness means that the server does not have to maintain state about its clients in order to function correctly (it does not mean that the server is not allowed to maintain such state). In the NFS configuration the server is dumb and the client is smart, which means that the client has to convert the file access method provided by the server into an access method understood by its applications.

Considering this, there is really not much to do on the server side but export the chosen file system, directory, or file; start the daemons; and monitor performance.

The information in /etc/rmtab can become stale if the NAS server goes down abruptly, or if clients are physically removed without unmounting the file system. In this case, you would remove all locks and the rmtab file.
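A hedged sketch of that cleanup on an AIX NFS server follows (the subsystem names are the standard AIX SRC subsystems for NFS locking; verify against your environment before using this on a production machine):

```shell
# Sketch only, not a tested procedure: stop the lock and status
# daemons, remove the stale remote-mount table, then restart.
stopsrc -s rpc.lockd
stopsrc -s rpc.statd
rm -f /etc/rmtab
startsrc -s rpc.statd
startsrc -s rpc.lockd
```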

The most common issue is whether to use a hard mount or a soft mount. A soft mount will retransmit a request a set number of times, defined by the retrans option. After that number of retransmissions has been used, the soft mount gives up and returns an error.

A hard mount retries a request until a server responds. The hard option is the default value. On hard mounts, the intr option should be used to allow a user to interrupt a system call that is waiting on a crashed server.

Both hard mounts and soft mounts use the timeo option to calculate the time between retransmits. The default value is 0.7 seconds for the first time-out. After that, the time-out increases exponentially up to a maximum of 30 seconds, where it stabilizes until a reply is received. Depending on the value set for the retrans option, a soft mount has probably given up already at this stage. When discussing time-outs and hard mounts, you should also choose between the two transport protocols selected by the proto mount option: TCP or UDP.
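For illustration (the server name and paths here are hypothetical), a soft mount with explicit retry settings and a hard, interruptible mount over TCP might be issued as follows; note that timeo is expressed in tenths of a second, so timeo=7 corresponds to the 0.7 second default:

```shell
# Soft mount: give up after 5 retransmissions, 0.7s initial time-out
mount -o soft,retrans=5,timeo=7 nasserver:/vol/data /mnt/data

# Hard mount (the default) over TCP, interruptible so a user can
# break out of a system call waiting on a crashed server
mount -o hard,intr,proto=tcp nasserver:/vol/data /mnt/data
```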

When using UDP, it is important to understand that if a write or read packet is lost on the network or dropped at the server, the full time-out interval will expire before the packet is retransmitted from the client. On UDP, there is no intermediate-ack mechanism that would inform the client, for example, that the server only received five of the expected six write fragment packets.

The reliable delivery mechanisms built into TCP will help maintain good performance in networks where the unreliable UDP transport fails. The reason is that TCP uses a packet level delivery acknowledgment mechanism that keeps fragments from being lost. Recall that lost fragments on UDP require re-sending the entire read or write request after a time-out expires. TCP avoids this by guaranteeing delivery of the request.

Finally, there is the choice of mounting in the background (bg) or in the foreground (fg). If bg is specified and an NFS server does not answer a mount request, another mount process starts in the background and keeps trying to establish the mount, leaving the original mount process free to handle other mount requests. Specify bg in the /etc/filesystems file when establishing a predefined mount that will be mounted during system startup. Mounts that are non-interruptible and running in the foreground can hang the client if the network or server is down when the client system starts up. If a client cannot access the network or server, the user must restart the machine in maintenance mode and edit the appropriate mount requests.
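A predefined NFS mount in /etc/filesystems following these recommendations might look like the stanza below (the server name and paths are hypothetical; adjust the options to your requirements):

```
/mnt/data:
        dev       = "/vol/data"
        vfs       = nfs
        nodename  = nasserver
        mount     = true
        options   = bg,soft,intr,vers=3,rsize=61440,wsize=61440
        account   = false
```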

When evaluating how many biods to run, you should consider the server capabilities as well as the typical NFS usage on the client machine. If there are multiple users or multiple processes on the client that will need to perform NFS operations to the same NFS mounted file systems, you have to be aware that contention for biod services can occur with just two simultaneous read or write operations.
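On AIX, the number of biod daemons on a client (and nfsd daemons on a server) can be adjusted with the chnfs command; the values below are chosen purely for illustration:

```shell
# On an NFS client: run 16 biod daemons
chnfs -b 16

# On an NFS server: run 32 nfsd daemons
chnfs -n 32
```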

For NFS Version 3 mounts, the read/write sizes can be both increased and decreased. The default read/write sizes are 32 KB. The maximum possible on AIX is 61440 bytes (60 x 1024). Using 60 KB read/write sizes may provide slight performance improvement in specialized environments.

To increase the read/write sizes on the NFS client, the mount must be performed with the sizes specified via the -o option; for example: -o rsize=61440,wsize=61440.
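For example, an NFS Version 3 mount at the 60 KB maximum might be issued as follows (the server and export names are illustrative):

```shell
# Version 3 mount using the maximum AIX read/write sizes
mount -o vers=3,rsize=61440,wsize=61440 \
      ddcshrnasp02:/vol/SHR01/s_cricket /ddcshrnasp02/vol/SHR01/s_cricket
```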


NAS NFS Server Information Requirements

To configure an RS/6000 machine running AIX to use one or more NFS exported directories from the NAS servers, the following items must be established, determined or configured on the NAS server:

  • Storage space requirements of NFS client must be determined.
  • NAS filesystem created or adjusted to accommodate NFS client requirements.
  • IP address and hostname of NFS client added to the /etc/hosts file of all NAS servers.
  • NFS client added to list of hosts allowed to connect to NAS exported filesystem.
  • Determine if NFS client is allowed "root" access to NAS exported filesystem.
  • Determine if NAS exported filesystem is "read only" or allows "read-write" access.
  • Determine if NAS exported filesystem should use "secure" option.
  • Determine if NAS exported filesystem is a "public" filesystem.
  • Determine if NFS client will use NFS version 2 or 3.
  • Use the maximum read/write size of the NAS NFS server for NFS version 3 clients.
  • Average number of expected simultaneous accesses from each system (to determine number of nfsd's required).
  • High water mark number of expected simultaneous accesses from each system (to determine number of nfsd's required).
  • Low water mark number of expected simultaneous accesses from each system (to determine number of nfsd's required).
  • Estimate of average volume of NFS network traffic (MB/hour).
  • Estimate of burst volume of NFS network traffic.

  • Filesystems which are to be accessed by a single machine should NOT be NFS mounted or be a part of the NAS solution. These filesystems should be directly allocated from the SAN environment.

  • NFS Client Configuration/Implementation Requirements

    The following items must be determined and configured for NFS clients accessing the NAS server:

    • Storage space requirements of NFS client must be determined.
    • IP address and hostname of the NAS server(s) should be added to the /etc/hosts file of each NFS client.
    • Determine if NFS client is allowed "root" access to NAS exported filesystem.
    • Calculation of additional biod's required to support NFS client requests.
    • Determine if NFS client requires "read only" or "read-write" access to NAS exported filesystem.
    • Determine if NFS client should use "secure" option.
    • Determine if NFS client will use NFS version 2 or 3.
    • Use the maximum read/write size of the NAS NFS server for NFS version 3 clients.
    • NFS mounts should all be "soft, background" mounts unless there is a specific requirement for other settings.
    • Clean up the /etc/rmtab on each client during startup/shutdown processes.
    • Determine and use maximum NFS read/write size when performing NFS mounts.
    • Average number of expected simultaneous accesses on each NFS client (to determine number of biod's required).
    • High water mark number of expected simultaneous accesses on each NFS client (to determine number of biod's required).
    • Low water mark number of expected simultaneous accesses on each NFS client (to determine number of biod's required).
    • Estimate of average volume of NFS network traffic (MB/hour).
    • Estimate of burst volume of NFS network traffic.

    The NFS client machines should use mount points which reflect the hostname of the NFS/NAS server and the exported directory name. For example:

    • A directory called "/vol/SHR01/s_cricket" which is exported from an NFS/NAS server with a hostname of "ddcshrnasp02" should have an NFS client mount point of "/ddcshrnasp02/vol/SHR01/s_cricket".
    • A directory called "/vol/SHR01/s_apache1" which is exported from an NFS/NAS server with a hostname of "ddcshrnasp02" should have an NFS client mount point of "/ddcshrnasp02/vol/SHR01/s_apache1".
    • And so on...
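    The convention above amounts to a simple rule: prepend the server hostname to the exported directory path. A minimal shell sketch of that rule (the function name is illustrative):

```shell
# Build the client mount point from the NFS/NAS server hostname
# and the exported directory, per the naming standard above.
nas_mount_point() {
    server="$1"        # e.g. ddcshrnasp02
    export_dir="$2"    # e.g. /vol/SHR01/s_cricket
    printf '/%s%s\n' "$server" "$export_dir"
}

nas_mount_point ddcshrnasp02 /vol/SHR01/s_cricket
# prints /ddcshrnasp02/vol/SHR01/s_cricket
```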

    If application requirements running on the client machines do not allow directory structures as described above, then symbolic links should be used to satisfy these application requirements.

    This same concept should also be used when mounting filesystems from SAN attached storage, as depicted in the document entitled "Filesystem Naming Standards".
