This document contains configuration standards for utilizing the NAS
server from an AIX machine. These standards are used by Mt Xia to
provide consistency across a wide variety of applications being run on
the AIX platform. These standards also provide interoperability with
the requirements of HACMP.
The NAS servers will provide High Availability Network File System
(NFS) services to the AIX machines across the Ethernet network. This is
not to be confused with the IBM product HANFS. The NAS servers are
clustered to provide High Availability, but provide normal NFS services
to the AIX machines. The NAS servers are connected to the EMC Symmetrix
machines via Fibre Channel. The actual disk storage for the exported NFS
directories is located on the EMC Symmetrix machines.
NFS Server Considerations
Because the NFS protocol is designed to be operating system
independent, the connection between the server and the client is
stateless. Statelessness means that the server does not have to
maintain state of its clients to be able to function correctly
(statelessness does not mean that the server is not allowed to maintain
state of its clients). In the NFS configuration the server is dumb and
the client is smart, which means that the client has to convert the file
access method provided by the server into an access method understood by
its applications.
Considering this, there is really not much to do on the server side
but export the chosen file system, directory, or file; start the
daemons; and manage performance.
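On AIX, for example, those server-side steps amount to little more than
the following (a minimal sketch; the NAS servers themselves may use a
vendor-specific equivalent):

    # Start the NFS subsystem group (nfsd, biod, rpc.mountd,
    # rpc.statd, rpc.lockd) under the System Resource Controller:
    startsrc -g nfs
    # Export everything listed in /etc/exports:
    exportfs -a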
The information in /etc/rmtab can become stale if the NAS server
goes down abruptly, or if clients are physically removed without
unmounting the file system. In this case, you would remove all locks
and the rmtab file.
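The exact cleanup procedure depends on the NAS vendor; on an AIX NFS
server, the equivalent cleanup might look like this sketch:

    # Stop the lock and status daemons to release stale locks:
    stopsrc -s rpc.lockd
    stopsrc -s rpc.statd
    # Remove the stale list of remote mounts:
    rm -f /etc/rmtab
    # Restart the daemons:
    startsrc -s rpc.statd
    startsrc -s rpc.lockd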
The most common issue is whether to use a hard mount or a soft mount.
A soft mount will retransmit a request a set number of times. This
retransmission count is defined by the retrans option. After the set
number of retransmissions has been used, the soft mount gives up and
returns an error.
A hard mount retries a request until a server responds. The hard
option is the default value. On hard mounts, the intr option should be
used to allow a user to interrupt a system call that is waiting on a
crashed server.
Both hard mounts and soft mounts use the timeo option to calculate
the time between retransmits. The default value is 0.7 seconds for the
first time-out. After that, the time-out increases exponentially until
it reaches a maximum of 30 seconds, where it stabilizes until a reply is
received. Depending on the value set for the retrans option, a soft
mount has probably given up already at this stage. When discussing
time-outs and hard mounts, you should also choose between two other
mount options: proto=tcp or proto=udp.
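These options are all supplied at mount time. As a sketch (the hostname
ftwnasp1 and the directories are illustrative), a soft mount over TCP
with explicit retry values might look like:

    # timeo is expressed in tenths of a second, so timeo=7
    # matches the 0.7 second default described above;
    # proto=udp is the alternative discussed next:
    mount -o soft,retrans=5,timeo=7,proto=tcp,vers=3 \
        ftwnasp1:/exp/pub /ftwnasp1/pub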
When using UDP, it is important to understand that if a write or
read packet is lost on the network or dropped at the server, the full
time-out interval will expire before the packet is retransmitted from
the client. On UDP, there is no intermediate-ack mechanism that would
inform the client, for example, that the server only received five of
the expected six write fragment packets.
The reliable delivery mechanisms built into TCP will help maintain
good performance in networks where the unreliable UDP transport fails.
The reason is that TCP uses a packet level delivery acknowledgment
mechanism that keeps fragments from being lost. Recall that lost
fragments on UDP require re-sending the entire read or write request
after a time-out expires. TCP avoids this by guaranteeing delivery of
the request.
Finally, there is the choice of mounting in the background (bg) or
in the foreground (fg). If bg is specified and an NFS server does not
answer a mount request, another mount process starts in the background
and keeps trying to establish the mount; the original mount process is
then free to handle other mount requests. Specify bg in the
/etc/filesystems file when establishing a predefined mount that
will be mounted during system startup. Mounts that are
non-interruptible and running in the foreground can hang the client if
the network or server is down when the client system starts up. If a
client cannot access the network or server, the user must start the
machine again in maintenance mode and edit the appropriate mount
requests.
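As a sketch, a predefined background mount in /etc/filesystems might
look like the following stanza (the hostname ftwnasp1 and the paths are
illustrative):

    /ftwnasp1/pub:
            dev       = "/exp/pub"
            vfs       = nfs
            nodename  = ftwnasp1
            mount     = true
            options   = bg,soft
            account   = false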
When evaluating how many biods to run, you should consider the
server capabilities as well as the typical NFS usage on the client
machine. If there are multiple users or multiple processes on the client
that will need to perform NFS operations against the same NFS-mounted
file systems, be aware that contention for biod services can occur with
just two simultaneous read or write operations.
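On AIX, the number of biod daemons can be changed with the chnfs
command. A sketch follows; the count of 8 is illustrative and should be
sized from the expected number of simultaneous operations:

    # Change the number of biod daemons started on the client:
    chnfs -b 8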
For NFS Version 3 mounts, the read/write sizes can be both increased
and decreased. The default read/write sizes are 32 KB. The maximum
possible on AIX is 61440 bytes (60 x 1024). Using 60 KB read/write
sizes may provide a slight performance improvement in specialized
environments.
To increase the read/write sizes on the NFS client, perform the mount
with the read/write sizes specified via the -o option, for example:
-o rsize=61440,wsize=61440.
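A complete invocation might therefore look like this (the hostname and
mount point are illustrative):

    mount -o rsize=61440,wsize=61440 ftwnasp1:/exp/pub /ftwnasp1/pub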
NAS NFS Server Information Requirements
To configure an RS/6000 machine running AIX to use one or more NFS
exported directories from the NAS servers, the following items must be
established, determined or configured on the NAS server:
- Storage space requirements of NFS client must be determined.
- NAS filesystem created or adjusted to accommodate NFS client
requirements.
- IP address and hostname of NFS client added to the /etc/hosts file of
all NAS servers.
- NFS client added to list of hosts allowed to connect to NAS
exported filesystem.
- Determine if NFS client is allowed "root" access to the NAS exported
filesystem (see the export sketch following this list).
- Determine if NAS exported filesystem is "read only" or allows
"read-write" access.
- Determine if NAS exported filesystem should use "secure" option.
- Determine if NAS exported filesystem is a "public" filesystem.
- Determine if NFS client will use NFS version 2 or 3.
- Use the maximum read/write size of the NAS NFS server for NFS
version 3 clients.
- Average number of expected simultaneous accesses from each system
(to determine number of nfsd's required).
- High water mark number of expected simultaneous accesses from each
system (to determine number of nfsd's required).
- Low water mark number of expected simultaneous accesses from each
system (to determine number of nfsd's required).
- Estimate of average volume of NFS network traffic (MB/hour).
- Estimate of burst volume of NFS network traffic.
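Several of these items translate directly into export options. The NAS
servers may use a vendor-specific export syntax; expressed in AIX
/etc/exports style (the hostnames below are illustrative), the result
might look like:

    # Read-only filesystem, any client may mount it:
    /exp/pub  -ro
    # Read-write filesystem restricted to two clients, with
    # "root" access granted to one of them:
    /exp/sap  -rw,access=ftwsapp1:ftwsapp2,root=ftwsapp1

After editing /etc/exports, running exportfs -a makes the entries
active.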
The Volume Group and Logical Volume naming standards used on the NAS
servers should follow the conventions outlined in the Opensystems
Standards document titled
"Filesystem Naming".
The filesystem mount points on the NAS servers should adhere to the
following:
- The first directory of the exported filesystem mount point
should be "/exp". This will segment all exported file systems on the
NAS server under a single top-level directory. The next directory of
the mount point should reflect how the NAS file system will be used.
For example, a file system accessible to the whole network would use
the mount point "/exp/pub". Other examples include:
- /exp/pub - File system accessible by any machine on the Mt Xia
network.
- /exp/exe - File system shared by all EXE machines in the Fort Worth
Data Center.
- /exp/eai - File system shared by all production middleware cluster
machines.
- /exp/sap - File system shared by all SAP cluster machines.
- Filesystems which are to be accessed by a single machine should
NOT be NFS mounted or a part of the NAS solution.
These filesystems should be directly allocated from the EMC drives.
- Subsequent directories can be determined by the content to be
contained within the directories. Examples of this include:
- /exp/home/bin - File system which contains programs and scripts
shared by all machines in the Fort Worth Data Center.
- /exp/eai/scripts - File system which contains scripts used by all
production middleware cluster machines.
- /exp/sap/data - File system which contains data shared between the
SAP machines.
NFS Client Configuration/Implementation Requirements
The following items must be determined and configured for NFS clients
accessing the NAS server:
- Storage space requirements of NFS client must be determined.
- IP address and hostname of NAS Server(s) should be added to the
/etc/hosts file of each NFS client.
- Determine if NFS client is allowed "root" access to the NAS exported
filesystem.
- Calculation of additional biod's required to support NFS client
requests.
- Determine if NFS client requires "read only" or "read-write" access
to NAS exported filesystem.
- Determine if NFS client should use "secure" option.
- Determine if NFS client will use NFS version 2 or 3.
- Use the maximum read/write size of the NAS NFS server for NFS
version 3 clients.
- NFS mounts should all be "soft, background" mounts unless there is
a specific requirement for other settings.
- Clean up the /etc/rmtab entries on each client during
startup/shutdown processes.
- Determine and use maximum NFS read/write size when performing NFS
mounts.
- Average number of expected simultaneous accesses on each NFS client
(to determine number of biod's required).
- High water mark number of expected simultaneous accesses on each
NFS client (to determine number of biod's required).
- Low water mark number of expected simultaneous accesses on each NFS
client (to determine number of biod's required).
- Estimate of average volume of NFS network traffic (MB/hour).
- Estimate of burst volume of NFS network traffic.
The NFS client machines should use mount points which reflect the
hostname of the NAS server and the exported directory name, as in the
following examples (a worked mount sketch follows the list):
- A directory called "/exp/pub" which is exported from a NAS server
with a hostname of "ftwnasp1" should have an NFS client mount point of
"/ftwnasp1/pub".
- A directory called "/exp/home/bin" which is exported from a NAS
server with a hostname of "ftwnasp1" should have an NFS client mount
point of "/ftwnasp1/home/bin".
- A directory called "/exp/eai/scripts" which is exported from a NAS
server with a hostname of "ftwnasp1" should have an NFS client mount
point of "/ftwnasp1/eai/scripts".
- A directory called "/exp/sap/data" which is exported from a NAS
server with a hostname of "ftwnasp1" should have an NFS client mount
point of "/ftwnasp1/sap/data".
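Putting the pieces together, a client-side setup following these
standards might look like this sketch (the hostname ftwnasp1 and the
exported directory are illustrative):

    # Create the mount point named after the NAS server:
    mkdir -p /ftwnasp1/pub
    # Soft, background mount using the maximum read/write sizes:
    mount -o soft,bg,rsize=61440,wsize=61440 \
        ftwnasp1:/exp/pub /ftwnasp1/pub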
If the requirements of applications running on the client machines do
not allow directory structures as described above, then symbolic links
should be used to satisfy those requirements.
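For example, if an application expects its data under a fixed path
(the path /usr/sap/data below is hypothetical), a symbolic link can
bridge the two layouts:

    # Point the application's expected path at the standard mount point:
    ln -s /ftwnasp1/sap/data /usr/sap/data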