HOWTO set up a small server
NFS Server (Network File System)
Security Notes for NFS version 2 and 3
NFS version 2 and 3 servers only provide (insecure) host-based authentication: hosts are allowed or denied based on hostnames and/or IP addresses. Authorization of users is controlled on the clients via the file permissions, which are based on user/group IDs. As a result, the mapping of user names to user IDs (and group names to group IDs) must be identical on the server and all its clients (or they must be mapped dynamically). The former can be achieved by a central database holding the user accounts, for example, based on OpenLDAP.
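For example, the consistency of the mapping can be quickly checked by running id for the same account (here a hypothetical user alice) on the server and on each client; the numeric IDs in the output must match everywhere:
# id alice
uid=1000(alice) gid=1000(alice) groups=1000(alice)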
The fact that decisions about authorization are made on each client also means that a user who can become root on any client has full access to any file/directory mounted from the NFS server. Although access by root itself can be prevented by appropriately configuring the exports on the server (e.g., with the root_squash option), it must be kept in mind that root can create users with arbitrary IDs on the client.
Installation
Besides the kernel-space support, the server and clients require some user-space utilities: the portmapper (portmap) and client utilities (most notably, the network status monitor rpc.statd):
# apt-get install portmap nfs-common
The server additionally requires the user-space utilities of the NFS kernel server contained in the package nfs-kernel-server (most notably, rpc.mountd and rpc.nfsd):
# apt-get install nfs-kernel-server
If your server supports quotas, you will also need the quota package containing rpc.rquotad:
# apt-get install quota
Exporting Directories
The export of directories is configured in /etc/exports on the server. Each line contains the export point (a directory) followed by a white-space separated list of clients (hostnames, IP addresses, networks, the wildcard “*”, etc.) to which access should be granted. Each client can be followed by an optional list of options in brackets:
Excerpt: /etc/exports
directory client1(option1,option2,...) client2(...)
The export of the home directories could be done like this:
Excerpt: /etc/exports
/home *(rw,root_squash,sync,no_subtree_check)
For diskless clients, the directory /opt/ltsp can be exported read-only to any client with the following entry:
Excerpt: /etc/exports
/opt/ltsp *(ro,no_root_squash,async,no_subtree_check)
In contrast, a public directory which is writable can be specified like this (/public must be writable by all users on the NFS server):
Excerpt: /etc/exports
/public *(rw,all_squash,sync,no_subtree_check)
The meaning of the options is as follows:
- rw - the filesystem is writable
- ro - the filesystem is exported read-only; this is the default
- root_squash - map the root UID/GID to the anonymous UID/GID (nobody/nogroup); this is the default
- all_squash - map all UIDs/GIDs to the anonymous UID/GID (nobody/nogroup)
- no_root_squash - do not map the root (nor any other) UID/GID to the anonymous UID/GID (nobody/nogroup)
- sync - reply to clients only after data have been committed to stable storage; this is the default
- async - reply to clients before data have been committed to stable storage; this improves performance, but should only be used on ro filesystems
See man -L en exports for more information on the supported options. After changes to the configuration file, the NFS server can be forced to re-read it with:
# exportfs -r
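The currently active export list, including the effective options, can then be verified with:
# exportfs -v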
Mounting NFS Filesystems
For example, mounting the home directories as exported in the above example to /mnt can be done by the following command:
# mount -t nfs -o hard,intr server.example.com:/home /mnt
In order to survive reboots, the following line can be added to /etc/fstab:
Excerpt: /etc/fstab
server.example.com:/home /mnt nfs hard,intr 0 0
The option hard will lead to indefinite retries by the client to access the filesystem if requests time out (this is the default). This way, a program trying to access the NFS filesystem while the server is down will hang and should continue without problems when the server is back again. The option intr allows a filesystem operation to be interrupted (by default it cannot be interrupted).
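On the client, the options actually in effect for a mounted NFS filesystem can be inspected, e.g., with nfsstat (part of the nfs-common package):
# nfsstat -m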
Access Control
Note: Take into account that the configuration described below (in the files /etc/hosts.allow and /etc/hosts.deny) also takes effect for other daemons like inetd and even sshd. The security gain is very small, so you might want to skip this section.
Simple access control for the portmapper and the NFS daemons can be configured in /etc/hosts.allow and /etc/hosts.deny. (These access control rules do not apply to the kernel part of NFS.) The two files are processed in the following order:
- A client is granted access if a (daemon, client) pair matches an entry in /etc/hosts.allow.
- Otherwise, a client is denied access if it matches an entry in /etc/hosts.deny.
- Otherwise, access will be granted.
Rules are specified in /etc/hosts.allow and /etc/hosts.deny for any daemon by specifying daemon: client pairs. A daemon is either a single daemon like sshd (for the OpenSSH daemon) or a wildcard such as ALL. The client part consists of a comma-separated list of hostnames, IP addresses, networks, or wildcards like ALL. By default, access to all daemons is granted for all clients. A very restrictive setup could deny access to all daemons (even sshd, be careful!):
File: /etc/hosts.deny
ALL: ALL
It might be preferable, and avoid some trouble, to use a less restrictive configuration instead. You could deny access to the NFS-related daemons only:
Excerpt: /etc/hosts.deny
portmap: ALL
statd: ALL
mountd: ALL
rquotad: ALL
Afterwards, access has to be permitted in /etc/hosts.allow for the desired hosts. Access to the NFS-related daemons can be granted, e.g., from the 223.1.2.0/24 network by:
Excerpt: /etc/hosts.allow
portmap: 223.1.2.0/24
statd: 223.1.2.0/24
mountd: 223.1.2.0/24
rquotad: 223.1.2.0/24
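Whether the rules behave as intended can be tested without an actual connection, e.g., with tcpdmatch from the tcpd package (the client addresses below are just examples):
# tcpdmatch portmap 223.1.2.10
# tcpdmatch mountd 10.0.0.1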
Port Configuration
Typically, only portmap and nfsd use fixed ports (111 and 2049, respectively). The other services bind to a random port during each start-up by default. This makes it particularly difficult to set up packet filter rules.
The rpc.statd daemon can be forced to listen on port 4000 and to use port 4001 as its outgoing port by setting:
Excerpt: /etc/default/nfs-common
STATDOPTS="-p 4000 -o 4001"
Note: Unfortunately, there is currently an unfixed problem in nfs-utils that makes rpc.statd listen on an additional random privileged port.
The port used by rpc.mountd can be configured as 4002 by the following parameter:
Excerpt: /etc/default/nfs-kernel-server
RPCMOUNTDOPTS="-p 4002"
The port used by lockd can be set to 4003 by passing appropriate parameters to the lockd kernel module upon loading. To do so, create the following file:
File: /etc/modprobe.d/nfs_lockd
options lockd nlm_udpport=4003 nlm_tcpport=4003
Finally, if you use quotas, you can also assign a fixed port to rpc.rquotad, e.g., 4004:
Excerpt: /etc/default/quota
RPCRQUOTADOPTS="-p 4004"
You will have to restart the daemons after changing the listen ports (see Restart of Daemons).
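After the restart, you can check that the daemons are actually bound to the fixed ports by querying the portmapper, e.g.:
# rpcinfo -p | egrep 'status|mountd|nlockmgr|rquotad'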
Enabling NFS Version 4
In comparison to NFS versions 2 and 3, there are two major differences in version 4:
- It supports secure authentication and encryption, e.g., based on Kerberos.
- It requires only a single port (TCP 2049) which is advantageous for firewalling.
In this section (insecure) host-based authentication is still used. The next section deals with adding Kerberos authentication.
Firstly, you will need to enable rpc.idmapd on both the server and the clients for NFS version 4:
Excerpt: /etc/default/nfs-common
NEED_IDMAPD=yes
Set the correct domain of your network in the configuration file of rpc.idmapd on the server and its clients:
Excerpt: /etc/idmapd.conf
Domain = example.com
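The Domain setting belongs to the [General] section of that file. A complete minimal file might look as follows; the [Mapping] entries shown are the usual Debian defaults and only need to be changed in special setups:
Excerpt: /etc/idmapd.conf
[General]
Domain = example.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup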
Secondly, you must export an explicit root directory, e.g., /srv/nfs4, in your NFS configuration, which is marked with the option fsid=0:
# mkdir -p /srv/nfs4
The corresponding entry in the NFS configuration file looks like this for access from any (“*”) client:
Excerpt: /etc/exports
/srv/nfs4 *(rw,sync,fsid=0,crossmnt,no_subtree_check)
You might not have all volumes located under the root directory. Other volumes can be bind mounted under that root. For example, this can be achieved for the directory /home by the following commands:
# mkdir -p /srv/nfs4/home
# mount --bind /home /srv/nfs4/home
with the corresponding entry in the NFS configuration file:
Excerpt: /etc/exports
/srv/nfs4/home *(rw,sync,no_subtree_check)
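Whether the bind mount is in place before exporting can be checked, e.g., with:
# mount | grep /srv/nfs4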
Finally, restart the corresponding init script nfs-common (see also Restart of Daemons) and make the NFS server re-read its configuration:
# exportfs -r
Note: The bind mounts can be added to /etc/fstab in order to survive reboots:
Excerpt: /etc/fstab
/home /srv/nfs4/home none bind 0 0
Enabling Kerberos Authentication in NFS Version 4
Prerequisite: Heimdal
In order to additionally add secure authentication and encryption, rpc.svcgssd must be started on the server:
Excerpt: /etc/default/nfs-kernel-server
NEED_SVCGSSD=yes
Additionally, rpc.gssd must be started on the server and its clients (first line). It requires a keytab file (read more below), which is set to /etc/krb5.keytab.nfs with the second variable.
Excerpt: /etc/default/nfs-common
NEED_GSSD=yes
RPCGSSDOPTS="-k /etc/krb5.keytab.nfs"
For secure NFS version 4 connections, hosts have to authenticate via Kerberos. Therefore, the server as well as the client require a Kerberos principal nfs/hostname.example.com and a keytab file. For the server it can be created by:
# kadmin -l
> add --random-key nfs/server.example.com
> ext_keytab -k /etc/krb5.keytab.nfs nfs/server.example.com
> q
For the client, the hostname part must be replaced by the FQDN of the client. The keytab file can easily be created on the server (e.g., as file /tmp/client.keytab) and then has to be copied to /etc/krb5.keytab.nfs on the client (readable only for root!):
# kadmin -l
> add --random-key nfs/client.example.com
> ext_keytab -k /tmp/client.keytab nfs/client.example.com
> q
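The resulting keytab files can be inspected with Heimdal's ktutil; the corresponding nfs/... principal should be listed:
# ktutil -k /etc/krb5.keytab.nfs list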
In the NFS configuration file the security flavor can be configured with the sec=flavor option. Valid flavors are:
- sys - default; no cryptographic security
- krb5 - Kerberos authentication
- krb5i - as krb5, with additional integrity (checksums)
- krb5p - as krb5i, with additional privacy (encryption); strongest security
In the NFS version 4 configuration examples from the last section, e.g., sec=krb5 can be added to the options list:
Excerpt: /etc/exports
/srv/nfs4 *(rw,sync,fsid=0,crossmnt,no_subtree_check,sec=krb5)
/srv/nfs4/home *(rw,sync,no_subtree_check,sec=krb5)
Finally, restart the corresponding init scripts nfs-common and nfs-kernel-server (see Restart of Daemons).
Note 1: You may repeat the export lines to enable further Kerberos flavors and/or to specify additional NFS version 2 or 3 filesystems.
Note 2: The syntax gss/krb5, gss/krb5i, and gss/krb5p in place of a client (hostname, IP/network address, wildcard) in /etc/exports is deprecated since Linux 2.6.23.
Mounting NFS Version 4 Filesystems
When mounting an NFS version 4 filesystem, the root directory has to be omitted. For the above examples, mounting the home directories to /mnt can be done by the following command. The option -o sec=krb5 must be added if the filesystem has sec=krb5 set in the server configuration (accordingly for krb5i and krb5p):
# mount -t nfs4 -o hard,intr,sec=krb5 server.example.com:/home /mnt
In order to survive reboots, the following line can be added to /etc/fstab:
Excerpt: /etc/fstab
server.example.com:/home /mnt nfs4 hard,intr,sec=krb5 0 0
Users need a Kerberos ticket before they can access the NFS mounts. If a directory is only accessible by a certain user, a ticket for the corresponding principal will be required.
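A typical session of a (hypothetical) user alice could look like this: obtain a ticket, verify it, access the mount, and destroy the ticket afterwards:
$ kinit alice
$ klist
$ ls /mnt/alice
$ kdestroy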
Note: If the ticket is destroyed after the access, the directory will still be accessible for approximately 30 minutes. Interestingly, getting a ticket for another principal (to access another user's directory) is ignored during that time. Unfortunately, I did not find any documentation about this behavior.
Restart of Daemons
Finally, if you change one of the configuration files of the init scripts (/etc/default/init_script_name), you will have to restart the related init script. If any daemon is disabled in any of these files, it is advisable to first run the corresponding init script with the stop parameter, then change the configuration file of the init script, and afterwards run the script with the start parameter. Otherwise, a daemon that is disabled, but still running, might not be stopped during a restart. The list of relevant init scripts is as follows.
The nfs-kernel-server script is only to be restarted on the server, and quotarpc will only exist if quotas are used:
# /etc/init.d/nfs-common restart
# /etc/init.d/nfs-kernel-server restart
# /etc/init.d/quotarpc restart
Note: If you changed the listen ports of lockd, you will have to reboot the system, as there is no way to reload the kernel module.
You can verify that the necessary services are running by querying the portmapper (for NFS versions 2 and 3):
# rpcinfo -p
There should be portmapper and status mentioned in the list on the server and on the clients. On the server, nfs, nlockmgr, and mountd also have to appear in the list. If quotas are used, rquotad must be listed as well.
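On the server, an abbreviated output with the fixed ports from the Port Configuration section might look similar to the following (the RPC program numbers are standardized; additional version lines are omitted here):
# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100024    1   udp   4000  status
    100003    2   tcp   2049  nfs
    100021    1   udp   4003  nlockmgr
    100005    1   udp   4002  mountd
    100011    1   udp   4004  rquotad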
Networking Requirements
Prerequisite: Shorewall
In case of a packet filter (Shorewall), you will have to permit access from the clients to all the ports specified in the Port Configuration section. Without fixed ports, a packet filter can hardly be set up.
Note: rpc.statd listens on an additional random privileged port due to an unfixed issue. This is not considered here and makes firewalling extremely difficult.
Excerpt: /etc/shorewall/rules
# NFS Kernel Server
#
# portmap
ACCEPT net $FW tcp 111
ACCEPT net $FW udp 111
# (rpc.)nfsd
ACCEPT net $FW tcp 2049
ACCEPT net $FW udp 2049
# rpc.statd
ACCEPT net $FW tcp 4000
ACCEPT net $FW udp 4000
# ... and an additional, random UDP port
# rpc.mountd
ACCEPT net $FW tcp 4002
ACCEPT net $FW udp 4002
# lockd
ACCEPT net $FW tcp 4003
ACCEPT net $FW udp 4003
# rpc.rquotad
ACCEPT net $FW tcp 4004
ACCEPT net $FW udp 4004
#
and restart the packet filter:
# shorewall restart
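The modified rule set can also be validated for errors without activating it:
# shorewall check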