How to Share Document Store with NFS

This document explains how the SavaPage Document Store can be placed on an NFS share by configuring an Alternative File Location.

In our example we use one (1) NFS Server that is shared by two (2) SavaPage Servers, with the following IP addresses:

NFS Server 192.168.1.10
SavaPage Server #1 192.168.1.20
SavaPage Server #2 192.168.1.21

This setup allows for a Failover scenario where a single SavaPage proxy front-end either routes to the Primary SavaPage Server #1 or to the Fall-back SavaPage Server #2.

Failover requires that all SavaPage instances have an identical configuration, connect to a single external database on a separate node, and use the same Document Store, also on a separate node.
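
Sharing one external database is a precondition for this setup. As an illustration only, both instances would point server.properties at the same database node; the address 192.168.1.30 and the property values below are hypothetical, so check the SavaPage manual for the exact keys your version supports:

# Illustrative only -- verify these property names against the SavaPage manual
database.type=PostgreSQL
database.url=jdbc:postgresql://192.168.1.30:5432/savapage
database.user=savapage
database.password=secret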

Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984. It allows a server to export local file systems over the network, so that remote hosts can interact with them as if they were mounted locally. With the help of NFS we can thus configure a centralized storage solution.

User authentication is handled differently in different NFS versions. NFSv2/3 handles permissions solely on the basis of UID and GID: file permissions on the server are matched against the user and group IDs on the client. That is why NFSv2/3 is insecure by design in environments where users have root access to client machines: UID spoofing is trivial in that case. NFSv4 offers advanced authentication via Kerberos 5.
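
For example, a file created on the share by a client process running under UID 1002 is stored server-side as UID 1002, regardless of which account name owns that UID on either machine (the account 'alice' is hypothetical):

# On the client
id -u alice          # -> 1002
# On the server the same UID may map to another account, or to none
getent passwd 1002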

In our example we opt for the easy NFSv3 configuration, since we assume an enclosed environment where the IP addresses of the SavaPage clients, as whitelisted on the NFS server, provide adequate authentication.

When creating the User Account for our SavaPage Servers we make sure to use a system group savapage with the same GID as the system group savapage on the NFS server. Therefore we create the system group separately, with a fixed GID that is not already in use on either the clients or the server. Normally, as you can check in /etc/login.defs, the SYS_GID_MIN..SYS_GID_MAX range runs from 100 to 999. In our example we pick 999.
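
To check the reserved range and confirm that GID 999 is still free, run the following on every machine involved:

grep -E 'SYS_GID_(MIN|MAX)' /etc/login.defs
getent group 999 || echo "GID 999 is free"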

# Execute on NFS server
sudo groupadd savapage -r -g 999
 
# Execute on SavaPage server #1
sudo groupadd savapage -r -g 999
sudo useradd savapage -g 999
 
# Execute on SavaPage server #2
sudo groupadd savapage -r -g 999
sudo useradd savapage -g 999
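# Verify on each machine that the group resolves to the same GID
# (expect: savapage:x:999:...):
getent group savapage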
 
# Execute on NFS server
sudo apt-get install nfs-kernel-server nfs-common
# According to the FHS (Filesystem Hierarchy Standard), /srv contains
# site-specific data served by the system, such as data offered by web
# or FTP servers. So we create this directory to share:
sudo mkdir -p /srv/savapage/data
 
# Files written to the share by user 'savapage' will be
# stored with that user's UID and group 'savapage'.
# Files written by 'root' are stored as nobody:savapage,
# since NFS maps root to nobody by default (root_squash).
sudo chown nobody:savapage /srv/savapage/data
 
# Grant (+) read (r), write (w), execute (x) permission
# to users who are members of the directory's group (g).
# The setgid bit (s) makes files created in the directory
# inherit the directory's group.
sudo chmod g+rwxs /srv/savapage/data
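# Verify owner, group, and setgid bit
# (expect: drwxrwsr-x nobody savapage):
stat -c '%A %U %G' /srv/savapage/data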
# Modify /etc/exports and "export" our NFS share
sudo vi /etc/exports
 
# Add the uncommented line below to allow the NFS version 3 share for
# SavaPage Server #1 and #2.
# 'rw' grants the client both read and write access
#    within the shared directory.
# 'sync' replies to requests only after the changes
#    have been committed to stable storage.
# 'no_subtree_check' disables subtree checking. When a shared
#    directory is a subdirectory of a larger file system, NFS
#    scans every directory above it to verify its permissions
#    and details. Disabling the subtree check may increase the
#    reliability of NFS, but reduces security.
 
/srv/savapage/data 192.168.1.20(rw,sync,no_subtree_check) 192.168.1.21(rw,sync,no_subtree_check)
 
# Apply exports
sudo exportfs -rav
 
# Restarting the kernel NFS server is not needed, since
# 'exportfs -r' already re-exports all shares:
# sudo systemctl restart nfs-kernel-server.service

Apply the following steps on SavaPage Server #1 and #2.

# NFS client
sudo apt-get install nfs-common
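# Optionally list the directories exported by the NFS server
# (showmount is part of nfs-common):
showmount -e 192.168.1.10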
# Create the directory to mount the NFS share.
sudo mkdir -p /mnt/nfs/savapage/data
# Explicitly mount as NFS version 3 (the default mount type nfs4 is
# typically combined with Kerberos authentication).
sudo mount -v -o vers=3 192.168.1.10:/srv/savapage/data /mnt/nfs/savapage/data
# Check NFS share in the outputs of ...
df -h
# or ...
mount | grep savapage

To mount the NFS share at boot time:

sudo vi /etc/fstab

Add lines below:

# -------------------------------------------------------------------------------
# vers=3 : explicitly mount as NFS version 3 (default type nfs4 is typically
#          combined with Kerberos)
# hard   : NFS requests are retried indefinitely (if NFS server is unresponsive)
# intr   : NFS requests can be interrupted if server goes down or is unreachable
#          (ignored since Linux kernel 2.6.25, but harmless to keep)
# -------------------------------------------------------------------------------
192.168.1.10:/srv/savapage/data /mnt/nfs/savapage/data  nfs  vers=3,rw,sync,hard,intr  0  0
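
To verify the entry without rebooting, mount all file systems listed in /etc/fstab and inspect the result:

sudo mount -a
findmnt /mnt/nfs/savapage/data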

In case you want to unmount the share:

sudo umount /mnt/nfs/savapage/data
# Use the -l (--lazy) option to detach a busy file system now;
# it is cleaned up as soon as it is no longer busy.
sudo umount -l /mnt/nfs/savapage/data
# If the remote NFS server is unreachable, use the -f (--force)
# option to force an unmount. WARNING: this may corrupt
# the data on the file system.
sudo umount -f /mnt/nfs/savapage/data

Add the following lines to /opt/savapage/server/server.properties and restart the SavaPage Server(s).

/opt/savapage/server/server.properties
# ...
app.dir.doc.store.archive=/mnt/nfs/savapage/data/doc-archive
app.dir.doc.store.journal=/mnt/nfs/savapage/data/doc-journal
# ...
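
Before restarting, you may want to confirm that the savapage account can write to the share; the file below is just a throw-away test file:

sudo -u savapage touch /mnt/nfs/savapage/data/write-test
sudo -u savapage rm /mnt/nfs/savapage/data/write-test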