[Gluster-users] Being Gluster NFS off
Jiffin Tony Thottan
jthottan at redhat.com
Mon Oct 10 06:01:23 UTC 2016
I am trying to list out the glusterd issues with the 3.8 feature "Gluster
NFS being off by default".
As per the current implementation:

1.) On a freshly installed setup with 3.8/3.9, if you create a volume,
Gluster NFS won't come up by default, and in the vol info we can see
"nfs.disable: on".
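A minimal sketch of checking this on a fresh install; the volume name "testvol" and the brick path are placeholders, not from the original mail:

```shell
# On a freshly installed 3.8/3.9 setup, create and start a volume
# (hypothetical volume name and brick path):
gluster volume create testvol server1:/bricks/brick1
gluster volume start testvol

# The new default should be visible among the volume options:
gluster volume info testvol
# expected to include a line like:
#   nfs.disable: on
```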
2.) For existing volumes (created in 3.7 or below), there are two cases:

a.) If there are only volumes with the default configuration, Gluster NFS
won't come up, and "nfs.disable: on" won't be displayed in the vol info.
In the volume status command, the pid of Gluster NFS will be N/A.

b.) If there is a volume with "nfs.disable off" set explicitly, then after
the upgrade Gluster NFS will come up and export all the existing volumes,
and the vol info will show the same value as in a.)
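The two cases above can be sketched with the gluster CLI; the volume name "legacyvol" is a hypothetical stand-in for a volume created on 3.7 or below:

```shell
# Case b: a volume that had Gluster NFS enabled explicitly before the
# upgrade (hypothetical volume name). With this option set, Gluster NFS
# comes up after the upgrade and exports all existing volumes:
gluster volume set legacyvol nfs.disable off

# Case a: for default-configuration volumes, the NFS server entry in
# the status output shows its pid as N/A:
gluster volume status
```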
Currently, three bugs [1,2,3] have been opened to address these issues.
As per the 3.8 release note, Gluster NFS should be up for all existing
volumes with the default configuration. We are planning to change this
behavior from 3.9 onwards, and Atin sent out a patch.

With his patch, after an upgrade all the existing volumes with the
default configuration will have the nfs.disable value set to "on"
explicitly in the vol info. So Gluster NFS won't export those volumes at
all, and "gluster v status" will not display the status of the Gluster
NFS server. This patch also solves bugs [2] and [3].
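A short sketch of what an admin would see and do after an upgrade with the patch applied; the volume name "myvol" is hypothetical:

```shell
# After the upgrade, a default-configuration volume carries the option
# explicitly (hypothetical volume name):
gluster volume info myvol
# expected to include:
#   nfs.disable: on

# To restore the old behavior and export the volume over Gluster NFS,
# turn the option off explicitly:
gluster volume set myvol nfs.disable off
```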
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1383006 - gluster nfs
not coming up for existing volumes on 3.8
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1383005 - getting n/a
entry in volume status command
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1379223 - "nfs.disable:
on" is not showing in vol info by default
for the 3.7.x volumes after updating to