<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><br class=""><div><blockquote type="cite" class=""><div class="">On 03 May 2017, at 07:49 , Jiffin Tony Thottan <<a href="mailto:jthottan@redhat.com" class="">jthottan@redhat.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div class=""><br class=""><br class="">On 02/05/17 15:27, hvjunk wrote:<br class=""><blockquote type="cite" class="">Good day,<br class=""><br class="">I’m busy setting up/testing NFS-HA with GlusterFS storage across VMs running Debian 8. The GlusterFS volume is to be "replica 3 arbiter 1".<br class=""><br class="">In the NFS-Ganesha information I’ve gleaned thus far, it mentions "gluster volume set all cluster.enable-shared-storage enable”.<br class=""><br class="">My first question is this: is the shared volume that gets created/set up supposed to be resilient across reboots?<br class="">That does not appear to be the case in my test setup thus far: the mount doesn’t get recreated/remounted after a reboot.<br class=""></blockquote><br class="">The following script creates the shared storage, mounts it on the node, and adds an entry to /etc/fstab:<br class=""><a href="https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh" class="">https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh</a><br class=""><br class="">But there is a possibility that if glusterd (I hope you have enabled the glusterd service) is not started before<br class="">systemd tries to mount the shared storage, the mount will fail.<br class=""></div></div></blockquote><div><br class=""></div>Thanks, Jiffin.</div><div><br class=""></div><div> I have since found that (1) you need to wait a bit for the cluster to “settle” with that script 
having executed, before you reboot the cluster (as you can see in my Bitbucket Ansible scripts at <a href="https://bitbucket.org/dismyne/gluster-ansibles/src" class="">https://bitbucket.org/dismyne/gluster-ansibles/src</a>) … perhaps something to add to the manuals, to warn people to wait for that script to finish before rebooting the node/VM/server(s)?</div><div><br class=""></div><div> (2) the default configuration can’t bootstrap the /gluster_shared_storage volume/directory reliably after a clean shutdown and reboot of the whole cluster!</div><div><br class=""></div><div>The problem: systemd and its insistence on controlling /etc/fstab and the mounting, and so on… (and I’ll not speak my mind about L.P., given his remarks in <a href="https://github.com/systemd/systemd/issues/4468#issuecomment-255711912" class="">https://github.com/systemd/systemd/issues/4468#issuecomment-255711912</a>, after my struggles with this issue)</div><div><br class=""></div><div><br class=""></div><div>To bootstrap reliably from a cold start (all nodes down, booting up), I’m using the following systemd service and helper script(s) to have each Gluster cluster node mount its local mounts (like /gluster_shared_storage) reliably:</div><div><br class=""></div><div><a href="https://bitbucket.org/dismyne/gluster-ansibles/src/24b62dcc858364ee3744d351993de0e8e35c2680/ansible/files/glusterfsmounts.service-centos?at=master" class="">https://bitbucket.org/dismyne/gluster-ansibles/src/24b62dcc858364ee3744d351993de0e8e35c2680/ansible/files/glusterfsmounts.service-centos?at=master</a></div></body></html>
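[Editor's note for the archive] The workaround described above can be sketched as a systemd unit that orders itself after glusterd and polls until the shared-storage volume is actually started before mounting it. This is only a hedged sketch: the unit name, mount point, and wait loop below are assumptions for illustration, not the exact contents of the glusterfsmounts.service file in the Bitbucket repository linked above.

```ini
# Sketch only: unit name, mount point and timeout are assumptions.
# /etc/systemd/system/glusterfsmounts.service

[Unit]
Description=Mount local GlusterFS volumes once glusterd is ready
Requires=glusterd.service
After=glusterd.service network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Poll (up to ~2 minutes) until the shared-storage volume reports as
# started; only then attempt the mount, so it cannot race glusterd at boot.
ExecStartPre=/bin/sh -c 'for i in $(seq 1 60); do \
    gluster volume status gluster_shared_storage >/dev/null 2>&1 && exit 0; \
    sleep 2; done; exit 1'
ExecStart=/bin/mount -t glusterfs localhost:/gluster_shared_storage /run/gluster/shared_storage

[Install]
WantedBy=multi-user.target
```

With the fstab entry for the shared storage removed or marked noauto, a unit like this replaces systemd's fstab-driven mount attempt, which otherwise races against glusterd on a cold boot. On newer systemd versions, adding options such as _netdev,x-systemd.requires=glusterd.service to the fstab entry may achieve similar ordering without a separate unit.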