[Gluster-users] bootstrapping cluster "failure" condition fix for local mounts (like: "gluster volume set all cluster.enable-shared-storage enable")
Jiffin Tony Thottan
jthottan at redhat.com
Fri May 12 09:00:28 UTC 2017
On 09/05/17 19:18, hvjunk wrote:
>
>> On 03 May 2017, at 07:49, Jiffin Tony Thottan <jthottan at redhat.com> wrote:
>>
>>
>>
>> On 02/05/17 15:27, hvjunk wrote:
>>> Good day,
>>>
>>> I’m busy setting up/testing NFS-HA with GlusterFS storage across VMs
>>> running Debian 8. The GlusterFS volume is to be "replica 3 arbiter 1".
>>>
>>> In the NFS-Ganesha information I’ve gleaned thus far, it mentions
>>> the "gluster volume set all cluster.enable-shared-storage enable” command.
>>>
>>> My first question is this: is the shared volume that gets
>>> created/set up supposed to be resilient across reboots?
>>> That appears not to be the case in my test setup thus far: the
>>> mount doesn’t get recreated/remounted after a reboot.
>>
>> The following script creates the shared storage volume, mounts it on
>> the node, and adds an entry to /etc/fstab:
>> https://github.com/gluster/glusterfs/blob/master/extras/hook-scripts/set/post/S32gluster_enable_shared_storage.sh
>>
>> But there is a possibility that, if glusterd (I hope you have
>> enabled the glusterd service) is not started before systemd tries to
>> mount the shared storage, then the mount will fail.
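One way to reduce that race is to give the fstab entry an explicit systemd
dependency on glusterd. A rough sketch of such an entry (assuming the default
mount point /var/run/gluster/shared_storage; the hostname is a placeholder and
this is not the exact line the hook script writes):

    # /etc/fstab -- sketch only
    # _netdev marks it as a network mount; x-systemd.requires adds an
    # ordering/requirement dependency on glusterd.service
    <hostname>:/gluster_shared_storage  /var/run/gluster/shared_storage  glusterfs  defaults,_netdev,x-systemd.requires=glusterd.service  0  0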
>
Thanks for the systemd helper script.
--
Jiffin
> Thanks Jiffin,
>
> I have since found that (1) you need to wait a bit for the cluster to
> “settle” after that script has executed, before you reboot the
> cluster (as you can see in my Bitbucket Ansible scripts at
> https://bitbucket.org/dismyne/gluster-ansibles/src ) … perhaps
> something to add to the manuals, to warn people to wait for that
> script to finish before rebooting the node/VM/server(s)?
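Agreed, that is worth documenting. As a rough illustration (not a tested
script; the default volume name gluster_shared_storage and a 60-second timeout
are assumptions), something like this could be run after enabling the option
and before any reboot:

    #!/bin/sh
    # Wait up to ~60s for the shared-storage volume to exist and be mounted
    # before allowing a reboot -- sketch only.
    for i in $(seq 1 60); do
        if gluster volume info gluster_shared_storage >/dev/null 2>&1 \
           && grep -qs gluster_shared_storage /proc/mounts; then
            echo "shared storage is up"
            exit 0
        fi
        sleep 1
    done
    echo "timed out waiting for gluster_shared_storage" >&2
    exit 1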
>
> (2) the default configuration can’t bootstrap the
> /gluster_shared_storage volume/directory reliably from a clean
> shutdown and reboot of the whole cluster!
>
> The problem: systemd and its wanting to have control over
> /etc/fstab and the mounting, and so on… (and I’ll not empty my mind
> about L.P., based on his remarks in
> https://github.com/systemd/systemd/issues/4468#issuecomment-255711912, after
> my struggling with this issue)
>
>
> To have the cluster reliably bootstrapped (booting up from all nodes
> down), I'm using the following systemd service and helper script(s)
> to have the Gluster cluster nodes mount their local mounts (like
> /gluster_shared_storage) reliably:
>
> https://bitbucket.org/dismyne/gluster-ansibles/src/24b62dcc858364ee3744d351993de0e8e35c2680/ansible/files/glusterfsmounts.service-centos?at=master
>
>
> https://bitbucket.org/dismyne/gluster-ansibles/src/24b62dcc858364ee3744d351993de0e8e35c2680/ansible/files/test-mounts.sh?at=master
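Nice approach. For anyone reading the archive who doesn't follow the links,
the general shape of that unit plus helper is roughly the following; this is
my own sketch of the idea (unit name, script path and retry interval are
placeholders), not the exact contents of Hendrik's files:

    # glusterfsmounts.service -- retry local GlusterFS mounts once glusterd is up
    [Unit]
    Description=Retry local GlusterFS mounts after glusterd is running
    Requires=glusterd.service
    After=glusterd.service network-online.target

    [Service]
    Type=oneshot
    ExecStart=/usr/local/sbin/test-mounts.sh
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target

    #!/bin/sh
    # test-mounts.sh -- keep retrying `mount -a` until every glusterfs entry
    # in /etc/fstab is actually mounted (again, only a sketch)
    while :; do
        mount -a -t glusterfs
        missing=0
        for mp in $(awk '$3 == "glusterfs" {print $2}' /etc/fstab); do
            mountpoint -q "$mp" || missing=1
        done
        [ "$missing" -eq 0 ] && exit 0
        sleep 5
    done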
>
>>
>> --
>> Jiffin
>>>
>>> If the mount is not resilient, i.e. not recreated/mounted by
>>> glusterfs and not added to /etc/fstab by glusterfs, why the
>>> initial auto-mount by glusterfs but not again after a reboot?
>>>
>>> The biggest “issue” I have found with glusterfs is the interaction
>>> with systemd and mounts that fail and don’t get properly retried
>>> later during bootstrapping of the cluster (will email separately on
>>> that issue). That is why I need to confirm the reasoning behind
>>> this initial auto-mounting, but then the need to manually add it
>>> to /etc/fstab.
>>>
>>> Thank you
>>> Hendrik