[Gluster-users] How to shutdown a node properly ?
David Gossage
dgossage at carouselchecks.com
Thu Jun 29 18:26:02 UTC 2017
On Thu, Jun 29, 2017 at 12:41 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> Doesn't the init.d/systemd script kill gluster automatically on
> reboot/shutdown?
>
Sounds less like an issue with how it's shut down and more like an issue with
how it's mounted, perhaps. My gluster fuse mounts seem to handle any one node
being shut down just fine as long as quorum is maintained.
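For what it's worth, here is a minimal sketch of the kind of fuse mount I mean,
assuming you want the client to still fetch its volfile when one server is
down (the hostnames and volume name below are placeholders):

    # /etc/fstab entry: node1 serves the volfile, node2/node3 are fallbacks
    node1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,backup-volfile-servers=node2:node3  0 0

    # equivalent one-off mount command
    mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/myvol /mnt/gluster

Once mounted, the fuse client talks to all bricks directly, so the backup
servers mainly matter for fetching the volfile at mount time.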
On 29 Jun 2017 at 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote:
>
>> On 06/29/2017 08:31 PM, Renaud Fortier wrote:
>>
>> Hi,
>>
>> Every time I shut down a node, I lose access (from clients) to the volumes
>> for 42 seconds (network.ping-timeout). Is there a special way to shut down a
>> node so that access to the volumes is kept without interruption? Currently, I
>> use the ‘shutdown’ or ‘reboot’ command.
>>
>> Run `killall glusterfs glusterfsd glusterd` before issuing shutdown or
>> reboot. If it is a replica or EC volume, ensure that there are no pending
>> heals before bringing down a node, i.e. `gluster volume heal volname info`
>> should show 0 entries.
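>>
>> For example, a rough pre-shutdown sequence along those lines might look like
>> this (volname is just a placeholder; repeat the check for each of your volumes):
>>
>>   # 1. on a replica/EC volume, confirm there is nothing left to heal
>>   gluster volume heal volname info    # should show 0 entries per brick
>>
>>   # 2. stop the gluster client, brick and management processes on this node
>>   killall glusterfs glusterfsd glusterd
>>
>>   # 3. only then reboot or shut down
>>   reboot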
>>
>>
>>
>> My setup is:
>>
>> - 4 gluster 3.10.3 nodes on Debian 8 (Jessie)
>>
>> - 3 volumes, Distributed-Replicate 2 x 2 = 4
>>
>>
>>
>> Thank you
>>
>> Renaud
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>