[Gluster-users] How to shutdown a node properly ?
Ravishankar N
ravishankar at redhat.com
Fri Jun 30 01:24:31 UTC 2017
On 06/30/2017 12:40 AM, Renaud Fortier wrote:
>
> On my nodes, when I use the systemd unit to stop gluster (service
> glusterfs-server stop), only glusterd is killed. So I guess the
> shutdown doesn't kill everything!
>
Killing glusterd does not kill the other gluster processes (the brick
processes and self-heal daemons keep running).
When you shut down a node, everything obviously gets killed, but the
client is not notified immediately that the brick went down, so it
waits out the 42-second ping-timeout before assuming the brick is
down. When you kill the brick manually before shutdown, the client
receives the notification immediately and you don't see the hang. See
Xavi's description in Bug 1054694.
So for a planned shutdown or reboot, it is better to kill the gluster
processes before shutting the node down. BTW, you can use
https://github.com/gluster/glusterfs/blob/master/extras/stop-all-gluster-processes.sh
which automatically checks for pending heals etc. before killing the
gluster processes.
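For a planned reboot, the steps above can be sketched as a small
pre-shutdown script. This is only a sketch: the `pending_heals` parsing
of the `gluster volume heal <vol> info` output ("Number of entries: N"
lines) is an assumption about the output format, and the linked
stop-all-gluster-processes.sh does all of this more carefully.

```shell
#!/bin/sh
# Sketch: refuse to stop gluster while any volume still has pending heals.
set -e

# Sum the "Number of entries: N" lines printed by `gluster volume heal
# <vol> info` (parsing assumption; format may vary between versions).
pending_heals() {
    awk '/^Number of entries:/ { sum += $4 } END { print sum + 0 }'
}

if command -v gluster >/dev/null 2>&1; then
    for vol in $(gluster volume list); do
        n=$(gluster volume heal "$vol" info | pending_heals)
        if [ "$n" -ne 0 ]; then
            echo "volume $vol still has $n entries to heal; not stopping" >&2
            exit 1
        fi
    done
    # All volumes clean: stop brick, client, and management daemons so the
    # clients get an immediate disconnect instead of a 42 s ping-timeout.
    killall glusterfs glusterfsd glusterd
fi
```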
-Ravi
> *From:* Gandalf Corvotempesta [mailto:gandalf.corvotempesta at gmail.com]
> *Sent:* 29 June 2017 13:41
> *To:* Ravishankar N <ravishankar at redhat.com>
> *Cc:* gluster-users at gluster.org; Renaud Fortier
> <Renaud.Fortier at fsaa.ulaval.ca>
> *Subject:* Re: [Gluster-users] How to shutdown a node properly ?
>
> Doesn't the init.d/systemd script kill gluster automatically on
> reboot/shutdown?
>
> On 29 Jun 2017 at 5:16 PM, "Ravishankar N" <ravishankar at redhat.com
> <mailto:ravishankar at redhat.com>> wrote:
>
> On 06/29/2017 08:31 PM, Renaud Fortier wrote:
>
> Hi,
>
> Every time I shut down a node, I lose access (from clients) to
> the volumes for 42 seconds (network.ping-timeout). Is there a
> special way to shut down a node so that access to the volumes is
> kept without interruption? Currently, I use the 'shutdown'
> or 'reboot' command.
>
> `killall glusterfs glusterfsd glusterd` before issuing shutdown or
> reboot. If it is a replica or EC volume, ensure that there are no
> pending heals before bringing down a node. i.e. `gluster volume
> heal volname info` should show 0 entries.
>
>
> My setup is:
>
> - 4 Gluster 3.10.3 nodes on Debian 8 (jessie)
>
> - 3 Distributed-Replicate volumes, 2 x 2 = 4
>
> Thank you
>
> Renaud
>
> _______________________________________________
>
> Gluster-users mailing list
>
> Gluster-users at gluster.org <mailto:Gluster-users at gluster.org>
>
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>