<div id="ymail_android_signature">I have a 3 replica gluster volume created in 3 nodes and when one node is down due to some issue and the clients not able access volume. This was the issue. I have fixed the server and it is back. There was downtime at client. I just want to avoid the downtime since it is 3 replica.</div><div id="yMail_cursorElementTracker_1592365416567"><br></div><div id="yMail_cursorElementTracker_1592365417784">I am testing the high availability now by making one of the brick server rebooting or shut down manually. I just want to make volume accessible always by client. That is the reason we went for replica volume.</div><div id="yMail_cursorElementTracker_1592330049527"><br></div><div id="yMail_cursorElementTracker_1592330049663">So I just would like to know how to make the client volume high available even some VM or node which is having gluster volume goes down unexpectedly had down time of 10 hours.</div><div id="yMail_cursorElementTracker_1592365467069"><br></div><div id="yMail_cursorElementTracker_1592365467633"><br></div><div id="yMail_cursorElementTracker_1592365491832"><br></div><div id="yMail_cursorElementTracker_1592365492056">Glusterfsd service is used to stop which is disabled in my cluster and I see one more service running gluserd.&nbsp;</div><div id="yMail_cursorElementTracker_1592365535299"><br></div><div id="yMail_cursorElementTracker_1592365535942">Will starting glusterfsd service in all 3 replica nodes will help in achieving what I am trying.</div><div id="yMail_cursorElementTracker_1592365598356"><br></div><div id="yMail_cursorElementTracker_1592365598651">Hope I am clear.</div><div id="yMail_cursorElementTracker_1592365603574"><br></div><div id="yMail_cursorElementTracker_1592365603744">Thanks,</div><div id="yMail_cursorElementTracker_1592365606313">Ahemad</div><div id="yMail_cursorElementTracker_1592365588010"><br></div><div id="yMail_cursorElementTracker_1592365588536"><br></div><div id="yMail_cursorElementTracker_1592330082194"><br></div><div id="yMail_cursorElementTracker_1592330082391">Thanks,</div><div id="yMail_cursorElementTracker_1592365359700">Ahemad</div><div id="yMail_cursorElementTracker_1592330256754"><br></div><div id="yMail_cursorElementTracker_1592330257147"><br></div> <br> <blockquote style="margin: 0 0 20px 0;"> <div style="font-family:Roboto, sans-serif; color:#6D00F6;"> <div>On Tue, Jun 16, 2020 at 23:12, Strahil Nikolov</div><div>&lt;hunter86_bg@yahoo.com&gt; wrote:</div> </div> <div style="padding: 10px 0 0 20px; margin: 10px 0 0 0; border-left: 1px solid #6D00F6;"> In my cluster ,&nbsp; the service is enabled and running.<br clear="none"><br clear="none">What actually&nbsp; is your problem&nbsp; ?<br clear="none">When a gluster brick process dies unexpectedly - all fuse clients will be waiting for the timeout .<br clear="none">The service glusterfsd is ensuring that during system shutdown ,&nbsp; the brick procesees will be shutdown in such way that all native clients&nbsp; won't 'hang' and wait for the timeout, but will directly choose&nbsp; another brick.<br clear="none"><br clear="none">The same happens when you manually run the kill script&nbsp; -&nbsp; all gluster processes&nbsp; shutdown and all clients are&nbsp; redirected to another brick.<br clear="none"><br clear="none">Keep in mind that fuse mounts will&nbsp; also be killed&nbsp; both by the script and the glusterfsd service.<br clear="none"><br clear="none">Best Regards,<br clear="none">Strahil Nikolov<br clear="none"><div class="yqt9689797574 yQTDBase" 
id="yqtfd40624"><br clear="none">На 16 юни 2020 г. 19:48:32 GMT+03:00, ahemad shaik &lt;<a shape="rect" ymailto="mailto:ahemad_shaik@yahoo.com" href="mailto:ahemad_shaik@yahoo.com">ahemad_shaik@yahoo.com</a>&gt; написа:<br clear="none">&gt; Hi Strahil,<br clear="none">&gt;I have the gluster setup on centos 7 cluster.I see glusterfsd service<br clear="none">&gt;and it is in inactive state.<br clear="none">&gt;systemctl status glusterfsd.service● glusterfsd.service - GlusterFS<br clear="none">&gt;brick processes (stopping only)&nbsp; &nbsp;Loaded: loaded<br clear="none">&gt;(/usr/lib/systemd/system/glusterfsd.service; disabled; vendor preset:<br clear="none">&gt;disabled)&nbsp; &nbsp;Active: inactive (dead)<br clear="none">&gt;<br clear="none">&gt;so you mean starting this service in all the nodes where gluster<br clear="none">&gt;volumes are created, will solve the issue ?<br clear="none">&gt;<br clear="none">&gt;Thanks,Ahemad<br clear="none">&gt;<br clear="none">&gt;<br clear="none">&gt;On Tuesday, 16 June, 2020, 10:12:22 pm IST, Strahil Nikolov<br clear="none">&gt;&lt;<a shape="rect" ymailto="mailto:hunter86_bg@yahoo.com" href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>&gt; wrote:&nbsp; <br clear="none">&gt; <br clear="none">&gt; Hi ahemad,<br clear="none">&gt;<br clear="none">&gt;the&nbsp; script&nbsp; kills&nbsp; all gluster&nbsp; processes,&nbsp; so the clients won't wait&nbsp;<br clear="none">&gt;for the timeout before&nbsp; switching to another node in the TSP.<br clear="none">&gt;<br clear="none">&gt;In CentOS/RHEL,&nbsp; there&nbsp; is a&nbsp; systemd&nbsp; service called<br clear="none">&gt;'glusterfsd.service' that&nbsp; is taking care on shutdown to kill all<br clear="none">&gt;processes,&nbsp; so clients won't hung.<br clear="none">&gt;<br clear="none">&gt;systemctl cat glusterfsd.service --no-pager<br clear="none">&gt;# /usr/lib/systemd/system/glusterfsd.service<br clear="none">&gt;[Unit]<br clear="none">&gt;Description=GlusterFS brick processes (stopping only)<br clear="none">&gt;After=network.target glusterd.service<br clear="none">&gt;<br clear="none">&gt;[Service]<br clear="none">&gt;Type=oneshot<br clear="none">&gt;# glusterd starts the glusterfsd processed on-demand<br clear="none">&gt;# /bin/true will mark this service as started, RemainAfterExit keeps it<br clear="none">&gt;active<br clear="none">&gt;ExecStart=/bin/true<br clear="none">&gt;RemainAfterExit=yes<br clear="none">&gt;# if there are no glusterfsd processes, a stop/reload should not give<br clear="none">&gt;an error<br clear="none">&gt;ExecStop=/bin/sh -c "/bin/killall --wait glusterfsd || /bin/true"<br clear="none">&gt;ExecReload=/bin/sh -c "/bin/killall -HUP glusterfsd || /bin/true"<br clear="none">&gt;<br clear="none">&gt;[Install]<br clear="none">&gt;WantedBy=multi-user.target<br clear="none">&gt;<br clear="none">&gt;Best Regards,<br clear="none">&gt;Strahil&nbsp; Nikolov<br clear="none">&gt;<br clear="none">&gt;На 16 юни 2020 г. 
>> Best Regards,
>> Strahil Nikolov
>>
>> On 16 June 2020 at 18:41:59 GMT+03:00, ahemad shaik <ahemad_shaik@yahoo.com> wrote:
>>
>>> Hi,
>>>
>>> I see there is a script file at the path below on all the nodes where the gluster volume was created:
>>> /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
>>>
>>> Do I need to create a systemd service that calls this script whenever a server goes down, or does it need to run all the time, so that when a node is down the clients have no issues accessing the mount point?
>>>
>>> Can you please share any documentation on how to use this? That would be a great help.
>>>
>>> Thanks,
>>> Ahemad
>>>
>>> On Tuesday, 16 June 2020, 08:59:31 pm IST, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
>>>
>>> Hi Ahemad,
>>>
>>> You can simplify it by creating a systemd service that will call the script.
>>>
>>> It was already mentioned in a previous thread (with an example), so you can just use it.
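>>> A minimal sketch of such a unit, modeled on the glusterfsd.service quoted above (untested; the unit name is made up, and the script path may differ per distribution):
>>>
>>> # /etc/systemd/system/gluster-stop-on-shutdown.service (hypothetical name)
>>> [Unit]
>>> Description=Run stop-all-gluster-processes.sh at shutdown
>>> After=network.target glusterd.service
>>>
>>> [Service]
>>> Type=oneshot
>>> # nothing to do at start; RemainAfterExit keeps the unit 'active'
>>> # so that systemd invokes ExecStop during shutdown
>>> ExecStart=/bin/true
>>> RemainAfterExit=yes
>>> ExecStop=/bin/bash /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
>>>
>>> [Install]
>>> WantedBy=multi-user.target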
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On 16 June 2020 at 16:02:07 GMT+03:00, Hu Bert <revirii@googlemail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> if you simply reboot or shut down one of the gluster nodes, there might be a (short or medium) unavailability of the volume on the clients. To avoid this there's a script:
>>>>
>>>> /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh (path may be different depending on distribution)
>>>>
>>>> If I remember correctly, this notifies the clients that this node is going to be unavailable (please correct me if the details are wrong). When I reboot a gluster node, I always call this script first and have never seen unavailability issues on the clients.
>>>>
>>>> Regards,
>>>> Hubert
>>>>
>>>> On Mon, 15 June 2020 at 19:36, ahemad shaik <ahemad_shaik@yahoo.com> wrote:
>>>>
>>>>> Hi There,
>>>>>
>>>>> I have created a 3-replica gluster volume with 3 bricks from 3 nodes:
>>>>>
>>>>> "gluster volume create glustervol replica 3 transport tcp node1:/data node2:/data node3:/data force"
>>>>>
>>>>> and mounted it on the client node using the command below:
>>>>>
>>>>> "mount -t glusterfs node4:/glustervol /mnt/"
>>>>>
>>>>> When any of the nodes (node1, node2 or node3) goes down, the gluster mount/volume (/mnt) is not accessible on the client (node4).
>>>>>
>>>>> The purpose of a replicated volume is high availability, but I am not able to achieve it.
>>>>>
>>>>> Is it a bug, or am I missing something?
>>>>>
>>>>> Any suggestions would be a great help!
>>>>> Kindly suggest.
>>>>>
>>>>> Thanks,
>>>>> Ahemad
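>>>>> One side note on the mount command above: node4 is only contacted to fetch the volume definition at mount time, so if node4 itself is down, a fresh mount will fail even though the bricks are up. Assuming a reasonably recent GlusterFS, fallback volfile servers can be listed, e.g.:
>>>>>
>>>>> # hedged example: fall back to the brick nodes for the volfile if node4 is unreachable
>>>>> mount -t glusterfs node4:/glustervol /mnt/ -o backup-volfile-servers=node1:node2:node3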
target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users </a>&nbsp;  </div> </div> </blockquote>