<!DOCTYPE html><html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /></head><body><div data-html-editor-font-wrapper="true" style="font-family: arial, sans-serif; font-size: 13px;">Hi Karthik,<br><br><br>Thank you very much; that puts me much more at ease. Below is the getfattr output for a file from all the bricks:<br><br>root@gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack<br>getfattr: Removing leading '/' from absolute path names<br># file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack<br>trusted.afr.dirty=0x000000000000000000000000<br>trusted.afr.myvol-client-6=0x000000010000000100000000<br>trusted.bit-rot.version=0x02000000000000005a0d2f650005bf97<br>trusted.gfid=0xe46e9a655128456bba0d98568d432717<br><br>root@gv3 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack<br>getfattr: Removing leading '/' from absolute path names<br># file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack<br>trusted.afr.dirty=0x000000000000000000000000<br>trusted.afr.myvol-client-6=0x000000010000000100000000<br>trusted.bit-rot.version=0x02000000000000005a0d2f6900076620<br>trusted.gfid=0xe46e9a655128456bba0d98568d432717<br><br>root@gv1 ~ # getfattr -d -e hex -m . /data/gv23-arbiter/testset/306/30677af808ad578916f54783904e6342.pack<br>getfattr: Removing leading '/' from absolute path names<br># file: data/gv23-arbiter/testset/306/30677af808ad578916f54783904e6342.pack<br>trusted.gfid=0xe46e9a655128456bba0d98568d432717<br><br>Is it okay that only gfid info is available on the arbiter brick?<br><br>
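In case it helps, here is how I read that trusted.afr.myvol-client-6 value. My assumption (please correct me if wrong) is the usual layout of three big-endian 4-byte counters for pending data, metadata and entry operations, so a quick shell split gives:<br><br># v=000000010000000100000000; printf 'data=%d metadata=%d entry=%d\n' "0x${v:0:8}" "0x${v:8:8}" "0x${v:16:8}"<br>data=1 metadata=1 entry=0<br><br>If I read that correctly, both gv2 and gv3 record one pending data and one pending metadata operation against whichever brick myvol-client-6 maps to.<br><br>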
<signature>--<br>Best Regards,<br><br>Seva Gluschenko<br>CTO @ <a target="_blank" rel="noopener noreferrer" href="http://webkontrol.ru/">http://webkontrol.ru</a></signature><br><br><br>February 9, 2018 2:01 PM, "Karthik Subrahmanya" &lt;<a target="_blank" tabindex="-1" href="mailto:ksubrahm@redhat.com">ksubrahm@redhat.com</a>&gt; wrote:<br> <blockquote><div><div><div dir="ltr"> <div> <div>On Fri, Feb 9, 2018 at 3:23 PM, Seva Gluschenko <span dir="ltr">&lt;<a target="_blank" rel="external nofollow noopener noreferrer" tabindex="-1" href="mailto:gvs@webkontrol.ru">gvs@webkontrol.ru</a>&gt;</span> wrote:<blockquote style="margin: 0px 0px 0px 0.8ex;border-left: 1px solid rgb(204,204,204);padding-left: 1ex"><div><div style="font-family: arial,sans-serif;font-size: 13px">Hi Karthik,<br><br>Thank you for your reply. The heal is still ongoing, as the /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info.<br><br>The gluster version is 3.10.9 and 3.10.10 (a version update is in progress). It doesn't have info summary [yet?], and the heal info is way too long to attach here. (It takes more than 20 minutes just to collect it, but the truth is, the cluster is quite heavily loaded; it handles roughly 8 million reads and 100k writes daily.)</div></div></blockquote> <div>Since you have a huge number of files inside nested directories and a high load on the cluster, it might take some time to complete the heal. You don't need to worry about the gfids you are seeing in the heal info output.</div> <div>Heal info summary is supported from version 3.13.</div> <blockquote style="margin: 0px 0px 0px 0.8ex;border-left: 1px solid rgb(204,204,204);padding-left: 1ex"><div><div style="font-family: arial,sans-serif;font-size: 13px"> <br>The heal info output is full of lines like this:<br><br>...<br><br>Brick gv2:/data/glusterfs<br>&lt;gfid:96a4ee35-b519-40e2-8dc0-a26f8faa5628&gt;<br>&lt;gfid:fa4185b0-e5ab-4fdc-9dca-cb6ba33dcc8d&gt;<br>&lt;gfid:8b2cf4bf-8c2a-465e-8f28-3e9a7f517268&gt;<br>&lt;gfid:13925c48-fda4-40bd-bfcb-d7ced99b82b2&gt;<br>&lt;gfid:292e3a0e-7114-4c97-b688-e94503047b58&gt;<br>&lt;gfid:a52d1173-e034-4b57-9170-a7c91cbe2904&gt;<br>&lt;gfid:5c830c7b-97b7-425b-9ab2-761ef2f41e88&gt;<br>&lt;gfid:420c76a8-1598-4136-9c77-88c8d59d24e7&gt;<br>&lt;gfid:ea6dbca2-f7e3-4015-ae34-04e8bf31fd4f&gt;<br>...<br><br>And so forth. Out of 80k+ lines, fewer than 200 are not related to gfids (and yes, the number of gfids is well beyond 64999):<br><br># grep -c gfid heal-info.fpack<br>80578<br><br># grep -v gfid heal-info.myvol<br>Brick gv0:/data/glusterfs<br>Status: Connected<br><span>Number of entries: 0<br><br>Brick gv1:/data/glusterfs</span><br>Status: Connected<br><span>Number of entries: 0<br><br>Brick gv4:/data/gv01-arbiter</span><br>Status: Connected<br><span>Number of entries: 0<br><br>Brick gv2:/data/glusterfs</span><br>/testset/13f/13f27c303b3cb5e23ee647d8285a4a6d.pack<br>/testset/05c - Possibly undergoing heal<br><br>/testset/b99 - Possibly undergoing heal<br><br>/testset/dd7 - Possibly undergoing heal<br><br>/testset/0b8 - Possibly undergoing heal<br><br>/testset/f21 - Possibly undergoing heal<br><br>...<br><br>
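If it would help, I believe individual gfid entries can be mapped back to real paths on the brick via the .glusterfs hard-link layout (a sketch only, using the first gfid from the excerpt above and the gv2 brick path; for regular files the .glusterfs entry is a hard link, for directories it is a symlink):<br><br># find /data/glusterfs -samefile /data/glusterfs/.glusterfs/96/a4/96a4ee35-b519-40e2-8dc0-a26f8faa5628 -not -path '*/.glusterfs/*'<br><br>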
And here is the getfattr output for a sample file:<br><br># getfattr -d -e hex -m . /data/glusterfs/testset/13f/13f27c303b3cb5e23ee647d8285a4a6d.pack<br>getfattr: Removing leading '/' from absolute path names<br># file: data/glusterfs/testset/13f/13f27c303b3cb5e23ee647d8285a4a6d.pack<br>trusted.afr.dirty=0x000000000000000000000000<br>trusted.afr.myvol-client-6=0x000000010000000000000000<br>trusted.bit-rot.version=0x02000000000000005a0d2f650005bf97<br>trusted.gfid=0xb42d966b77154de990ecd092201714fd<br><br>I tried several files, and the output is pretty much the same; the gfid is the only difference.<br><br>Is there anything else I could provide to shed some light on this?</div></div></blockquote> <div>I wanted to check the getfattr output of a file and a directory belonging to the second replica subvolume, from all 3 of its bricks<br>Brick4: gv2:/data/glusterfs<br>Brick5: gv3:/data/glusterfs<br>Brick6: gv1:/data/gv23-arbiter (arbiter)</div> <div>to see the direction in which the pending markers are being set.<br> </div> <div>Regards,</div> <div>Karthik</div> <blockquote style="margin: 0px 0px 0px 0.8ex;border-left: 1px solid rgb(204,204,204);padding-left: 1ex"><div><div style="font-family: arial,sans-serif;font-size: 13px"> <br><span>--<br>Best Regards,<br><br>Seva Gluschenko<br>CTO @ <a rel="external nofollow noopener noreferrer" target="_blank" tabindex="-1" href="http://webkontrol.ru/">http://webkontrol.ru</a></span><br><br> <div><div>February 9, 2018 9:16 AM, "Karthik Subrahmanya" &lt;<a target="_blank" rel="external nofollow noopener noreferrer" tabindex="-1" href="mailto:ksubrahm@redhat.com">ksubrahm@redhat.com</a>&gt; wrote:<blockquote><div><div> <div dir="ltr"> <div> <div> <div> <div> <div>Hey,</div>Has the heal completed, and do you still have some entries pending heal?<br>If yes, can you provide the following information to debug the issue:<br>1. Which version of gluster you are running</div>2. gluster volume heal &lt;volname&gt; info summary or gluster volume heal &lt;volname&gt; info</div>3. getfattr -d -e hex -m . &lt;filepath-on-brick&gt; output of any one of the files which is pending heal, from all the bricks</div>Regards,</div>Karthik</div> <div><div>On Thu, Feb 8, 2018 at 12:48 PM, Seva Gluschenko <span dir="ltr">&lt;<a rel="external nofollow noopener noreferrer" target="_blank" tabindex="-1" href="mailto:gvs@webkontrol.ru">gvs@webkontrol.ru</a>&gt;</span> wrote:<br> <blockquote style="margin: 0px 0px 0px 0.8ex;border-left: 1px solid rgb(204,204,204);padding-left: 1ex"> <div><div style="font-family: arial,sans-serif;font-size: 13px"><div><div><div style="font-family: arial,sans-serif;font-size: 13px">Hi folks,<br><br>I'm having trouble after moving an arbiter brick to another server because of I/O load issues. 
My setup is as follows:<br><br># gluster volume info<br><br>Volume Name: myvol<br>Type: Distributed-Replicate<br>Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 3 x (2 + 1) = 9<br>Transport-type: tcp<br>Bricks:<br>Brick1: gv0:/data/glusterfs<br>Brick2: gv1:/data/glusterfs<br>Brick3: gv4:/data/gv01-arbiter (arbiter)<br>Brick4: gv2:/data/glusterfs<br>Brick5: gv3:/data/glusterfs<br>Brick6: gv1:/data/gv23-arbiter (arbiter)<br>Brick7: gv4:/data/glusterfs<br>Brick8: gv5:/data/glusterfs<br>Brick9: pluto:/var/gv45-arbiter (arbiter)<br>Options Reconfigured:<br>nfs.disable: on<br>transport.address-family: inet<br>storage.owner-gid: 1000<br>storage.owner-uid: 1000<br>cluster.self-heal-daemon: enable<br><br>The gv23-arbiter is the brick that was recently moved from another server (chronos) using the following command:<br><br># gluster volume replace-brick myvol chronos:/mnt/gv23-arbiter gv1:/data/gv23-arbiter commit force<br>volume replace-brick: success: replace-brick commit force operation successful<br><br>It's not the first time I've moved an arbiter brick, and the heal-count was zero for all the bricks before the change, so I didn't expect much trouble. What probably went wrong is that I then forced chronos out of the cluster with the gluster peer detach command. Ever since then, over the course of the last 3 days, I have been seeing this:<br><br># gluster volume heal myvol statistics heal-count<br>Gathering count of entries to be healed on volume myvol has been successful<br><br>Brick gv0:/data/glusterfs<br>Number of entries: 0<br><br>Brick gv1:/data/glusterfs<br>Number of entries: 0<br><br>Brick gv4:/data/gv01-arbiter<br>Number of entries: 0<br><br>Brick gv2:/data/glusterfs<br>Number of entries: 64999<br><br>Brick gv3:/data/glusterfs<br>Number of entries: 64999<br><br>Brick gv1:/data/gv23-arbiter<br>Number of entries: 0<br><br>Brick gv4:/data/glusterfs<br>Number of entries: 0<br><br>Brick gv5:/data/glusterfs<br>Number of entries: 0<br><br>Brick pluto:/var/gv45-arbiter<br>Number of entries: 0<br><br>According to /var/log/glusterfs/glustershd.log, self-healing is in progress, so it might be worth just sitting and waiting, but I'm wondering why this heal-count of 64999 persists (a limit on the counter? In fact, the gv2 and gv3 bricks contain roughly 30 million files), and I'm bothered by the following output:<br><br># gluster volume heal myvol info heal-failed<br>Gathering list of heal failed entries on volume myvol has been unsuccessful on bricks that are down. Please check if all brick processes are running.<br><br>I attached the chronos server back to the cluster, with no noticeable effect.
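Since that message suggests checking whether all brick processes are running, I can also collect the output of the standard status commands and attach it, if that would be useful:<br><br># gluster volume status myvol<br># gluster peer status<br><br>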
Any comments and suggestions would be much appreciated.<br><br>--<br>Best Regards,<br><br>Seva Gluschenko<br>CTO @ <a rel="external nofollow noopener noreferrer" target="_blank" tabindex="-1" href="http://webkontrol.ru/">http://webkontrol.ru</a> </div></div></div></div></div> <br>_______________________________________________<br>Gluster-users mailing list<br><a rel="external nofollow noopener noreferrer" target="_blank" tabindex="-1" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br><a rel="external nofollow noopener noreferrer" target="_blank" tabindex="-1" href="http://lists.gluster.org/mailman/listinfo/gluster-users">http://lists.gluster.org/mailman/listinfo/gluster-users</a> </blockquote> </div></div> </div></div></blockquote> </div></div> </div></div></blockquote> </div> </div> </div></div></div></blockquote> </div></body></html>