Thanks for the additional hints. I have the following two questions first:

- In order to launch the index heal, is the following command correct:
gluster volume heal myvolume

- If I run a "volume start force", will it cause any short disruptions for my clients which mount the volume through FUSE? If yes, how long? This is a production system, that's why I am asking.

-------- Original Message --------
Subject: Re: [Gluster-users] self-heal not working
Local Time: August 22, 2017 6:26 AM
UTC Time: August 22, 2017 4:26 AM
From: ravishankar@redhat.com
To: mabi <mabi@protonmail.ch>, Ben Turner <bturner@redhat.com>
Gluster Users <gluster-users@gluster.org>

Explore the following:

- Launch index heal and look at the glustershd logs of all bricks for possible errors.
- See if the glustershd on each node is connected to all bricks.
- If not, try to restart shd with `volume start force`.
- Launch index heal again and try.
- Try debugging the shd log by setting client-log-level to DEBUG temporarily.
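For reference, a sketch of the commands behind these points, assuming the volume name "myvolume" used in this thread (the status and set invocations are the usual way to check the shd and adjust the log level, not something prescribed above):

$ gluster volume heal myvolume            # index heal: heals entries recorded in the indices
$ gluster volume status myvolume          # shows whether the Self-heal Daemon is online on each node
$ gluster volume start myvolume force     # respawns missing daemons such as shd; as far as I know, bricks already running are left alone
$ gluster volume set myvolume diagnostics.client-log-level DEBUG
$ gluster volume set myvolume diagnostics.client-log-level INFO   # revert once done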

On 08/22/2017 03:19 AM, mabi wrote:

Sure, it doesn't look like a split brain based on the output:

Brick node1.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries in split-brain: 0

Brick node2.domain.tld:/data/myvolume/brick
Status: Connected
Number of entries in split-brain: 0

Brick node3.domain.tld:/srv/glusterfs/myvolume/brick
Status: Connected
Number of entries in split-brain: 0
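(That listing is presumably the output of the split-brain query Ben asks for below, i.e. something like:

$ gluster volume heal myvolume info split-brain
)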

-------- Original Message --------
Subject: Re: [Gluster-users] self-heal not working
Local Time: August 21, 2017 11:35 PM
UTC Time: August 21, 2017 9:35 PM
From: bturner@redhat.com
To: mabi <mabi@protonmail.ch>
Gluster Users <gluster-users@gluster.org>

Can you also provide:

gluster v heal <my vol> info split-brain

If it is split brain, just delete the incorrect file from the brick and run heal again. I haven't tried this with arbiter but I assume the process is the same.

-b
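Should it come to that: each file on a brick also has a gfid hard link under the brick's .glusterfs directory, so deleting a copy from the back end usually means removing both links before re-running the heal. A sketch only; which copy is the bad one has to be decided first, node2's brick path is purely illustrative, and the gfid is my decode of the trusted.gfid value shown in the getfattr output below (0sGYXiM9XuTj6lGs8LX58q6g== = 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea):

$ rm /data/myvolume/brick/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
$ rm /data/myvolume/brick/.glusterfs/19/85/1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea
$ gluster volume heal myvolume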

----- Original Message -----
> From: "mabi" <mabi@protonmail.ch>
> To: "Ben Turner" <bturner@redhat.com>
> Cc: "Gluster Users" <gluster-users@gluster.org>
> Sent: Monday, August 21, 2017 4:55:59 PM
> Subject: Re: [Gluster-users] self-heal not working
>
> Hi Ben,
>
> So it is really a 0 kBytes file everywhere (all nodes including the arbiter, and from the client).
> Below you will find the output you requested. Hopefully that will help to find out why this specific file is not healing... Let me know if you need any more information. Btw node3 is my arbiter node.
>
> NODE1:
>
> STAT:
> File: ‘/data/myvolume/brick/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png’
> Size: 0  Blocks: 38  IO Block: 131072  regular empty file
> Device: 24h/36d  Inode: 10033884  Links: 2
> Access: (0644/-rw-r--r--)  Uid: (33/www-data)  Gid: (33/www-data)
> Access: 2017-08-14 17:04:55.530681000 +0200
> Modify: 2017-08-14 17:11:46.407404779 +0200
> Change: 2017-08-14 17:11:46.407404779 +0200
> Birth: -
>
> GETFATTR:
> trusted.afr.dirty=0sAAAAAQAAAAAAAAAA
> trusted.bit-rot.version=0sAgAAAAAAAABZhuknAAlJAg==
> trusted.gfid=0sGYXiM9XuTj6lGs8LX58q6g==
> trusted.glusterfs.d99af2fa-439b-4a21-bf3a-38f3849f87ec.xtime=0sWZG9sgAGOyo=
>
> NODE2:
>
> STAT:
> File: ‘/data/myvolume/brick/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png’
> Size: 0  Blocks: 38  IO Block: 131072  regular empty file
> Device: 26h/38d  Inode: 10031330  Links: 2
> Access: (0644/-rw-r--r--)  Uid: (33/www-data)  Gid: (33/www-data)
> Access: 2017-08-14 17:04:55.530681000 +0200
> Modify: 2017-08-14 17:11:46.403704181 +0200
> Change: 2017-08-14 17:11:46.403704181 +0200
> Birth: -
>
> GETFATTR:
> trusted.afr.dirty=0sAAAAAQAAAAAAAAAA
> trusted.bit-rot.version=0sAgAAAAAAAABZhu6wAA8Hpw==
> trusted.gfid=0sGYXiM9XuTj6lGs8LX58q6g==
> trusted.glusterfs.d99af2fa-439b-4a21-bf3a-38f3849f87ec.xtime=0sWZG9sgAGOVE=
>
> NODE3:
>
> STAT:
> File: /srv/glusterfs/myvolume/brick/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
> Size: 0  Blocks: 0  IO Block: 4096  regular empty file
> Device: ca11h/51729d  Inode: 405208959  Links: 2
> Access: (0644/-rw-r--r--)  Uid: (33/www-data)  Gid: (33/www-data)
> Access: 2017-08-14 17:04:55.530681000 +0200
> Modify: 2017-08-14 17:04:55.530681000 +0200
> Change: 2017-08-14 17:11:46.604380051 +0200
> Birth: -
>
> GETFATTR:
> trusted.afr.dirty=0sAAAAAQAAAAAAAAAA
> trusted.bit-rot.version=0sAgAAAAAAAABZe6ejAAKPAg==
> trusted.gfid=0sGYXiM9XuTj6lGs8LX58q6g==
> trusted.glusterfs.d99af2fa-439b-4a21-bf3a-38f3849f87ec.xtime=0sWZG9sgAGOc4=
>
> CLIENT GLUSTER MOUNT:
> STAT:
> File: "/mnt/myvolume/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png"
> Size: 0  Blocks: 0  IO Block: 131072  regular empty file
> Device: 1eh/30d  Inode: 11897049013408443114  Links: 1
> Access: (0644/-rw-r--r--)  Uid: (33/www-data)  Gid: (33/www-data)
> Access: 2017-08-14 17:04:55.530681000 +0200
> Modify: 2017-08-14 17:11:46.407404779 +0200
> Change: 2017-08-14 17:11:46.407404779 +0200
> Birth: -
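One detail that stands out in the getfattr output above: trusted.afr.dirty carries the same non-zero value on all three bricks. The 0s prefix marks a base64-encoded value, so it can be inspected with standard tools, e.g.:

$ echo AAAAAQAAAAAAAAAA | base64 -d | od -An -tx1
 00 00 00 01 00 00 00 00 00 00 00 00

If I read the AFR xattr layout correctly (three 32-bit big-endian counters: data, metadata, entry), the data-dirty counter is 1 on every brick while no trusted.afr.<volname>-client-N xattr blames a particular copy, which could explain why the self-heal daemon cannot pick a heal source.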
>
> > -------- Original Message --------
> > Subject: Re: [Gluster-users] self-heal not working
> > Local Time: August 21, 2017 9:34 PM
> > UTC Time: August 21, 2017 7:34 PM
> > From: bturner@redhat.com
> > To: mabi <mabi@protonmail.ch>
> > Gluster Users <gluster-users@gluster.org>
> >
> > ----- Original Message -----
> >> From: "mabi" <mabi@protonmail.ch>
> >> To: "Gluster Users" <gluster-users@gluster.org>
> >> Sent: Monday, August 21, 2017 9:28:24 AM
> >> Subject: [Gluster-users] self-heal not working
> >>
> >> Hi,
> >>
> >> I have a replica 2 with arbiter GlusterFS 3.8.11 cluster and there is currently one file listed to be healed, as you can see below, but it never gets healed by the self-heal daemon:
> >>
> >> Brick node1.domain.tld:/data/myvolume/brick
> >> /data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
> >> Status: Connected
> >> Number of entries: 1
> >>
> >> Brick node2.domain.tld:/data/myvolume/brick
> >> /data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
> >> Status: Connected
> >> Number of entries: 1
> >>
> >> Brick node3.domain.tld:/srv/glusterfs/myvolume/brick
> >> /data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
> >> Status: Connected
> >> Number of entries: 1
> >>
> >> As once recommended on this mailing list, I mounted that glusterfs volume temporarily through fuse/glusterfs and ran a "stat" on the file listed above, but nothing happened.
> >>
> >> The file itself is available on all 3 nodes/bricks, but on the last node it has a different date. By the way, this file is 0 kBytes big. Is that maybe the reason why the self-heal does not work?
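(A sketch of that fuse-mount-and-stat trigger, assuming the mount point shown earlier in this thread and node1 as the server to mount from:

$ mount -t glusterfs node1.domain.tld:/myvolume /mnt/myvolume
$ stat /mnt/myvolume/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
)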
> >
> > Is the file actually 0 bytes, or is it just 0 bytes on the arbiter (0 bytes are expected on the arbiter, it just stores metadata)? Can you send us the output from stat on all 3 nodes:
> >
> > $ stat <file on back end brick>
> > $ getfattr -d -m - <file on back end brick>
> > $ stat <file from gluster mount>
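(In concrete terms, with the brick path from this volume; the -e hex flag is an addition of mine so the binary xattr values print in an easily comparable form:

$ stat /data/myvolume/brick/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
$ getfattr -d -m . -e hex /data/myvolume/brick/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
)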
> >
> > Let's see what things look like on the back end; it should tell us why healing is failing.
> >
> > -b
> >
> >>
> >> And how can I now make this file heal?
> >>
> >> Thanks,
> >> Mabi

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users