As suggested I have now opened a bug on bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1486063

-------- Original Message --------
Subject: Re: [Gluster-users] self-heal not working
Local Time: August 28, 2017 4:29 PM
UTC Time: August 28, 2017 2:29 PM
From: ravishankar@redhat.com
To: mabi <mabi@protonmail.ch>
Ben Turner <bturner@redhat.com>, Gluster Users <gluster-users@gluster.org>

Great, can you raise a bug for the issue so that it is easier to keep track of it (plus you'll be notified if the patch is posted)? The general guidelines are @ https://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Reporting-Guidelines but you just need to provide in the bug whatever you described in this email thread: i.e. volume info, heal info, and the getfattr and stat output of the file in question.

Thanks!
Ravi

On 08/28/2017 07:49 PM, mabi wrote:
Thank you for the command. I ran it on all my nodes and now the self-heal daemon finally does not report any files to be healed. Hopefully this scenario can get handled properly in newer versions of GlusterFS.

-------- Original Message --------
Subject: Re: [Gluster-users] self-heal not working
Local Time: August 28, 2017 10:41 AM
UTC Time: August 28, 2017 8:41 AM
From: ravishankar@redhat.com
To: mabi <mabi@protonmail.ch>
Ben Turner <bturner@redhat.com>, Gluster Users <gluster-users@gluster.org>

On 08/28/2017 01:29 PM, mabi wrote:
Excuse me for my naive questions, but how do I reset the afr.dirty xattr on the file to be healed? Do I need to do that through a FUSE mount, or directly on each brick?

Directly on the bricks: `setfattr -n trusted.afr.dirty -v 0x000000000000000000000000 /data/myvolume/brick/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png`
-Ravi
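A minimal sketch of checking and resetting the xattr on one brick, using the path from this thread (the getfattr check is an assumption about how one might verify the value before and after; run the equivalent on every brick):

# inspect the current value; a non-zero trusted.afr.dirty marks the file as pending heal
getfattr -n trusted.afr.dirty -e hex /data/myvolume/brick/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png

# reset the dirty xattr to all zeroes
setfattr -n trusted.afr.dirty -v 0x000000000000000000000000 /data/myvolume/brick/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png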
-------- Original Message --------
Subject: Re: [Gluster-users] self-heal not working
Local Time: August 28, 2017 5:58 AM
UTC Time: August 28, 2017 3:58 AM
From: ravishankar@redhat.com
To: Ben Turner <bturner@redhat.com>, mabi <mabi@protonmail.ch>
Gluster Users <gluster-users@gluster.org>

On 08/28/2017 01:57 AM, Ben Turner wrote:
> ----- Original Message -----
>> From: "mabi" <mabi@protonmail.ch>
>> To: "Ravishankar N" <ravishankar@redhat.com>
>> Cc: "Ben Turner" <bturner@redhat.com>, "Gluster Users" <gluster-users@gluster.org>
>> Sent: Sunday, August 27, 2017 3:15:33 PM
>> Subject: Re: [Gluster-users] self-heal not working
>>
>> Thanks Ravi for your analysis. So as far as I understand there is nothing to worry about, but my question now would be: how do I get rid of this file from the heal info?
> Correct me if I am wrong but clearing this is just a matter of resetting the afr.dirty xattr? @Ravi - Is this correct?

Yes, resetting the xattr and launching index heal or running the heal-info command should serve as a workaround.
-Ravi
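As a sketch, the remaining steps would then look like this (assuming the volume name myvolume used elsewhere in this thread, and that the dirty xattr has already been reset on every brick):

# trigger an index heal so the cleaned-up entry is re-examined
gluster volume heal myvolume

# confirm the entry no longer shows up
gluster volume heal myvolume info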
>
> -b
>
>>> -------- Original Message --------
>>> Subject: Re: [Gluster-users] self-heal not working
>>> Local Time: August 27, 2017 3:45 PM
>>> UTC Time: August 27, 2017 1:45 PM
>>> From: ravishankar@redhat.com
>>> To: mabi <mabi@protonmail.ch>
>>> Ben Turner <bturner@redhat.com>, Gluster Users <gluster-users@gluster.org>
>>>
>>> Yes, the shds did pick up the file for healing (I saw messages like "got entry: 1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea") but no error afterwards.
>>>
>>> Anyway, I reproduced it by manually setting the afr.dirty bit for a zero-byte file on all 3 bricks. Since there are no afr pending xattrs indicating good/bad copies and all files are zero bytes, the data self-heal algorithm just picks the file with the latest ctime as source. In your case that was the arbiter brick. In the code, there is a check to prevent data heals if the arbiter is the source. So heal was not happening and the entries were not removed from the heal-info output.
>>>
>>> Perhaps we should add a check in the code to just remove the entries from heal-info if the size is zero bytes on all bricks.
>>>
>>> -Ravi
>>>
>>> On 08/25/2017 06:33 PM, mabi wrote:
>>>
>>>> Hi Ravi,
>>>>
>>>> Did you get a chance to have a look at the log files I attached in my last mail?
>>>>
>>>> Best,
>>>> Mabi
>>>>
>>>>> -------- Original Message --------
>>>>> Subject: Re: [Gluster-users] self-heal not working
>>>>> Local Time: August 24, 2017 12:08 PM
>>>>> UTC Time: August 24, 2017 10:08 AM
>>>>> From: mabi@protonmail.ch
>>>>> To: Ravishankar N <ravishankar@redhat.com>
>>>>> Ben Turner <bturner@redhat.com>, Gluster Users <gluster-users@gluster.org>
>>>>>
>>>>> Thanks for confirming the command. I have now enabled DEBUG client-log-level, run a heal, and attached the glustershd log files of all 3 nodes to this mail.
>>>>>
>>>>> The volume concerned is called myvol-pro; the other 3 volumes have had no problems so far.
>>>>>
>>>>> Also note that in the meantime it looks like the file has been deleted by the user, so the heal info command no longer shows the file name but just its GFID, which is:
>>>>>
>>>>> gfid:1985e233-d5ee-4e3e-a51a-cf0b5f9f2aea
>>>>>
>>>>> Hope that helps for debugging this issue.
>>>>>
>>>>>> -------- Original Message --------
>>>>>> Subject: Re: [Gluster-users] self-heal not working
>>>>>> Local Time: August 24, 2017 5:58 AM
>>>>>> UTC Time: August 24, 2017 3:58 AM
>>>>>> From: ravishankar@redhat.com
>>>>>> To: mabi <mabi@protonmail.ch>
>>>>>> Ben Turner <bturner@redhat.com>, Gluster Users <gluster-users@gluster.org>
>>>>>>
>>>>>> Unlikely. In your case only the afr.dirty is set, not the afr.volname-client-xx xattr.
>>>>>>
>>>>>> `gluster volume set myvolume diagnostics.client-log-level DEBUG` is right.
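>>>>>>
>>>>>> A sketch of the whole round trip (the glustershd.log path is the usual default and an assumption here; remember to restore the log level when done):
>>>>>>
>>>>>> # raise client/shd log verbosity for the volume
>>>>>> gluster volume set myvolume diagnostics.client-log-level DEBUG
>>>>>> # run a heal, then collect /var/log/glusterfs/glustershd.log from each node
>>>>>> gluster volume heal myvolume
>>>>>> # revert the option to its default afterwards
>>>>>> gluster volume reset myvolume diagnostics.client-log-level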
>>>>>>
>>>>>> On 08/23/2017 10:31 PM, mabi wrote:
>>>>>>
>>>>>>> I just saw the following bug which was fixed in 3.8.15:
>>>>>>>
>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1471613
>>>>>>>
>>>>>>> Is it possible that the problem I described in this post is related to that bug?
>>>>>>>
>>>>>>>> -------- Original Message --------
>>>>>>>> Subject: Re: [Gluster-users] self-heal not working
>>>>>>>> Local Time: August 22, 2017 11:51 AM
>>>>>>>> UTC Time: August 22, 2017 9:51 AM
>>>>>>>> From: ravishankar@redhat.com
>>>>>>>> To: mabi <mabi@protonmail.ch>
>>>>>>>> Ben Turner <bturner@redhat.com>, Gluster Users <gluster-users@gluster.org>
>>>>>>>>
>>>>>>>> On 08/22/2017 02:30 PM, mabi wrote:
>>>>>>>>
>>>>>>>>> Thanks for the additional hints. I have the following 2 questions first:
>>>>>>>>>
>>>>>>>>> - In order to launch the index heal, is the following command correct:
>>>>>>>>> gluster volume heal myvolume
>>>>>>>> Yes
>>>>>>>>
>>>>>>>>> - If I run a "volume start force" will it cause any short disruptions for my clients which mount the volume through FUSE? If yes, how long? This is a production system, that's why I am asking.
>>>>>>>> No. You can actually create a test volume on your personal Linux box to try these kinds of things without needing multiple machines. This is how we develop and test our patches :)
>>>>>>>> `gluster volume create testvol replica 3 /home/mabi/bricks/brick{1..3} force` and so on.
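>>>>>>>>
>>>>>>>> For instance, a single-box sandbox might look like the sketch below (myhost stands in for the machine's hostname, since bricks are given as host:path; `force` is needed because all replicas land on one host):
>>>>>>>>
>>>>>>>> mkdir -p /home/mabi/bricks/brick{1..3} /mnt/testvol
>>>>>>>> gluster volume create testvol replica 3 myhost:/home/mabi/bricks/brick{1..3} force
>>>>>>>> gluster volume start testvol
>>>>>>>> mount -t glusterfs myhost:/testvol /mnt/testvol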
>>>>>>>>
>>>>>>>> HTH,
>>>>>>>> Ravi
>>>>>>>>
>>>>>>>>>> -------- Original Message --------
>>>>>>>>>> Subject: Re: [Gluster-users] self-heal not working
>>>>>>>>>> Local Time: August 22, 2017 6:26 AM
>>>>>>>>>> UTC Time: August 22, 2017 4:26 AM
>>>>>>>>>> From: ravishankar@redhat.com
>>>>>>>>>> To: mabi <mabi@protonmail.ch>, Ben Turner <bturner@redhat.com>
>>>>>>>>>> Gluster Users <gluster-users@gluster.org>
>>>>>>>>>>
>>>>>>>>>> Explore the following (a command sketch follows the list):
>>>>>>>>>>
>>>>>>>>>> - Launch index heal and look at the glustershd logs of all bricks for possible errors
>>>>>>>>>>
>>>>>>>>>> - See if the glustershd in each node is connected to all bricks.
>>>>>>>>>>
>>>>>>>>>> - If not, try to restart shd by `volume start force`
>>>>>>>>>>
>>>>>>>>>> - Launch index heal again and try.
>>>>>>>>>>
>>>>>>>>>> - Try debugging the shd log by setting client-log-level to DEBUG temporarily.
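>>>>>>>>>>
>>>>>>>>>> Roughly, as a sketch (assuming the volume name myvolume):
>>>>>>>>>>
>>>>>>>>>> gluster volume heal myvolume          # launch index heal
>>>>>>>>>> gluster volume status myvolume        # check that every Self-heal Daemon is online
>>>>>>>>>> gluster volume start myvolume force   # restart shd if one is down
>>>>>>>>>> gluster volume set myvolume diagnostics.client-log-level DEBUG   # temporary, for debugging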
>>>>>>>>>>
>>>>>>>>>> On 08/22/2017 03:19 AM, mabi wrote:
>>>>>>>>>>
>>>>>>>>>>> Sure, it doesn't look like a split brain based on the output:
>>>>>>>>>>>
>>>>>>>>>>> Brick node1.domain.tld:/data/myvolume/brick
>>>>>>>>>>> Status: Connected
>>>>>>>>>>> Number of entries in split-brain: 0
>>>>>>>>>>>
>>>>>>>>>>> Brick node2.domain.tld:/data/myvolume/brick
>>>>>>>>>>> Status: Connected
>>>>>>>>>>> Number of entries in split-brain: 0
>>>>>>>>>>>
>>>>>>>>>>> Brick node3.domain.tld:/srv/glusterfs/myvolume/brick
>>>>>>>>>>> Status: Connected
>>>>>>>>>>> Number of entries in split-brain: 0
>>>>>>>>>>>
>>>>>>>>>>>> -------- Original Message --------
>>>>>>>>>>>> Subject: Re: [Gluster-users] self-heal not working
>>>>>>>>>>>> Local Time: August 21, 2017 11:35 PM
>>>>>>>>>>>> UTC Time: August 21, 2017 9:35 PM
>>>>>>>>>>>> From: bturner@redhat.com
>>>>>>>>>>>> To: mabi <mabi@protonmail.ch>
>>>>>>>>>>>> Gluster Users <gluster-users@gluster.org>
>>>>>>>>>>>>
>>>>>>>>>>>> Can you also provide:
>>>>>>>>>>>>
>>>>>>>>>>>> gluster v heal <my vol> info split-brain
>>>>>>>>>>>>
>>>>>>>>>>>> If it is split brain just delete the incorrect file from the brick and run heal again. I haven't tried this with arbiter but I assume the process is the same.
>>>>>>>>>>>>
>>>>>>>>>>>> -b
>>>>>>>>>>>>
>>>>>>>>>>>> ----- Original Message -----
>>>>>>>>>>>>> From: "mabi" <mabi@protonmail.ch>
>>>>>>>>>>>>> To: "Ben Turner" <bturner@redhat.com>
>>>>>>>>>>>>> Cc: "Gluster Users" <gluster-users@gluster.org>
>>>>>>>>>>>>> Sent: Monday, August 21, 2017 4:55:59 PM
>>>>>>>>>>>>> Subject: Re: [Gluster-users] self-heal not working
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Ben,
>>>>>>>>>>>>>
>>>>>>>>>>>>> So it is really a 0 kByte file everywhere (on all nodes, including the arbiter, and from the client). Below you will find the output you requested. Hopefully that will help to find out why this specific file is not healing... Let me know if you need any more information. Btw, node3 is my arbiter node.
>>>>>>>>>>>>>
>>>>>>>>>>>>> NODE1:
>>>>>>>>>>>>>
>>>>>>>>>>>>> STAT:
>>>>>>>>>>>>> File: ‘/data/myvolume/brick/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png’
>>>>>>>>>>>>> Size: 0  Blocks: 38  IO Block: 131072  regular empty file
>>>>>>>>>>>>> Device: 24h/36d  Inode: 10033884  Links: 2
>>>>>>>>>>>>> Access: (0644/-rw-r--r--)  Uid: ( 33/www-data)  Gid: ( 33/www-data)
>>>>>>>>>>>>> Access: 2017-08-14 17:04:55.530681000 +0200
>>>>>>>>>>>>> Modify: 2017-08-14 17:11:46.407404779 +0200
>>>>>>>>>>>>> Change: 2017-08-14 17:11:46.407404779 +0200
>>>>>>>>>>>>> Birth: -
>>>>>>>>>>>>>
>>>>>>>>>>>>> GETFATTR:
>>>>>>>>>>>>> trusted.afr.dirty=0sAAAAAQAAAAAAAAAA
>>>>>>>>>>>>> trusted.bit-rot.version=0sAgAAAAAAAABZhuknAAlJAg==
>>>>>>>>>>>>> trusted.gfid=0sGYXiM9XuTj6lGs8LX58q6g==
>>>>>>>>>>>>> trusted.glusterfs.d99af2fa-439b-4a21-bf3a-38f3849f87ec.xtime=0sWZG9sgAGOyo=
>>>>>>>>>>>>>
>>>>>>>>>>>>> NODE2:
>>>>>>>>>>>>>
>>>>>>>>>>>>> STAT:
>>>>>>>>>>>>> File: ‘/data/myvolume/brick/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png’
>>>>>>>>>>>>> Size: 0  Blocks: 38  IO Block: 131072  regular empty file
>>>>>>>>>>>>> Device: 26h/38d  Inode: 10031330  Links: 2
>>>>>>>>>>>>> Access: (0644/-rw-r--r--)  Uid: ( 33/www-data)  Gid: ( 33/www-data)
>>>>>>>>>>>>> Access: 2017-08-14 17:04:55.530681000 +0200
>>>>>>>>>>>>> Modify: 2017-08-14 17:11:46.403704181 +0200
>>>>>>>>>>>>> Change: 2017-08-14 17:11:46.403704181 +0200
>>>>>>>>>>>>> Birth: -
>>>>>>>>>>>>>
>>>>>>>>>>>>> GETFATTR:
>>>>>>>>>>>>> trusted.afr.dirty=0sAAAAAQAAAAAAAAAA
>>>>>>>>>>>>> trusted.bit-rot.version=0sAgAAAAAAAABZhu6wAA8Hpw==
>>>>>>>>>>>>> trusted.gfid=0sGYXiM9XuTj6lGs8LX58q6g==
>>>>>>>>>>>>> trusted.glusterfs.d99af2fa-439b-4a21-bf3a-38f3849f87ec.xtime=0sWZG9sgAGOVE=
>>>>>>>>>>>>>
>>>>>>>>>>>>> NODE3:
>>>>>>>>>>>>>
>>>>>>>>>>>>> STAT:
>>>>>>>>>>>>> File: /srv/glusterfs/myvolume/brick/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
>>>>>>>>>>>>> Size: 0  Blocks: 0  IO Block: 4096  regular empty file
>>>>>>>>>>>>> Device: ca11h/51729d  Inode: 405208959  Links: 2
>>>>>>>>>>>>> Access: (0644/-rw-r--r--)  Uid: ( 33/www-data)  Gid: ( 33/www-data)
>>>>>>>>>>>>> Access: 2017-08-14 17:04:55.530681000 +0200
>>>>>>>>>>>>> Modify: 2017-08-14 17:04:55.530681000 +0200
>>>>>>>>>>>>> Change: 2017-08-14 17:11:46.604380051 +0200
>>>>>>>>>>>>> Birth: -
>>>>>>>>>>>>>
>>>>>>>>>>>>> GETFATTR:
>>>>>>>>>>>>> trusted.afr.dirty=0sAAAAAQAAAAAAAAAA
>>>>>>>>>>>>> trusted.bit-rot.version=0sAgAAAAAAAABZe6ejAAKPAg==
>>>>>>>>>>>>> trusted.gfid=0sGYXiM9XuTj6lGs8LX58q6g==
>>>>>>>>>>>>> trusted.glusterfs.d99af2fa-439b-4a21-bf3a-38f3849f87ec.xtime=0sWZG9sgAGOc4=
>>>>>>>>>>>>>
>>>>>>>>>>>>> CLIENT GLUSTER MOUNT:
>>>>>>>>>>>>> STAT:
>>>>>>>>>>>>> File: ‘/mnt/myvolume/data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png’
>>>>>>>>>>>>> Size: 0  Blocks: 0  IO Block: 131072  regular empty file
>>>>>>>>>>>>> Device: 1eh/30d  Inode: 11897049013408443114  Links: 1
>>>>>>>>>>>>> Access: (0644/-rw-r--r--)  Uid: ( 33/www-data)  Gid: ( 33/www-data)
>>>>>>>>>>>>> Access: 2017-08-14 17:04:55.530681000 +0200
>>>>>>>>>>>>> Modify: 2017-08-14 17:11:46.407404779 +0200
>>>>>>>>>>>>> Change: 2017-08-14 17:11:46.407404779 +0200
>>>>>>>>>>>>> Birth: -
>>>>>>>>>>>>>
>>>>>>>>>>>>>> -------- Original Message --------
>>>>>>>>>>>>>> Subject: Re: [Gluster-users] self-heal not working
>>>>>>>>>>>>>> Local Time: August 21, 2017 9:34 PM
>>>>>>>>>>>>>> UTC Time: August 21, 2017 7:34 PM
>>>>>>>>>>>>>> From: bturner@redhat.com
>>>>>>>>>>>>>> To: mabi <mabi@protonmail.ch>
>>>>>>>>>>>>>> Gluster Users <gluster-users@gluster.org>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> ----- Original Message -----
>>>>>>>>>>>>>>> From: "mabi" <mabi@protonmail.ch>
>>>>>>>>>>>>>>> To: "Gluster Users" <gluster-users@gluster.org>
>>>>>>>>>>>>>>> Sent: Monday, August 21, 2017 9:28:24 AM
>>>>>>>>>>>>>>> Subject: [Gluster-users] self-heal not working
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have a replica 2 with arbiter GlusterFS 3.8.11 cluster and there is currently one file listed to be healed, as you can see below, but it never gets healed by the self-heal daemon:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Brick node1.domain.tld:/data/myvolume/brick
>>>>>>>>>>>>>>> /data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
>>>>>>>>>>>>>>> Status: Connected
>>>>>>>>>>>>>>> Number of entries: 1
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Brick node2.domain.tld:/data/myvolume/brick
>>>>>>>>>>>>>>> /data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
>>>>>>>>>>>>>>> Status: Connected
>>>>>>>>>>>>>>> Number of entries: 1
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Brick node3.domain.tld:/srv/glusterfs/myvolume/brick
>>>>>>>>>>>>>>> /data/appdata_ocpom4nckwru/preview/1344699/64-64-crop.png
>>>>>>>>>>>>>>> Status: Connected
>>>>>>>>>>>>>>> Number of entries: 1
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> As once recommended on this mailing list, I mounted that glusterfs volume temporarily through fuse/glusterfs and ran a "stat" on the file listed above, but nothing happened.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The file itself is available on all 3 nodes/bricks, but on the last node it has a different date. By the way, this file is 0 kBytes big. Is that maybe the reason why the self-heal does not work?
>>>>>>>>>>>>>> Is the file actually 0 bytes or is it just 0 bytes on the arbiter (0 bytes are expected on the arbiter, it just stores metadata)? Can you send us the output from stat on all 3 nodes:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> $ stat <file on back end brick>
>>>>>>>>>>>>>> $ getfattr -d -m - <file on back end brick>
>>>>>>>>>>>>>> $ stat <file from gluster mount>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Let's see what things look like on the back end, it should tell us why healing is failing.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> -b
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> And how can I now make this file heal?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>> Mabi