<html>
<head>
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252">
</head>
<body>
<p><br>
</p>
<div class="moz-cite-prefix">On 29/01/20 9:56 pm, Cox, Jason wrote:<br>
</div>
<blockquote type="cite"
cite="mid:650846d7a1c4461ba690479ac1c144f7@MLBXCH14.cs.myharris.net">
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252">
<meta name="Generator" content="Microsoft Word 15 (filtered
medium)">
<style><!--
/* Font Definitions */
@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0in;
        margin-bottom:.0001pt;
        font-size:11.0pt;
        font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:#0563C1;
        text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
        {mso-style-priority:99;
        color:#954F72;
        text-decoration:underline;}
p.msonormal0, li.msonormal0, div.msonormal0
        {mso-style-name:msonormal;
        mso-margin-top-alt:auto;
        margin-right:0in;
        mso-margin-bottom-alt:auto;
        margin-left:0in;
        font-size:11.0pt;
        font-family:"Calibri",sans-serif;}
span.EmailStyle18
        {mso-style-type:personal;
        font-family:"Calibri",sans-serif;
        color:windowtext;}
span.EmailStyle19
        {mso-style-type:personal-reply;
        font-family:"Calibri",sans-serif;
        color:windowtext;}
.MsoChpDefault
        {mso-style-type:export-only;
        font-size:10.0pt;}
@page WordSection1
        {size:8.5in 11.0in;
        margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
        {page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
<div class="WordSection1">
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">I have glusterfs (v6.6) deployed with 3-way
replication used by ovirt (v4.3).<o:p></o:p></p>
<p class="MsoNormal">I recently updated 1 of the nodes (now at
gluster v6.7) and rebooted. When it came back online,
glusterfs reported there were entries to be healed under the 2
nodes that had stayed online.
<o:p></o:p></p>
<p class="MsoNormal">After 2+ days, the 2 nodes still show
entries that need healing, so I’m trying to determine what the
issue is.
<o:p></o:p></p>
<p class="MsoNormal">The files shown in the heal info output are
small, so healing should not take long. Also, ‘gluster v heal
&lt;vol&gt;’ and ‘gluster v heal &lt;vol&gt; full’ both report
success, but the entries persist.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">So first off, I’m a little confused by what
gluster volume heal <vol> info is reporting.<o:p></o:p></p>
<p class="MsoNormal">The following is what I see from heal info:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"># gluster v heal engine info<o:p></o:p></p>
<p class="MsoNormal">Brick repo0:/gluster_bricks/engine/engine<o:p></o:p></p>
<p class="MsoNormal">/372501f5-062c-4790-afdb-dd7e761828ac/images/968daf61-6858-454a-9ed4-3d3db2ae1805/4317dd0d-fd35-4176-9353-7ff69e3a8dc3.meta
<o:p></o:p></p>
<p class="MsoNormal">/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta
<o:p></o:p></p>
<p class="MsoNormal">Status: Connected<o:p></o:p></p>
<p class="MsoNormal">Number of entries: 2<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Brick repo1:/gluster_bricks/engine/engine<o:p></o:p></p>
<p class="MsoNormal">/372501f5-062c-4790-afdb-dd7e761828ac/images/968daf61-6858-454a-9ed4-3d3db2ae1805/4317dd0d-fd35-4176-9353-7ff69e3a8dc3.meta
<o:p></o:p></p>
<p class="MsoNormal">/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta
<o:p></o:p></p>
<p class="MsoNormal">Status: Connected<o:p></o:p></p>
<p class="MsoNormal">Number of entries: 2<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Brick repo2:/gluster_bricks/engine/engine<o:p></o:p></p>
<p class="MsoNormal">Status: Connected<o:p></o:p></p>
<p class="MsoNormal">Number of entries: 0<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Repo0 and repo1 were not rebooted, but
repo2 was. <o:p></o:p></p>
<p class="MsoNormal">Since repo2 went offline, I would expect it
to have entries that need healing when it came back online, but
that is not what the heal info output shows, so I’m thinking
heal info may not be reporting what I think it is reporting.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">*When gluster volume heal <vol> info
reports entries as above, what is it saying?</p>
</div>
</blockquote>
<p>In heal info output, it is usually the nodes that stayed up that
list the files needing heal. So the way to interpret it is:
while repo2 was down, repo0 and repo1 witnessed modifications to
these files and recorded them as needing heal on repo2, and that
recorded list is what the CLI displays.</p>
<blockquote type="cite"
cite="mid:650846d7a1c4461ba690479ac1c144f7@MLBXCH14.cs.myharris.net">
<div class="WordSection1">
<p class="MsoNormal"><o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">From the above output, I was reading it as:
repo0 has 2 entries that need to be healed from the other
bricks, and repo1 has 2 entries that need healing from the
other bricks. However, that doesn’t make sense, since repo2 was
the one that was rebooted, and a ‘stat’ on the files in the
bricks shows repo2 has the older version (checksums also show
repo0 and repo1 match). Trying to access the files through the
FUSE mount on any node gives input/output errors.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Getfattr output:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">repo0 glusterfs]# getfattr -d -m. -e hex
/gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta<o:p></o:p></p>
<p class="MsoNormal">getfattr: Removing leading '/' from
absolute path names<o:p></o:p></p>
<p class="MsoNormal"># file:
gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta<o:p></o:p></p>
<p class="MsoNormal">security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000<o:p></o:p></p>
<p class="MsoNormal">trusted.afr.dirty=0x000000000000000000000000<o:p></o:p></p>
<p class="MsoNormal">trusted.afr.engine-client-2=0x000000020000000200000000<o:p></o:p></p>
<p class="MsoNormal">trusted.bit-rot.signature=0x0102000000000000009338ff61a57fcb452b92ae816b8e5ff672be6d340e7da0a0dcfa34e26b26933b<o:p></o:p></p>
<p class="MsoNormal">trusted.bit-rot.version=0x02000000000000005e09d54f000ef84f<o:p></o:p></p>
<p class="MsoNormal">trusted.gfid=0xb85edc187d594872a594c25419154d05<o:p></o:p></p>
<p class="MsoNormal">trusted.gfid2path.ff2d749198341aff=0x32393564303861372d386437352d343638392d393239332d3339336434346362656233342f63656234323734322d656161612d343836372d616135342d6461353235363239616165342e6d657461<o:p></o:p></p>
<p class="MsoNormal">trusted.glusterfs.mdata=0x010000000000000000000000005e2f88ce000000002d1fe613000000005e2f88ce000000002d10ae53000000005e2f88ce000000002d067b3b<o:p></o:p></p>
<p class="MsoNormal">trusted.glusterfs.shard.block-size=0x0000000004000000<o:p></o:p></p>
<p class="MsoNormal">trusted.glusterfs.shard.file-size=0x00000000000001ad000000000000000000000000000000010000000000000000<o:p></o:p></p>
<p class="MsoNormal">trusted.pgfid.295d08a7-8d75-4689-9293-393d44cbeb34=0x00000001<o:p></o:p></p>
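<p>As a side note, the pending-heal accounting in the dump above can be decoded by hand: an AFR <tt>trusted.afr.*-client-N</tt> xattr packs three big-endian 32-bit counters (data, metadata, and entry operations pending against brick N). A minimal sketch of the decode, using the value repo0 and repo1 both hold for engine-client-2 (the repo2 brick):</p>

```python
import struct

def decode_afr_pending(hex_value):
    """Split a trusted.afr.* value into its (data, metadata, entry) pending counters."""
    raw = bytes.fromhex(hex_value.removeprefix("0x"))
    return struct.unpack(">III", raw)  # three big-endian 32-bit counts

# Value repo0 and repo1 both hold for engine-client-2 (the repo2 brick):
print(decode_afr_pending("0x000000020000000200000000"))  # (2, 2, 0)
```

<p>Read as: repo0 and repo1 each blame repo2 for 2 pending data and 2 pending metadata operations, which is consistent with the two up bricks (and not repo2) being the ones that list entries in heal info.</p>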
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">repo1 glusterfs]# getfattr -d -m. -e hex
/gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta<o:p></o:p></p>
<p class="MsoNormal">getfattr: Removing leading '/' from
absolute path names<o:p></o:p></p>
<p class="MsoNormal"># file:
gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta<o:p></o:p></p>
<p class="MsoNormal">security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000<o:p></o:p></p>
<p class="MsoNormal">trusted.afr.dirty=0x000000000000000000000000<o:p></o:p></p>
<p class="MsoNormal">trusted.afr.engine-client-2=0x000000020000000200000000<o:p></o:p></p>
<p class="MsoNormal">trusted.bit-rot.signature=0x0102000000000000009338ff61a57fcb452b92ae816b8e5ff672be6d340e7da0a0dcfa34e26b26933b<o:p></o:p></p>
<p class="MsoNormal">trusted.bit-rot.version=0x02000000000000005e09db580000709b<o:p></o:p></p>
<p class="MsoNormal">trusted.gfid=0xb85edc187d594872a594c25419154d05<o:p></o:p></p>
<p class="MsoNormal">trusted.gfid2path.ff2d749198341aff=0x32393564303861372d386437352d343638392d393239332d3339336434346362656233342f63656234323734322d656161612d343836372d616135342d6461353235363239616165342e6d657461<o:p></o:p></p>
<p class="MsoNormal">trusted.glusterfs.mdata=0x010000000000000000000000005e2f88ce000000002d1fe613000000005e2f88ce000000002d10ae53000000005e2f88ce000000002d067b3b<o:p></o:p></p>
<p class="MsoNormal">trusted.glusterfs.shard.block-size=0x0000000004000000<o:p></o:p></p>
<p class="MsoNormal">trusted.glusterfs.shard.file-size=0x00000000000001ad000000000000000000000000000000010000000000000000<o:p></o:p></p>
<p class="MsoNormal">trusted.pgfid.295d08a7-8d75-4689-9293-393d44cbeb34=0x00000001<o:p></o:p></p>
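<p>The <tt>trusted.gfid2path.*</tt> value, identical on all three bricks, is simply a hex-encoded "&lt;parent-gfid&gt;/&lt;basename&gt;" string, so it can be decoded to confirm which directory entry is involved. A small sketch:</p>

```python
def decode_gfid2path(hex_value):
    """trusted.gfid2path.* stores '<parent-gfid>/<basename>' as a plain string."""
    return bytes.fromhex(hex_value.removeprefix("0x")).decode("utf-8")

value = "0x32393564303861372d386437352d343638392d393239332d3339336434346362656233342f63656234323734322d656161612d343836372d616135342d6461353235363239616165342e6d657461"
print(decode_gfid2path(value))
# 295d08a7-8d75-4689-9293-393d44cbeb34/ceb42742-eaaa-4867-aa54-da525629aae4.meta
```

<p>That parent gfid and basename are the same pair that shows up in the log messages quoted further down.</p>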
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">repo2 glusterfs]# getfattr -d -m. -e hex
/gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta<o:p></o:p></p>
<p class="MsoNormal">getfattr: Removing leading '/' from
absolute path names<o:p></o:p></p>
<p class="MsoNormal"># file:
gluster_bricks/engine/engine/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta<o:p></o:p></p>
<p class="MsoNormal">security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000<o:p></o:p></p>
<p class="MsoNormal">trusted.afr.dirty=0x000000000000000000000000<o:p></o:p></p>
<p class="MsoNormal">trusted.bit-rot.signature=0x010200000000000000413e794bfbaf54b98bc00df95ce540fb6affe56ab5f5ddbb1fdb9eec096e0232<o:p></o:p></p>
<p class="MsoNormal">trusted.bit-rot.version=0x02000000000000005e09d553000151af<o:p></o:p></p>
<p class="MsoNormal">trusted.gfid=0xd36b1a8f63bc4a4bbcd0433882866733<o:p></o:p></p>
<p class="MsoNormal">trusted.gfid2path.ff2d749198341aff=0x32393564303861372d386437352d343638392d393239332d3339336434346362656233342f63656234323734322d656161612d343836372d616135342d6461353235363239616165342e6d657461<o:p></o:p></p>
<p class="MsoNormal">trusted.glusterfs.mdata=0x010000000000000000000000005e1df4a80000000020b618bf000000005e1df4a80000000020a1633c000000005e1df4a80000000020950ed4<o:p></o:p></p>
<p class="MsoNormal">trusted.glusterfs.shard.block-size=0x0000000004000000<o:p></o:p></p>
<p class="MsoNormal">trusted.glusterfs.shard.file-size=0x00000000000001ad000000000000000000000000000000010000000000000000<o:p></o:p></p>
<p class="MsoNormal">trusted.pgfid.295d08a7-8d75-4689-9293-393d44cbeb34=0x00000001<o:p></o:p></p>
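<p>One notable difference in the three dumps: <tt>trusted.gfid</tt> differs on repo2. A gfid is a UUID stored as 16 raw bytes, so it can be rendered in the same form the logs use:</p>

```python
import uuid

def gfid_to_uuid(hex_value):
    """Render a trusted.gfid xattr value (16 raw bytes) as a canonical GFID string."""
    return str(uuid.UUID(bytes=bytes.fromhex(hex_value.removeprefix("0x"))))

print(gfid_to_uuid("0xb85edc187d594872a594c25419154d05"))  # repo0 / repo1
print(gfid_to_uuid("0xd36b1a8f63bc4a4bbcd0433882866733"))  # repo2
```

<p>These are exactly the two GFIDs named in the "Gfid mismatch detected" log lines quoted below: the same path resolves to a different file identity on repo2 than on the other two bricks.</p>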
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Searching the gluster logs for
ceb42742-eaaa-4867-aa54-da525629aae4.meta, I see:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">repo0:<o:p></o:p></p>
<p class="MsoNormal">./rhev-data-center-mnt-glusterSD-repo0:_engine.log:The
message "E [MSGID: 108008]
[afr-self-heal-common.c:384:afr_gfid_split_brain_source]
0-engine-replicate-0: Gfid mismatch detected for
<gfid:295d08a7-8d75-4689-9293-393d44cbeb34>/ceb42742-eaaa-4867-aa54-da525629aae4.meta>,
d36b1a8f-63bc-4a4b-bcd0-433882866733 on engine-client-2 and
b85edc18-7d59-4872-a594-c25419154d05 on engine-client-1."
repeated 5 times between [2020-01-28 23:01:22.912513] and
[2020-01-28 23:02:10.907716]<o:p></o:p></p>
<p class="MsoNormal">./rhev-data-center-mnt-glusterSD-repo0:_engine.log:[2020-01-28
23:24:12.808924] E [MSGID: 108008]
[afr-self-heal-common.c:384:afr_gfid_split_brain_source]
0-engine-replicate-0: Gfid mismatch detected for
<gfid:295d08a7-8d75-4689-9293-393d44cbeb34>/ceb42742-eaaa-4867-aa54-da525629aae4.meta>,
d36b1a8f-63bc-4a4b-bcd0-433882866733 on engine-client-2 and
b85edc18-7d59-4872-a594-c25419154d05 on engine-client-1.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">repo1:<o:p></o:p></p>
<p class="MsoNormal">nothing<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">repo2:<o:p></o:p></p>
<p class="MsoNormal">./rhev-data-center-mnt-glusterSD-repo0:_engine.log:The
message "E [MSGID: 108008]
[afr-self-heal-common.c:384:afr_gfid_split_brain_source]
0-engine-replicate-0: Gfid mismatch detected for
<gfid:295d08a7-8d75-4689-9293-393d44cbeb34>/ceb42742-eaaa-4867-aa54-da525629aae4.meta>,
d36b1a8f-63bc-4a4b-bcd0-433882866733 on engine-client-2 and
b85edc18-7d59-4872-a594-c25419154d05 on engine-client-1."
repeated 23 times between [2020-01-29 15:42:46.201849] and
[2020-01-29 15:44:36.873793]<o:p></o:p></p>
<p class="MsoNormal">./rhev-data-center-mnt-glusterSD-repo0:_engine.log:[2020-01-29
15:44:47.016466] E [MSGID: 108008]
[afr-self-heal-common.c:384:afr_gfid_split_brain_source]
0-engine-replicate-0: Gfid mismatch detected for
<gfid:295d08a7-8d75-4689-9293-393d44cbeb34>/ceb42742-eaaa-4867-aa54-da525629aae4.meta>,
d36b1a8f-63bc-4a4b-bcd0-433882866733 on engine-client-2 and
b85edc18-7d59-4872-a594-c25419154d05 on engine-client-1.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">So it looks like a split-brain issue
according to the log message.<o:p></o:p></p>
<p class="MsoNormal">However,<o:p></o:p></p>
<p class="MsoNormal"> *Why doesn’t heal info show a split-brain
condition?<o:p></o:p></p>
<p class="MsoNormal"> *Why do the logs for repo1 not have
anything concerning ceb42742-eaaa-4867-aa54-da525629aae4.meta?<o:p></o:p></p>
<p class="MsoNormal"> *If repo0 and repo1 match, why is there
a split-brain issue?<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
</blockquote>
<p>I think that, for some reason, the AFR xattrs on the parent dir
were not set, which is why the files are stuck in split-brain
(instead of being recreated on repo2 using the copies from repo0
or repo1). You can resolve it using the split-brain CLI, e.g.: <tt>`gluster
volume heal $volname split-brain source-brick
repo0:/gluster_bricks/engine/engine
/372501f5-062c-4790-afdb-dd7e761828ac/images/968daf61-6858-454a-9ed4-3d3db2ae1805/4317dd0d-fd35-4176-9353-7ff69e3a8dc3.meta`</tt>
<br>
</p>
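<p>For illustration, here is a small sketch that builds the resolution command for each of the two stuck entries, assuming repo0 is picked as the source brick (reasonable here, since the repo0 and repo1 checksums match); it only constructs the command strings, it does not run gluster:</p>

```python
# Sketch only: build the split-brain resolution command for each stuck
# entry, assuming repo0 is the chosen source brick. File paths are given
# relative to the volume root, separated from the brick by a space.
VOLUME = "engine"
SOURCE_BRICK = "repo0:/gluster_bricks/engine/engine"
ENTRIES = [
    "/372501f5-062c-4790-afdb-dd7e761828ac/images/968daf61-6858-454a-9ed4-3d3db2ae1805/4317dd0d-fd35-4176-9353-7ff69e3a8dc3.meta",
    "/372501f5-062c-4790-afdb-dd7e761828ac/images/4e3e8ca5-0edf-42ae-ac7b-e9a51ad85922/ceb42742-eaaa-4867-aa54-da525629aae4.meta",
]

def resolution_command(volume, source_brick, entry):
    # Syntax: gluster volume heal <vol> split-brain source-brick <brick> <file>
    return f"gluster volume heal {volume} split-brain source-brick {source_brick} {entry}"

for entry in ENTRIES:
    print(resolution_command(VOLUME, SOURCE_BRICK, entry))
```

<p>After running the real commands, re-check <tt>gluster volume heal engine info</tt> to confirm the entry counts drop to zero.</p>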
Thanks,<br>
Ravi<br>
<blockquote type="cite"
cite="mid:650846d7a1c4461ba690479ac1c144f7@MLBXCH14.cs.myharris.net">
<div class="WordSection1">
<p class="MsoNormal">‘gluster peer status’ on each node shows
connected to each of the other 2 nodes.<o:p></o:p></p>
<p class="MsoNormal">‘gluster volume heal engine info’ on each
node shows each brick is connected.<o:p></o:p></p>
<p class="MsoNormal">‘gluster volume status engine’ on each node
shows all 3 bricks online, all 3 self-heal daemons online, all
3 bitrot daemons online, and all 3 scrubber daemons online.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Thanks,<o:p></o:p></p>
<p class="MsoNormal">Jason<o:p></o:p></p>
<p class="MsoNormal" style="margin-left:.5in"><o:p> </o:p></p>
</div>
<span><span><br>
</span> </span><br>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<pre class="moz-quote-pre" wrap="">________
Community Meeting Calendar:
APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: <a class="moz-txt-link-freetext" href="https://bluejeans.com/441850968">https://bluejeans.com/441850968</a>
NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: <a class="moz-txt-link-freetext" href="https://bluejeans.com/441850968">https://bluejeans.com/441850968</a>
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a>
</pre>
</blockquote>
</body>
</html>