<div dir="ltr"><div><div><div>Have you verified that Gluster has marked the files as split-brain?<br><br></div>gluster volume heal &lt;vol&gt; info split-brain<br><br></div>If you&#39;re fairly confident about which files are correct, you can automate the split-brain healing procedure.<br><br></div>From the manual...<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">        volume heal &lt;VOLNAME&gt; split-brain bigger-file &lt;FILE&gt;<br>              Performs healing of &lt;FILE&gt; which is in split-brain by choosing the bigger file in the replica as source.<br><br>        volume heal &lt;VOLNAME&gt; split-brain source-brick &lt;HOSTNAME:BRICKNAME&gt;<br>              Selects &lt;HOSTNAME:BRICKNAME&gt; as the source for all the files that are in split-brain in that replica and heals them.<br><br>        volume heal &lt;VOLNAME&gt; split-brain source-brick &lt;HOSTNAME:BRICKNAME&gt; &lt;FILE&gt;<br>              Selects the split-brained &lt;FILE&gt; present in &lt;HOSTNAME:BRICKNAME&gt; as source and completes heal.<br></blockquote><div><br></div><div>D <br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 23 January 2017 at 16:28, James Wilkins <span dir="ltr">&lt;<a href="mailto:nebulai@gmail.com" target="_blank">nebulai@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hello,<div><br></div><div>I have a couple of gluster clusters - setup with distributed/replicated volumes that have starting incrementing the heal-count from statistics - and for some files returning input/output error when attempting to access said files from a fuse mount.</div><div><br></div><div>If i take one volume, from one cluster as an example:</div><div><br></div><div>gluster volume heal storage01 statistics info<br></div><div>&lt;snip&gt;</div><div><div>Brick storage02.&lt;redacted&gt;:/storage/<wbr>sdc/brick_storage01</div><div>Number of entries: 595</div></div><div>&lt;/snip&gt;</div><div><br></div><div>And then proceed to look at one of these files (have found 2 copies - one on each server / brick)</div><div><br></div><div>First brick:</div><div><br></div><div><div># getfattr -m . -d -e hex  /storage/sdc/brick_storage01/<wbr>projects/183-57c559ea4d60e-<wbr>canary-test--node02/<wbr>wordpress285-data/html/wp-<wbr>content/themes/twentyfourteen/<wbr>single.php</div><div>getfattr: Removing leading &#39;/&#39; from absolute path names</div><div># file: storage/sdc/brick_storage01/<wbr>projects/183-57c559ea4d60e-<wbr>canary-test--node02/<wbr>wordpress285-data/html/wp-<wbr>content/themes/twentyfourteen/<wbr>single.php</div><div>security.selinux=<wbr>0x73797374656d5f753a6f626a6563<wbr>745f723a676c7573746572645f6272<wbr>69636b5f743a733000</div><div>trusted.afr.dirty=<wbr>0x000000000000000000000000</div><div>trusted.afr.storage01-client-<wbr>0=0x000000020000000100000000</div><div>trusted.bit-rot.version=<wbr>0x02000000000000005874e2cd0000<wbr>459d</div><div>trusted.gfid=<wbr>0xda4253be1c2647b7b6ec5c045d61<wbr>d216</div><div>trusted.glusterfs.quota.<wbr>c9764826-596a-4886-9bc0-<wbr>60ee9b3fce44.contri.1=<wbr>0x0000000000000600000000000000<wbr>0001</div><div>trusted.pgfid.c9764826-596a-<wbr>4886-9bc0-60ee9b3fce44=<wbr>0x00000001</div></div><div><br></div><div>Second Brick:</div><div><br></div><div><div># getfattr -m . 
D

On 23 January 2017 at 16:28, James Wilkins <nebulai@gmail.com> wrote:
> Hello,
>
> I have a couple of Gluster clusters, set up with distributed/replicated
> volumes, that have started incrementing the heal count in their
> statistics, and for some files a FUSE mount returns an input/output
> error when I attempt to access them.
>
> If I take one volume from one cluster as an example:
>
> gluster volume heal storage01 statistics info
> <snip>
> Brick storage02.<redacted>:/storage/sdc/brick_storage01
> Number of entries: 595
> </snip>
>
> And then proceed to look at one of these files (I have found two
> copies, one on each server/brick):
>
> First brick:
>
> # getfattr -m . -d -e hex /storage/sdc/brick_storage01/projects/183-57c559ea4d60e-canary-test--node02/wordpress285-data/html/wp-content/themes/twentyfourteen/single.php
> getfattr: Removing leading '/' from absolute path names
> # file: storage/sdc/brick_storage01/projects/183-57c559ea4d60e-canary-test--node02/wordpress285-data/html/wp-content/themes/twentyfourteen/single.php
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x000000000000000000000000
> trusted.afr.storage01-client-0=0x000000020000000100000000
> trusted.bit-rot.version=0x02000000000000005874e2cd0000459d
> trusted.gfid=0xda4253be1c2647b7b6ec5c045d61d216
> trusted.glusterfs.quota.c9764826-596a-4886-9bc0-60ee9b3fce44.contri.1=0x00000000000006000000000000000001
> trusted.pgfid.c9764826-596a-4886-9bc0-60ee9b3fce44=0x00000001
>
> Second brick:
>
> # getfattr -m . -d -e hex /storage/sdc/brick_storage01/projects/183-57c559ea4d60e-canary-test--node02/wordpress285-data/html/wp-content/themes/twentyfourteen/single.php
> getfattr: Removing leading '/' from absolute path names
> # file: storage/sdc/brick_storage01/projects/183-57c559ea4d60e-canary-test--node02/wordpress285-data/html/wp-content/themes/twentyfourteen/single.php
> security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
> trusted.afr.dirty=0x000000000000000000000000
> trusted.bit-rot.version=0x020000000000000057868423000d6332
> trusted.gfid=0x14f74b04679345289dbd3290a3665cbc
> trusted.glusterfs.quota.47e007ee-6f91-4187-81f8-90a393deba2b.contri.1=0x00000000000006000000000000000001
> trusted.pgfid.47e007ee-6f91-4187-81f8-90a393deba2b=0x00000001
>
> I can see that only the first brick has the appropriate
> trusted.afr.<client> attribute - in this case:
>
> trusted.afr.storage01-client-0=0x000000020000000100000000
>
> The files are the same size under stat; only the access/modify/change
> times differ.
>
> My first question: reading
> https://gluster.readthedocs.io/en/latest/Troubleshooting/split-brain/
> suggests that I should have this field on both copies of the file - or
> am I mis-reading?
>
> Secondly, am I correct that each one of these entries will require
> manual fixing? (I have approximately 6K files/directories in this
> state across two clusters, which looks like an awful lot of manual
> fixing.)
>
> I've checked gluster volume info <vol>, and all the appropriate
> services and the self-heal daemon are running. We've even tried a heal
> and a full heal, and iterated over parts of the filesystem in question
> with find/stat/md5sum.
>
> Any input appreciated.
>
> Cheers,
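PS: on your trusted.afr question - as I understand it, each
trusted.afr.<volname>-client-<N> value is three big-endian 32-bit
counters: pending data, metadata and entry operations held against
brick N. An all-zero (or, on some versions, absent) value just means
nothing is pending, so the missing xattr on the second brick is not
conclusive by itself. Assuming that layout, the value from your first
brick decodes like this:

    # Decode trusted.afr.storage01-client-0 (layout as assumed above).
    v=000000020000000100000000
    echo "pending data ops:     $((16#${v:0:8}))"   # -> 2
    echo "pending metadata ops: $((16#${v:8:8}))"   # -> 1
    echo "pending entry ops:    $((16#${v:16:8}))"  # -> 0

Also note that your two bricks report different trusted.gfid values for
the same path, which - if I'm reading the output right - points at a
gfid split-brain rather than a plain data split-brain.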