<html><head></head><body><div>Hi all,</div><div><br></div><div>Some things I have found about the space issue:</div><div><br></div><ul><li>shared-brick-count in /var/lib/glusterd/vols/data/* is higher than 1 on some local bricks, even though they are actually on separate file systems</li><li>I have duplicate brick-fsid numbers in /var/lib/glusterd/vols/data/bricks/*</li><li>I have restarted glusterd and still have the duplicate brick-fsids</li></ul><div><br></div><div>So I am wondering where the duplicate FSIDs come from, and how to (forcefully) resolve them. Can I safely alter them in /var/lib/glusterd/vols/data/bricks/* and restart glusterd?</div><div><br></div><div>I *may* at some point have accidentally replaced a brick to a wrong location (either the parent file system or another brick's path), but I have since corrected this by replacing it again to the correct location. Each time I used "gluster volume replace-brick".</div><div><br></div><div>I'm running up-to-date Arch Linux, by the way.</div><div>I have attached what I believe is all the relevant information to diagnose the issue.</div><div>Please let me know if I can provide more information to get this resolved.</div><div><br></div><div>Thanks!</div><div><br></div><div><span><pre>-- <br></pre><div>groet / cheers,</div><div>Patrick Dijkgraaf</div></span></div><div><br></div><div>-----Original Message-----</div><div><b>From</b>: Patrick Dijkgraaf <<a href="mailto:bolderbast@duckstad.net">bolderbast@duckstad.net</a>></div><div><b>To</b>: <a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a></div><div><b>Subject</b>: [Gluster-users] Two issues with my Gluster volume</div><div><b>Date</b>: Sat, 16 Apr 2022 14:03:16 +0200</div><div><b>Mailer</b>: Evolution 3.44.0</div><div><br></div><div>Hi all, I hope this message finds you well.</div><div><br></div><div>I've been running a Gluster volume (32 bricks in distributed replicated mode) on my 2 
home servers for about 1.5 years now. I'm generally very happy with it!</div><div><br></div><div>Disks are distributed across 4 enclosures (2 enclosures per server). At one point, one of these enclosures failed (8 bricks down on 1 server), but due to the awesomeness of Gluster (and my lack of monitoring :-( ) I only noticed this after about 6 weeks... This left me with A LOT of pending heals, about 40k per brick if I remember correctly.</div><div><br></div><div>Well, I brought the failed bricks back online and let Gluster heal. And it did, mostly... It left about 1 to 4 pending heals on multiple bricks that won't heal, no matter what I've tried. I let them be until I had time to figure out what to do with them.</div><div><br></div><div>Also, because some disks were about to fail, I started replacing bricks. Taking advantage of this, I replaced them with larger disks (4TB -> 8TB). Healing took care of copying all data to the new bricks and finished successfully. However, for some reason I do not see an increase in total space on the systems where I have mounted the Gluster volume.</div><div><br></div><div>So in short, I have 2 issues:</div><div><br></div><ol><li>Some pending heals that I cannot get rid of</li><li>Gluster's total space being reported incorrectly</li></ol><div><br></div><div>Ideally, I'd like to address issue 2 first (it seems the easier fix), and then focus on issue 1. Would that be OK?</div><div><br></div><div>I hope you can help me with these 2 issues. 
Thanks in advance!</div><div><br></div><div>Added as attachments:</div><ul><li>Commands used to replace the brick</li><li>"<span style="font-family: monospace;">gluster volume status data detail</span>" output</li><li>"<span style="font-family: monospace;">grep -n "shared-brick-count" /var/lib/glusterd/vols/data/*</span>" output, as I read somewhere that this may be relevant...</li></ul><div><br></div><div><span><pre><div>________<br></div>
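</pre></span></div>
<div><br></div>
<div>For reference, a minimal sketch of how duplicate brick-fsid values can be spotted. It uses sample files in a temporary directory instead of the real /var/lib/glusterd/vols/data/bricks/* paths, and the fsid values are made up for illustration:</div>

```shell
#!/bin/sh
# Sketch: find brick-fsid values shared by more than one brick file.
# The real files live under /var/lib/glusterd/vols/VOLNAME/bricks/;
# sample files in a temp dir are used here so the pipeline runs anywhere.
tmp=$(mktemp -d)
printf 'brick-fsid=64770\n' > "$tmp/server1-brick1"
printf 'brick-fsid=64770\n' > "$tmp/server1-brick2"   # accidental duplicate
printf 'brick-fsid=64771\n' > "$tmp/server1-brick3"
# uniq -d prints only repeated lines, i.e. every fsid that
# two or more brick files share
grep -h 'brick-fsid' "$tmp"/* | sort | uniq -d
rm -rf "$tmp"
```

<div>Any fsid printed appears in more than one brick file; those are presumably the bricks glusterd counts as sharing a file system, which is what pushes shared-brick-count above 1 and makes that file system's free space be counted only once.</div>
<div><br></div>
<div><span><pre>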
<div><br></div>
<div><br></div>
<div><br></div>
<div>Community Meeting Calendar:<br></div>
<div><br></div>
<div>Schedule -<br></div>
<div>Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br></div>
<div>Bridge: <a href="https://meet.google.com/cpu-eiue-hvk">https://meet.google.com/cpu-eiue-hvk</a><br></div>
<div>Gluster-users mailing list<br></div>
<div><a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br></div>
<div><a href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br></div>
</pre></span></div></body></html>