[Gluster-users] Unusable volume after brick re-attach

Jon sphinx_man_60 at yahoo.com
Tue Aug 25 15:21:47 UTC 2015


Hello all, I have an 8-node, replicated (4 x 2) volume with one node currently missing. The node fell out of the cluster a few weeks ago, and since then I have not been able to bring it back online without killing performance on the volume. After my initial attempts to bring the node back online failed, I disabled the self-heal daemon, following a recommendation I found in the mailing list archives. I then attempted to rsync the two bricks; they are now more than 95% in sync, but the system still struggles. Lastly, I tried moving the brick data to a side location on the server to emulate a brick replace. After setting the extended attributes and restarting glusterd, the brick recreated the directory structure and appeared OK at first, but once customer requests started hitting the system, response times slowed to a crawl. Navigating the directories via a FUSE mount was not even usable.
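
For reference, the self-heal / extended attribute / restart steps were roughly along these lines. The volume name "myvol" and brick path "/bricks/brick1" are placeholders, and this is the generic procedure rather than my exact commands:

    # keep the self-heal daemon off while the brick is repopulated
    gluster volume set myvol cluster.self-heal-daemon off

    # read the volume-id xattr from a healthy brick on a good node...
    getfattr -n trusted.glusterfs.volume-id -e hex /bricks/brick1

    # ...and set the same value on the emptied brick path on the bad node
    setfattr -n trusted.glusterfs.volume-id -v 0x<volume-id-hex> /bricks/brick1

    # restart glusterd so the brick process comes back up
    systemctl restart glusterd

    # once the brick is online, trigger and monitor healing
    gluster volume heal myvol full
    gluster volume heal myvol info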
Does anyone have any other recommendations for getting this node back online?
Other specs: Gluster version 3.5.2, CentOS 7.1, XFS for the bricks, 1 brick per node, 20 TB per brick.
Thanks in advance, Jon