<p dir="ltr">Hi Patrick,</p>
<p dir="ltr">I guess you can collect some data via the 'gluster profile' command.<br>
At least it should show any issues from performance point of view.<br>
gluster volume profile volume start;<br>
Do an 'ls'<br>
gluster volume profile volume info<br>
gluster volume profile volume stop</p>
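<p dir="ltr">For example, with your volume (gvAA01, judging by the log excerpts below) that would be roughly:<br>
gluster volume profile gvAA01 start<br>
ls /path/to/client/mount   (or any other representative workload)<br>
gluster volume profile gvAA01 info<br>
gluster volume profile gvAA01 stop<br>
The 'info' output lists per-brick latency and call counts for each FOP, which should point at the slow bricks or operations.</p>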
<p dir="ltr">Also, can you define top 3 errors seen in the logs. If you manage to fix them (with the help of the community) one by one - you might restore your full functionality.</p>
<p dir="ltr">By the way, do you have the option to archive the data and thus reduce the ammount stored - which obviously will increase ZFS performance.</p>
<p dir="ltr">Best Regards,<br>
Strahil Nikolov<br></p>
<p dir="ltr">On Apr 21, 2019 10:50, Patrick Rennie <patrickmrennie@gmail.com> wrote:<br>
</p>
<blockquote><p dir="ltr">><br>
> Hi Darrell, <br>
><br>
> Thanks again for your advice. I've left it for a while, but unfortunately it's still just as slow and is causing more problems for our operations now. I will need to take some steps to at least bring performance back to normal while continuing to investigate the issue longer term. I can definitely see one node with heavier CPU than the other, almost double, which I am OK with, but I think the heal process is going to take forever. Checking "gluster volume heal info" shows thousands and thousands of files which may need healing - I have no idea how many in total, as the command is still running after hours - so I am not sure what has gone so wrong to cause this. <br>
><br>
> I've checked cluster.op-version and cluster.max-op-version and it looks like I'm on the latest version there. <br>
><br>
> I have no idea how long the healing is going to take on this cluster, we have around 560TB of data on here, but I don't think I can wait that long to try and restore performance to normal. <br>
><br>
> Can anyone think of anything else I can try in the meantime to work out what's causing the extreme latency? <br>
><br>
> I've been going through the gluster client logs of some of our VMs, and on some of our FTP servers I found the following in the gluster mount log, but I am not seeing it on any of our other servers, just our FTP servers. <br>
><br>
> [2019-04-21 07:16:19.925388] E [MSGID: 101046] [dht-common.c:1904:dht_revalidate_cbk] 0-gvAA01-dht: dict is null<br>
> [2019-04-21 07:19:43.413834] W [MSGID: 114031] [client-rpc-fops.c:2203:client3_3_setattr_cbk] 0-gvAA01-client-19: remote operation failed [No such file or directory]<br>
> [2019-04-21 07:19:43.414153] W [MSGID: 114031] [client-rpc-fops.c:2203:client3_3_setattr_cbk] 0-gvAA01-client-20: remote operation failed [No such file or directory]<br>
> [2019-04-21 07:23:33.154717] E [MSGID: 101046] [dht-common.c:1904:dht_revalidate_cbk] 0-gvAA01-dht: dict is null<br>
> [2019-04-21 07:33:24.943913] E [MSGID: 101046] [dht-common.c:1904:dht_revalidate_cbk] 0-gvAA01-dht: dict is null<br>
><br>
> Any ideas what this could mean? I am basically just grasping at straws here.<br>
><br>
> I am going to hold off on the version upgrade until I know there are no files which need healing, which could be a while. From some reading I've done, there shouldn't be any issues with this, as both nodes are on v3.12.x. <br>
><br>
> I've freed up a small amount of space, but I still need to work on this further. <br>
><br>
> I've read of a command "find .glusterfs -type f -links -2 -exec rm {} \;" which could be run on each brick and would potentially clean up any files which were deleted straight from the bricks but not via the client. From what I've been told about the history of this cluster, I have a feeling this could help me free up about 5-10TB per brick. Can anyone confirm if this is actually safe to run? <br>
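><br>
> (For illustration, a dry-run variant of the same idea - it only lists the orphaned files and totals their size, without deleting anything; /brick1/gvAA01 below is just a placeholder for the real brick path:)<br>
> find /brick1/gvAA01/.glusterfs -type f -links -2 -print0 | du -ch --files0-from=- | tail -n 1<br>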
><br>
> At this stage, I'm open to any suggestions as to how to proceed, thanks again for any advice. <br>
><br>
> Cheers, <br>
><br>
> - Patrick<br>
><br>
> On Sun, Apr 21, 2019 at 1:22 AM Darrell Budic <<a href="mailto:budic@onholyground.com">budic@onholyground.com</a>> wrote:<br>
</p>
</blockquote>
<blockquote><blockquote><p dir="ltr">>><br>
>> Patrick,<br>
>><br>
>> Sounds like progress. Be aware that gluster is expected to max out the CPUs on at least one of your servers while healing. This is normal and won’t adversely affect overall performance (any more than having bricks in need of healing, at any rate) unless you’re overdoing it. shd threads <= 4 should not do that on your hardware. Other tunings may have also increased overall performance, so you may see higher CPU than previously anyway. I’d recommend upping those thread counts and letting it heal as fast as possible, especially if these are dedicated Gluster storage servers (i.e. not also running VMs, etc.). You should see “normal” CPU use once heals are completed. I see ~15-30% overall normally, 95-98% while healing (x my 20 cores). It’s also likely to be different between your servers: in a pure replica, one tends to max and one tends to be a little higher; in a distributed-replica, I’d expect more than one to run harder while healing.</p>
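<p dir="ltr">>><br>
>> (For reference, the self-heal thread count mentioned above is normally raised with a volume option along these lines - gvAA01 assumed as the volume name, and the exact option name is worth double-checking on your 3.12 build:)<br>
>> gluster volume set gvAA01 cluster.shd-max-threads 4</p>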
</blockquote>
</blockquote>