<div dir="ltr">Hi Strahil, <div><br></div><div>Thanks again for your help, I checked most of my clients are on 3.13.2 which I think is the default packaged with Ubuntu. </div><div>I upgraded a test VM to v5.6 and tested again and there is no difference, performance accessing the cluster is the same. </div><div><br></div><div>Cheers,</div><div>-Patrick</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, Apr 21, 2019 at 11:39 PM Strahil <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><p dir="ltr">This looks more like FUSE problem.<br>
Are the clients on v3.12.xx?<br>
Can you set up a VM for a test and try FUSE mounts using v5.6 and v6.x?</p>
<p dir="ltr">Best Regards,<br>
Strahil Nikolov</p>
<div class="gmail-m_-9135797282107653697quote">On Apr 21, 2019 17:24, Patrick Rennie <<a href="mailto:patrickmrennie@gmail.com" target="_blank">patrickmrennie@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail-m_-9135797282107653697quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>Hi Strahil, </div><div><br></div><div>Thank you for your reply and your suggestions. I'm not sure which logs would be most relevant to be checking to diagnose this issue, we have the brick logs, the cluster mount logs, the shd logs or something else? I have posted a few that I have seen repeated a few times already. I will continue to post anything further that I see. </div><div>I am working on migrating data to some new storage, so this will slowly free up space, although this is a production cluster and new data is being uploaded every day, sometimes faster than I can migrate it off. I have several other similar clusters and none of them have the same problem, one the others is actually at 98-99% right now (big problem, I know) but still performs perfectly fine compared to this cluster, I am not sure low space is the root cause here. 
</div><div><br></div><div>I currently have 13 VMs accessing this cluster. I have checked each one, and all of them use one of the two fstab entries below to mount the cluster:</div><div><br></div><div>HOSTNAME:/gvAA01 /mountpoint glusterfs defaults,_netdev,rw,log-level=WARNING,direct-io-mode=disable,use-readdirp=no 0 0</div><div>HOSTNAME:/gvAA01 /mountpoint glusterfs defaults,_netdev,rw,log-level=WARNING,direct-io-mode=disable</div><div><br></div><div>I also have a few other VMs which use NFS to access the cluster, and these machines appear to be significantly quicker. Initially I get a similar delay with NFS, but if I cancel the first "ls" and try it again I get &lt; 1 sec lookups. The same lookup can take over 10 minutes through the FUSE/gluster client, and the trick of cancelling and trying again doesn't work there. Sometimes the NFS queries have no delay at all, so this is a bit strange to me. </div><div>HOSTNAME:/gvAA01 /mountpoint/ nfs defaults,_netdev,vers=3,async,noatime 0 0</div><div><br></div><div>Example:</div><div>user@VM:~$ time ls /cluster/folder</div><div>^C</div><div><br></div><div>real 9m49.383s</div><div>user 0m0.001s</div><div>sys 0m0.010s</div><div><br></div><div>user@VM:~$ time ls /cluster/folder</div><div>&lt;results&gt;</div><div><br></div><div>real 0m0.069s</div><div>user 0m0.001s</div><div>sys 0m0.007s</div><div><br></div><div>---</div><div><br></div><div>I have checked the profiling as you suggested: I let it run for around a minute, then cancelled the ls and saved the profile info. 
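As a side note for anyone digesting such a dump: the saved profile info can run to hundreds of lines, and a quick way to see which FOPs dominate is to rank them by average latency. The sketch below is not from the original thread; the column layout and the sample rows are assumptions modelled on typical `gluster volume profile &lt;vol&gt; info` output, so the regex may need adjusting for your actual dump.

```python
import re

# Sample rows shaped like a `gluster volume profile <vol> info` table
# (illustrative values only, NOT taken from the real cluster's dump).
SAMPLE = """\
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
 ---------   -----------   -----------   -----------   ------------        ----
      0.05     120.50 us      15.00 us     900.00 us          21345      LOOKUP
     45.10   98000.00 us    1200.00 us  900000.00 us           1200    READDIRP
     30.20   54000.00 us     800.00 us  400000.00 us           2100       FSTAT
"""

def top_fops(text, n=3):
    """Return (fop, avg_latency_us, calls) tuples sorted by avg latency, highest first."""
    rows = []
    for line in text.splitlines():
        m = re.match(
            r"\s*([\d.]+)\s+([\d.]+) us\s+([\d.]+) us\s+([\d.]+) us\s+(\d+)\s+(\S+)",
            line,
        )
        if m:
            rows.append((m.group(6), float(m.group(2)), int(m.group(5))))
    return sorted(rows, key=lambda r: r[1], reverse=True)[:n]

for fop, avg, calls in top_fops(SAMPLE):
    print(f"{fop:10s} avg={avg:>12.2f} us  calls={calls}")
```

Running this against a real `profile.txt` (read the file instead of `SAMPLE`) would surface whether, say, READDIRP or LOOKUP latency is what makes the FUSE `ls` so slow.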
</div><div><br></div><div>root@HOSTNAME:/var/log/glusterfs# gluster volume profile gvAA01 start</div><div>Starting volume profile on gvAA01 has been successful</div><div>root@HOSTNAME:/var/log/glusterfs# time ls /cluster/folder</div><div>^C</div><div><br></div><div>real 1m1.660s</div><div>user 0m0.000s</div><div>sys 0m0.002s</div><div><br></div><div>root@HOSTNAME:/var/log/glusterfs# gluster volume profile gvAA01 info >> ~/profile.txt</div><div>root@HOSTNAME:/var/log/glusterfs# gluster volume profile gvAA01 stop</div><div><br></div><div>I will attach the results to this email as it's over 1000 lines. Unfortunately, I'm not sure what I'm looking at, but hopefully somebody will be able to help me make sense of it and let me know if it highlights any specific issues. </div><div><br></div><div>Happy to try any further suggestions. Thank you,</div><div><br></div><div>-Patrick</div></div></div><br><div class="gmail-m_-9135797282107653697elided-text"><div dir="ltr">On Sun, Apr 21, 2019 at 7:55 PM Strahil <<a href="mailto:hunter86_bg@yahoo.com" target="_blank">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><p dir="ltr">By the way, can you provide the 'volume info' and the mount options on all clients?<br>
Maybe there is an option that uses a lot of resources due to some client's mount options.</p>
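An editorial aside on gathering that information: the options a client is actually running with (as opposed to what fstab requested) can be read from /proc/mounts on each VM. A minimal sketch, with a purely illustrative sample line rather than output from the real cluster:

```python
# Extract gluster FUSE mount options from a /proc/mounts snapshot.
# /proc/mounts uses the standard fstab-style columns:
#   device  mountpoint  fstype  options  dump  pass
# The sample below is hypothetical; on a client you would read
# open("/proc/mounts").read() instead.
SAMPLE_MOUNTS = """\
HOSTNAME:/gvAA01 /mountpoint fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
/dev/sda1 / ext4 rw,relatime,errors=remount-ro 0 0
"""

def gluster_mounts(text):
    """Return {mountpoint: [option, ...]} for fuse.glusterfs entries only."""
    result = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[2] == "fuse.glusterfs":
            result[fields[1]] = fields[3].split(",")
    return result

print(gluster_mounts(SAMPLE_MOUNTS))
```

Collecting this from all 13 clients makes it easy to diff their effective options and spot an outlier that might explain the load.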
<p dir="ltr">Best Regards,<br>
Strahil Nikolov</p>
<div>On Apr 21, 2019 10:55, Patrick Rennie <<a href="mailto:patrickmrennie@gmail.com" target="_blank">patrickmrennie@gmail.com</a>> wrote:<br><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">Just another small update: I'm continuing to watch my brick logs, and I just saw these errors come up in the recent events too. I am going to continue to post any errors I see in the hope of finding the right one to try and fix. <div>This is from the logs on brick1:</div></div></div></blockquote></div></blockquote></div></blockquote></div></blockquote></div>