<div id="yiv4798252078"><div id="yMail_cursorElementTracker_1639690962367">You can't move '.glusterfs' as it is full of hard lunks (thus it must be on the same FS).<div id="yiv4798252078yMail_cursorElementTracker_1639690804011">You can mount bia noatime and ensure that inode size is at least 512 bytes.</div><div id="yiv4798252078yMail_cursorElementTracker_1639690832497"><br clear="none"></div><div id="yiv4798252078yMail_cursorElementTracker_1639690832695">Also, you can use those NVMEs as a caching layer, so things will be faster.</div><div id="yiv4798252078yMail_cursorElementTracker_1639690860729"><br clear="none"></div><div id="yiv4798252078yMail_cursorElementTracker_1639690860943">Usually I recommend profiling your workload and then work towards optimizations.</div><div id="yiv4798252078yMail_cursorElementTracker_1639690886020"><br clear="none"></div><div id="yiv4798252078yMail_cursorElementTracker_1639690886220">GlusterFS suffers from negative searches (searching for file that doesn't exist) and lattency. The more nodes you have  -  the better is the situation.</div><div id="yiv4798252078yMail_cursorElementTracker_1639690941790"><br clear="none"></div><div id="yMail_cursorElementTracker_1639690968941"><br></div><div id="yMail_cursorElementTracker_1639690969124">Consider also splitting the workloads into 2 (if that is feasible) so you separate the small I/O from large I/O as the volume tunables for both workloads are self-contradicting.</div><div id="yMail_cursorElementTracker_1639691029747"><br></div><div id="yMail_cursorElementTracker_1639691030003">Best Regards,</div><div id="yMail_cursorElementTracker_1639691033539">Strahil Nikolov</div><div id="yiv4798252078yMail_cursorElementTracker_1639690942024"><br clear="none"></div><div id="yiv4798252078yMail_cursorElementTracker_1639690858569"> <br clear="none"> <blockquote style="margin:0 0 20px 0;"> <div style="font-family:Roboto, sans-serif;color:#6D00F6;"> <div>On Wed, Dec 15, 2021 at 21:06, Arman Khalatyan</div><div id="yiv4798252078yqtfd37856" class="yiv4798252078yqt1090712965"><div><arm2arm@gmail.com> wrote:</div> </div></div><div id="yiv4798252078yqtfd08964" class="yiv4798252078yqt1090712965"> <div style="padding:10px 0 0 20px;margin:10px 0 0 0;border-left:1px solid #6D00F6;"> ________<br clear="none"><br clear="none"><br clear="none"><br clear="none">Community Meeting Calendar:<br clear="none"><br clear="none">Schedule -<br clear="none">Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br clear="none">Bridge: <a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://meet.google.com/cpu-eiue-hvk">https://meet.google.com/cpu-eiue-hvk</a><br clear="none">Gluster-users mailing list<br clear="none"><a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:Gluster-users@gluster.org" target="_blank" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br clear="none"><a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br clear="none"> </div> </div></blockquote></div></div></div>