[Gluster-users] glusterfs on zfs on rockylinux
Arman Khalatyan
arm2arm at gmail.com
Wed Dec 15 19:05:56 UTC 2021
Thank you Darrell, now I have clear steps for what to do. The data is very
valuable, so the setup will be either a 2x mirror + arbiter or 3 full replica nodes.
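For reference, a minimal sketch of how either layout could be created (the
volume name, hosts and brick paths below are placeholders, not our real ones):

    # replica 2 + arbiter: the third brick stores only metadata
    gluster volume create gvol replica 3 arbiter 1 \
        srv1:/data/brick srv2:/data/brick srv3:/data/arbiter
    # or full 3-way replication
    gluster volume create gvol replica 3 \
        srv1:/data/brick srv2:/data/brick srv3:/data/brick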
Just for clarification: we currently have LustreFS, which is nice but has no
redundancy. I am not using it for VMs; the workload is as follows - gluster
will be mounted on multiple nodes, connected over InfiniBand or 10 Gbit. The
clients pull the data and run analysis on it, and the IO pattern varies a
lot - 26 MB blocks or random 1k IO, different codes, different projects. I am
thinking of putting all <128K files on the special device (yes, I am on the
ZFS 2.0.6 branch). On gluster I have seen that the .glusterfs folder contains
a lot of small folders and files - would performance improve if I moved them
to NVMe as well, or is it better to increase the RAM (I can't right now, but
for the future)?
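For the small-file idea, a minimal sketch of the special-device routing (the
pool/dataset name "tank/gluster" is a placeholder, and the setting only
affects data written after the change):

    # route small blocks to the special vdev
    zfs set special_small_blocks=128K tank/gluster
    # caveat: with the default recordsize=128K this would send *all* new data
    # to the special vdev, so raise recordsize (e.g. to 1M) or pick a smaller
    # threshold such as 64K
    zfs get special_small_blocks,recordsize tank/gluster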
Unfortunately I cannot add more RAM right now, but your tuning considerations
are an important note.
a.
On Tue, Dec 14, 2021 at 12:25 AM Darrell Budic <budic at onholyground.com>
wrote:
> A few thoughts from another ZFS backend user:
>
> ZFS:
> use arcstats to look at your cache use over time and consider:
> Don’t mirror your cache drives, use them as 2x cache volumes to increase
> available cache.
> Add more RAM. Lots more RAM (if I’m reading that right and you have 32 GB
> of RAM per ZFS server).
> Adjust ZFS’s maximum ARC size upwards if you have lots of RAM.
> Try more metadata caching & less content caching if your workload is
> find-heavy (see the example settings after this list).
> Compression on these volumes could help improve IO on the RAIDZ2s, but
> existing data only gets compressed once it is rewritten or copied onto the
> dataset after compression is enabled. Different zstd levels are worth
> evaluating here.
> Read up on recordsize and consider if you would get any performance
> benefits from 64K or maybe something larger for your large data, depending
> on where the reads are being done.
> Use relatime or no atime tracking.
> Upgrade to ZFS 2.0.6 if you aren’t already on 2.0 or 2.1.
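> A minimal sketch of the ZFS side of these points (the pool/dataset name
> "tank/gluster" and the values are placeholders to adapt; compression and
> recordsize changes only apply to data written afterwards):
>
>     # cap the ARC, e.g. at 24 GiB; persist it with an
>     # "options zfs zfs_arc_max=..." line in /etc/modprobe.d/zfs.conf
>     echo 25769803776 > /sys/module/zfs/parameters/zfs_arc_max
>     # favor metadata over file content in the ARC for find-heavy workloads
>     zfs set primarycache=metadata tank/gluster
>     # zstd compression; explicit levels such as zstd-3 or zstd-9 also work
>     zfs set compression=zstd tank/gluster
>     # smaller records, if testing shows a benefit for your read pattern
>     zfs set recordsize=64K tank/gluster
>     # relaxed atime updates (or atime=off to drop them entirely)
>     zfs set relatime=on tank/gluster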
>
> For gluster, sounds like gluster 10 would be good for your use case.
> Without knowing what your workload is (VMs, gluster mounts, nfs mounts?), I
> don’t have much else on that level, but you can probably play with the
> cluster.read-hash-mode (try 3) to spread the read load out amongst your
> servers. Search the list archives for general performance hints too, server
> & client .event-threads are probably good targets, and the various
> performance.*threads may/may not help depending on how the volumes are
> being used.
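> A rough sketch of those option changes (the volume name "gvol" is a
> placeholder; verify each option against your gluster version before relying
> on it):
>
>     gluster volume set gvol cluster.read-hash-mode 3
>     gluster volume set gvol server.event-threads 4
>     gluster volume set gvol client.event-threads 4
>     # one of the performance.*threads knobs, as an example
>     gluster volume set gvol performance.io-thread-count 32
>     # review what is currently applied
>     gluster volume get gvol all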
>
> More details (zfs version, gluster version, volume options currently
> applied, more details on the workload) may help if others use similar
> setups. You may be getting into the area where you just need to get your
> environment setup to try some A/B testing with different options though.
>
> Good luck!
>
> -Darrell
>
>
> On Dec 11, 2021, at 5:27 PM, Arman Khalatyan <arm2arm at gmail.com> wrote:
>
> Hello everybody,
> I was looking for some performance considerations for glusterfs on zfs.
> The data mix is as follows: 90% of files are <50 KB and 10% are 10 GB-100 GB,
> in total over 100 million files, about 100 TB.
> Three replicated JBOD servers, each with:
> 2x 8-disk RAIDZ2 + special device mirror (2x 1 TB NVMe) + cache mirror
> (2x SSD) + 32 GB RAM.
>
> Most operations are reads and "find file".
> I set some parameters on ZFS such as: xattr=sa, primarycache=all,
> secondarycache=all.
> What else could be tuned?
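> As commands, what I applied so far looks like this (the pool name "tank" is
> just a placeholder):
>
>     zfs set xattr=sa tank
>     zfs set primarycache=all tank
>     zfs set secondarycache=all tank
>     zfs get xattr,primarycache,secondarycache tank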
> Thank you in advance.
> Greetings from Potsdam,
> Arman.
>