[Gluster-users] hardware issues and new server advice
Strahil Nikolov
hunter86_bg at yahoo.com
Fri Mar 24 21:11:28 UTC 2023
Actually,
a pure NVMe-based volume would be a waste of money. Gluster excels when you have more servers and clients to consume that data.
I would choose LVM cache (NVMe) + HW RAID10 of SAS 15K disks to cope with the load. At least, if you decide to go with more disks for the RAIDs, use several controllers (not just the built-in ones).
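Roughly something like this per brick (just a sketch; the device names (/dev/md0 for the HW RAID10 LUN, /dev/nvme0n1 for the caching NVMe) and the VG/LV names are placeholders for whatever your setup uses):

  # put the RAID10 LUN and the NVMe into one VG
  pvcreate /dev/md0 /dev/nvme0n1
  vgcreate vg_bricks /dev/md0 /dev/nvme0n1

  # data LV on the SAS RAID10
  lvcreate -n brick1 -l 100%PVS vg_bricks /dev/md0

  # cache pool on the NVMe (leave a bit of room for the cache metadata)
  lvcreate --type cache-pool -n brick1_cache -l 95%PVS vg_bricks /dev/nvme0n1

  # attach the cache; writethrough is the safer mode for a brick
  lvconvert --type cache --cachepool vg_bricks/brick1_cache \
      --cachemode writethrough vg_bricks/brick1

  # XFS with 512-byte inodes, as usual for gluster bricks
  mkfs.xfs -i size=512 /dev/vg_bricks/brick1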
@Martin,
in order to get a more reliable setup, you will have to either get more servers and switch to distributed-replicated volume(s), or consider getting server-grade hardware. Dispersed volumes require a lot of CPU computation and the Ryzens won't cope with the load.
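As an example, with six nodes a distributed-replicated volume could be created like this (hostnames and brick paths are of course only placeholders):

  gluster volume create photovol replica 3 \
      srv1:/bricks/b1/photovol srv2:/bricks/b1/photovol srv3:/bricks/b1/photovol \
      srv4:/bricks/b1/photovol srv5:/bricks/b1/photovol srv6:/bricks/b1/photovol
  gluster volume start photovol

That gives two replica-3 sets with the files distributed across them, so both capacity and client load scale with the number of servers.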
Best Regards,
Strahil Nikolov
On Thu, Mar 23, 2023 at 12:16, Hu Bert <revirii at googlemail.com> wrote:

Hi,
On Tue, Mar 21, 2023 at 23:36, Martin Bähr
<mbaehr+gluster at realss.com> wrote:
> the primary data is photos. we get an average of 50000 new files per
> day, with a peak of 7 to 8 times as much during Christmas.
>
> gluster has always been able to keep up with that; only when RAID resyncs
> or checks happen does the server load sometimes increase enough to cause issues.
Interesting, we have a similar workload: hundreds of millions of
images, small files, and especially on weekends with high traffic the
load+iowait is really heavy. The same happens if an HDD fails, or during
a RAID check.
our hardware:
10x 10TB HDDs -> 5x RAID1; each RAID1 is a brick, replica 3 setup.
About 40TB of data.
Well, the bricks are bigger than recommended... Sooner or later we
will have to migrate that stuff and use NVMe drives for it, either 3.5TB
or bigger ones. Those should be faster... *fingers crossed*
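If we get there, the migration would probably be one replace-brick at a
time, roughly like this (volume and brick names are just examples):

  gluster volume replace-brick myvol \
      oldserver:/bricks/raid1-a/myvol newserver:/bricks/nvme1/myvol \
      commit force

  # wait for the self-heal to finish before touching the next brick
  gluster volume heal myvol info summary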
regards,
Hubert