[Gluster-users] hardware issues and new server advice
hunter86_bg at yahoo.com
Sun Mar 26 18:16:59 UTC 2023
I think it would be better to open a separate thread for your case.
If you have HW RAID1 arrays presented as disks, then you can easily use striped LVM or md RAID (level 0) to stripe across them.
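A minimal sketch of both approaches (the device names /dev/sda and /dev/sdb are assumptions for the two HW-RAID1 LUNs; adjust to your layout):

    # md RAID0 across the two hardware-RAID1 devices
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

    # or the LVM equivalent: one VG over both PVs, then a striped LV
    pvcreate /dev/sda /dev/sdb
    vgcreate vg_bricks /dev/sda /dev/sdb
    lvcreate --type striped -i 2 -I 256k -l 100%FREE -n brick1 vg_bricks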
One advantage is that you won't have to worry about Gluster rebalance or an overloaded brick (multiple file-access requests hitting the same brick), but of course it has disadvantages too.
Keep in mind that negative lookups (lookups of non-existing/deleted objects) carry the highest penalty.
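If negative lookups dominate the workload, the negative-lookup cache is worth trying; a hedged example against the volume named later in this thread (option names as documented for recent Gluster releases, the timeout value is just an assumption):

    # cache negative lookup results on the clients
    gluster volume set workdata performance.nl-cache on
    gluster volume set workdata performance.nl-cache-timeout 600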
On Sunday, March 26, 2023 at 08:52:18 AM GMT+3, Hu Bert <revirii at googlemail.com> wrote:
Sorry if I hijack this thread, but maybe it's helpful for other Gluster users...
> A pure NVMe-based volume will be a waste of money. Gluster excels when you have more servers and clients to consume that data.
> I would choose LVM cache (NVMe) + HW RAID10 of SAS 15K disks to cope with the load. At least if you decide to go with more disks for the raids, use several controllers (not just the built-in ones).
Well, we have to take what our provider (Hetzner) offers - SATA HDDs
or SATA/NVMe SSDs.
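(If the NVMe SSDs can sit in the same box as the HDDs, a rough LVM-cache sketch along the lines of the suggestion above might still apply; device names and sizes here are assumptions:

    # one VG over the slow HDD RAID1 and the NVMe
    vgcreate vg_brick /dev/md0 /dev/nvme0n1
    # brick LV on the HDDs, cache LV on the NVMe
    lvcreate -l 90%FREE -n brick vg_brick /dev/md0
    lvcreate -L 200G -n brickcache vg_brick /dev/nvme0n1
    # attach the NVMe LV as a dm-cache (writethrough by default)
    lvconvert --type cache --cachevol brickcache vg_brick/brick
)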
Volume Name: workdata
Number of Bricks: 5 x 3 = 15
Below are the volume settings.
Each brick is a sw RAID1 (made of 10TB HDDs). File access to the
backends is pretty slow, even under low system load (load reaches >100
on the servers on high-traffic days); even a simple 'ls' on a
directory with ~1000 sub-directories takes a couple of seconds.
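For the slow directory listings specifically, the stock metadata/readdir tuning might be worth a try before changing the raid layout; a hedged sketch (volume name from above, option and group names as documented in the Gluster small-file-performance guides):

    # cache metadata more aggressively and parallelize readdir
    gluster volume set workdata group metadata-cache
    gluster volume set workdata performance.readdir-ahead on
    gluster volume set workdata performance.parallel-readdir on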
As you mentioned it: is a RAID10 better than x * RAID1? Anything misconfigured?
Thx a lot & best regards,