[Gluster-users] Performance: lots of small files, hdd, nvme etc.
revirii at googlemail.com
Thu Mar 30 10:24:58 UTC 2023
> > Just an observation: is there a performance difference between a sw
> > raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick)
> Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks.
Maybe I was imprecise?
md3 : active raid10 sdh1 sde1 sda1 sdg1 sdc1 sdd1 sdf1 sdb1 sdi1 sdj1
      48831518720 blocks super 1.2 512K chunks 2 near-copies [10/10] [UUUUUUUUUU]
mdadm --detail /dev/md3
Version : 1.2
Creation Time : Fri Jan 18 08:59:51 2019
Raid Level : raid10
    Number   Major   Minor   RaidDevice   State
       0       8        1        0        active sync set-A   /dev/sda1
       1       8       17        1        active sync set-B   /dev/sdb1
       2       8       33        2        active sync set-A   /dev/sdc1
       3       8       49        3        active sync set-B   /dev/sdd1
       4       8       65        4        active sync set-A   /dev/sde1
       5       8       81        5        active sync set-B   /dev/sdf1
       9       8      145        6        active sync set-A   /dev/sdj1
       8       8      129        7        active sync set-B   /dev/sdi1
       7       8      113        8        active sync set-A   /dev/sdh1
       6       8       97        9        active sync set-B   /dev/sdg1
> > with
> > the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario
> > seems faster. Just out of curiosity...
> It should be, since the bricks are smaller. But given you're using a
> replica 3 I don't understand why you're also using RAID1: for each 10T
> of user-facing capacity you're keeping 60TB of data on disks.
> I'd ditch local RAIDs to double the space available. Unless you
> desperately need the extra read performance.
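The on-disk overhead quoted above can be sketched with a quick shell calculation; the 10 TB figure is from the thread, the copy counts follow from 2-way RAID1 under a replica 3 volume:

```shell
# Each write exists twice per node (md RAID1) and on three nodes
# (Gluster replica 3), so 2 * 3 = 6 copies in total.
usable_tb=10        # user-facing capacity (from the thread)
raid1_copies=2      # md RAID1 mirrors every write on each node
replicas=3          # Gluster replica 3 keeps three brick copies
echo $((usable_tb * raid1_copies * replicas))  # TB actually on disk -> 60
```

Dropping the local RAID1 halves that to 30 TB on disk for the same 10 TB of usable space, which is the space saving the poster refers to.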
Well, a looooong time ago we used 10TB disks as bricks (JBOD) in a
replica 3 setup. Then one of the bricks failed: the volume was ok
(since 2 bricks were left), but after the hdd replacement the
reset-brick produced a very high load/iowait. So a raid1 or raid10 is
an attempt to avoid the reset-brick in favor of a sw raid rebuild -
iirc that can run with a lower priority -> fewer problems in the
running system.
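The rebuild-with-lower-priority idea could look roughly like this; the device names and the 50 MB/s cap are assumptions for illustration, not taken from the thread. With md, a failed mirror member is replaced and resynced locally, and the resync rate can be throttled via the kernel's raid sysctls so the running bricks keep their I/O headroom:

```shell
# Assumed scenario: /dev/sdc1 failed in the raid10 array /dev/md3 and
# the replacement disk shows up as /dev/sdc again.
mdadm --manage /dev/md3 --fail /dev/sdc1 --remove /dev/sdc1
mdadm --manage /dev/md3 --add /dev/sdc1

# Throttle the resync (values in KiB/s per device) so it yields to
# normal brick traffic; 51200 KiB/s (~50 MB/s) is an arbitrary cap.
sysctl -w dev.raid.speed_limit_max=51200
# or per array:
echo 51200 > /sys/block/md3/md/sync_speed_max

# Watch rebuild progress:
cat /proc/mdstat
```

The gluster layer never sees the failure, so no reset-brick and no self-heal storm; the trade-off is the extra local disk capacity the RAID1/10 consumes.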