[Gluster-users] usage of harddisks: each hdd a brick? raid?

Hu Bert revirii at googlemail.com
Thu Jan 10 06:53:19 UTC 2019


Hi Mike,

> We have a similar setup, and I have not tested restoring...
> How many volumes do you have - one volume per (*3) 10 TB disk,
>   i.e. 4 volumes?

Testing would be quite easy: reset-brick start, then delete and
re-create the partition/filesystem/etc., reset-brick commit force -
and then watch the heal.
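
A rough sketch of that cycle, assuming the failed brick is the sda1
one on gluster11 and its filesystem is XFS on /dev/sda1 mounted at
/gluster/bricksda1 (device and mount point are guesses, adjust to
your layout):

  gluster volume reset-brick shared gluster11:/gluster/bricksda1/shared start
  umount /gluster/bricksda1
  mkfs.xfs -f -i size=512 /dev/sda1        # destroys the old fs
  mount /dev/sda1 /gluster/bricksda1
  mkdir -p /gluster/bricksda1/shared
  gluster volume reset-brick shared gluster11:/gluster/bricksda1/shared \
      gluster11:/gluster/bricksda1/shared commit force
  gluster volume heal shared info          # watch the heal progress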

We have only one big volume across all bricks. Details:

Volume Name: shared
Type: Distributed-Replicate
Number of Bricks: 4 x 3 = 12
Brick1: gluster11:/gluster/bricksda1/shared
Brick2: gluster12:/gluster/bricksda1/shared
Brick3: gluster13:/gluster/bricksda1/shared
Brick4: gluster11:/gluster/bricksdb1/shared
Brick5: gluster12:/gluster/bricksdb1/shared
Brick6: gluster13:/gluster/bricksdb1/shared
Brick7: gluster11:/gluster/bricksdc1/shared
Brick8: gluster12:/gluster/bricksdc1/shared
Brick9: gluster13:/gluster/bricksdc1/shared
Brick10: gluster11:/gluster/bricksdd1/shared
Brick11: gluster12:/gluster/bricksdd1_new/shared
Brick12: gluster13:/gluster/bricksdd1_new/shared

I hadn't thought about creating more volumes (in order to split the
data), e.g. 4 volumes with 3 x 10 TB each, or 2 volumes with
6 x 10 TB each.
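
For illustration, per-disk replica-3 volumes might be created roughly
like this (volume names and brick directories below are hypothetical):

  gluster volume create shared-a replica 3 \
      gluster11:/gluster/bricksda1/shared-a \
      gluster12:/gluster/bricksda1/shared-a \
      gluster13:/gluster/bricksda1/shared-a
  gluster volume start shared-a
  # ... and likewise shared-b/-c/-d on the sdb/sdc/sdd bricks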

Just curious: after splitting into 2 or more volumes, would the
volumes with the healthy/non-restoring disks stay better accessible,
and only the volume with the once-faulty, now-restoring disk be in a
"bad mood"?

> > Any opinions on that? Maybe it would be better to use more servers and
> > smaller disks, but this isn't possible at the moment.
> Also interested. We can swap SSDs to HDDs for RAID10, but is it worthless?

Yeah, I'd be interested in how the glusterfs professionals deal with
faulty disks, especially when they are as big as ours.
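
Until then, keeping an eye on the self-heal backlog while a replaced
disk resyncs is about all I can think of; something like this, where
cluster.shd-max-threads is just one knob that is said to influence
heal speed (tune with care):

  gluster volume heal shared statistics heal-count
  gluster volume set shared cluster.shd-max-threads 2   # default 1, max 64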


Thx
Hubert

