[Gluster-users] usage of harddisks: each hdd a brick? raid?
Nithya Balachandran
nbalacha at redhat.com
Tue Jan 22 10:36:31 UTC 2019
On Tue, 22 Jan 2019 at 11:42, Amar Tumballi Suryanarayan <
atumball at redhat.com> wrote:
>
>
> On Thu, Jan 10, 2019 at 1:56 PM Hu Bert <revirii at googlemail.com> wrote:
>
>> Hi,
>>
>> > > We are also using 10TB disks; heal takes 7-8 days.
>> > > You can play with the "cluster.shd-max-threads" setting. The
>> > > default is 1, I think; I am using 4.
>> > > Below you can find more info:
>> > > https://access.redhat.com/solutions/882233
>> > cluster.shd-max-threads: 8
>> > cluster.shd-wait-qlength: 10000
>>
>> Our setup:
>> cluster.shd-max-threads: 2
>> cluster.shd-wait-qlength: 10000
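>>
>> For reference, a minimal sketch of how these options are set (volume
>> name "shared" as in this thread; the values are examples, not
>> recommendations):
>>
>>   # tune self-heal parallelism / queue length per volume
>>   gluster volume set shared cluster.shd-max-threads 2
>>   gluster volume set shared cluster.shd-wait-qlength 10000
>>   # read a value back
>>   gluster volume get shared cluster.shd-max-threads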
>>
>> > >> Volume Name: shared
>> > >> Type: Distributed-Replicate
>> > Ah, you have a distributed-replicated volume, whereas I chose plain
>> > replicated (for simplicity, to begin with :)
>> > Maybe replicated volumes heal faster?
>>
>> Well, maybe our setup isn't optimal: 3 servers with 4 disks = 4 bricks
>> each (all /dev/sd{a,b,c,d} identical), 12 bricks in total, resulting
>> in a distributed-replicate volume. Would it be better to create a
>> replica 3 volume with only 1 (big) brick per server (combining the 4
>> disks into either a logical volume or sw/hw RAID)? A rough sketch of
>> that layout follows below.
>>
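>> As a rough sketch (the server names server{1,2,3}, the VG/LV names and
>> the volume name "newvol" are made up; commands assume LVM2 and xfs),
>> the one-big-brick layout could look like:
>>
>>   # on each server: pool the 4 disks into one striped LV
>>   pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd
>>   vgcreate vg_bricks /dev/sda /dev/sdb /dev/sdc /dev/sdd
>>   lvcreate -n brick1 -i 4 -l 100%FREE vg_bricks
>>   mkfs.xfs -i size=512 /dev/vg_bricks/brick1
>>   mkdir -p /gluster/brick1
>>   mount /dev/vg_bricks/brick1 /gluster/brick1
>>
>>   # once, from any one server:
>>   gluster volume create newvol replica 3 \
>>     server1:/gluster/brick1/data \
>>     server2:/gluster/brick1/data \
>>     server3:/gluster/brick1/data
>>
>> Note that a striped LV is raid0-like: losing any one disk loses the
>> whole brick, and the subsequent heal is then a full-brick heal.
>>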
>> But it would be interesting to know whether a replicate volume heals
>> faster than a distributed-replicate volume, even if there is only 1
>> faulty brick.
>>
>>
> We don't have any data points to confirm this, but it may be true.
> Especially since the crawling can get a little slower when DHT (i.e.,
> distribute) is involved, which means the healing would get slower too.
>
If the healing is being done by the self-heal daemon, the slowdown is
not due to DHT (the shd does not load dht).
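
Either way, heal progress can be monitored per brick, e.g. (volume name
"shared" as above; "info summary" needs a reasonably recent release,
3.13 or later if I remember correctly):

  gluster volume heal shared info summary
  gluster volume heal shared statistics heal-count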
>
> We are experimenting with a few performance enhancement patches (like
> https://review.gluster.org/20636); it would be great to see how things
> work with the newer base. We will keep the list updated about
> performance numbers once we have some more data on them.
>
> -Amar
>
>
>>
>> Thx
>> Hubert
>
> --
> Amar Tumballi (amarts)
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users