[Gluster-users] Speed up heal performance

Pranith Kumar Karampuri pkarampu at redhat.com
Wed Oct 14 05:39:14 UTC 2015



On 10/13/2015 07:11 PM, Ben Turner wrote:
> ----- Original Message -----
>> From: "Humble Devassy Chirammal" <humble.devassy at gmail.com>
>> To: "Atin Mukherjee" <atin.mukherjee83 at gmail.com>
>> Cc: "Ben Turner" <bturner at redhat.com>, "gluster-users" <gluster-users at gluster.org>
>> Sent: Tuesday, October 13, 2015 6:14:46 AM
>> Subject: Re: [Gluster-users] Speed up heal performance
>>
>>> Good news is we already have a WIP patch review.glusterd.org/10851 to
>>> introduce multi-threaded shd. Credits to Richard/Shreyas from Facebook for
>>> this. IIRC, we also have a BZ for the same.
>> Isn't it the same bugzilla
>> (https://bugzilla.redhat.com/show_bug.cgi?id=1221737) mentioned in the
>> commit log?
> @Lindsay - No need to open a new BZ; the one above should suffice.
>
> @Anyone - In the commit I see:
>
>          { .key        = "cluster.shd-max-threads",
>            .voltype    = "cluster/replicate",
>            .option     = "shd-max-threads",
>            .op_version = 1,
>            .flags      = OPT_FLAG_CLIENT_OPT
>          },
>          { .key        = "cluster.shd-thread-batch-size",
>            .voltype    = "cluster/replicate",
>            .option     = "shd-thread-batch-size",
>            .op_version = 1,
>            .flags      = OPT_FLAG_CLIENT_OPT
>          },
>
> So we can tune both max threads and thread batch size?  I understand max
> threads, but what is batch size?  In my testing on 10G NICs with a backend
> that can service 10G throughput I see about 1.5 GB per minute of SH
> throughput.  To Lindsay's other point, will this patch improve SH
> throughput?  My systems can write at 1.5 GB/sec and the NICs can do
> 1.2 GB/sec, but I only see ~1.5 GB per _minute_ of SH throughput.  If we can
> not only make SH multi-threaded but also improve the performance of a single
> thread, that would be awesome.  Super bonus points if we can have some sort
> of tunable that limits the bandwidth each thread can consume.  It would be
> great to be able to crank things up when the systems aren't busy and slow
> things down when load increases.
This patch is not merged yet because I thought the throttling feature needed
to go in first, to give better control over self-heal speed.  We are doing
that work for 3.8, so expect to see both of these in 3.8.
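
Roughly, going by the option names in the table you quoted: shd-max-threads
would be the number of healer threads working in parallel, and
shd-thread-batch-size would be how many pending-heal entries each thread
picks up from the queue in one go.  A minimal sketch of that shape in plain
pthreads (illustrative only, not the code from the patch; the numbers, the
fake queue, and the per-thread bandwidth cap standing in for throttling are
all made up):

/*
 * Illustrative sketch only; NOT the actual glusterfs self-heal daemon code.
 * It shows what "max threads" and "batch size" mean for a multi-threaded
 * healer, plus a crude per-thread bandwidth cap of the kind a throttling
 * feature could provide.  Every name and number here is hypothetical.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define SHD_MAX_THREADS 4    /* cf. cluster.shd-max-threads (made-up default) */
#define SHD_BATCH_SIZE  8    /* cf. cluster.shd-thread-batch-size: entries a
                                thread takes from the heal queue in one go    */
#define THROTTLE_MBPS   100  /* per-thread cap, standing in for throttling    */

struct heal_entry {
    char   gfid[64];         /* file to heal, identified by gfid              */
    size_t bytes;            /* data to copy from the good copy to the bad one */
};

/* Fake pending-heal queue; in the real daemon this comes from the index. */
static struct heal_entry queue[64];
static int next_idx;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* Take up to 'batch' entries from the shared queue under one lock hold. */
static int take_batch(struct heal_entry *out, int batch)
{
    int n = 0;
    pthread_mutex_lock(&queue_lock);
    while (n < batch && next_idx < (int)(sizeof(queue) / sizeof(queue[0])))
        out[n++] = queue[next_idx++];
    pthread_mutex_unlock(&queue_lock);
    return n;
}

static void *shd_worker(void *arg)
{
    long id = (long)arg;
    struct heal_entry batch[SHD_BATCH_SIZE];
    int n, i;

    while ((n = take_batch(batch, SHD_BATCH_SIZE)) > 0) {
        for (i = 0; i < n; i++) {
            /* The real daemon reads from the good brick and writes to the
             * bad one; here we only account for the bytes and rate-limit. */
            double secs = (double)batch[i].bytes /
                          (THROTTLE_MBPS * 1024.0 * 1024.0);
            usleep((useconds_t)(secs * 1e6));
            printf("thread %ld healed %s (%zu bytes)\n",
                   id, batch[i].gfid, batch[i].bytes);
        }
    }
    return NULL;
}

int main(void)
{
    pthread_t tids[SHD_MAX_THREADS];
    long t;
    int  i;

    /* Pretend there are 64 files of 4 KB each waiting to be healed. */
    for (i = 0; i < (int)(sizeof(queue) / sizeof(queue[0])); i++) {
        snprintf(queue[i].gfid, sizeof(queue[i].gfid), "gfid-%04d", i);
        queue[i].bytes = 4096;
    }

    /* More threads means more files healed in parallel, up to the cap. */
    for (t = 0; t < SHD_MAX_THREADS; t++)
        pthread_create(&tids[t], NULL, shd_worker, (void *)t);
    for (t = 0; t < SHD_MAX_THREADS; t++)
        pthread_join(tids[t], NULL);
    return 0;
}

Compile with something like "cc -pthread sketch.c".  The real shd of course
heals through the replicate xlator and the index, but the shape is the same:
more threads and a bigger batch buy parallelism, and the throttle is what
would let you crank it up overnight and dial it back when the cluster gets
busy.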

Pranith
>
> -b
>
>
>> --Humble
>>
>>
>> On Tue, Oct 13, 2015 at 7:26 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com>
>> wrote:
>>
>>> -Atin
>>> Sent from one plus one
>>> On Oct 13, 2015 3:16 AM, "Ben Turner" <bturner at redhat.com> wrote:
>>>> ----- Original Message -----
>>>>> From: "Lindsay Mathieson" <lindsay.mathieson at gmail.com>
>>>>> To: "gluster-users" <gluster-users at gluster.org>
>>>>> Sent: Friday, October 9, 2015 9:18:11 AM
>>>>> Subject: [Gluster-users] Speed up heal performance
>>>>>
>>>>> Is there any way to max out heal performance? My cluster is unused
>>>>> overnight and lightly used at lunchtimes, so it would be handy to speed
>>>>> up a heal.
>>>>>
>>>>> The only tunable I found was cluster.self-heal-window-size, which
>>>>> doesn't seem to make much difference.
>>>> I don't know of any way to speed this up; maybe someone else who knows
>>>> the heal daemon better than me can chime in here.  Maybe you could open an
>>>> RFE on this?  In my testing I only see 2 files getting healed at a time per
>>>> replica pair.  I would like to see this be multi-threaded (if it's not
>>>> already) with the ability to tune it to control resource usage (similar to
>>>> what we did in the rebalance refactoring done recently).  If you let me
>>>> know the BZ # I'll add my data and suggestions; I have been testing this
>>>> pretty extensively in recent weeks and have good data plus some ideas on
>>>> how to speed things up.
>>> Good news is we already have a WIP patch, review.glusterd.org/10851, to
>>> introduce multi-threaded shd. Credits to Richard/Shreyas from Facebook for
>>> this. IIRC, we also have a BZ for the same, but the patch is in RFC as of
>>> now. AFAIK, this is a candidate to land in 3.8 as well; Vijay can correct
>>> me if I'm wrong.
>>>> -b
>>>>
>>>>> thanks,
>>>>> --
>>>>> Lindsay
>>>>>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users


