[Gluster-devel] libgfapi threads
Kelly Burkhart
kelly.burkhart at gmail.com
Thu Feb 13 04:26:50 UTC 2014
I created a bug for this:
https://bugzilla.redhat.com/show_bug.cgi?id=1061229
On Tue, Feb 4, 2014 at 8:50 AM, Kelly Burkhart <kelly.burkhart at gmail.com> wrote:
> We've noticed that gfapi threads won't die until process exit; they aren't
> joined in glfs_fini(). Is that expected? The following will create 4*N
> threads:
>
> for (idx = 0; idx < N; ++idx) {
>     glfs_t *fs = glfs_new("volname");                     /* placeholder volume name */
>     glfs_set_volfile_server(fs, "tcp", "server", 24007);  /* placeholder host/port */
>     glfs_init(fs);
>     /* pause a bit here */
>     glfs_fini(fs);
> }
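>
> (Not part of the test above, just a rough way to watch the leak on Linux:
> read the Threads: line from /proc/self/status after each glfs_fini. The
> helper below is an illustrative sketch only.)
>
>     /* Illustrative sketch: count the threads in the current process by
>      * parsing /proc/self/status (Linux-specific). */
>     #include <stdio.h>
>
>     static int thread_count(void)
>     {
>         FILE *f = fopen("/proc/self/status", "r");
>         char line[256];
>         int n = -1;
>
>         if (!f)
>             return -1;
>         while (fgets(line, sizeof(line), f)) {
>             if (sscanf(line, "Threads: %d", &n) == 1)
>                 break;
>         }
>         fclose(f);
>         return n;
>     }
>
> Calling thread_count() after each glfs_fini() should show the count climbing
> by roughly four per iteration if the threads are never joined.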
>
> -K
>
>
>
> On Fri, Jan 31, 2014 at 9:07 AM, Kelly Burkhart <kelly.burkhart at gmail.com> wrote:
>
>> Thanks Anand,
>>
>> I notice three different kinds of threads: gf_timer_proc and
>> syncenv_processor in libglusterfs, and glfs_poller in the api. Right off
>> the bat, two syncenv threads are created plus one each of the other two.
>> In my limited testing, it doesn't seem to take much for more threads to be
>> created.
>>
>> The reason I'm concerned is that we intend to run our gluster client on a
>> machine with all but one core dedicated to latency-critical apps. The
>> remaining core will handle everything else. In this scenario, creating
>> scads of threads seems likely to be a pessimization compared to a single
>> thread with an epoll loop handling everything. Would any of you familiar
>> with the guts of gluster predict a problem with pegging a gfapi client and
>> all of its child threads to a single core?
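>>
>> (A minimal sketch of one way to do the pinning, with core 3 assumed purely
>> as an example: set the affinity of the calling thread before glfs_init so
>> that the threads libgfapi spawns later inherit the mask. Running the client
>> under taskset -c 3 would accomplish the same thing from the shell.)
>>
>>     /* Sketch, Linux-specific; the core number is only an example.  Threads
>>      * created after this call inherit the affinity mask. */
>>     #define _GNU_SOURCE
>>     #include <sched.h>
>>
>>     static int pin_to_core(int core)
>>     {
>>         cpu_set_t set;
>>
>>         CPU_ZERO(&set);
>>         CPU_SET(core, &set);
>>         /* pid 0 means the calling thread */
>>         return sched_setaffinity(0, sizeof(set), &set);
>>     }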
>>
>> BTW, attached is a simple patch to help me track which threads are created;
>> it's Linux-specific, but I think it's useful. It adds an identifier and
>> instance count to each kind of child thread, so I see this in top:
>>
>> top - 08:35:47 up 48 min,  3 users,  load average: 0.12, 0.07, 0.05
>> Tasks:   9 total,   0 running,   9 sleeping,   0 stopped,   0 zombie
>> Cpu(s):  0.2%us,  0.1%sy,  0.0%ni, 98.9%id,  0.0%wa,  0.0%hi,  0.7%si,  0.0%st
>> Mem:     16007M total,   1372M used,  14634M free,     96M buffers
>> Swap:     2067M total,      0M used,   2067M free,    683M cached
>>
>>   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
>> 22979 kelly 20   0  971m 133m  16m S    0  0.8  0:00.06 tst
>> 22987 kelly 20   0  971m 133m  16m S    0  0.8  0:00.00 tst/sp:0
>> 22988 kelly 20   0  971m 133m  16m S    0  0.8  0:00.00 tst/sp:1
>> 22989 kelly 20   0  971m 133m  16m S    0  0.8  0:00.03 tst/gp:0
>> 22990 kelly 20   0  971m 133m  16m S    0  0.8  0:00.00 tst/tm:0
>> 22991 kelly 20   0  971m 133m  16m S    0  0.8  0:00.00 tst/sp:2
>> 22992 kelly 20   0  971m 133m  16m S    0  0.8  0:00.00 tst/sp:3
>> 22993 kelly 20   0  971m 133m  16m S    0  0.8  0:01.98 tst/gp:1
>> 22994 kelly 20   0  971m 133m  16m S    0  0.8  0:00.00 tst/tm:1
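>>
>> (The naming above is the sort of thing pthread_setname_np can do on Linux;
>> a rough sketch follows, with the kind/instance strings assumed rather than
>> taken from the patch.)
>>
>>     /* Illustrative sketch: tag a thread with a short kind/instance name so
>>      * it shows up in top and ps.  pthread_setname_np is Linux-specific and
>>      * names are limited to 15 characters plus the terminating NUL. */
>>     #define _GNU_SOURCE
>>     #include <pthread.h>
>>     #include <stdio.h>
>>
>>     static void name_thread(pthread_t t, const char *kind, int instance)
>>     {
>>         char name[16];
>>
>>         snprintf(name, sizeof(name), "%s:%d", kind, instance); /* e.g. "gp:0" */
>>         pthread_setname_np(t, name);
>>     }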
>>
>> Thanks,
>>
>> -K
>>
>>
>>
>> On Thu, Jan 30, 2014 at 4:38 PM, Anand Avati <avati at gluster.org> wrote:
>>
>>> Thread count is independent of the number of servers. The number of
>>> sockets/connections is a function of the number of servers/bricks. There is
>>> a minimum set of threads (the timer thread, syncop exec threads, io-threads,
>>> the epoll thread, and, depending on the interconnect, RDMA event-reaping
>>> threads), and the count of some of them (syncop and io-threads) depends on
>>> the workload. All communication with servers is completely asynchronous and
>>> we do not spawn a new thread per server.
>>>
>>> HTH
>>> Avati
>>>
>>>
>>>
>>> On Thu, Jan 30, 2014 at 1:17 PM, James <purpleidea at gmail.com> wrote:
>>>
>>>> On Thu, Jan 30, 2014 at 4:15 PM, Paul Cuzner <pcuzner at redhat.com>
>>>> wrote:
>>>> > Wouldn't the thread count relate to the number of bricks in the volume,
>>>> > rather than peers in the cluster?
>>>>
>>>>
>>>> My naive understanding is:
>>>>
>>>> 1) Yes, you should expect to see one connection to each brick.
>>>>
>>>> 2) Some of the "scaling gluster to 1000 nodes" work might address the
>>>> issue, so as to avoid 1000 * (brick count per server) connections.
>>>>
>>>> But yeah, Kelly: I think you're seeing the right number of threads.
>>>> But this is outside of my expertise.
>>>>
>>>> James
>>>>
>>>> _______________________________________________
>>>> Gluster-devel mailing list
>>>> Gluster-devel at nongnu.org
>>>> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>>>>
>>>
>>>
>>
>