[Gluster-users] thread model of glusterfs brick server?
Mingfan Lu
mingfan.lu at gmail.com
Tue Feb 11 01:15:10 UTC 2014
I found that pstack is the tool I need. Thanks.
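
For the archives, here is roughly how I used it (a minimal sketch, assuming
pstack from gdb is available on the brick host; iot_worker is the worker
function of the io-threads translator as far as I can tell from the glusterfs
source, so adjust the name to your version):

  # dump one stack trace per thread (LWP) of the brick process
  pstack 6226 > /tmp/brick-6226-stacks.txt

  # io-threads workers are the threads parked in iot_worker; -B1 keeps the
  # preceding "Thread N (... (LWP <tid>)):" header line, so each hit can be
  # mapped back to a TID from the pstree output below
  grep -B1 iot_worker /tmp/brick-6226-stacks.txt

(As far as I understand, self-heal is handled by the separate self-heal
daemon process rather than the brick's glusterfsd, so I would not expect
heal threads to show up in the brick at all.)
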
On Sat, Feb 8, 2014 at 5:13 PM, Mingfan Lu <mingfan.lu at gmail.com> wrote:
> I used pstree to list the threads of a brick server process and got the
> output below. Can we tell which threads are io-threads and which ones run
> self-heal? What about the others?
> (Can this be deduced just from the TIDs, i.e. the order in which the
> threads were created?)
>
> [root@10.121.56.105 ~]# pstree -p 6226
> glusterfsd(6226)─┬─{glusterfsd}(6227)
>                  ├─{glusterfsd}(6228)
>                  ├─{glusterfsd}(6229)
>                  ├─{glusterfsd}(6230)
>                  ├─{glusterfsd}(6243)
>                  ├─{glusterfsd}(6244)
>                  ├─{glusterfsd}(6247)
>                  ├─{glusterfsd}(6262)
>                  ├─{glusterfsd}(6314)
>                  ├─{glusterfsd}(6315)
>                  ├─{glusterfsd}(6406)
>                  ├─{glusterfsd}(6490)
>                  ├─{glusterfsd}(6491)
>                  ├─{glusterfsd}(6493)
>                  ├─{glusterfsd}(6494)
>                  ├─{glusterfsd}(6506)
>                  ├─{glusterfsd}(6531)
>                  ├─{glusterfsd}(6532)
>                  ├─{glusterfsd}(6536)
>                  ├─{glusterfsd}(6539)
>                  ├─{glusterfsd}(6540)
>                  ├─{glusterfsd}(9127)
>                  ├─{glusterfsd}(22470)
>                  ├─{glusterfsd}(22471)
>                  ├─{glusterfsd}(22472)
>                  ├─{glusterfsd}(22473)
>                  ├─{glusterfsd}(22474)
>                  ├─{glusterfsd}(22475)
>                  ├─{glusterfsd}(22476)
>                  ├─{glusterfsd}(23217)
>                  ├─{glusterfsd}(23218)
>                  ├─{glusterfsd}(23219)
>                  ├─{glusterfsd}(23220)
>                  ├─{glusterfsd}(23221)
>                  ├─{glusterfsd}(23222)
>                  ├─{glusterfsd}(23223)
>                  ├─{glusterfsd}(23328)
>                  └─{glusterfsd}(23329)
>
> my volume is:
>
> Volume Name: prodvol
> Type: Distributed-Replicate
> Volume ID: f3fc24b3-23c7-430d-8ab1-81a646b1ce06
> Status: Started
> Number of Bricks: 17 x 3 = 51
> Transport-type: tcp
> Bricks:
> ...
> Options Reconfigured:
> performance.io-thread-count: 32
> auth.allow: *,10.121.48.244,10.121.48.82
> features.limit-usage: /:400TB
> features.quota: on
> server.allow-insecure: on
> features.quota-timeout: 5
>
>
>
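A follow-up on checking the threads against performance.io-thread-count
without attaching a debugger (a sketch assuming a Linux /proc; note that on
this build every thread reports the same comm name, glusterfsd, which is why
pstree alone cannot tell them apart):

  # one line per thread (LWP) of the brick, with its TID
  ps -L -p 6226 -o lwp,comm

  # total thread count; if I read the io-threads code right, workers are
  # spawned on demand up to io-thread-count (32 on this volume), so the
  # live count can sit anywhere below 32 plus the fixed helper threads
  ps -L -p 6226 --no-headers | wc -l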