[Gluster-devel] gluster source code help
Ravishankar N
ravishankar at redhat.com
Mon Feb 6 10:05:25 UTC 2017
On 02/06/2017 03:15 PM, jayakrishnan mm wrote:
>
>
> On Mon, Feb 6, 2017 at 2:36 PM, jayakrishnan mm
> <jayakrishnan.mm at gmail.com> wrote:
>
>
>
> On Fri, Feb 3, 2017 at 7:58 PM, Ravishankar N
> <ravishankar at redhat.com> wrote:
>
> On 02/03/2017 09:14 AM, jayakrishnan mm wrote:
>>
>>
>> On Thu, Feb 2, 2017 at 8:17 PM, Ravishankar N
>> <ravishankar at redhat.com> wrote:
>>
>> On 02/02/2017 10:46 AM, jayakrishnan mm wrote:
>>> Hi
>>>
>>> How do I determine which part of the code runs on the
>>> client and which part runs on the server nodes by merely
>>> looking at the glusterfs source code?
>>> I know there are client-side and server-side translators
>>> that run on the respective nodes. I am trying to identify
>>> the part of the self-heal daemon source (ec/afr) that runs
>>> on the server nodes and the part that runs on the clients.
>>
>> The self-heal daemon that runs on the server is also a
>> client process in the sense that it has client-side
>> xlators like ec or afr and protocol/client loaded (see the
>> shd volfile 'glustershd-server.vol') and talks to the
>> bricks like a normal client does.
>> The difference is that only the self-heal related 'logic'
>> gets executed in the shd, while both self-heal and I/O
>> related logic get executed from the mount. The self-heal
>> logic resides mostly in afr-self-heal*.[ch], while the I/O
>> related logic is in the other files.
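>> For illustration, a heavily trimmed shd volfile could look
>> something like this (volume and brick names are placeholders;
>> the real file carries many more options):
>>
>>     volume testvol-client-0
>>         type protocol/client
>>         option remote-host server1
>>         option remote-subvolume /bricks/brick0
>>     end-volume
>>
>>     volume testvol-client-1
>>         type protocol/client
>>         option remote-host server2
>>         option remote-subvolume /bricks/brick0
>>     end-volume
>>
>>     volume testvol-replicate-0
>>         type cluster/replicate
>>         option iam-self-heal-daemon yes
>>         subvolumes testvol-client-0 testvol-client-1
>>     end-volume
>>
>>     volume glustershd
>>         type debug/io-stats
>>         subvolumes testvol-replicate-0
>>     end-volume
>>
>> Everything in it is a client-side xlator: protocol/client at
>> the bottom talking to the bricks, with replicate (AFR) on top.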
>> HTH,
>> Ravi
>>
>>
> Hi JK,
>> Dear Ravi,
>> Thanks for your kind explanation.
>> So each server node will have a separate self-heal
>> daemon (shd) up and running every time a child_up event
>> occurs, and this will be an index healer. And each daemon
>> will spawn priv->child_count threads on each server
>> node. Correct?
> The shd is always running, and yes, that many index-heal
> threads are spawned when the process starts.
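> As a rough sketch of that shape (simplified C, not the actual
> glusterfs source; the names below are illustrative):
>
>     /* Spawn one index-healer thread per replica child, i.e.
>      * priv->child_count threads in total. */
>     #include <pthread.h>
>
>     typedef struct healer {
>         int child;          /* brick/child index this thread crawls */
>         pthread_t thread;
>     } healer_t;
>
>     static void *index_healer_loop(void *arg)
>     {
>         healer_t *h = arg;
>         (void)h;            /* wait for heal-timeout or an event,
>                                crawl the index dir of this child,
>                                heal whatever it finds, repeat */
>         return NULL;
>     }
>
>     static void spawn_index_healers(healer_t *healers, int child_count)
>     {
>         for (int i = 0; i < child_count; i++) {
>             healers[i].child = i;
>             pthread_create(&healers[i].thread, NULL,
>                            index_healer_loop, &healers[i]);
>         }
>     }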
>> 1. When exactly does a full healer spawn threads?
> Whenever you run `gluster volume heal volname full`. See
> afr_xl_op(). There are some bugs in launching full heal though.
>> 2. When can GF_EVENT_TRANSLATOR_OP & GF_SHD_OP_HEAL_INDEX
>> happen together (so that the index healer spawns threads)?
>> Similarly, when can GF_EVENT_TRANSLATOR_OP
>> & GF_SHD_OP_HEAL_FULL happen? During replace-brick?
>> Is it possible that the index healer and full healer spawn
>> threads together (so that the total number of threads is
>> 2*priv->child_count)?
>>
> Index heal threads wake up and run once every 10 minutes, or
> whatever cluster.heal-timeout is set to. They are also run when
> a brick comes up, like you said, via afr_notify(), and when you
> manually launch `gluster volume heal volname`. Again, see
> afr_xl_op().
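> For reference, the knobs involved (volume name "testvol" is a
> placeholder):
>
>     gluster volume heal testvol                          # index heal
>     gluster volume heal testvol full                     # full heal
>     gluster volume set testvol cluster.heal-timeout 600  # in seconds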
>> 3. In /var/lib/glusterd/glustershd/glustershd-server.vol,
>> why is debug/io-stats chosen as the top xlator?
>>
> io-stats is generally loaded as the topmost xlator in all
> graphs at the appropriate place for gathering profile-info,
> but for shd, I'm not sure if it has any specific use other
> than acting as a placeholder parent to all the replica xlators.
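> If you do want the profile numbers that io-stats gathers, the
> usual sequence is (volume name is a placeholder):
>
>     gluster volume profile testvol start
>     gluster volume profile testvol info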
>
>
>
>
> Hi Ravi,
>
> The self-heal daemon searches the .glusterfs/indices/xattrop
> directory for the files/dirs to be healed. Who updates this
> information, and on what basis?
>
>
Please see
https://github.com/gluster/glusterfs-specs/blob/master/done/Features/afr-v1.md,
it is a bit dated (relevant to AFR v1, which is in glusterfs 3.5 and
older, I think), but the concepts are similar. The entries are
added/removed by the index translator during the pre-op/post-op phases
of the AFR transaction.
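You can see this in action by peeking at a brick directly (the brick
path below is a placeholder for wherever your brick lives):

    # entries under xattrop are named after the gfids of files
    # that have pending heals
    ls /bricks/brick0/.glusterfs/indices/xattrop/

    # the pending counts themselves live in trusted.afr.* xattrs
    # on the file, one per client/brick
    getfattr -d -m trusted.afr -e hex /bricks/brick0/path/to/file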
> Thanks, Ravi, for the explanation.
> Regards
> JK
>
>
> Regards,
> Ravi
>> Thanks
>> Best regards
>>
>>>
>>> Best regards
>>> JK
>>>
>>>
>>> _______________________________________________
>>> Gluster-devel mailing list
>>> Gluster-devel at gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>