[Gluster-devel] Order of server-side xlators
Vijay Bellur
vbellur at redhat.com
Sun Feb 1 11:39:06 UTC 2015
On 01/13/2015 10:18 AM, Xavier Hernandez wrote:
> On 01/13/2015 05:45 AM, Anand Avati wrote:
>> Valid questions. access-control had to be as close to posix as possible
>> in its first implementation (to minimize the cost of the STAT calls it
>> originated), but since the introduction of posix-acl there are no
>> extra STAT calls, and given the later introduction of quota, it
>> certainly makes sense to have access-control/posix-acl closer to
>> protocol/server. Some general constraints to consider while deciding
>> the order:
>>
>> - keep io-stats as close to protocol/server as possible
>> - keep io-threads as close to storage/posix as possible
>> - any xlator that performs direct filesystem operations (with system
>> calls, not STACK_WIND) is better placed between io-threads and posix to
>> keep the epoll thread nonblocking (e.g. changelog)
>>
>
> Based on these constraints and the requirements of each xlator, what do
> you think about this order:
>
> posix
> changelog (needs FS access)
> index (needs FS access)
> marker (needs FS access)
> io-threads
> barrier (just above io-threads as per documentation (*))
> quota
> access-control
> locks
> io-stats
> server
>
> (*) I'm not sure of the requirements/dependencies of barrier xlator.
>
> Do you think this order makes sense and would be better?
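Rendered as a brick volfile (volfiles are written bottom-up, with posix
first and server last), the proposed order would look roughly like the
sketch below. This is illustrative only: the volume name "testvol" and
the brick path are made up, and the option lines volgen would normally
emit are omitted.

volume testvol-posix
    type storage/posix
    option directory /bricks/testvol
end-volume

volume testvol-changelog
    type features/changelog
    subvolumes testvol-posix
end-volume

volume testvol-index
    type features/index
    subvolumes testvol-changelog
end-volume

volume testvol-marker
    type features/marker
    subvolumes testvol-index
end-volume

volume testvol-io-threads
    type performance/io-threads
    subvolumes testvol-marker
end-volume

volume testvol-barrier
    type features/barrier
    subvolumes testvol-io-threads
end-volume

volume testvol-quota
    type features/quota
    subvolumes testvol-barrier
end-volume

volume testvol-access-control
    type features/access-control
    subvolumes testvol-quota
end-volume

volume testvol-locks
    type features/locks
    subvolumes testvol-access-control
end-volume

volume testvol-io-stats
    type debug/io-stats
    subvolumes testvol-locks
end-volume

volume testvol-server
    type protocol/server
    option transport-type tcp
    subvolumes testvol-io-stats
end-volume
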
marker makes use of the STACK_WIND framework to perform its updates in
the FS. I am not sure we would want to place it below io-threads; I
vaguely recollect observing some problems when marker was placed below
io-threads. Raghavendra G might have more details.
I think index can move below io-threads. Pranith - any thoughts here?
The rest looks OK to me. I have attempted a refactoring [1] of the brick
volgen process, which should make changes like this easier to accomplish.
Once this patch is in, we can load md-cache on the brick close to posix.
Thanks,
Vijay
[1] http://review.gluster.org/#/c/9521/