[Gluster-devel] Client side translators doubt

Gustavo Bervian Brand gugabrand at gmail.com
Wed Jan 16 17:41:51 UTC 2013


Hello,

  Sorry, I didn't give much context before. I'm using the gluster 3.3 code
from around August 28th, 2012.

  The translator I'm working on uses only three other translator types for
now: basically protocol/client, protocol/server and storage/posix. So there
is not much in the middle to deal with yet, which is why I was asking.
  I took a look at the bugs mentioned and applied the related patches to my
code tree, but the error I see is not the same.
  About the performance gain Jeff Darcy questioned: it is probably not
worth that much, really, but for now it was more of a technical doubt.

  Finally, let's get back to my original configuration with both subvolumes
of type "protocol/client": it works fine until I try something unusual,
which is pointing the "storage/posix" subvolumes on the server side of both
nodes to the same shared path. This path is a mount point shared by both
nodes through a lustre FS, so at the backend both posix subvolumes end up
writing to the same place. Should I expect this to work without problems,
or would changes to the posix translator be necessary?
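
  On the server side, both nodes would carry roughly this volfile, with the
"directory" option on each node pointing at the same lustre mount (path and
names are again illustrative):

    volume posix
        type storage/posix
        option directory /mnt/lustre/shared  # same lustre path on both nodes
    end-volume

    volume brick
        type protocol/server
        option transport-type tcp
        option auth.addr.posix.allow *
        subvolumes posix
    end-volume
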
  I know the scenario is unusual and you may ask why run gluster over
lustre, but I'm working on the idea of federating distributed file systems:
using gluster plus a new translator to let an installed DFS like lustre
expand onto nodes in another cluster.

Thanks,
Gustavo Brand
---------------------------------------------------------------------------------


On Wed, Jan 16, 2013 at 5:12 AM, Anand Avati <anand.avati at gmail.com> wrote:

>
>
> On Tue, Jan 15, 2013 at 12:29 PM, Jeff Darcy <jdarcy at redhat.com> wrote:
>
>> On 01/15/2013 01:43 PM, Gustavo Bervian Brand wrote:
>>
>>>    I'm trying some volume configurations with 2 nodes, each one running
>>> a gluster client and a server.
>>>
>>>    Each client has a volume related to my translator, which has two
>>> "protocol/client" subvolumes (one pointing to the local node's IP/vol
>>> and the other pointing to the remote node's IP/vol).
>>>
>>>    This works OK, and here comes the problem: when I try to change the
>>> local vol on the client side from a "protocol/client" type to a "posix"
>>> type, the read breaks with -1 (operation not permitted).
>>>
>>
>> You don't say what version you're using, but could it be one of these?
>>
>>         https://bugzilla.redhat.com/show_bug.cgi?id=868478
>>         (patch for previous at http://review.gluster.org/#change,4114)
>>         https://bugzilla.redhat.com/show_bug.cgi?id=822995
>>
>> In general, going directly to storage/posix seems ill-advised.  It
>> bypasses a bunch of translators, such as marker and access-control.  As
>> we go forward there are likely to be even more "helper" translators, for
>> UID mapping or for coordinating client-side encryption and erasure
>> coding.  Since it's not possible to create such a configuration through
>> the CLI or other supported tools, it's not going to work properly when
>> configurations change, either.  Is it really worth all that, for what is
>> likely to be a modest performance gain in most cases?
>>
>>
> All of what Jeff says is valid. And with the configuration described
> above, you end up with two translator stacks exporting the same
> directory: one stack is the local client, which has a storage/posix at
> the bottom; the other stack is the brick, which exports the directory for
> the second machine to connect to via RPC. This results in two instances
> of the locks translator, each granting locks to its own clients without
> contending with the other, leading to split brains and whatnot.
>
> Avati
>