[Gluster-users] What would the cli command look like for integrating custom xlator?

Xavier Hernandez xhernandez at datalab.es
Thu Jan 24 15:51:02 UTC 2013


What do you think about this?

Modify the "gluster volume create" command in a way that allows two forms:

* One very similar to the current one:

      gluster volume create <volname> replica x ...

* The second one only defines the name and the list of bricks

      gluster volume create <volname> bricks <brick1> <brick2> ...

or we could even separate volume creation from brick addition:

      gluster volume create <volname>
      gluster volume add-brick <volname> <brick1> <brick2> ...

In the first case a "standard" volume is created; this command allows 
you to create any supported volume type. In the second case a new 
volume is created, but the vol file will only contain the definitions 
of the bricks (this could be extended to initialize the volume with 
more translators, but at most with debug or performance translators). 
We could think of it as an advanced creation mode, able to create 
non-standard volumes and volumes with third-party xlators, without 
removing the fast and easy existing creation mode.

Of course this volume is still unusable and any attempt to start it 
should fail.

Once the volume is defined, it should be completed using commands like:

      gluster volume client-xlator <volname> add <new id> <xlator> <options> <existing id> <existing id> ...

With this command we add, to the client side of the volume, the 
translator <xlator> (for example cluster/replicate), identified by 
<new id> (the name of the new subvolume in the vol file), using all 
the <existing id> entries as subvolumes. It would be interesting to 
have a hook here so that each xlator could add or parse additional 
options to be written to the vol file. Initially, when there are only 
bricks in the vol file, the bricks could be identified as brick.0, 
brick.1, ...
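
Schematically, each add command would emit a stanza like this in the 
client vol file (a sketch of the proposed mapping, not something the 
current volgen produces):

      volume <new id>
          type <xlator>
          option <key> <value>          # one line per entry in <options>
          subvolumes <existing id> <existing id> ...
      end-volume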

Once all xlators are added, a commit command would mark (and check) the 
volume as valid, and it could then be started. At least the client side 
must be committed before starting the volume. Bricks whose server side 
has not been committed would be handled as unavailable.

      gluster volume client-xlator test commit

To modify the layout of an existing volume, a maintenance command would 
be needed (the volume should be stopped first):

      gluster volume client-xlator test maintenance

Then we could do modifications to the layout:

      gluster volume client-xlator test remove <existing id>

This could break subvolume dependencies, which is why the volume should 
be stopped. Some of the modifications should be prohibited, or at least 
a warning message shown and a confirmation requested, because this 
could otherwise allow converting a replica 2 with 6 bricks into a 
replica 3, which obviously will not work. Maybe the remove command 
should not be allowed at all?

To simply expand the current volume with new bricks it wouldn't be 
necessary to put it in maintenance mode: only add commands would be 
valid, and a final commit would place the new bricks in production.

A rollback command would also be interesting, to discard the 
uncommitted changes made to a volume.
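
Putting the expansion flow together, a session could look like this 
(all of these commands are hypothetical, part of this proposal):

      gluster volume add-brick test node5:/brick node6:/brick
      gluster volume client-xlator test add replica.2 cluster/replicate brick.4 brick.5
      gluster volume client-xlator test commit

or, if something went wrong before the commit:

      gluster volume client-xlator test rollback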

Something similar could be done with the server-side xlators (the main 
difference is that they could be modified with the volume started; it 
would only appear to the client as if the corresponding brick were down):

      gluster volume server-xlator test ...

For example, to create a distributed-replicated volume with replica 2 
and 4 bricks we could do the following:

      gluster volume create test node1:/brick node2:/brick node3:/brick node4:/brick
      gluster volume client-xlator test add replica.0 cluster/replicate brick.0 brick.1
      gluster volume client-xlator test add replica.1 cluster/replicate brick.2 brick.3
      gluster volume client-xlator test add distribute.0 cluster/distribute replica.0 replica.1
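
The client vol file resulting from these four commands (before any 
defaults are added) might then look roughly like this; the 
protocol/client options are shown only for brick.0, as a sketch:

      volume brick.0
          type protocol/client
          option remote-host node1
          option remote-subvolume /brick
      end-volume

      # brick.1, brick.2 and brick.3 defined analogously

      volume replica.0
          type cluster/replicate
          subvolumes brick.0 brick.1
      end-volume

      volume replica.1
          type cluster/replicate
          subvolumes brick.2 brick.3
      end-volume

      volume distribute.0
          type cluster/distribute
          subvolumes replica.0 replica.1
      end-volume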

Here we could allow adding each performance or debug xlator one by one, 
using commands like these, or create a special command to add the 
standard remaining translators:

      gluster volume client-xlator test add-defaults distribute.0

and then:

      gluster volume client-xlator test commit

Server side:

      gluster volume server-xlator test add-defaults brick.0
      gluster volume server-xlator test add-defaults brick.1
      gluster volume server-xlator test add-defaults brick.2
      gluster volume server-xlator test add-defaults brick.3
      gluster volume server-xlator test commit brick.0
      gluster volume server-xlator test commit brick.1
      gluster volume server-xlator test commit brick.2
      gluster volume server-xlator test commit brick.3

As a side note: if we define the bricks as objects themselves (as you 
proposed in a previous email), the client definition would be more 
generic (without stating the path of the storage at volume creation 
time) and we could allow commands like this on the server side:

      gluster volume server-xlator test add brick.0 storage/posix /path/to/brick

or (to support non-posix storages):

      gluster volume server-xlator test add brick.0 storage/<any other storage> <options>
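
In the posix case, the generated brick stanza could be as simple as 
this (a sketch; storage/posix takes the brick path through its 
directory option):

      volume brick.0
          type storage/posix
          option directory /path/to/brick
      end-volume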

I'm not quite sure whether the server xlators should be defined 
top-down (from the protocol/server xlator down to storage/xxx) or 
bottom-up, like the client side.

An additional command that would be useful could be:

      gluster volume client-xlator test layout

This command should show the current layout of the volume, with its 
ids and which components are committed and which are not.

This offers a lower-level view of the gluster volume; whoever "plays" 
with it should be an administrator who knows what he is doing, and it 
might even give him a better understanding of how the volume works.

Xavi

On 22/01/13 19:34, Jeff Darcy wrote:
> On 01/22/2013 12:59 PM, Joe Julian wrote:
>> The cli has been great for adding the necessary management tools, but
>> if you want to use custom translators, you're back to writing vol
>> files and losing your ability to do online volume changes. This
>> ability needs to be added in order for custom translators to become
>> viable.
>>
>> What would the cli command look like for integrating custom xlator?
>>
>> I can picture a couple ways, one of which would be that xlators would
>> list requirements and providers so the volgen would be able to intuit
>> a valid graph if that xlator is enabled for the volume. The cli would
>> provide command hooks for any new features that xlator would need to
>> add to the cli.
>>
>> The gluster command should have a switch option listing the supported
>> mount options in case a xlator provides new ones (would be parsed by
>> mount.glusterfs).
>>
>> Anybody else have a view?
>
> How about this?
>
>     gluster volume client-xlator myvol encryption/rot14 cluster/distribute
>
> This would tell the volfile-generation machinery that it should insert 
> something like this:
>
>     volume myvol-dht
>         type cluster/distribute
>         ...
>     end-volume
>
>     volume myvol-rot14
>         type encryption/rot14
>         ...
>         subvolumes myvol-dht
>     end-volume
>
> Basically the type/path is determined by the first argument, and the 
> position in the volfile by the second.  There'd be a server-xlator 
> equivalent, obviously, and it's up to you to make sure the translator 
> even exists at that location on each client/server.  Then you could do 
> this:
>
>     gluster volume set myvol encryption/rot14.algorithm Salsa20
>
> This covers most of the kinds of translator insertion that I've seen 
> both in GlusterFS and in HekaFS, though there are a few that require 
> deeper changes to the volfile-generation logic (e.g. when NUFA was 
> brought back or to do HekaFS multi-tenancy).  One could even have 
> gluster/d inspect the named .so and make sure that everything "looks 
> right" in terms of entry points and options. One thing I don't like 
> about this approach is that there's no way to specify a specific 
> instance of the new translator or its parent either in the original 
> insertion command or when setting options; there's sort of an implicit 
> "for each" in there.  In some situations we might also want separate 
> "above" and "below" qualifiers to say where the new translator should go.
>
> For HekaFS I actually developed a Python infrastructure for working 
> with volfiles (see volfile.py either there or in some of my subsequent 
> scripts), and there's a hook to enable them (see 
> volgen_apply_filters).  That provides total flexibility, but that 
> doesn't make it the right approach.  For one thing, it doesn't really 
> play well with the rest of our option-setting machinery.  I think the 
> more "structured" approach would be better for the vast majority of 
> cases, with this type of filter only as a last resort.
>
>