[Gluster-devel] [Gluster-users] Plans for Gluster 3.8

Saravanakumar Arumugam sarumuga at redhat.com
Fri Aug 14 07:15:57 UTC 2015


Hi Atin/Kaushal,
I am interested in taking up the "selective read-only mode" feature (Bug #829042).
I will look into this and talk to you further.

Thanks,
Saravana

On 08/13/2015 08:58 PM, Atin Mukherjee wrote:
>
> Can we have some volunteers for these BZs?
>
> -Atin
> Sent from one plus one
>
> On Aug 12, 2015 12:34 PM, "Kaushal M" <kshlmster at gmail.com> wrote:
>
>     Hi Csaba,
>
>     These are the updates on the requirements, following our meeting last
>     week. The specific updates on each requirement are inline.
>
>     In general, we feel that the requirements for selective read-only mode
>     and immediate disconnection of clients on access revocation are doable
>     for GlusterFS-3.8. The only problem right now is that we do not have
>     any volunteers for them.
>
>     > 1.    Bug 829042 - [FEAT] selective read-only mode
>     > https://bugzilla.redhat.com/show_bug.cgi?id=829042
>     >
>     >       absolutely necessary for not getting tarred & feathered in
>     >       Tokyo ;)
>     >       either resurrect http://review.gluster.org/3526
>     >       and _figure out integration with the auth mechanism for
>     >       special mounts_, or come up with a completely different concept
>     >
>
>     With the availability of client_t, implementing this should become
>     easier. The server xlator would store the incoming connection's common
>     name or address in the client_t associated with the connection. The
>     read-only xlator could then use this information to selectively
>     enforce read-only access. It would need a new option for selective
>     read-only, populated with the common names and addresses of the
>     clients that should get read-only access.
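>
>     To make this concrete, below is a rough, self-contained sketch of just
>     the matching step: deciding whether a connecting client's common name
>     or address appears in the configured read-only list. The option value,
>     helper name and list format are made up for illustration; the real
>     code would read the list from the xlator's options and the identity
>     from client_t.
>
>     /* Sketch only: check whether a client's common name or address appears
>      * in a comma-separated list of clients that should get read-only access. */
>     #define _POSIX_C_SOURCE 200809L
>     #include <stdio.h>
>     #include <string.h>
>
>     static int is_read_only_client(const char *ro_list, const char *ident)
>     {
>         char  buf[1024];
>         char *tok     = NULL;
>         char *saveptr = NULL;
>
>         if (!ro_list || !ident)
>             return 0;
>
>         strncpy(buf, ro_list, sizeof(buf) - 1);
>         buf[sizeof(buf) - 1] = '\0';
>
>         for (tok = strtok_r(buf, ",", &saveptr); tok;
>              tok = strtok_r(NULL, ",", &saveptr)) {
>             if (strcmp(tok, ident) == 0)
>                 return 1;
>         }
>         return 0;
>     }
>
>     int main(void)
>     {
>         /* Hypothetical value of a selective read-only option. */
>         const char *ro_clients = "client1.example.com,10.70.1.5";
>
>         printf("%d\n", is_read_only_client(ro_clients, "10.70.1.5"));           /* 1 */
>         printf("%d\n", is_read_only_client(ro_clients, "client2.example.com")); /* 0 */
>         return 0;
>     }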
>
>     > 2.    Bug 1245380 - [RFE] Render all mounts of a volume defunct
>     >       upon access revocation
>     > https://bugzilla.redhat.com/show_bug.cgi?id=1245380
>     >
>     >       necessary to let us enable a watershed scalability
>     >       enhancement
>     >
>
>     Currently, when the auth.allow/reject and auth.ssl-allow options are
>     changed, the server xlator does a reconfigure to reload its access
>     list. It only reloads the list and doesn't affect any existing
>     connections. To add this feature, the server xlator would need to
>     iterate through its xprt_list on reconfigure and re-check every
>     connection's authorization; connections that have lost authorization
>     would be disconnected.
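>
>     A rough, self-contained sketch of that sweep is below. The structures
>     and names here are invented for illustration; in the real server
>     xlator this would walk the actual xprt_list and go through the
>     transport's disconnect path for any connection that fails the fresh
>     auth check.
>
>     /* Sketch only: re-check every existing connection against the new
>      * allow list and drop the ones that have lost access. */
>     #include <stdio.h>
>     #include <string.h>
>
>     struct conn {
>         const char *ident;      /* client address or TLS common name */
>         int         connected;
>     };
>
>     static int still_authorized(const char *allow_list, const char *ident)
>     {
>         /* Simplistic substring match, just for the sketch; the real check
>          * would reuse the same auth logic used at connection setup. */
>         return strstr(allow_list, ident) != NULL;
>     }
>
>     static void sweep_connections(struct conn *conns, int nconns,
>                                   const char *allow_list)
>     {
>         int i;
>
>         for (i = 0; i < nconns; i++) {
>             if (conns[i].connected &&
>                 !still_authorized(allow_list, conns[i].ident)) {
>                 conns[i].connected = 0;   /* stand-in for a disconnect */
>                 printf("disconnecting %s\n", conns[i].ident);
>             }
>         }
>     }
>
>     int main(void)
>     {
>         struct conn conns[] = {
>             { "10.70.1.5",  1 },
>             { "10.70.1.99", 1 },
>         };
>
>         /* auth.allow was changed so that only 10.70.1.5 remains allowed. */
>         sweep_connections(conns, 2, "10.70.1.5");
>         return 0;
>     }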
>
>     > 3.    Bug 1226776 – [RFE] volume capability query
>     > https://bugzilla.redhat.com/show_bug.cgi?id=1226776
>     >
>     >       eventually we'll be choking in spaghetti if we don't get
>     >       this feature. The ugly version checks we need to do against
>     >       GlusterFS as in
>     >
>     >
>     >       https://review.openstack.org/gitweb?p=openstack/manila.git;a=commitdiff;h=29456c#patch3
>     >
>     >       will proliferate and eat the guts of the code out of its
>     >       living body if this is not addressed.
>     >
>
>     This requires some more thought to figure out the correct solution.
>     One possible way to get the capabilities of the cluster would be to
>     look at the cluster's running op-version. This can be obtained using
>     `gluster volume get all cluster.op-version` (the volume get command is
>     available in glusterfs-3.6 and above). But this doesn't provide much
>     improvement over the existing checks being done in the driver.
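>
>     For illustration, the query looks roughly like this (the value shown
>     is only an example; 30600 corresponds to a 3.6 cluster):
>
>         # gluster volume get all cluster.op-version
>         cluster.op-version                      30600
>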
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel


