[Gluster-devel] Ability to turn off 'glusterfs' protocol

Deepak Shetty dpkshetty at gmail.com
Wed Jan 28 13:22:26 UTC 2015


Having followed all the previous discussions and thought more about this
requirement, I see a few different scenarios here; this is my take on them:

1) Scenario 1: ganesha server running inside the glusterfs TSP (trusted
storage pool) - here we just need the option to turn off the glusterfs
protocol.
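
Just to illustrate the shape of it: if the feature lands as a regular
volume option, the knob would presumably be flipped via the gluster CLI,
something like the sketch below (the option name is purely hypothetical -
no such option exists yet; it is exactly what this thread is discussing):

    # hypothetical option name, for illustration only - the feature is
    # still under discussion and no such option exists today
    gluster volume set <VOLNAME> protocol.glusterfs off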

2) Scenario 2: ganesha server running outside of the glusterfs TSP (this is
very much possible, and needed for Manila, since it has the concept of
serving shares over a service VM to provide network isolation): In this
case we don't use the option of turning off the glusterfs protocol;
instead we just use auth.allow <service VM IP> to ensure only the service
VM can access the glusterfs pool. There is a small caveat here, though:
the service VM in openstack will be configured to access the glusterfs
server via the neutron router and the br-ex external bridge, and the way
neutron sets this up is that it enables SNAT (source NAT), which causes
the source IP of the network packet to change by the time it reaches the
glusterfs server. So auth.allow <serviceVM IP> won't work. This issue is
openstack specific, but if it can't be solved, then Scenario 2 will be
difficult to achieve in Manila - something for the Manila folks to ponder.
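
For reference, the restriction itself is just the existing auth.allow
volume option (the volume name and address below are examples):

    # allow only the service VM's IP to connect over the glusterfs
    # protocol; connections from all other client IPs are rejected
    gluster volume set <VOLNAME> auth.allow 192.168.10.5

With SNAT in the picture, though, the packets arrive carrying the neutron
router's external IP as the source, so a rule keyed on the service VM's
own IP never matches.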

3) Scenario 3: Manila using a gluster volume's subdirs as shares instead
of the whole volume. Here each subdir inside the glusterfs volume is one
Manila share, and shares can be spread across different tenants in a real
world public cloud usecase. Going by the Manila service VM design, which
creates one service VM per tenant, the Manila driver needs to keep track
of each service VM and add its IP to auth.allow as and when new service
VMs come up - which again points back to the issue described in #2 above.
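
Note that auth.allow is a per-volume (not per-subdir) setting taking a
comma-separated list, so the driver would have to rewrite the whole list
whenever a tenant's service VM comes or goes (addresses are examples):

    # every tenant's service VM ends up allowed on the shared volume,
    # since the option cannot be scoped to a single subdir/share
    gluster volume set <VOLNAME> auth.allow 192.168.10.5,192.168.11.7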

4) Scenario 4: We use gluster-nfs (for cases where an older version of
gluster is present, or for some other reason): Here we can use the
glusterfs-protocol-off feature to ensure the volume is not accessible to
un-trusted clients over the glusterfs protocol.
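
gluster-nfs itself is governed by the existing nfs.disable volume option,
so the combination would roughly be the following (the second command is
hypothetical, as in Scenario 1):

    # keep the built-in gluster NFS server enabled (the default on
    # older releases)
    gluster volume set <VOLNAME> nfs.disable off
    # and, once the proposed feature exists, block direct
    # glusterfs-protocol access from un-trusted clients
    gluster volume set <VOLNAME> protocol.glusterfs off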

So it looks like the ability to turn off the glusterfs protocol completely
is only useful for Scenarios 1 & 4.
Scenarios 2 & 3 have bigger issues to deal with on the Manila side, given
the way neutron sets up networking in openstack (unless we don't use
auth.allow at all, which would leave the glusterfs volume open to all).

thanx,
deepak


On Wed, Jan 28, 2015 at 1:49 PM, Niels de Vos <ndevos at redhat.com> wrote:

> On Tue, Jan 27, 2015 at 08:29:49PM +0000, Csaba Henk wrote:
> > On Tue, 27 Jan 2015 11:39:52, Niels de Vos <ndevos at redhat.com> wrote:
> > > On Tue, Jan 27, 2015 at 02:10:17AM +0100, Csaba Henk wrote:
> > > > Does it mean that the implementation of the feature would
> > > > essentially boil down to an auth ruleset calculated by glusterfs?
> > >
> > > I guess that depends on the goal of the feature. Where does the need
> > > arise to "turn off the glusterfs protocol"? Should nothing outside of
> > > the trusted storage pool be able to connect to the bricks? This would
> > > effectively only allow NFS/Samba/... when the service is located on a
> > > system that is part of the trusted storage pool.
> >
> > So, the basic use case is when in a cloud environment we make a Gluster
> > volume (entirely or partially) accessible via Gluster-NFS. Then the NFS
> > server is part of the Gluster cluster and a simple semantics of "turn off
> > glusterfs proto" (involving the earlier discussed internal exceptions)
> > seems to do the job (for preventing uncurated access).
>
> Ok.
>
> > However, a variant of that -- which is supposed to become more prevalent
> > -- is when the Gluster volume is made accessible with the help of
> > NFS-Ganesha. The Ganesha server typically resides outside of the
> > cluster, but should be handled as a trusted entity. That is, we still
> > need a simplistic semantics which lets us (cloud integrators) be
> > assured that uncurated glusterfs access is prevented, but we need to
> > allow exceptions for the occasional external trusted entity.
>
> The High-Availability NFS-Ganesha design puts the NFS-Ganesha service in
> the trusted storage pool, possibly on the servers hosting bricks. Maybe
> you can comment on why this is not suitable or wished for in your
> environment? This design basically swaps Gluster/NFS for NFS-Ganesha.
>
>
> http://www.gluster.org/community/documentation/index.php/Features/HA_for_ganesha
>
> > Furthermore, the one who knows of these (transient) exceptions is
> > (the admin of / some component of) the cloud, so their management
> > should also happen from the cloud side. That led me to ask about
> > "gluster volume set". (Anyway, the model case, turning off gluster
> > NFS, is also managed from the cloud side by "gluster volume set".)
>
> I'm not sure if you're talking about this?
>
>
> http://www.gluster.org/community/documentation/index.php/Features/Gluster_CLI_for_ganesha
>
> At the moment, I think that also assumes the NFS-Ganesha service is
> running inside the trusted storage pool. If you need any of those
> functions available outside of the trusted storage pool, get in touch
> with the feature owners and keep the gluster-devel list on CC.
>
> Thanks,
> Niels