[Gluster-users] auth.allow doesn't seem to work

Atin Mukherjee amukherj at redhat.com
Fri Sep 23 10:17:07 UTC 2016


Have you tried setting auth.reject to * first, and then adding the specific auth.allow?
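
Something along these lines (just a sketch, not tested here; quote the
glob so the shell doesn't expand it):

    gluster volume set VMs auth.reject '*'
    gluster volume set VMs auth.allow '10.10.0.*'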

On Friday 23 September 2016, Kevin Lemonnier <lemonnierk at ulrar.net> wrote:

> Hi,
>
> Using GlusterFS 3.7.15 on Debian 8, I'm trying to limit access to my
> volume using auth.allow.
> I have 3 nodes in replication, each with both a public and a private
> interface.
> Gluster uses the private IPs to communicate, but I noticed it was
> possible to mount the volume from the internet (that's bad...), so I
> googled a bit. If I understand it correctly, auth.allow should let me
> limit access to the volume to a list of IPs, is that correct?
>
> I ran "gluster volume set VMs auth.allow 10.10.0.*" and it reported
> success (the option does appear in the volume info), but I can still
> mount the volume from the internet. Mounting only works over NFS,
> because with FUSE the client tries to use the private addresses, which
> aren't reachable from the internet, but it still gets the volume
> configuration and the node names anyway.
>
> Should I do something specific after setting auth.allow?
>
> Here is the volume info:
>
> Volume Name: VMs
> Type: Replicate
> Volume ID: d0ee13f2-055c-4f37-9c75-527d5e86b78d
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: ips1adm.clientname:/mnt/storage/VMs
> Brick2: ips2adm.clientname:/mnt/storage/VMs
> Brick3: ips3adm.clientname:/mnt/storage/VMs
> Options Reconfigured:
> auth.allow: 10.10.0.*
> network.ping-timeout: 15
> cluster.data-self-heal-algorithm: full
> features.shard-block-size: 64MB
> features.shard: on
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> performance.readdir-ahead: on
>
>
> --
> Kevin Lemonnier
> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
>
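
One more thing on the NFS part: as far as I know, auth.allow only
applies to the native glusterfs protocol. If you keep the built-in
gluster NFS server enabled, access to it is controlled separately, so
you would also want something like (again a sketch, same caveat about
quoting the glob):

    gluster volume set VMs nfs.rpc-auth-reject '*'
    gluster volume set VMs nfs.rpc-auth-allow '10.10.0.*'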


-- 
--Atin