[Gluster-users] [Gluster-devel] GlusterFS - Did you know? document
Justin Clift
justin at gluster.org
Wed Nov 12 13:26:50 UTC 2014
On Wed, 12 Nov 2014 07:17:25 -0500 (EST)
Krishnan Parthasarathi <kparthas at redhat.com> wrote:
> All,
>
> We have come across behaviours and features of GlusterFS that are left
> unexplained for various reasons. Thanks to Justin Clift for
> encouraging me to come up with a document that tries to fill this gap
> incrementally. We have decided to call it "did-you-know.md", and for a
> reason. We'd love to see updates to this document from all of you,
> describing behaviours and features of GlusterFS that you have
> discovered while using it but don't see documented anywhere. We
> believe it would be of great help to fellow users and (new)
> developers.
>
> Here is the first cut of the patch - http://review.gluster.com/9103
> Comments and feedback welcome.
Here's the first "Did you know?" item (for everyone who doesn't want
to click through to Gerrit).
What other things like this should we include? :)
Regards and best wishes,
Justin Clift
********************************************************************
## Trusted Volfiles
Observant admins will already have wondered why there are two nearly
identical volume files, trusted-<VOLNAME>-fuse.vol and <VOLNAME>-fuse.vol.
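Both files are generated by glusterd and, assuming the default working
directory, can be compared side by side (the volume name "myvol" below is
a placeholder):

```
# Volfiles live under glusterd's working directory, one set per volume:
ls /var/lib/glusterd/vols/myvol/*.vol

# The interesting difference between the two fuse volfiles is small:
diff /var/lib/glusterd/vols/myvol/myvol-fuse.vol \
     /var/lib/glusterd/vols/myvol/trusted-myvol-fuse.vol
```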
To understand why that is, we need to understand how IP address/hostname
based access restriction to volumes works. Two options, "auth-allow" and
"auth-reject", allow the admin to restrict which client machines can
access a volume.
The "auth-allow" and "auth-reject" options take a comma separated list
of IP addresses/hostnames as value. The way "auth-allow" works is
that it allows access to volumes only for clients running on one
of the machines whose IP address/hostname is in that list.
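For example, a minimal sketch using the gluster CLI (note the CLI spells
these options with a dot; the volume name and addresses are placeholders):

```
# Permit mounts only from two known client machines:
gluster volume set myvol auth.allow 192.168.1.10,192.168.1.11

# Or deny a specific host while permitting all others:
gluster volume set myvol auth.reject 10.0.0.99
```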
It is easy to see that an admin could configure an "auth-allow" list
which inadvertently denies access to the volume from within the trusted
storage pool itself. This is definitely undesirable. For example, the
gluster-nfs process could be denied access to the bricks.
The work-around is to ask the admin to add the IP addresses/hostnames
of all the nodes in the trusted storage pool to the "auth-allow" list.
This becomes unwieldy once the pool has more than a handful of nodes.
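To see why, here is what the work-around looks like in practice (all
addresses are placeholders); the list must also be edited every time a
node joins the pool:

```
# Real clients *and* every storage node have to be enumerated together:
gluster volume set myvol auth.allow \
    192.168.1.10,192.168.1.11,10.0.0.1,10.0.0.2,10.0.0.3
```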
So an alternate authentication mechanism, one that overrides the
"auth-allow" configuration, was developed for nodes in the storage pool.
The following is a brief explanation of how this works. The volume
file with the "trusted" prefix in its name (i.e. the trusted-volfile)
has a username and password option in the client xlator.
The trusted-volfile is used _only_ by mount processes running in the
trusted storage pool (hence the name).
The username and password, when present, allow "mount" (and other
glusterfs) processes to access the brick processes even if the node
they run on is not explicitly listed in the "auth-allow" addresses.
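As an illustration, the client xlator stanza in the trusted-volfile
looks roughly like this (hypothetical volume, host, and credentials;
the real username/password pair is generated by glusterd):

```
volume myvol-client-0
    type protocol/client
    option remote-host server1
    option remote-subvolume /bricks/myvol/brick0
    # These two options appear only in trusted-myvol-fuse.vol. They let
    # processes inside the pool authenticate to the bricks even when
    # their node is absent from the "auth-allow" list:
    option username 00000000-placeholder-uuid
    option password 00000000-placeholder-secret
end-volume
```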
'Regular' mount processes, running on nodes outside the trusted
storage pool, use the non-trusted-volfile.
The important thing to note here is the way the word "trusted" is
used: in this context it only means belonging to the trusted storage
pool.
********************************************************************
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift