[Gluster-users] GlusterFS Multitenancy -- supports multi-tenancy by partitioning users or groups into logical volumes on shared storage

Deepak Naidu dnaidu at nvidia.com
Mon Mar 6 04:33:24 UTC 2017


Any input on how multi-tenancy works in Gluster?


https://gluster.readthedocs.io/en/latest/Administrator%20Guide/GlusterFS%20Introduction/

GlusterFS. It supports multi-tenancy by partitioning users or groups into logical volumes on shared storage.



--
Deepak

On Mar 2, 2017, at 3:38 PM, Deepak Naidu <dnaidu at nvidia.com> wrote:

Hello,

I have been reading the statement below in the GlusterFS docs and articles regarding multi-tenancy. Is this statement related to virtualized environments, i.e. VMs? How valid is "partitioning users or groups into logical volumes"? Can someone explain what it really means?
Is it that I can associate a user/group (UID/GID), as with NFS, to a GlusterFS volume?

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/GlusterFS%20Introduction/

GlusterFS. It supports multi-tenancy by partitioning users or groups into logical volumes on shared storage.
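
For the user/group part, this is the sort of thing I was imagining -- just a sketch, and I'm not sure these are the right knobs (storage.owner-uid/gid set ownership of the volume root, and auth.allow is IP-based rather than per-UID):

    # hypothetical example, assuming a volume named data1 already exists
    gluster volume set data1 storage.owner-uid 1001
    gluster volume set data1 storage.owner-gid 1001
    # restrict which clients may mount the volume (IP-based, not per-UID)
    gluster volume set data1 auth.allow 192.168.10.*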


My thought was that I could do multi-tenancy at the volume level, as below.


*  Create a distributed volume named data1 for tenant1 from StorageNode1-5 using Disk1 (RAID) over the NIC-1 network

*  Similarly, create a distributed volume named data2 for tenant2 from StorageNode1-5 using Disk2 (RAID) over the NIC-2 network

Is my understanding correct? How do users/groups come into the picture? The sketch below shows what I mean in gluster CLI terms.
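
Illustrative only -- the hostnames and brick paths are made up:

    # Tenant 1: distributed volume on the Disk1 bricks, reached via the NIC-1 hostnames
    gluster volume create data1 \
        storagenode1-nic1:/bricks/disk1/data1 \
        storagenode2-nic1:/bricks/disk1/data1 \
        storagenode3-nic1:/bricks/disk1/data1 \
        storagenode4-nic1:/bricks/disk1/data1 \
        storagenode5-nic1:/bricks/disk1/data1
    gluster volume start data1

    # Tenant 2: separate distributed volume on the Disk2 bricks via the NIC-2 hostnames
    gluster volume create data2 \
        storagenode1-nic2:/bricks/disk2/data2 \
        storagenode2-nic2:/bricks/disk2/data2 \
        storagenode3-nic2:/bricks/disk2/data2 \
        storagenode4-nic2:/bricks/disk2/data2 \
        storagenode5-nic2:/bricks/disk2/data2
    gluster volume start data2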


--
Deepak
