[Gluster-users] Gluster inside containers
Zach Lanich
zach at wecreate.com
Wed Aug 17 13:09:25 UTC 2016
It's good to hear the support is coming though. Thanks!
Best Regards,
Zach Lanich
Owner/Lead Developer
weCreate LLC
www.WeCreate.com
814.580.6636
> On Aug 17, 2016, at 8:54 AM, Kaushal M <kshlmster at gmail.com> wrote:
>
> On Wed, Aug 17, 2016 at 5:18 PM, Humble Devassy Chirammal
> <humble.devassy at gmail.com> wrote:
>> Hi Zach,
>>
>>>
>> Option 1. 3 Gluster nodes, one large volume, divided up into subdirs (1 for
>> each website), mounting the respective subdirs into their containers & using
>> ACLs & LXD’s uid/gid maps (mixed feelings about security here)
>>>
>>
>> Which version of GlusterFS is in use here? A patch for sub-directory
>> support is available upstream, but I don't think it's in a good state
>> to consume yet. Also, if sub-directory mounts are used, we have to
>> take enough care to isolate the mounts between the different users,
>> i.e. security is a concern here.
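>>
>> For illustration, the LXD side of that isolation could look roughly
>> like this (the container name "web1" and the host uid/gid 100001 are
>> made-up examples, not from your setup): give each site's container a
>> dedicated host uid/gid, so one site's files are not readable from
>> another site's container:
>>
>>     # map the site's dedicated host uid/gid onto www-data (id 33) inside web1
>>     lxc config set web1 raw.idmap "both 100001 33"
>>     lxc restart web1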
>
> A correction here. Sub-directory mount support hasn't been merged yet.
> It's still a patch under review.
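>
> For reference, the mount syntax being proposed is roughly the
> following, though it may still change while the patch is under review
> (the volume and path names here are just examples):
>
>     # mount a single subdirectory of a volume instead of the whole volume
>     mount -t glusterfs gnode-1:/bigvol/website1 /mnt/website1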
>
>>
>>>
>> Option 2. 3 Gluster nodes, website-specific bricks on each, creating
>> website-specific volumes, then mounting those respective volumes into their
>> containers. Example:
>> gnode-1
>> - /data/website1/brick1
>> - /data/website2/brick1
>> gnode-2
>> - /data/website1/brick2
>> - /data/website2/brick2
>> gnode-3
>> - /data/website1/brick3
>> - /data/website2/brick3
>>>
>>
>> Yes, this looks like the most practical and consumable approach to me.
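>>
>> As a rough sketch using the hostnames and brick paths from your
>> example (the LXD container name "web1" is made up), each site becomes
>> its own replica-3 volume:
>>
>>     gluster volume create website1 replica 3 \
>>         gnode-1:/data/website1/brick1 \
>>         gnode-2:/data/website1/brick2 \
>>         gnode-3:/data/website1/brick3
>>     gluster volume start website1
>>
>>     # mount it on the host and pass it into the site's LXD container
>>     mount -t glusterfs gnode-1:/website1 /mnt/website1
>>     lxc config device add web1 www disk source=/mnt/website1 path=/var/www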
>>
>>>
>>
>> Option 3. 3 Gluster nodes, every website gets its own mini “Gluster
>> Cluster” via LXD containers on the Gluster nodes. Example:
>> gnode-1
>> - gcontainer-website1
>> - /data/brick1
>> - gcontainer-website2
>> - /data/brick1
>> gnode-2
>> - gcontainer-website1
>> - /data/brick2
>> - gcontainer-website2
>> - /data/brick2
>> gnode-3
>> - gcontainer-website1
>> - /data/brick3
>> - gcontainer-website2
>> - /data/brick3
>>>
>>
>> This is very complex to set up and maintain.
>>
>> In short, I would vote for option 2.
>>
>> Also, to be on the safer side, you may want to take snapshots of the
>> volumes or configure backups for them, so that the volumes themselves
>> are not a single point of failure.
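>>
>> The snapshot side is a one-liner, with the caveat that Gluster
>> snapshots require the bricks to sit on thinly provisioned LVM (the
>> snapshot name here is just an example):
>>
>>     gluster snapshot create website1-pre-deploy website1
>>     gluster snapshot list website1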
>>
>> Please let me know if you need any details.
>>
>> --Humble
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users