[Gluster-users] Gluster inside containers
Personal
zach at zachlanich.com
Wed Aug 17 13:50:04 UTC 2016
Thanks, Humble. Re: the single point of failure: would there still be a single point of failure in a 4- or 6-node Distributed-Replicated setup? I still have to wrap my head around exactly how many nodes I need for HA & linear scalability over time.
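For what it's worth, here's the kind of layout I'm picturing (hostnames purely hypothetical): a 6-node, replica-3 Distributed-Replicated volume, something like:

    gluster volume create websites replica 3 \
        gnode-1:/data/brick1 gnode-2:/data/brick1 gnode-3:/data/brick1 \
        gnode-4:/data/brick2 gnode-5:/data/brick2 gnode-6:/data/brick2

If I understand it right, that gives two replica-3 sets, so any single node can fail without taking data offline, and capacity grows by adding bricks three at a time.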
PS: Good to hear subdirectory mount support is coming.
Best Regards,
Zach Lanich
Business Owner, Entrepreneur, Creative
Owner/Lead Developer
weCreate LLC
www.WeCreate.com
> On Aug 17, 2016, at 7:48 AM, Humble Devassy Chirammal <humble.devassy at gmail.com> wrote:
>
> Hi Zach,
>
> >
> Option 1. 3 Gluster nodes, one large volume, divided into subdirectories (one per website), mounting the respective subdirectories into their containers & using ACLs & LXD’s uid/gid maps (mixed feelings about security here)
> >
>
> Which version of GlusterFS is in use here? The Gluster subdirectory-mount support patch is available upstream, but I don't think it is in a good enough state to consume yet. And yes, if subdirectory mounts are used, we have to take enough care to ensure isolation of the mounts between multiple users; i.e., security is a concern here.
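> Just to illustrate what I mean (volume and path names are only examples), once that support is consumable the client side would presumably be a plain subdirectory appended to the volume name at mount time:
>
>     mount -t glusterfs gnode-1:/websites/website1 /mnt/website1
>
> but I would not rely on this in production yet.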
>
> >
> Option 2. 3 Gluster nodes, website-specific bricks on each, creating website-specific volumes, then mounting those respective volumes into their containers. Example:
> gnode-1
> - /data/website1/brick1
> - /data/website2/brick1
> gnode-2
> - /data/website1/brick2
> - /data/website2/brick2
> gnode-3
> - /data/website1/brick3
> - /data/website2/brick3
> >
>
> Yes, this looks to me like the ideal and most consumable approach.
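> A rough sketch of the volume creation for one website, assuming the brick paths in your example:
>
>     gluster volume create website1 replica 3 \
>         gnode-1:/data/website1/brick1 \
>         gnode-2:/data/website1/brick2 \
>         gnode-3:/data/website1/brick3
>     gluster volume start website1
>
> Each container then mounts only its own volume, for example:
>
>     mount -t glusterfs gnode-1:/website1 /mnt/website1
>
> so the isolation happens at the volume level rather than via ACLs.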
>
> >
>
> Option 3. 3 Gluster nodes, every website gets its own mini “Gluster cluster” via LXD containers on the Gluster nodes. Example:
> gnode-1
> - gcontainer-website1
> - /data/brick1
> - gcontainer-website2
> - /data/brick1
> gnode-2
> - gcontainer-website1
> - /data/brick2
> - gcontainer-website2
> - /data/brick2
> gnode-3
> - gcontainer-website1
> - /data/brick3
> - gcontainer-website2
> - /data/brick3
> >
>
> This would be very complex to set up and maintain.
>
> In short, I would vote for option 2.
>
> Also, to be on the safe side, you may want to take snapshots of the volumes or configure backups for them, so the data itself is not a single point of failure.
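> For example, assuming the bricks sit on thinly provisioned LVM (which Gluster snapshots require), something like:
>
>     gluster snapshot create website1-snap website1
>
> or alternatively a scheduled geo-replication / rsync to another location would cover the backup side.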
>
> Please let me know if you need any details.
>
> --Humble
>
>