[Gluster-users] Gluster inside containers

Zach Lanich zach at zachlanich.com
Tue Aug 16 05:04:45 UTC 2016


Hey guys, I’m having a really hard time figuring out how to handle my Gluster situation for the web hosting setup I’m working on. Here’s the rundown of what I’m trying to accomplish:

- Load-balanced web nodes (2 nodes right now), each running multiple LXD containers (1 container per website)
- Gluster vols mounted into the containers (I probably need site-specific volumes rather than mounting the same volume into all of them)

Here are 3 scenarios I’ve come up with for a replica 3 setup (possibly with an arbiter):

Option 1. 3 Gluster nodes, one large volume divided into subdirs (one per website), mounting each website’s subdir into its container and using ACLs plus LXD’s uid/gid maps (mixed feelings about security here)
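
For reference, here’s roughly how I picture Option 1 working (an untested sketch; the volume name "webhosting", the mount points and uid 1001 are just placeholders):

    # On each web node: mount the one big volume once on the host
    mount -t glusterfs gnode-1:/webhosting /mnt/webhosting

    # Give the site's owner uid R/W on its subdir via ACLs (plus a default ACL)
    setfacl -R -m u:1001:rwX /mnt/webhosting/website1
    setfacl -R -d -m u:1001:rwX /mnt/webhosting/website1

    # Bind the subdir into that site's container
    lxc config device add website1 webroot disk \
        source=/mnt/webhosting/website1 path=/var/www
    # NB: with LXD's default uid shifting, what the container writes as uid 1001
    # lands on the host as a shifted uid, which is part of my security question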

Option 2. 3 Gluster nodes, website-specific bricks on each, creating website-specific volumes, then mounting those respective volumes into their containers. Example layout (commands sketched after it):
    gnode-1
    - /data/website1/brick1
    - /data/website2/brick1
    gnode-2
    - /data/website1/brick2
    - /data/website2/brick2
    gnode-3
    - /data/website1/brick3
    - /data/website2/brick3
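
If it helps, the per-site volume creation I have in mind for Option 2 is roughly this (just a sketch; "website1" is a placeholder volume name):

    # Create and start one replica-3 volume per website
    gluster volume create website1 replica 3 \
        gnode-1:/data/website1/brick1 \
        gnode-2:/data/website1/brick2 \
        gnode-3:/data/website1/brick3
    # (or "replica 3 arbiter 1" for the arbiter variant)
    gluster volume start website1

    # On the web node: mount it and hand it to the container
    mount -t glusterfs gnode-1:/website1 /mnt/website1
    lxc config device add website1 webroot disk \
        source=/mnt/website1 path=/var/www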

Option 3. 3 Gluster nodes, every website gets its own mini “Gluster cluster” via LXD containers on the Gluster nodes. Example layout (sketch after it):
    gnode-1
    - gcontainer-website1
      - /data/brick1
    - gcontainer-website2
      - /data/brick1
    gnode-2
    - gcontainer-website1
      - /data/brick2
    - gcontainer-website2
      - /data/brick2
    gnode-3
    - gcontainer-website1
      - /data/brick3
    - gcontainer-website2
      - /data/brick3

Where I need help:

- I don’t know which method is best (or if all 3 are technically possible, though I feel they are)

My concerns/frustrations:

- Security
  - Option 1 - I have mixed feelings about putting all customers’ website files on one large volume and mounting subdirs of that volume into the LXD containers, giving each container R/W access to its subdir via ACLs on the host. Mounting via "lxc config device add" is supposedly secure in itself, but I’m just not sure here.
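
    To be concrete, the id-mapping piece I have in mind is something like this (assuming uid/gid 1001 owns the site files on the host; untested):
        # Map uid/gid 1001 in the container straight through to 1001 on the host,
        # so the host-side ACLs line up with what the container's web user writes
        lxc config set website1 raw.idmap "both 1001 1001"
        lxc restart website1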

- Performance 
  - Option 2 - Not sure whether Gluster will suffer in any way with, say, 50 volumes (one for each customer website).
  - Option 3 - Not sure whether I’d incur significant overhead running multiple instances of the Gluster daemons, etc., by creating an isolated Gluster cluster for every customer website. LXD itself is very lightweight, but would this be any worse than pushing, say, 50x the FOPs through a single, more powerful Gluster cluster?

- Networking
  - Option 3 - If all these mini Gluster clusters are in their own containers, it seems I’ll have some majorly annoying networking to do. I foresee a couple of ways to do this (and please let me know if you see alternative ways):
    - a. Send all Gluster traffic to the Gluster nodes, then use iptables & port forwarding to send traffic to the correct container (rough sketch after this list) - Seems like a nightmare. I think I’d have to use different sets of ports for every website’s Gluster cluster.
    - b. Bridge the containers to their host’s internal network and assign the containers unique IPs on the host’s network - Much more realistic, but I’m not 100% sure I can do this at the moment as I’m on DigitalOcean. I know there’s private networking, but I’m not 100% sure I can assign IPs on that network, as DO seems to assign the Droplets private IPs automatically, and I foresee IP collisions here. If I have to move to a different provider to do this, then so be it, but I like the SSDs :)
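
    For (a), the per-site rule set I’d be looking at is roughly this, which is why it feels like a nightmare (the external port, subnet and container IP are made up; glusterd listens on 24007 and bricks on 49152+):
        # Forward a unique external port on gnode-1 to website1's containerised glusterd
        iptables -t nat -A PREROUTING -p tcp --dport 24107 \
            -j DNAT --to-destination 10.0.3.11:24007
        iptables -t nat -A POSTROUTING -s 10.0.3.0/24 -j MASQUERADE
        # ...and again for every brick port (49152+), for every site - and the
        # client still expects the ports the volfile advertises, hence the pain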

I’d appreciate help on this as I’m a bit in over my head, but I’m extremely eager to figure this out and make it happen. I’m not 100% aware of what the security/performance/networking implications of the above decisions are, and I need an expert so I don’t go too far off into left field.

Best Regards,

Zach Lanich
Business Owner, Entrepreneur, Creative
Owner/CTO
weCreate LLC
www.WeCreate.com
