[Gluster-users] Advice for running out of space on a replicated 4-brick gluster

Artem Russakovskii archon810 at gmail.com
Mon Feb 17 23:16:19 UTC 2020


Hi all,

We currently have an 8TB 4-brick replicated volume across our 4 servers and
are at 80% capacity. The largest disk our host offers is 10TB, so I'm
starting to think about what happens as we approach 100%, and I see two
options.

Either we create another new 4-brick replicated volume and start dealing
with symlinks in our webapp so it knows which volume each piece of data is
on, which is a bit of a pain (but not too much) on the sysops side of
things. Right now the whole volume mount is symlinked to a single location
in the webapps (an uploads/ directory) and life is good. After such a
split, I'd have to break uploads/ into per-year symlinks, make sure future
yeardir symlinks are created ahead of time and point to the right volume,
and so on (sketched below).
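For what it's worth, the split I have in mind would look roughly like the
following. This is just a sketch; the mount points (/mnt/gvol1, /mnt/gvol2)
and the webapp path are made-up placeholders, not our real layout:

    # Hypothetical layout: two gluster mounts, one uploads/ directory of
    # per-year symlinks, so the webapp never needs to know which volume
    # a given year actually lives on.
    mkdir -p /srv/webapp/uploads

    # Existing years stay on the original volume...
    for y in 2018 2019 2020; do
        ln -sfn /mnt/gvol1/uploads/$y /srv/webapp/uploads/$y
    done

    # ...and future years get pre-created on the new volume ahead of time.
    ln -sfn /mnt/gvol2/uploads/2021 /srv/webapp/uploads/2021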

The other direction would be converting the replicated volume to a
distributed replicated one
(https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes),
but I'm a bit scared to do that with production data (even after testing, of
course), having never dealt with a distributed replicated volume before.

   1. Is it possible to convert our existing volume on the fly by adding 4
   bricks while keeping the replica count at 4 (see the sketch after this
   list)?
   2. What happens if bricks 5-8, which would hold replica set #2, go down
   for whatever reason or lose quorum while replica set #1 stays up? Does
   the whole combined volume become unavailable, or only the portion of it
   whose data resides on replica set #2?
   3. Any other gotchas?
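If I'm reading the docs right, the conversion in question 1 would be a
single add-brick call that keeps the replica count at 4, something like the
following. The volume name, hostnames, and brick paths are placeholders:

    # Keep replica 4 while adding a second replica set of 4 bricks;
    # gluster should then report the volume as distributed-replicate (2 x 4).
    gluster volume add-brick myvol replica 4 \
        server5:/data/brick1 server6:/data/brick1 \
        server7:/data/brick1 server8:/data/brick1

    # Spread existing files across both replica sets afterwards.
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status

I'd love confirmation that this is safe on a live volume before trying it.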

Thank you very much in advance.

Sincerely,
Artem

--
Founder, Android Police <http://www.androidpolice.com>, APK Mirror
<http://www.apkmirror.com/>, Illogical Robot LLC
beerpla.net | @ArtemR
<http://twitter.com/ArtemR>