[Gluster-users] Advice for running out of space on a replicated 4-brick gluster

Strahil Nikolov hunter86_bg at yahoo.com
Mon Feb 17 23:29:22 UTC 2020


On February 18, 2020 1:16:19 AM GMT+02:00, Artem Russakovskii <archon810 at gmail.com> wrote:
>Hi all,
>
>We currently have an 8TB 4-brick replicated volume on our 4 servers,
>and are at 80% capacity. The max disk size on our host is 10TB. I'm
>starting to think about what happens closer to 100% and see 2 options.
>
>Either we go with another new 4-brick replicated volume and start
>dealing with symlinks in our webapp to make sure it knows which volume
>the data is on, which is a bit of a pain (but not too much) on the
>sysops side of things. Right now the whole volume mount is symlinked
>to a single location in the webapps (an uploads/ directory) and life
>is good. After such a split, I'd have to split uploads into yeardir
>symlinks, make sure future yeardir symlinks are created ahead of time
>and point to the right volume, etc.
>
>The other direction would be converting the replicated volume to a
>distributed replicated one
>https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes,
>but I'm a bit scared to do it with production data (even after
>testing, of course), especially since I've never dealt with a
>distributed replicated volume.
>
>1. Is it possible to convert our existing volume on the fly by adding
>   4 bricks but keeping the replica count at 4?
>2. What happens if bricks 5-8, which contain replicated volume #2, go
>   down for whatever reason or can't meet their quorum, but replicated
>   volume #1 is still up? Does the whole combined volume become
>   unavailable, or only the portion of it whose data resides on
>   replicated volume #2?
>3. Any other gotchas?
>
>Thank you very much in advance.
>
>Sincerely,
>Artem
>
>--
>Founder, Android Police <http://www.androidpolice.com>, APK Mirror
><http://www.apkmirror.com/>, Illogical Robot LLC
>beerpla.net | @ArtemR
><http://twitter.com/ArtemR>
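
[The yeardir symlink scheme described in the question could be sketched as below. All paths are illustrative assumptions: /tmp/demo stands in for the real gluster volume mounts, which would normally be something like /mnt/gvol1 and /mnt/gvol2.]

```shell
# Illustrative only -- /tmp/demo stands in for real gluster mounts.
# Each yeardir under uploads/ is a symlink into whichever volume
# holds that year's data.
mkdir -p /tmp/demo/gvol1/2019 /tmp/demo/gvol2/2021 /tmp/demo/uploads

# -sfn: symbolic, force-replace any existing link, don't follow it
ln -sfn /tmp/demo/gvol1/2019 /tmp/demo/uploads/2019
ln -sfn /tmp/demo/gvol2/2021 /tmp/demo/uploads/2021

# The webapp keeps writing to uploads/<year>/ and never needs to know
# which volume backs it:
readlink /tmp/demo/uploads/2021
```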

Distributed replicated sounds more reasonable.

Out of curiosity, why did you decide on an even number of bricks in the replica set? It can still suffer from split-brain.

1.  It should be OK, but I have never done it. Test on some VMs before proceeding.
Rebalance might take some time, so keep that in mind.
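
The conversion in 1. would roughly look like this (a sketch only: the volume name "myvol" and the brick paths are assumptions, substitute your own):

```shell
# Adding 4 more bricks while keeping "replica 4" turns the volume
# into a 2 x 4 distributed-replicated one (two replica sets).
gluster volume add-brick myvol replica 4 \
  server5:/data/brick/myvol server6:/data/brick/myvol \
  server7:/data/brick/myvol server8:/data/brick/myvol

# Spread existing files across both replica sets, then watch progress:
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```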

2. All files on the replica set holding bricks 5-8 will be unavailable until you recover that set of bricks.

Best Regards,
Strahil Nikolov


