[Gluster-users] Advice for running out of space on a replicated 4-brick gluster

Amar Tumballi amar at kadalu.io
Tue Feb 18 05:06:24 UTC 2020

On Tue, Feb 18, 2020 at 4:47 AM Artem Russakovskii <archon810 at gmail.com> wrote:

> Hi all,
> We currently have an 8TB 4-brick replicated volume on our 4 servers, and
> are at 80% capacity. The max disk size on our host is 10TB. I'm starting to
> think about what happens closer to 100% and see 2 options.
> Either we go with another new 4-brick replicated volume and start dealing
> with symlinks in our webapp to make sure it knows which volume the data is
> on, which is a bit of a pain (but not too much) on the sysops side of
> things. Right now the whole volume mount is symlinked to a single location
> in the webapps (an uploads/ directory) and life is good. After such a
> split, I'd have to split uploads into yeardir symlinks, make sure future
> yeardir symlinks are created ahead of time and point to the right volume,
> etc.
> The other direction would be converting the replicated volume to a
> distributed replicated one
> https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes,
> but I'm a bit scared to do it with production data (even after testing, of
> course), having never dealt with a distributed replicated volume before.

This is the idea behind calling Gluster a 'scale-out' storage: we expect
our users to not have to change their application for scale-out
operations. But you are right, such things are 'not' a day-to-day activity.
Rebalance has been exercised across all these Gluster versions, though,
and a lot of fixes have gone in to stabilize the feature.

>    1. Is it possible to convert our existing volume on the fly by adding
>    4 bricks but keeping the replica count at 4?
Yes, technically possible. All you have to use is the 'add-brick' CLI.
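
A minimal sketch (the volume name 'myvol' and the new brick hosts/paths
are placeholders, assuming the existing volume is replica 4):

    # add a second replica-4 set; the volume becomes distributed-replicate
    gluster volume add-brick myvol replica 4 \
        server5:/data/brick1 server6:/data/brick1 \
        server7:/data/brick1 server8:/data/brick1

    # the Type field should now show Distributed-Replicate
    gluster volume info myvol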

>    2. What happens if bricks 5-8 which contain the replicated volume #2
>    go down for whatever reason or can't meet their quorum, but the replicated
>    volume #1 is still up? Does the whole main combined volume become
>    unavailable or only a portion of it which has data residing on replicated
>    volume #2?
A portion of it. This is a similar situation to one of the 2 subvolumes of
DHT going down (without any replica). The volume will serve data from the
available nodes. But be warned that, depending on the hash of the file,
file creation may also fail roughly 50% of the time in that case.
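
You can check which replica set actually holds a given file from the
client side; a minimal sketch, assuming the volume is mounted at
/mnt/uploads (a placeholder path):

    # the pathinfo virtual xattr reports the brick(s) backing a file,
    # i.e. which subvolume DHT hashed it onto
    getfattr -n trusted.glusterfs.pathinfo /mnt/uploads/somefile.jpg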

>    3. Any other gotchas?
You have to do at least a `rebalance fix-layout` if you don't want to move
data. But if you don't move the data, the current files may keep growing
and consuming storage on the existing nodes (if it's something like logs).
Otherwise, it should be fine.
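
A minimal sketch of that step (again assuming a volume named 'myvol'):

    # update directory layouts so new files can hash onto the new
    # bricks, without migrating any existing data
    gluster volume rebalance myvol fix-layout start

    # or migrate existing data as well, to even out disk usage
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status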


> Thank you very much in advance.
> Sincerely,
> Artem
> --
> Founder, Android Police <http://www.androidpolice.com>, APK Mirror
> <http://www.apkmirror.com/>, Illogical Robot LLC
> beerpla.net | @ArtemR
> <http://twitter.com/ArtemR>

Container Storage made easy!