[Gluster-devel] [Gluster-users] Evergrowing distributed volume question

Nux! nux at li.nux.ro
Fri Mar 19 17:14:46 UTC 2021


So then, in theory my plan could work if I always rebalance.
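
For reference, that add-brick-then-rebalance cycle looks roughly like the following. The volume name "backups" and the server/brick path are hypothetical placeholders; this is a sketch assuming a running cluster:

```shell
# Add a new brick on a new server to the distributed volume.
gluster volume add-brick backups newserver:/data/brick1

# Redistribute existing data so the new brick takes over its share of
# the hash ranges; without this, the old bricks keep filling up.
gluster volume rebalance backups start

# Poll until the rebalance reports completion.
gluster volume rebalance backups status
```
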

Thanks

On 19 March 2021 17:12:07 GMT, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>As Gluster does not have a metadata server, the clients identify the
>brick via a hashing algorithm (DHT) based on the file/directory name.
>Each brick corresponds to a range of hashes, thus when you add a new
>brick, you always need to rebalance the volume.
>
>Best Regards,
>Strahil Nikolov
> 
> 
>    Hello,
>
>A while ago I attempted and failed to maintain an "evergrowing"
>storage solution based on GlusterFS.
>I was relying on a distributed non-replicated volume to host backups
>and so on, the idea being that when it got close to full I would just
>add another brick (server) and keep going like that.
>In reality, many of the writes were still distributed to the brick
>that had (in time) become full, ending up with "out of space" errors
>despite one or more other bricks having plenty of space.
>
>Can anyone advise whether current GlusterFS behaviour has improved in
>this regard, i.e. does it check whether a brick is full and redirect
>the write to one that is not?
>
>Regards,
>Lucian
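
Strahil's description above (no metadata server, each brick owning a range of a hash space computed from the file name) can be sketched roughly as follows. This is an illustrative model only: the hash function, the equal-width ranges, and the file names are stand-ins, not Gluster's actual DHT implementation, which uses its own hash and per-directory layouts.

```python
import hashlib

def dht_hash(name: str) -> int:
    """Illustrative 32-bit hash of a file name (stand-in for Gluster's own)."""
    return int.from_bytes(hashlib.md5(name.encode()).digest()[:4], "big")

def brick_for(name: str, num_bricks: int) -> int:
    """Map a name to a brick index: the 32-bit hash space is split into
    contiguous equal ranges, one per brick, and the file lands on
    whichever brick's range its hash falls into."""
    return dht_hash(name) * num_bricks // 2**32

files = [f"backup-{i}.tar" for i in range(100)]

# Layout with 2 bricks, then again after adding a third brick.
before = {f: brick_for(f, 2) for f in files}
after = {f: brick_for(f, 3) for f in files}

# Adding a brick redraws the ranges, so many existing files now hash to
# a different brick than the one actually holding them. That mismatch
# is what a rebalance repairs by migrating files to their new bricks.
moved = [f for f in files if before[f] != after[f]]
```

Until the layout is fixed and data migrated, existing directories give the new brick no hash range at all, which is why adding bricks without rebalancing tends to keep writes landing on the old, full bricks.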

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.