[Gluster-users] Understanding Gluster Replication/Distribute

Dan Mons dmons at cuttingedge.com.au
Sun Feb 9 04:06:34 UTC 2014

I have 32GB RAM in all of my production GlusterFS nodes.  GlusterFS
itself takes up very little of that.  There are other services on there
that use up a bit (Samba, rsnapshot, etc.), but even then
kernel+applications don't get over 4GB (even with the stupid Java-based
proprietary LSI RAID card monitoring software that gobbles up a GB all
on its own).

Lots of RAM in file servers (even nodes for distributed file systems
like GlusterFS) isn't a bad thing.  Linux will happily use whatever is
left over as file cache, which is a good thing.
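
If you want to see that in action, "free -m" shows the split (the
numbers below are made up, but typical of a 32GB box like ours):

  $ free -m
               total       used       free     shared    buffers     cached
  Mem:         32058      31402        656          0        310      27680
  -/+ buffers/cache:       3412      28646
  Swap:         2047          0       2047

The "-/+ buffers/cache" line is the one to read: only ~3.4GB is
genuinely pinned by the kernel and applications, and the rest is page
cache that gets dropped the moment anything else needs the memory.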


Dan Mons
Skunk Works
Cutting Edge

On 8 February 2014 03:19, Scott Dungan <scott at gps.caltech.edu> wrote:
> Thanks Dan. I think I get it now. One more question:
> The size of the Gluster volume we want to create is 150TB. We are either
> going to do a distribute only with 4 nodes or a distribute+repl2 with 8
> nodes (depends on budget). Considering this, do you have any server RAM
> recommendations? The starting point is going to be 32GB, but should we be
> thinking of 64 or 128?
> -Scott
> On 2/6/2014 7:07 PM, Dan Mons wrote:
>> Replies inline:
>> On 7 February 2014 10:11, Scott Dungan <scott at gps.caltech.edu> wrote:
>>> I am new to Gluster and I am having a hard time grasping how Gluster
>>> functions in distribute mode vs. distribute+replication. I am planning on
>>> having 5 servers, with each server hosting a RAID6-backed 36TB brick. For
>>> simplicity, let's just pretend this is a 40TB brick. Here are my
>>> questions:
>>> 1. If I do a distribute-only configuration, the usable capacity of the
>>> Gluster volume will be 5x40TB, or 200TB?
>> Using "40TB" as a round number per brick:
>> distribute (no replicate) would be a single ~200TB GlusterFS volume.
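>>
>> For example (untested, and the hostnames/paths are placeholders):
>>
>>   # Plain distribute: no "replica" keyword, one brick per server.
>>   # Files land on bricks by filename hash (DHT).
>>   gluster volume create bigvol \
>>       server1:/export/brick1 server2:/export/brick1 \
>>       server3:/export/brick1 server4:/export/brick1 \
>>       server5:/export/brick1
>>   gluster volume start bigvol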
>>> 2. In this configuration, what would clients see if one of the servers
>>> were to fail?
>> Lots of errors.  Roughly one in every five files or directories would
>> be missing, and you'd see lots of question marks in your "ls -l" output.
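>>
>> Something like this (illustrative output, filenames made up):
>>
>>   $ ls -l /mnt/gluster/renders
>>   ls: cannot access frame_0042.exr: No such file or directory
>>   -rw-r--r-- 1 dan dan 8388608 Feb  1 10:02 frame_0041.exr
>>   -????????? ? ?   ?         ?            ? frame_0042.exr
>>
>> The name still comes back from the directory listing, but the stat()
>> on it fails because the brick holding the file is gone, and that's
>> what ls draws as question marks.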
>>> 3. When the server comes back up, what steps would need to be taken to
>>> make the Gluster volume consistent again?
>> In a distribute-only setup there's no redundancy, so there's no
>> "consistency" to restore, so to speak.  When the missing brick comes
>> back online, the files it holds simply become available again.
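>>
>> When it does come back, "gluster volume status" is the quick sanity
>> check (assuming the volume is called "bigvol"):
>>
>>   gluster volume status bigvol
>>
>> Every brick should show Y in the Online column again.  In a
>> distribute-only volume there's no heal step after that.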
>>> 4. if I do a distributed replicated (2) volume, will my usable capacity
>>> become 160TB or 100TB, or perhaps something else entirely?
>> 5 servers is an odd number of bricks, and replica 2 needs the brick
>> count to be a multiple of two, so GlusterFS won't create that volume
>> as-is.  A 6th brick solves the problem, and you'd have ~120TB usable
>> in full distribute+replicate(2).
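>>
>> For example (untested, hostnames are placeholders):
>>
>>   # replica 2: bricks pair up in the order listed, so server1/server2
>>   # mirror each other, then server3/server4, then server5/server6.
>>   gluster volume create bigvol replica 2 \
>>       server1:/export/brick1 server2:/export/brick1 \
>>       server3:/export/brick1 server4:/export/brick1 \
>>       server5:/export/brick1 server6:/export/brick1
>>
>> Order matters: don't list two bricks from the same server side by
>> side, or you get a "replica" pair that dies with one box.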
>>> 5. In this configuration, can one server be removed for maintenance while
>>> the file system stays consistent?
>> Theoretically yes.  I try to keep my replicated brick downtime to a
>> minimum though.  Similar to the ideas behind a RAID mirror, I don't
>> like running in production on only one copy of something for too long.
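>>
>> When a node comes back from maintenance, check that self-heal has
>> caught up before touching its partner (again assuming a volume called
>> "bigvol"):
>>
>>   gluster volume heal bigvol info
>>
>> Once that reports no pending entries on any brick, both copies are in
>> sync and it's safe to schedule the next node's downtime.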
>> -Dan
> --
> Scott A Dungan
> Senior Systems Administrator
> Geological and Planetary Sciences
> California Institute of Technology
> Office: (626) 395-3170
> Cell:   (626) 993-4932
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
