[Gluster-users] RAM/Disk ratio question
kostas.makedos at gmail.com
Tue May 31 19:41:54 UTC 2016
Thanks for the info, and sorry for the late reply.
I will try to explain our complex setup.
We are using OpenStack to create clusters of VMs for our clients.
In these VMs we provide basic services and customers add their own
applications which run on top of our clusters.
These clusters have HA systems provided by us.
To keep data persistent and shared across the cluster, we use 3 VMs to
serve as Storage Nodes.
We use 2 out of 3 VMs to store Gluster Bricks and the third one to have a
There are 4-5 "partitions" served to all other VMs of the cluster.
These partitions store static data (like text configuration
files) or other user-generated data.
Most files stored in these partitions are text-based; sizes range from
4-10 KB up to 200-300 MB, with smaller files making up the majority.
Total data stored per cluster is about 50 GB.
So we have to tune these installations somehow, and tuning based on the
underlying hardware (assuming that is even applicable in our case) is one
way to go, though IMHO it would not give us much of a benefit.
That's why I am asking for a generic dimensioning document or guideline: if
we leave hardware out of the equation, how can someone tune Gluster to make
the best use of RAM in complex cases like this?
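To make the question concrete, the kind of per-volume cache knobs I mean
are sketched below; `myvol` is just a placeholder volume name and the
values are illustrative guesses for our small-file workload, not
recommendations (option names are from the Gluster 3.x series):

```shell
# Client-side read cache kept by the io-cache translator, per volume:
gluster volume set myvol performance.cache-size 256MB

# How long cached data is treated as fresh before revalidation (seconds);
# mostly-static config files could tolerate a longer timeout:
gluster volume set myvol performance.cache-refresh-timeout 10

# Per-file write-behind buffer; affects client-side RAM use on writes:
gluster volume set myvol performance.write-behind-window-size 1MB
```

What I am missing is how to size values like these against the RAM on the
storage-node VMs and the ~50 GB stored per cluster.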
kostas.makedos at gmail.com
2016-05-27 21:33 GMT+03:00 Paul Robert Marino <prmarino1 at gmail.com>:
> Unfortunately that kind of tuning doesn't have any simple answers, and
> anyone who claims it does should not be listened to.
> It really depends on your workload and a lot of other factors, such as
> your hardware. For example, a 20-platter RAID 1+0 array of spinning disks
> with a wide stripe needs very little cache for streaming large (multi-GB)
> files, due to the high IOPS it can sustain, but would need a large cache
> for lots of files smaller than the stripe, because each
> file access costs a minimum of one IOP, which means a full read of the
> stripe. The reverse may be true if the files are only 4k or less on
> average, in which case a standalone SATA SSD would be far faster and
> need very little cache, but on large (multi-GB) files it would need a
> huge amount of cache due to the 4k-per-IOP size limitation of SSDs.
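The stripe-read amplification described above can be put in back-of-envelope
numbers; the disk count, chunk size, and file size here are assumptions for
illustration, not measurements:

```python
# Hypothetical wide-stripe RAID 1+0: 10 data disks with a 64 KiB chunk,
# so a full stripe is 640 KiB.
stripe_width_bytes = 10 * 64 * 1024   # 655360 bytes

# A typical small text/config file from the workload above, ~8 KiB:
file_size_bytes = 8 * 1024

# If each file access costs at least one full-stripe read, the array
# reads this many times more data than was actually requested:
amplification = stripe_width_bytes / file_size_bytes
print(amplification)  # 80.0
```

With mostly-small files, a generous cache absorbs that amplification; with
multi-GB streaming files the ratio collapses toward 1 and the cache matters
far less.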
> Furthermore, those scenarios assume your filesystem is correctly
> aligned; unfortunately, filesystems usually aren't. The reasons for this
> are complicated, but in short the drivers (and in many cases the chipsets)
> for many RAID and SATA controllers do not provide the information the
> OS (/sys, LVM, and the filesystem) requires to align the filesystem
> automatically when it is created.
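One way to see what geometry the kernel was actually told is to read the
values back; these are read-only commands, and the device names are
examples (a zero optimal I/O size often means the controller reported
nothing and alignment must be set by hand):

```shell
# Reported minimum/optimal I/O sizes and alignment offsets per device:
lsblk -t 2>/dev/null || true

# The same values straight from sysfs for one device (example name sda):
cat /sys/block/sda/queue/minimum_io_size 2>/dev/null || true
cat /sys/block/sda/queue/optimal_io_size 2>/dev/null || true

# When creating XFS on a RAID volume whose geometry the OS cannot see,
# the stripe unit/width can be given explicitly, e.g. for 10 data disks
# with a 64 KiB chunk (destructive, so shown commented out):
# mkfs.xfs -d su=64k,sw=10 /dev/sdX
```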
> Now, most DBAs will tell you they need an insane number of IOPS; what
> they are really telling you is how many operations the database is
> doing, not how many IOPs it is doing. In reality, databases do
> surprisingly few IOPs and tend instead to do large (multi-GB) sequential
> reads into the RAM used by the database processes, then do all their
> operations there.
> Another key factor is the I/O scheduler (elevator="....." in the
> kernel boot options) you are using. CFQ, which is the
> default, is great for desktops and for servers running 10 or more
> different services on inexpensive hardware. On most dedicated servers,
> deadline (or, if you have a good RAID controller, noop) is much
> better. Using the proper I/O scheduler can have a dramatic impact on
> how much RAM you use for cache, especially for writes.
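Checking and changing the scheduler can be sketched as follows; the device
name sda is only an example:

```shell
# List the available I/O schedulers for each disk; the active one is
# shown in brackets, e.g. "noop deadline [cfq]".
for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue
    printf '%s: ' "$f"
    cat "$f"
done

# Switch one device to deadline at runtime (lost on reboot):
# echo deadline > /sys/block/sda/queue/scheduler

# To make it permanent, add elevator=deadline to the kernel command
# line (e.g. GRUB_CMDLINE_LINUX in /etc/default/grub).
```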
> As I said, there is no easy answer to this, but if you can give us an
> idea of the typical workload, we may be able to give some advice.
> On Fri, May 27, 2016 at 8:27 AM, kostas makedos
> <kostas.makedos at gmail.com> wrote:
> > Hello,
> > Can someone give me an estimated ratio between RAM consumption on
> > a node and the GB stored in its bricks?
> > Is there a rule of thumb or a guideline document?
> > Thank you,
> > Best Regards
> > Kostas Makedos
> > kostas.makedos at gmail.com
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users