[Gluster-users] GlusterFS best practices

David Gossage dgossage at carouselchecks.com
Mon Dec 5 18:54:54 UTC 2016


On Mon, Dec 5, 2016 at 4:53 AM, Momonth <momonth at gmail.com> wrote:

> Hi All,
>
> I've just joined this list as I'm working on a project and looking for
> persistent, shared storage for a Docker-based infrastructure. I'm
> entirely new to the GlusterFS project, but I have been involved in the
> "storage business" for quite a while, including proprietary and
> open-source solutions.
>
> I've already deployed my first 2-node GlusterFS cluster, based on
> CentOS 6.8. I must admit it was really easy and everything just works
> =) So thumbs up!
>
> I'm now looking for $subj, so as not to repeat the common mistakes a
> newbie like me would make. Things like "to use or not to use RAID
> configs on nodes", "brick capacity vs. brick performance", "best
> network topologies", etc. Does anyone know a good source for that kind
> of info?
>

Red Hat, I believe, still recommends putting bricks on RAID.  I haven't
checked again lately, but when I was looking at RHS (Gluster), the
recommendation was, I want to say, 8-12 disks in a RAID 6 per brick.  I
myself use ZFS with disks passed through as JBOD and arranged as striped
mirrors (the RAID 10 equivalent); I've also seen setups that make each
disk or disk pair its own brick and let Gluster handle the redundancy.
It all comes down to what level of protection vs. performance you want,
and the workload, I think.
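
For illustration, the ZFS side of that layout looks roughly like this
(pool name and device names are placeholders, not my actual setup):

    # two mirror vdevs striped together -- the ZFS equivalent of RAID 10
    zpool create brickpool mirror sda sdb mirror sdc sdd
    # store extended attributes efficiently; commonly recommended for
    # Gluster bricks on ZFS
    zfs set xattr=sa brickpool
    # one dataset per brick
    zfs create brickpool/brick1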

>
> I'm also curious to know about baseline performance, e.g. I have a
> 2-node cluster in "replica" mode, where each brick is 2x SSD in RAID 1.
> For the following workload:
>

First thing I notice is that without 3 nodes for quorum you run the risk
of split-brain issues.  A 3rd node for 3-way replication, or an arbiter
node, would help with that.  I like 3-way replication, but it also has
an effect on network throughput, as you are now copying the data one
more time simultaneously.
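
An arbiter volume, for example, is created along these lines (volume
name, hostnames, and brick paths are placeholders):

    # replica 3 where the 3rd brick is a metadata-only arbiter
    gluster volume create gv0 replica 3 arbiter 1 \
        node1:/data/brick node2:/data/brick node3:/data/arbiter

The arbiter brick stores only file names and metadata, so it breaks
split-brain ties without needing the capacity or bandwidth of a full
third copy.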

>
> 100% random, 30% reads / 70% writes, 4KB block size, single thread
>
> I observe ~220 read IOPS and ~515 write IOPS; the 95th percentile
> latency is 1.9 ms for both reads and writes.
>
> Is it OK or not OK? Should I look into optimizing it?
>
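
For what it's worth, that workload can be reproduced with an fio run
along these lines (mount point, file size, and runtime are placeholders;
pick values that suit your setup):

    fio --name=gluster-4k --directory=/mnt/glustervol \
        --rw=randrw --rwmixread=30 --bs=4k \
        --numjobs=1 --iodepth=1 --ioengine=libaio --direct=1 \
        --size=1g --runtime=60 --time_based
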
> Thanks,
> Vladimir
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>