[Gluster-users] Finding my bottleneck

Carl Sirotic csirotic at gmail.com
Wed Dec 19 14:32:20 UTC 2018


Thank you for those answers.
I will take some time to consider whether GlusterFS is the solution I was
looking for in this case.


Thank you.

On Tue, Dec 18, 2018 at 10:36 PM csirotic <csirotic at gmail.com> wrote:

> Hi,
> I am new to using gluster and I am running some tests right now. I am
> fairly inexperienced as well, so it's a good learning experience for me.
>
> My problem right now is small-file create IOPS, measured with the smallfile
> benchmark. I cannot get more than 800 files/second with 4 KB files.
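>
> (For reference, my smallfile runs look roughly like the command below; the
> thread count, file count and mount path are only example values, not
> necessarily my exact invocation.)
>
>   python smallfile_cli.py --operation create \
>     --threads 8 --file-size 4 --files 10000 \
>     --top /mnt/testvol/smallfile-test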
>
> My setup is fairly simple.
> I have 4 servers.
> The first three servers each have one brick, forming a three-way replicated
> volume. Server 4 simply mounts the volume using the native FUSE client.
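>
> (For completeness, the volume was created and mounted roughly along these
> lines; the volume name, hostnames and brick paths below are placeholders,
> not my real ones.)
>
>   # on one of the three brick servers
>   gluster volume create testvol replica 3 \
>     server1:/data/brick1/testvol \
>     server2:/data/brick1/testvol \
>     server3:/data/brick1/testvol
>   gluster volume start testvol
>
>   # on server 4, the client
>   mount -t glusterfs server1:/testvol /mnt/testvol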
>
> The first three servers all have the same hardware: common Supermicro
> servers with a RAID 6 array of 8 x 6 TB HGST 7200 RPM drives.
> If I test smallfile directly on the brick location, I get very high
> results.
>
> For the networking part of it, the 4 servers are connected over 10 GbE.
> iperf3 gives me a steady ~10 Gbit/s when I test between all the servers.
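>
> The network test is just a plain iperf3 client/server pair between each
> pair of hosts, e.g.:
>
>   iperf3 -s                  # on one server
>   iperf3 -c server1 -t 30    # from another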
>
> When I transfer large .qcow files from the client server over the FUSE
> mount, I get around 150 MB/s, which is not low, but is not great either.
>
> What would you look at first?
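>
> On my side, I can gather per-brick latency stats with the volume profiler
> while the tests run, if that would help; something like the following
> (the volume name is a placeholder):
>
>   gluster volume profile testvol start
>   # ... run the smallfile / qcow copy tests ...
>   gluster volume profile testvol info
>   gluster volume profile testvol stop
>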
> One option I am pondering is buying SSD drives to add a cache on each
> server.
> Also, it seems to me that having only 3-way replication, instead of a 2+2
> setup, is really hurting.
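>
> I have also seen small-file tuning options mentioned in the docs, which I
> have not tried yet; roughly along these lines (I have not verified which of
> these apply to my version, and the values are only examples):
>
>   gluster volume set testvol features.cache-invalidation on
>   gluster volume set testvol features.cache-invalidation-timeout 600
>   gluster volume set testvol performance.cache-invalidation on
>   gluster volume set testvol performance.md-cache-timeout 600
>   gluster volume set testvol performance.stat-prefetch on
>   gluster volume set testvol network.inode-lru-limit 200000
>   gluster volume set testvol cluster.lookup-optimize on
>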
> Any other tests that could help me narrow this down?
>
> Any input is much appreciated.
> Thank you.
>

