[Gluster-users] Gluster-users Digest, Vol 77, Issue 2
Ben England
bengland at redhat.com
Mon Sep 8 19:32:23 UTC 2014
> Message: 9
> Date: Tue, 2 Sep 2014 17:17:25 +0800
> From: Jaden Liang <jaden1q84 at gmail.com>
> To: gluster-devel at gluster.org, gluster-users at gluster.org
> Subject: [Gluster-users] [Gluster-devel] Regarding the write
> performance in replica 1 volume in 1Gbps Ethernet, get about 50MB/s
> while writing single file.
> Message-ID:
> <CA+Vqw5nDLmA+a92wkEk2v1foOM55uSHrNyz-yfAhj_32UBQ1yg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hello, gluster-devel and gluster-users team,
>
> We are running a performance test on a replica 1 volume and found that
> single-file sequential write performance only reaches about 50MB/s over
> 1Gbps Ethernet. However, when we test sequential writes to multiple files,
> the throughput can go up to 120MB/s, which is the top speed of the network.
>
Not sure what you mean -- are you writing the multiple files concurrently
or one at a time? With FUSE this matters; I typically see the best
throughput with more than one file being transferred at the same time.
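For example, one quick way to test concurrent streams (the mountpoint path
and file names here are just placeholders):

# for i in 1 2 3 4; do dd if=/dev/zero of=/mnt/glusterfs/file-$i.dd bs=1024k count=256 & done; wait

Then compare the aggregate throughput against your single-file number.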
> We also tried to use the stat xlator to find out where the bottleneck of
> single-file write performance is. Here is the stat data:
>
> Client-side:
> ......
> vs_vol_rep1-client-8.latency.WRITE=total:21834371.000000us,
> mean:2665.328491us, count:8192, max:4063475, min:1849
> ......
>
> Server-side:
> ......
> /data/sdb1/brick1.latency.WRITE=total:6156857.000000us, mean:751.569458us,
> count:8192, max:230864, min:611
> ......
>
What's your write transfer size? With FUSE this matters a lot: FUSE does
not aggregate writes, so each write has to travel from the application to
the glusterfs mountpoint process, and small transfer sizes therefore mean
slow performance. In general, it's a good idea to supply the details of
your workload generator and how it was run, so we can compare with other
known workloads and results.
> Note that the test writes a single 1GB file sequentially to a replica 1
> volume over a 1Gbps Ethernet network.
>
So for example try using
# dd if=/dev/zero of=/mnt/glusterfs/your-file.dd bs=1024k count=1k
and see whether your throughput is still 50 MB/s.
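If the transfer size is the bottleneck, repeating the same 1GB write with a
small block size should show a clear drop (same placeholder path as above):

# dd if=/dev/zero of=/mnt/glusterfs/your-file.dd bs=4k count=262144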
> On the client side, we can see there are 8192 write requests in total. Each
> request writes 128KB of data. The total elapsed time is 21834371us, about 21
> seconds. The mean time per request is 2665us, about 2.6ms, which means the
> client can only serve about 380 requests per second. There are other
> time-consuming operations such as statfs and lookup, but those are not
> major factors.
>
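As a sanity check, those client-side numbers are consistent with the
throughput you measured:

  8192 requests x 128KB = 1GB written
  1GB / 21.83s = ~47MB/s
  (equivalently, ~380 requests/s x 128KB = ~47.5MB/s)

so the ~50MB/s figure is fully explained by the per-request latency seen on
the client.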
> On the server side, the mean time per request is 751us, including writing
> the data to the HDD. So we think that is not the major cause.
>
> We also modified some code to measure the elapsed time in the epoll path.
> It only takes about 20us from enqueueing the data to completing the send.
>
> Now we are digging into the RPC mechanism in glusterfs. Still, we think
> this issue may have been encountered before by the gluster-devel or
> gluster-users communities. Therefore, any suggestions would be appreciated.
> Has anyone seen such an issue?
>
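Incidentally, rather than patching the stat xlator, the built-in profiler
can report per-FOP latency on each brick (volume name below taken from your
stats above):

# gluster volume profile vs_vol_rep1 start
(re-run the single-file write test)
# gluster volume profile vs_vol_rep1 info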
> Best regards,
> Jaden Liang
> 9/2/2014
>