[Gluster-users] performance due to network?

Santosh Pradhan spradhan at redhat.com
Fri Jun 13 13:22:24 UTC 2014


Hi Erik,
Could you just turn the DRC off and retry your test case?

1. Turn the DRC off:
gluster volume set <volume name> nfs.drc off

2. Restart all the gluster processes
a. killall glusterd glusterfs glusterfsd
b. glusterd

2.b should bring back all the gluster proc's.

3. Retry your large copy test.
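
The three steps above can be wrapped in a small script (a sketch only; the volume name "myvol" is a placeholder, and it runs in preview mode by default so you can see the commands before executing them):

```shell
#!/bin/sh
# Disable the NFS duplicate-request cache (DRC) and restart gluster.
# VOLUME is a placeholder; DRY_RUN=1 (the default here) only prints
# the commands, set DRY_RUN=0 to actually run them.
VOLUME="${VOLUME:-myvol}"

run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# 1. Turn the DRC off for the volume.
run gluster volume set "$VOLUME" nfs.drc off

# 2. Restart all gluster processes; starting glusterd again
#    respawns the brick (glusterfsd) and client (glusterfs) daemons.
run killall glusterd glusterfs glusterfsd
run glusterd
```

Note that killall takes down the brick processes too, so run this in a window where clients can tolerate a brief interruption.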

Thanks,
Santosh


On 06/13/2014 05:16 PM, Aronesty, Erik wrote:
>
> glusterfs 3.5.0 built on Apr 24 2014 01:38:34
>
> *From:*Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
> *Sent:* Friday, June 13, 2014 1:21 AM
> *To:* Aronesty, Erik; gluster-users at gluster.org
> *Subject:* Re: [Gluster-users] performance due to network?
>
> Erik,
> What version of glusterfs are you using?
>
> Pranith
>
> On 06/13/2014 02:09 AM, Aronesty, Erik wrote:
>
>     I suspect I'm having performance issues because of network speeds.
>
>     /Supposedly/ I have 10Gbit connections on all my NAS devices;
>     however, it seems to me that the fastest I can write is 1Gbit.
>     When I'm copying very large files, etc., I see 'D' as the cp waits
>     on I/O, but when I go to the gluster servers, I don't see glusterfsd
>     waiting (D) to write to the bricks themselves.  I have 4 nodes,
>     each with a 10Gbit connection; each has 2 Areca RAID controllers
>     with a 12-disk RAID5, and the 2 controllers striped into 1 large
>     volume.  Pretty sure there's plenty of I/O left on the bricks
>     themselves.
>
>     Is it possible that "one big file" isn't the right test... should
>     I try 20 big files, and see how saturated my network can get?
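
One crude way to run that multi-file test is to write several large files in parallel and time the batch (a sketch; TARGET, NFILES and SIZE_MB are placeholders — point TARGET at the gluster mount and use much larger sizes for a real measurement):

```shell
#!/bin/sh
# Write NFILES files of SIZE_MB MB each in parallel to TARGET,
# then report aggregate throughput. Defaults are small placeholders;
# for a real test, point TARGET at the gluster mount and scale up.
TARGET="${TARGET:-/tmp/throughput-test}"
NFILES="${NFILES:-4}"
SIZE_MB="${SIZE_MB:-16}"

mkdir -p "$TARGET"
start=$(date +%s)

i=1
while [ "$i" -le "$NFILES" ]; do
    # oflag=direct would bypass the page cache where supported;
    # omitted here so the sketch runs on any filesystem.
    dd if=/dev/zero of="$TARGET/testfile.$i" bs=1M count="$SIZE_MB" 2>/dev/null &
    i=$((i + 1))
done
wait

elapsed=$(( $(date +%s) - start ))
total_mb=$(( NFILES * SIZE_MB ))
echo "wrote ${total_mb} MB in ${elapsed}s"
if [ "$elapsed" -gt 0 ]; then
    echo "~$(( total_mb / elapsed )) MB/s aggregate"
fi
```

If several parallel writers together get well past ~120 MB/s (1Gbit) while a single cp doesn't, the limit is per-stream (TCP window, single client thread) rather than the network itself.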
>
>     Erik Aronesty
>     Senior Bioinformatics Architect
>
>     *EA | Quintiles
>     **/Genomic Services/*
>
>     4820 Emperor Boulevard
>
>     Durham, NC 27703 USA
>
>
>     Office: + 919.287.4011
>     erik.aronesty at quintiles.com <mailto:kmichailo at expressionanalysis.com>
>
>     www.quintiles.com <http://www.quintiles.com/>
>     www.expressionanalysis.com <http://www.expressionanalysis.com/>
>
>
>
>
>     _______________________________________________
>
>     Gluster-users mailing list
>
>     Gluster-users at gluster.org  <mailto:Gluster-users at gluster.org>
>
>     http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
