[Gluster-users] Slow read performance
Anand Avati
anand.avati at gmail.com
Tue Mar 26 22:58:39 UTC 2013
Sorry for the late reply. The call profiles look OK on the server side, so I
suspect it is still something to do with the client or the network. Have you
mounted the FUSE client with any special options, like --direct-io-mode?
That can have a significant impact on read performance, because it
effectively turns off read-ahead in the kernel page cache, which is far more
efficient than gluster's read-ahead translator since serving the next page
from the page cache needs no context switch.
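A quick way to check is to look at the client process command line, and if it
was started with direct-io, remount without it. A minimal sketch -- the server
and volume names below are placeholders, not taken from this thread:

    # the glusterfs client process shows --direct-io-mode if it was passed at mount time
    ps ax | grep '[g]lusterfs'
    # remount with direct-io explicitly disabled so the kernel page cache can do read-ahead
    umount /shared
    mount -t glusterfs -o direct-io-mode=disable gluster-server:/volname /shared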
I also can't say whether your networking (tcp/ip) configuration is helping or
hurting here.
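If you want to rule the network in or out, an iperf run between the client and
one of the brick servers gives you the raw TCP throughput to compare against.
A sketch, with a placeholder hostname:

    # on one of the brick servers
    iperf -s
    # on the client, a 30-second throughput test against that server
    iperf -c gluster-server -t 30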
Avati
On Mon, Mar 11, 2013 at 9:02 AM, Thomas Wakefield <twake at cola.iges.org> wrote:
> Is there a way to make a ramdisk support extended attributes?
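One workaround sometimes suggested for this, though it is not discussed in
this thread, is to loop-mount a filesystem image kept in tmpfs, since ext4/XFS
support the trusted.* xattrs that gluster bricks need. Paths and sizes below
are just examples:

    # create a 4 GB image in tmpfs and put a real filesystem on it
    dd if=/dev/zero of=/dev/shm/brick.img bs=1M count=4096
    mkfs.xfs -f /dev/shm/brick.img
    mkdir -p /mnt/rambrick
    mount -o loop /dev/shm/brick.img /mnt/rambrick
    # confirm trusted.* xattrs work, since gluster bricks rely on them
    setfattr -n trusted.test -v works /mnt/rambrick
    getfattr -n trusted.test /mnt/rambrick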
>
> These are my current sysctl settings (and I have tried many different
> options):
> net.ipv4.ip_forward = 0
> net.ipv4.conf.default.rp_filter = 1
> net.ipv4.conf.default.accept_source_route = 0
> kernel.sysrq = 0
> kernel.core_uses_pid = 1
> net.ipv4.tcp_syncookies = 1
> kernel.msgmnb = 65536
> kernel.msgmax = 65536
> kernel.shmmax = 68719476736
> kernel.shmall = 4294967296
> kernel.panic = 5
> net.core.rmem_max = 67108864
> net.core.wmem_max = 67108864
> net.ipv4.tcp_rmem = 4096 87380 67108864
> net.ipv4.tcp_wmem = 4096 65536 67108864
> net.core.netdev_max_backlog = 250000
> net.ipv4.tcp_congestion_control = htcp
> net.ipv4.tcp_mtu_probing = 1
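As an aside not raised in the thread: htcp is a loadable module on many
distributions, so it is worth confirming the settings above actually took
effect. A minimal check, assuming they live in /etc/sysctl.conf:

    modprobe tcp_htcp                  # htcp is usually not built into the kernel
    sysctl -p /etc/sysctl.conf         # apply the settings listed above
    sysctl net.ipv4.tcp_congestion_control net.ipv4.tcp_rmem   # read back what the kernel is using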
>
>
> Here is the output from a dd write and dd read.
>
> [root at cpu_crew1 ~]# dd if=/dev/zero
> of=/shared/working/benchmark/test.cpucrew1 bs=512k count=10000 ; dd
> if=/shared/working/benchmark/test.cpucrew1 of=/dev/null bs=512k
> 10000+0 records in
> 10000+0 records out
> 5242880000 bytes (5.2 GB) copied, 7.21958 seconds, 726 MB/s
> 10000+0 records in
> 10000+0 records out
> 5242880000 bytes (5.2 GB) copied, 86.4165 seconds, 60.7 MB/s
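A read issued right after the write can be served partly from the client's
page cache, so for benchmarking it is worth repeating the read cold. A sketch
reusing the same test file:

    sync
    echo 3 > /proc/sys/vm/drop_caches     # drop clean page-cache pages so the read really goes over the wire
    dd if=/shared/working/benchmark/test.cpucrew1 of=/dev/null bs=512k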
>
>