[Gluster-users] AFR, writebehind, and debug/trace
Edmond Lo
elo at storefront.com
Mon Jun 29 21:59:30 UTC 2009
Did you delete the output file after running your dd test? I saw significant improvement when I modified my dd test to delete the output file after each run.
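Roughly what I mean, as a sketch (the loop and iteration count are just for illustration; the dd arguments are the ones from your test):

for i in 1 2 3; do
    time dd if=/dev/zero of=./local-file bs=8192 count=125000
    rm -f ./local-file
done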
Ed
-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Barry Jaspan
Sent: Monday, June 29, 2009 2:49 PM
To: gluster-users at gluster.org
Subject: [Gluster-users] AFR, writebehind, and debug/trace
I just got started with glusterfs. I read the docs over the weekend
and today created a simple setup: two servers, each exporting a brick, and
one client mounting them with AFR. I am seeing very poor write
performance on a dd test, e.g.:
time dd if=/dev/zero of=./local-file bs=8192 count=125000
presumably due to the very large number of small write operations
(when I increase the block size to 64K, the throughput roughly
doubles). I enabled the write-behind translator but see no
improvement. I then put a trace translator on both sides of
write-behind, and it looks like write-behind is not batching any of
the operations.
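For reference, the 64K run was roughly the following (assuming the count is scaled so the total written stays at the same ~1 GB):

time dd if=/dev/zero of=./local-file bs=65536 count=15625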
Server vol file:
volume posix
  type storage/posix
  option directory /mnt/glusterfsd-export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume
Client vol file:
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host web-1
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host web-2
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume trace-below
  type debug/trace
  subvolumes replicate
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 1MB
  subvolumes trace-below
end-volume

volume trace-above
  type debug/trace
  subvolumes writebehind
end-volume
With this configuration, I re-ran my dd test, but with only
count=100.
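That is, roughly:

time dd if=/dev/zero of=./local-file bs=8192 count=100

The log shows: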
[root@web-3 glusterfs-mount]# grep trace /var/log/glusterfs/mnt-glusterfs-mount.log | grep above | wc
    245    3591   42117
[root@web-3 glusterfs-mount]# grep trace /var/log/glusterfs/mnt-glusterfs-mount.log | grep below | wc
    252    3678   43095
So, just as many operations reach trace-below as reach trace-above.
What am I not understanding?
Thanks!
Barry
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users