[Gluster-devel] afr_open_cbk Error in 1.3.0_pre5.4
del
del at delsite.ru
Fri Jul 20 13:56:50 UTC 2007
Hello,
I've installed the latest glusterfs on 4 bricks and 2 clients and have been benchmarking it for a while. I noticed the following in my glusterfs.log (on the client nodes):
2007-07-20 21:34:00 E [afr.c:696:afr_open_cbk] afr:
(path=/top_00/second_65/file_12) op_ret=0 op_errno=2
2007-07-20 21:34:00 E [afr.c:696:afr_open_cbk] afr:
(path=/top_00/second_65/file_96) op_ret=0 op_errno=2
2007-07-20 21:34:00 E [afr.c:696:afr_open_cbk] afr:
(path=/top_00/second_65/file_79) op_ret=0 op_errno=2
2007-07-20 21:34:00 E [afr.c:696:afr_open_cbk] afr:
(path=/top_00/second_65/file_04) op_ret=0 op_errno=2
2007-07-20 21:34:00 E [afr.c:696:afr_open_cbk] afr:
(path=/top_00/second_65/file_99) op_ret=0 op_errno=2
2007-07-20 21:34:00 E [afr.c:696:afr_open_cbk] afr:
(path=/top_00/second_65/file_23) op_ret=0 op_errno=2
2007-07-20 21:34:00 E [afr.c:696:afr_open_cbk] afr:
(path=/top_00/second_65/file_15) op_ret=0 op_errno=2
2007-07-20 21:34:00 E [afr.c:696:afr_open_cbk] afr:
(path=/top_00/second_65/file_65) op_ret=0 op_errno=2
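If I am reading this right, op_ret=0 means the open itself succeeded, but op_errno=2 is ENOENT ("No such file or directory"), so it looks as if the file was missing on at least one of the subvolumes when AFR tried to open it.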
I get a lot of these when I create many 10 KB files spread across many
directories (roughly the workload sketched below). Each time another batch
of these log lines appears, glusterfs I/O throughput drops for a moment,
i.e. files are created more slowly.
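For reference, here is a rough sketch of the benchmark I run against the mount point. The mount point, directory counts, and file counts are illustrative, not my exact setup:

#!/usr/bin/env python
# Create a two-level directory tree of small files on the
# glusterfs mount, similar to the load that triggers the errors.
import os

MOUNT = "/mnt/glusterfs"  # assumed mount point

for top in range(4):
    for second in range(100):
        d = os.path.join(MOUNT, "top_%02d" % top, "second_%02d" % second)
        os.makedirs(d)
        for n in range(100):
            path = os.path.join(d, "file_%02d" % n)
            with open(path, "wb") as fh:
                fh.write(b"\0" * (10 * 1024))  # one 10 KB file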
Here is my glusterfs-client.vol:
volume client1
  type protocol/client
  option transport-type tcp/client # for TCP/IP transport
  option remote-host 10.1.3.115 # IP address of the remote brick
  option remote-port 6996 # default server port is 6996
  option transport-timeout 30 # seconds to wait for a reply
  option remote-subvolume brick # name of the remote volume
end-volume

volume client2
  type protocol/client
  option transport-type tcp/client # for TCP/IP transport
  option remote-host 10.1.3.114 # IP address of the remote brick
  option remote-port 6996 # default server port is 6996
  option transport-timeout 30 # seconds to wait for a reply
  option remote-subvolume brick # name of the remote volume
end-volume

volume client3
  type protocol/client
  option transport-type tcp/client # for TCP/IP transport
  option remote-host 10.1.3.121 # IP address of the remote brick
  option remote-port 6996 # default server port is 6996
  option transport-timeout 30 # seconds to wait for a reply
  option remote-subvolume brick # name of the remote volume
end-volume

volume client4
  type protocol/client
  option transport-type tcp/client # for TCP/IP transport
  option remote-host 10.1.3.112 # IP address of the remote brick
  option remote-port 6996 # default server port is 6996
  option transport-timeout 30 # seconds to wait for a reply
  option remote-subvolume brick # name of the remote volume
end-volume

volume afr
  type cluster/afr
  subvolumes client1 client2 client3 client4
  option replicate *:2
  option self-heal on
end-volume
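My understanding of the AFR options is that "replicate *:2" keeps every file on 2 of the 4 subvolumes, and "self-heal on" makes AFR recreate a missing copy when a file is next accessed; please correct me if I have that wrong.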
And here is my glusterfs-server.vol:
volume brick
  type storage/posix # POSIX FS translator
  option directory /home/export # Export this directory
end-volume

volume server
  type protocol/server
  option transport-type tcp/server # For TCP/IP transport
  subvolumes brick
  option listen-port 6996
  option auth.ip.brick.allow * # Allow access to "brick" volume
end-volume
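In case it matters, I start the daemons with something like this (the spec file paths are from my setup):

glusterfsd -f /etc/glusterfs/glusterfs-server.vol                 # on each brick
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs   # on each client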
I appreciate any help on this.
Many thanks!
Denis.