[Gluster-users] Problem of data restore with afr (help)
eagleeyes
eagleeyes at 126.com
Wed Apr 22 06:52:23 UTC 2009
Hello
When I start a virtual machine whose image is stored on GlusterFS, I hit a problem like this:
2009-04-22 13:26:58 W [afr.c:609:afr_open] afr1: returning EIO, file has to be manually corrected in backend
2009-04-22 13:26:58 W [stripe.c:1869:stripe_open_cbk] bricks: afr1 returned error Input/output error
2009-04-22 13:26:58 E [fuse-bridge.c:662:fuse_fd_cbk] glusterfs-fuse: 50149: OPEN() /rhel5-210/disk0 => -1 (Input/output error)
2009-04-22 13:26:58 W [afr.c:609:afr_open] afr1: returning EIO, file has to be manually corrected in backend
2009-04-22 13:26:58 W [stripe.c:1869:stripe_open_cbk] bricks: afr1 returned error Input/output error
2009-04-22 13:26:58 E [fuse-bridge.c:662:fuse_fd_cbk] glusterfs-fuse: 50150: OPEN() /rhel5-210/disk0 => -1 (Input/output error)
2009-04-22 13:26:58 W [afr.c:609:afr_open] afr1: returning EIO, file has to be manually corrected in backend
2009-04-22 13:26:58 W [stripe.c:1869:stripe_open_cbk] bricks: afr1 returned error Input/output error
2009-04-22 13:26:58 E [fuse-bridge.c:662:fuse_fd_cbk] glusterfs-fuse: 50151: OPEN() /rhel5-210/disk0 => -1 (Input/output error)
2009-04-22 13:30:03 E [afr-self-heal-data.c:778:afr_sh_data_fix] afr1: Unable to resolve conflicting data of /rhel5-210/disk0. Please resolve manually by deleting the file /rhel5-210/disk0 from all but the preferred subvolume. Please consider 'option favorite-child <>'
So I deleted the file from one side of each AFR pair, directly on the backend bricks.
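Roughly like this, on the second node of each pair (the client4/client5/client6 side); /data/brick is only a placeholder here, the real path is whatever the posix volume on those servers exports:

# run on 172.20.92.153, 172.20.93.13 and 172.20.92.209
rm /data/brick/rhel5-210/disk0

After that, the log showed: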
2009-04-22 13:30:56 W [afr-self-heal-common.c:871:sh_missing_entries_lookup_cbk] afr1: path /rhel5-210/disk0 on subvolume client4 => -1 (No such file or directory)
2009-04-22 13:30:56 W [afr-self-heal-common.c:871:sh_missing_entries_lookup_cbk] afr3: path /rhel5-210/disk0 on subvolume client6 => -1 (No such file or directory)
2009-04-22 13:30:56 W [afr-self-heal-common.c:871:sh_missing_entries_lookup_cbk] afr2: path /rhel5-210/disk0 on subvolume client5 => -1 (No such file or directory)
2009-04-22 13:30:56 W [afr-self-heal-data.c:615:afr_sh_data_open_cbk] afr1: sourcing file /rhel5-210/disk0 from client1 to other sinks
2009-04-22 13:30:56 W [afr-self-heal-data.c:615:afr_sh_data_open_cbk] afr3: sourcing file /rhel5-210/disk0 from client3 to other sinks
2009-04-22 13:30:56 W [afr-self-heal-data.c:615:afr_sh_data_open_cbk] afr2: sourcing file /rhel5-210/disk0 from client2 to other sinks
In the end I found that on the side where I had deleted the file, the restored file was larger than the copy on the other side: 8.0G, which is the whole image size, versus 3.1G.
Why is the data not the same on the two sides of an AFR mirror?
My configuration is this:
volume client1 #####
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.184 # IP address of the remote brick
  option remote-port 6997
  option transport-timeout 10 # seconds to wait for a reply
  option remote-subvolume brick # name of the remote volume
end-volume

volume client2 #####
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.78 # IP address of the remote brick
  option remote-port 6997
  option transport-timeout 10 # seconds to wait for a reply
  option remote-subvolume brick # name of the remote volume
end-volume

volume client3 #####
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.190 # IP address of the remote brick
  option remote-port 6997
  option transport-timeout 10 # seconds to wait for a reply
  option remote-subvolume brick # name of the remote volume
end-volume

volume client4 #####
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.153 # IP address of the remote brick
  option remote-port 6997
  option transport-timeout 10 # seconds to wait for a reply
  option remote-subvolume brick # name of the remote volume
end-volume

volume client5 #####
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.93.13 # IP address of the remote brick
  option remote-port 6997
  option transport-timeout 10 # seconds to wait for a reply
  option remote-subvolume brick # name of the remote volume
end-volume

volume client6 #####
  type protocol/client
  option transport-type tcp
  option remote-host 172.20.92.209 # IP address of the remote brick
  option remote-port 6997
  option transport-timeout 10 # seconds to wait for a reply
  option remote-subvolume brick # name of the remote volume
end-volume

volume afr1 #####
  type cluster/afr
  subvolumes client1 client4
end-volume

volume afr2 #####
  type cluster/afr
  subvolumes client2 client5
end-volume

volume afr3 #####
  type cluster/afr
  subvolumes client3 client6
end-volume

volume bricks #####
  type cluster/stripe
  option block-size 256MB
  subvolumes afr1 afr2 afr3
end-volume
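The self-heal message above suggests 'option favorite-child'. As a sketch of how I understand it would be used (client1 is just an assumed choice of preferred copy; afr2 and afr3 would name client2 and client3):

volume afr1 #####
  type cluster/afr
  option favorite-child client1 # assumed: on conflicting data, client1's copy wins
  subvolumes client1 client4
end-volume

Is that the right way to use it in this stripe-over-afr setup?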
2009-04-22
eagleeyes