[Gluster-devel] afr and self-heal issues
Nathan Allen Stratton
nathan at robotics.net
Fri Aug 3 16:54:31 UTC 2007
My setup is 3 servers each with 3 volumes:
vs0: ns, brick-a, mirror-c
vs1: ns, brick-b, mirror-a
vs2: ns, brick-c, mirror-b
I AFR-replicate the three ns bricks (*:3) into block-ns-afr, and AFR-replicate
each brick-(a-c)/mirror-(a-c) pair (*:2) into block-(a-c)-afr. I then unify
block-(a-c)-afr into share-unify with option namespace block-ns-afr.
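To make the layout concrete, here is a condensed sketch of the translator
graph I am describing (not pasted verbatim from my configs; the real .vol
files are linked at the bottom, and the protocol/client definitions and the
exact ns volume names are omitted or made up for illustration):

volume block-a-afr
  type cluster/afr
  subvolumes brick-a mirror-a      # brick-a on vs0, mirror-a on vs1
end-volume

volume block-b-afr
  type cluster/afr
  subvolumes brick-b mirror-b      # brick-b on vs1, mirror-b on vs2
end-volume

volume block-c-afr
  type cluster/afr
  subvolumes brick-c mirror-c      # brick-c on vs2, mirror-c on vs0
end-volume

volume block-ns-afr
  type cluster/afr
  subvolumes ns-0 ns-1 ns-2        # the ns brick from each server (*:3)
end-volume

volume share-unify
  type cluster/unify
  option scheduler rr
  option namespace block-ns-afr
  subvolumes block-a-afr block-b-afr block-c-afr
end-volume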
If a server goes down, things lock up and then crash (a known issue that the
Gluster guys are working on). If I leave one server (vs0) off and restart the
crashed servers, I can write to my share-unify. I would expect files going to
block-b-afr to land on vs1 brick-b and vs2 mirror-b, and that is exactly what
happens. Unify is using the rr scheduler, so as expected files are also sent
to block-c-afr. vs2 brick-c gets the block-c-afr files, but the odd part is,
so does vs1 mirror-a....
Why would that happen? block-c-afr is made up of vs2 brick-c and vs0 mirror-c
(the server that is down).
Also, when I bring vs0 back up, I would expect its ns brick to be brought back
up to date with the others, since it is part of the *:3 AFR, but it is not. I
would also expect the block-c-afr files that are on vs2 brick-c to be copied
to vs0 mirror-c, but that does not happen either.
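(As I understand it, AFR self-heal is only triggered lazily when a file is
looked up or opened, not automatically when a server rejoins, so I assume a
full traversal from a client would be needed to force it, something roughly
like:

  find /mnt/share -type f -exec head -c1 {} \; > /dev/null

where /mnt/share is just a stand-in for wherever share-unify is mounted.
Please correct me if that assumption is wrong.)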
Also, I was playing around with stripe; does it work in the latest code? If I
edit my configs, comment out my unify, and replace it with stripe, I only get
what looks like unify, but without the namespace requirement. I.e., no matter
what I put for block-size, my files stay at their normal size of 300 or so
megs instead of being split into chunks. Is the issue that I am using it
server-side rather than client-side?
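For reference, what I mean by replacing unify with stripe is roughly this
(an illustrative sketch, not the exact text of my configs, and I may well
have the block-size pattern syntax wrong):

volume share-stripe
  type cluster/stripe
  option block-size *:1MB          # pattern:size; value and syntax are a guess
  subvolumes block-a-afr block-b-afr block-c-afr
end-volume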
Any ideas?
Full configs are at:
http://share.robotics.net/client.vol
http://share.robotics.net/vs0_server.vol
http://share.robotics.net/vs1_server.vol
http://share.robotics.net/vs2_server.vol
><>
Nathan Stratton CTO, Voila IP Communications
nathan at robotics.net nathan at voilaip.com
http://www.robotics.net http://www.voilaip.com