[Gluster-users] GlusterFS running, but not syncing is done
Stas Oskin
stas.oskin at gmail.com
Mon Mar 9 14:31:23 UTC 2009
Hi.
Nope, same results - no sync and "breaking reconnect chain" error.
Regards.
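For reference, the "restart glusterfs and run ls -R" step discussed below would look roughly like this on each node (a sketch only - the volfile path and mount point are examples, and the exact glusterfs flags may differ in 2.0 rc2):

  # stop the running glusterfs process and remount with the single vol file
  umount /mnt/glusterfs
  glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs

  # walk the whole tree through the mount point to trigger AFR self-heal
  ls -R /mnt/glusterfs > /dev/null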
2009/3/9 Piotr Findeisen <piotr.findeisen at azouk.com>
> Have you already tried restarting glusterfs and running ls -R?
> What happens if you correct the AFR subvolumes order?
>
>
> regards,
> Piotr
>
>
> Stas Oskin wrote:
>
> Hi.
> I have a different case - the files are not appearing at all, no matter
> whether they are 0 bytes or full size.
>
> Regards.
>
> 2009/3/9 Piotr Findeisen <piotr.findeisen at azouk.com>
>
>> Hi, Stas!
>>
>> Did you see https://savannah.nongnu.org/bugs/?25681 ?
>> Maybe you have the same problem with AFR as one of those mentioned there.
>>
>> regards,
>> Piotr
>>
>> Stas Oskin wrote:
>>
>> Hi.
>>
>>> Was it working for you previously? Any other error logs on the machine
>>> with afr? What version are you using? If it was working previously,
>>> what changed in your setup recently? Can you paste your vol files
>>> (just to be sure)?
>>>
>>
>>
>> Nope, it's actually my first setup in the lab. No errors - it just doesn't
>> seem to synchronize anything. The version I'm using is the latest one - 2.0 rc2.
>>
>> Perhaps I need to modify something else in addition to the GlusterFS
>> installation - like file-system attributes or something?
>>
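One thing worth double-checking on that front: AFR stores its replication metadata in extended attributes on the backend filesystem, so the export directory needs xattr support. A quick sanity check could look like this (run as root; /media/storage is the export directory from the vol files below):

  # set and read back a trusted.* attribute on the backend export
  touch /media/storage/.xattr-test
  setfattr -n trusted.glusterfs.test -v working /media/storage/.xattr-test
  getfattr -n trusted.glusterfs.test /media/storage/.xattr-test
  rm /media/storage/.xattr-test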
>> The approach I'm using is the one that was recommended by Keith over
>> direct email (Keith, hope you don't mind me posting them :) ).
>> The idea is basically to have a single vol file for both client and
>> server, and to have one glusterfs process doing the job as both client
>> and server.
>>
>> Thanks for the help.
>>
>> Server 1:
>> volume home1
>>   type storage/posix                  # POSIX FS translator
>>   option directory /media/storage     # Export this directory
>> end-volume
>>
>> volume posix-locks-home1
>>   type features/posix-locks
>>   option mandatory-locks on
>>   subvolumes home1
>> end-volume
>>
>> ## Reference volume "home2" from remote server
>> volume home2
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host 192.168.253.42           # IP address of remote host
>>   option remote-subvolume posix-locks-home1   # use home1 on remote host
>>   option transport-timeout 10                 # value in seconds; it should be set relatively low
>> end-volume
>>
>> ### Add network serving capability to above home.
>> volume server
>>   type protocol/server
>>   option transport-type tcp
>>   subvolumes posix-locks-home1
>>   option auth.addr.posix-locks-home1.allow 192.168.253.42,127.0.0.1   # Allow access to "home1" volume
>> end-volume
>>
>> ### Create automatic file replication
>> volume home
>>   type cluster/afr
>>   option metadata-self-heal on
>>   option read-subvolume posix-locks-home1
>>   # option favorite-child home2
>>   subvolumes home2 posix-locks-home1
>> end-volume
>>
>>
>> Server 2:
>>
>> volume home1
>>   type storage/posix                  # POSIX FS translator
>>   option directory /media/storage     # Export this directory
>> end-volume
>>
>> volume posix-locks-home1
>>   type features/posix-locks
>>   option mandatory-locks on
>>   subvolumes home1
>> end-volume
>>
>> ## Reference volume "home2" from remote server
>> volume home2
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host 192.168.253.41           # IP address of remote host
>>   option remote-subvolume posix-locks-home1   # use home1 on remote host
>>   option transport-timeout 10                 # value in seconds; it should be set relatively low
>> end-volume
>>
>> ### Add network serving capability to above home.
>> volume server
>>   type protocol/server
>>   option transport-type tcp
>>   subvolumes posix-locks-home1
>>   option auth.addr.posix-locks-home1.allow 192.168.253.41,127.0.0.1   # Allow access to "home1" volume
>> end-volume
>>
>> ### Create automatic file replication
>> volume home
>>   type cluster/afr
>>   option metadata-self-heal on
>>   option read-subvolume posix-locks-home1
>>   # option favorite-child home2
>>   subvolumes home2 posix-locks-home1
>> end-volume
>>
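If the AFR subvolume ordering Piotr asked about turns out to be the culprit, a minimal sketch of a fix (untested, and only an assumption that this is the same issue as the Savannah bug) is to keep the child order identical on both machines, e.g. with server 1's storage always listed first. On server 1 the AFR volume would become:

  volume home
    type cluster/afr
    option metadata-self-heal on
    option read-subvolume posix-locks-home1
    subvolumes posix-locks-home1 home2   # local brick first, then the client to 192.168.253.42
  end-volume

Server 2 can stay as posted (its home2 already points at server 1), so both nodes then see the same global order: server 1's brick first, server 2's second.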