[Gluster-users] GlusterFS running, but not syncing is done

Stas Oskin stas.oskin at gmail.com
Mon Mar 9 12:14:50 UTC 2009


Actually, I see a new version has come out, rc4.
Any idea whether anything related to this was fixed?

Regards.

2009/3/9 Stas Oskin <stas.oskin at gmail.com>

> Hi.
>
>> Was it working for your previously? Any other error logs on machine
>> with afr? what version are you using? If it was working previously
>> what changed in your setup recently? Can you paste your vol files
>> (just to be sure)
>>
>
>
> Nope, it's actually my first setup in the lab. There are no errors - it just
> seems that nothing is being synchronized. The version I'm using is the latest
> one - 2.0rc2.
>
> Perhaps I need to configure something else in addition to the GlusterFS
> installation - like file-system attributes or something?
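> (One thing that might be worth checking in that direction: GlusterFS stores
> its replication metadata in extended attributes, so the backend filesystem
> under /media/storage has to support xattrs. A rough check, run as root and
> using the export directory from the vol files below, would be something like:
>
>   setfattr -n trusted.test -v sometest /media/storage
>   getfattr -d -m . /media/storage
>   setfattr -x trusted.test /media/storage
>
> If setting the attribute fails, the backend filesystem or its mount options
> may need adjusting before replication can work.)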
>
> The approach I'm using is the one Keith recommended over direct emails
> (Keith, I hope you don't mind me posting them :) ).
> The idea is basically to have a single vol file for both client and server,
> and to have one glusterfs process doing the job as both client and server.
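>
> (With this layout, each node runs one glusterfs process pointed at its own
> vol file and mounting the topmost "home" volume. Roughly - assuming the vol
> file is saved as /etc/glusterfs/glusterfs.vol and the mount point is
> /mnt/home, both of which are just placeholder paths - it would be started
> with something like:
>
>   glusterfs -f /etc/glusterfs/glusterfs.vol --volume-name home /mnt/home
>
> The protocol/server translator in the same file makes that process also
> listen for the other node's protocol/client connection.)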
>
> Thanks for the help.
>
> Server 1:
> volume home1
>  type storage/posix                   # POSIX FS translator
>  option directory /media/storage        # Export this directory
> end-volume
>
> volume posix-locks-home1
>  type features/posix-locks
>  option mandatory-locks on
>  subvolumes home1
> end-volume
>
> ## Reference volume "home2" from remote server
> volume home2
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 192.168.253.42      # IP address of remote host
>  option remote-subvolume posix-locks-home1     # use home1 on remote host
>  option transport-timeout 10           # value in seconds; it should be set relatively low
> end-volume
>
> ### Add network serving capability to above home.
> volume server
>  type protocol/server
>  option transport-type tcp
>  subvolumes posix-locks-home1
>  option auth.addr.posix-locks-home1.allow 192.168.253.42,127.0.0.1 # Allow access to "home1" volume
> end-volume
>
> ### Create automatic file replication
> volume home
>  type cluster/afr
>  option metadata-self-heal on
>  option read-subvolume posix-locks-home1
> #  option favorite-child home2
>  subvolumes home2 posix-locks-home1
> end-volume
>
>
> Server 2:
>
> volume home1
>  type storage/posix                   # POSIX FS translator
>  option directory /media/storage        # Export this directory
> end-volume
>
> volume posix-locks-home1
>  type features/posix-locks
>  option mandatory-locks on
>  subvolumes home1
> end-volume
>
> ## Reference volume "home2" from remote server
> volume home2
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 192.168.253.41      # IP address of remote host
>  option remote-subvolume posix-locks-home1     # use home1 on remote host
>  option transport-timeout 10           # value in seconds; it should be set relatively low
> end-volume
>
> ### Add network serving capability to above home.
> volume server
>  type protocol/server
>  option transport-type tcp
>  subvolumes posix-locks-home1
>  option auth.addr.posix-locks-home1.allow 192.168.253.41,127.0.0.1 # Allow access to "home1" volume
> end-volume
>
> ### Create automatic file replication
> volume home
>  type cluster/afr
>  option metadata-self-heal on
>  option read-subvolume posix-locks-home1
> #  option favorite-child home2
>  subvolumes home2 posix-locks-home1
> end-volume
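>
> (A quick way to see whether replication is actually happening - assuming each
> node mounts the "home" volume at /mnt/home, a placeholder path - is to create
> a file through the mount on one node and look at both backends:
>
>   echo test > /mnt/home/replication-test
>   ls -l /media/storage/replication-test     # run on both servers
>
> Running something like "ls -lR /mnt/home" on the mount point also walks the
> whole tree, which gives AFR a chance to self-heal anything that is missing on
> one side.)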
>