[Gluster-users] GlusterFS running, but not syncing is done

Stas Oskin stas.oskin at gmail.com
Mon Mar 9 14:35:23 UTC 2009


Hi.
These are my two new vol files, one for the client and one for the server.

Can you advise whether they are correct?

Thanks in advance.

glusterfs.vol (client)

## Reference remote volume "posix-locks-home1" as local volume "home2"
volume home2
 type protocol/client
 option transport-type tcp/client
 option remote-host 192.168.253.41      # IP address of remote host
 option remote-subvolume posix-locks-home1     # use home1 on remote host
 option transport-timeout 10           # value in seconds; should be set relatively low
end-volume

### Create automatic file replication
volume home
 type cluster/afr
 option metadata-self-heal on
 option read-subvolume posix-locks-home1
#  option favorite-child home2
 subvolumes posix-locks-home1 home2
end-volume
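
With the client config in its own file, the "home" AFR volume can be mounted at
boot with an fstab entry. This is only a sketch - the volfile path and mount
point below are assumptions; adjust them to your installation:

 # /etc/fstab - mount the "home" AFR volume defined in glusterfs.vol
 # (volfile path and mount point are assumptions; adjust to your layout)
 /etc/glusterfs/glusterfs.vol  /mnt/home  glusterfs  defaults  0  0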


glusterfsd.vol (server)

volume home1
 type storage/posix                   # POSIX FS translator
 option directory /media/storage        # Export this directory
end-volume

volume posix-locks-home1
 type features/posix-locks
 option mandatory-locks on
 subvolumes home1
end-volume

### Add network serving capability to the above volume.
volume server
 type protocol/server
 option transport-type tcp
 subvolumes posix-locks-home1
 option auth.addr.posix-locks-home1.allow 192.168.253.41,127.0.0.1  # Allow access to "home1" volume
end-volume
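
With separate glusterfs.vol and glusterfsd.vol files, the server and client run
as two processes, which matches Krishna's advice below about not using a single
process for both. A rough sketch of how they might be started, assuming the
volfiles live under /etc/glusterfs and the mount point is /mnt/home (both are
assumptions), using the glusterfs 2.0-era command line:

 # on each server: start the export daemon with the server volfile
 glusterfsd -f /etc/glusterfs/glusterfsd.vol

 # on each client: start a separate client process that mounts the AFR volume
 glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/home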

2009/3/9 Krishna Srinivas <krishna at zresearch.com>

> Stas,
>
> I think nothing changed between rc2 and rc4 that could affect this
> functionality.
>
> Your vol files look fine; I will look into why it is not working.
>
> Do not use a single process as both server and client, as we saw issues
> related to locking. Can you check whether using separate processes for
> server and client works correctly w.r.t. replication?
>
> Also, the subvolumes list of all AFRs should be in the same order (in
> your case it is interchanged).
>
> Regards
> Krishna
>
> On Mon, Mar 9, 2009 at 5:44 PM, Stas Oskin <stas.oskin at gmail.com> wrote:
> > Actually, I see a new version came out, rc4.
> > Any idea if anything related was fixed?
> > Regards.
> > 2009/3/9 Stas Oskin <stas.oskin at gmail.com>
> >>
> >> Hi.
> >>>
> >>> Was it working for you previously? Any other error logs on the
> >>> machine with AFR? What version are you using? If it was working
> >>> previously, what changed in your setup recently? Can you paste your
> >>> vol files (just to be sure)?
> >>
> >>
> >> Nope, it's actually my first setup in the lab. No errors - it just
> >> seems not to be synchronizing anything. The version I'm using is the
> >> latest one - 2.0rc2. Perhaps I need to modify something else in
> >> addition to the GlusterFS installation - like file-system attributes
> >> or something?
> >> The approach I'm using is the one Keith recommended over direct
> >> emails (Keith, hope you don't mind me posting them :) ).
> >> The idea is basically to have a single vol file for both client and
> >> server, and one glusterfs process doing the job as both client and
> >> server.
> >> Thanks for the help.
> >> Server 1:
> >> volume home1
> >>  type storage/posix                   # POSIX FS translator
> >>  option directory /media/storage        # Export this directory
> >> end-volume
> >>
> >> volume posix-locks-home1
> >>  type features/posix-locks
> >>  option mandatory-locks on
> >>  subvolumes home1
> >> end-volume
> >>
> >> ## Reference volume "home2" from remote server
> >> volume home2
> >>  type protocol/client
> >>  option transport-type tcp/client
> >>  option remote-host 192.168.253.42      # IP address of remote host
> >>  option remote-subvolume posix-locks-home1     # use home1 on remote host
> >>  option transport-timeout 10           # value in seconds; should be set relatively low
> >> end-volume
> >>
> >> ### Add network serving capability to above home.
> >> volume server
> >>  type protocol/server
> >>  option transport-type tcp
> >>  subvolumes posix-locks-home1
> >>  option auth.addr.posix-locks-home1.allow 192.168.253.42,127.0.0.1  # Allow access to "home1" volume
> >> end-volume
> >>
> >> ### Create automatic file replication
> >> volume home
> >>  type cluster/afr
> >>  option metadata-self-heal on
> >>  option read-subvolume posix-locks-home1
> >> #  option favorite-child home2
> >>  subvolumes home2 posix-locks-home1
> >> end-volume
> >>
> >>
> >> Server 2:
> >>
> >> volume home1
> >>  type storage/posix                   # POSIX FS translator
> >>  option directory /media/storage        # Export this directory
> >> end-volume
> >>
> >> volume posix-locks-home1
> >>  type features/posix-locks
> >>  option mandatory-locks on
> >>  subvolumes home1
> >> end-volume
> >>
> >> ## Reference volume "home2" from remote server
> >> volume home2
> >>  type protocol/client
> >>  option transport-type tcp/client
> >>  option remote-host 192.168.253.41      # IP address of remote host
> >>  option remote-subvolume posix-locks-home1     # use home1 on remote host
> >>  option transport-timeout 10           # value in seconds; should be set relatively low
> >> end-volume
> >>
> >> ### Add network serving capability to above home.
> >> volume server
> >>  type protocol/server
> >>  option transport-type tcp
> >>  subvolumes posix-locks-home1
> >>  option auth.addr.posix-locks-home1.allow 192.168.253.41,127.0.0.1  # Allow access to "home1" volume
> >> end-volume
> >>
> >> ### Create automatic file replication
> >> volume home
> >>  type cluster/afr
> >>  option metadata-self-heal on
> >>  option read-subvolume posix-locks-home1
> >> #  option favorite-child home2
> >>  subvolumes home2 posix-locks-home1
> >> end-volume
>

