[Gluster-devel] posix-locks under AFR not working for server+client in one process
Krishna Srinivas
krishna at zresearch.com
Tue Oct 14 15:34:02 UTC 2008
Hi Rommer,
Can you paste the spec file of the other server too?
Thanks
Krishna
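
The second node's spec file (for 192.168.1.13) isn't included in the thread; presumably it mirrors the one quoted below with the two addresses swapped. A sketch of only the parts that would differ, assuming such a symmetric setup (not Rommer's actual file):

# on 192.168.1.13 -- assumed mirror of the spec quoted below
volume server
  type protocol/server
  subvolumes io-thr
  option transport-type tcp/server
  option bind-address 192.168.1.13
  option auth.ip.io-thr.allow 192.168.1.*
end-volume

volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.1.12
  option remote-subvolume io-thr
  option transport-timeout 3
end-volume
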
On Sat, Oct 11, 2008 at 8:23 PM, Rommer <rommer at active.by> wrote:
> Hello Everybody,
>
> I'm testing glusterfs on two nodes and have found one problem.
> I decided to run the server and the client in one process, because
> that gives the best performance in my case. This is my configuration file:
>
> ####### local brick #########
> volume posix
>   type storage/posix
>   option directory /mnt/export
> end-volume
>
> volume posix-locks
>   type features/posix-locks
>   subvolumes posix
> end-volume
>
> volume io-thr
>   type performance/io-threads
>   subvolumes posix-locks
>   option thread-count 4
> end-volume
>
> #### export local brick #####
> volume server
>   type protocol/server
>   subvolumes io-thr
>   option transport-type tcp/server
>   option bind-address 192.168.1.12
>   option auth.ip.io-thr.allow 192.168.1.*
> end-volume
>
> ####### remote bricks #######
> volume remote
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.1.13
>   option remote-subvolume io-thr
>   option transport-timeout 3
> end-volume
>
> ########### afr #############
> volume afr
>   type cluster/afr
>   subvolumes io-thr remote
>   option read-subvolume io-thr
> end-volume
>
> volume wb
>   type performance/write-behind
>   subvolumes afr
> end-volume
>
> volume ra
>   type performance/read-ahead
>   subvolumes wb
> end-volume
> #############################
>
> Mounting on both nodes by:
> # glusterfs --spec-file=/etc/glusterfs/glfs.vol -n ra /mnt/glusterfs
>
> However, locking doesn't work in that configuration.
> I can still lock the same file on both nodes at the same time.
>
> If I run the server (glusterfsd) and the client (glusterfs) as separate
> processes, the same lock tests work perfectly.
>
> Rommer
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
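
The lock test Rommer describes isn't shown. A minimal sketch of what such a test might look like, assuming fcntl-based POSIX locks on a file under the GlusterFS mount (the path, hold time, and script itself are hypothetical, not his actual test):

#!/usr/bin/env python
# Hypothetical POSIX-lock test: run it on both nodes against the same
# file under the GlusterFS mount. With working posix-locks the second
# node should be refused (EAGAIN) while the first still holds the lock.
import fcntl
import sys
import time

path = sys.argv[1] if len(sys.argv) > 1 else "/mnt/glusterfs/lockfile"

f = open(path, "w")
try:
    # Request an exclusive, non-blocking write lock on the whole file.
    fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    print("got exclusive lock on %s, holding for 30s" % path)
    time.sleep(30)
except IOError as e:
    # Expected on the second node while the first node holds the lock.
    print("lock refused: %s" % e)
f.close()

In the failing single-process configuration described above, both nodes would report getting the lock, which is exactly the symptom Rommer reports.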