[Gluster-users] Issue with geo-replication and nfs auth
Kaushik BV
kaushikbv at gluster.com
Tue May 3 10:55:06 UTC 2011
Hi Cedric,
Regarding the geo-replication state: the log essentially means that the
client-server communication between the geo-rep master and slave has gone
down, which could happen for various reasons.
We can narrow down the exact cause if you run the session again at debug
log-level and send us the log files of both master and slave.
Run it at debug level by executing the following command:
#gluster volume geo-replication test ssh://root@slave.mydomain.com:file:///data/test config log-level DEBUG
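If the new log-level does not seem to take effect, you can stop and start the
session so that it is picked up (a minimal sketch, using the same master/slave
pair as above):
#gluster volume geo-replication test ssh://root@slave.mydomain.com:file:///data/test stop
#gluster volume geo-replication test ssh://root@slave.mydomain.com:file:///data/test start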
To locate the master's log file, execute the following command:
#gluster volume geo-replication test ssh://root@slave.mydomain.com:file:///data/test config log-file
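This prints the path of the master's log file; the exact location depends on
your installation, but it will look something like this (path shown only as an
illustration):
/var/log/glusterfs/geo-replication/test/<encoded-slave-url>.log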
To locate the slave log file, execute this command on the slave:
#gluster volume geo-replication ssh://root@slave.mydomain.com:file:///data/test config log-file
This returns a template which includes ${session_owner}.
To get the session_owner of the geo-replication session, execute the following
command on the MASTER:
#gluster volume geo-replication test ssh://root@slave.mydomain.com:file:///data/test config session-owner
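As an illustration only (the template and session-owner below are made up,
yours will differ): if the slave returns a template such as
/var/log/glusterfs/geo-replication-slaves/${session_owner}:gluster.log
and the master reports the session-owner
e5c0a1f2-3b4d-4c5e-8f6a-7b8c9d0e1f2a
then the slave log file would be
/var/log/glusterfs/geo-replication-slaves/e5c0a1f2-3b4d-4c5e-8f6a-7b8c9d0e1f2a:gluster.log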
Regarding nfs.rpc-auth-allow:
This is a bug which will be addressed in the next minor release; you can
follow its status at http://bugs.gluster.com/show_bug.cgi?id=2866

Regards,
Kaushik BV
On Tue, May 3, 2011 at 1:57 PM, Cedric Lagneau
<cedric.lagneau at openwide.fr> wrote:
> hi,
>
> I have some issues with geo-replication (since 3.2.0) and NFS auth (since
> the initial release).
>
>
> Geo-replication
> ---------------
> System: Debian 6.0 amd64
> GlusterFS: 3.2.0
>
> MASTER (volume) => SLAVE (directory)
> For some volumes it works, but for others I can't enable geo-replication
> and get this error with a faulty status:
> [2011-05-03 09:57:40.315774] E [syncdutils:131:log_raise_exception] <top>: FAIL:
> Traceback (most recent call last):
>   File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/syncdutils.py", line 152, in twrap
>     tf(*aa)
>   File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/repce.py", line 118, in listen
>     rid, exc, res = recv(self.inf)
>   File "/usr/lib/glusterfs/glusterfs/python/syncdaemon/repce.py", line 42, in recv
>     return pickle.load(inf)
> EOFError
>
> Command line:
> gluster volume geo-replication test slave.mydomain.com:/data/test/ start
>
> In /etc/glusterd I don't see any difference between the /etc/glusterd files.
>
> #gluster volume geo-replication status
> MASTER    SLAVE                                                STATUS
> --------------------------------------------------------------------------------
> test      ssh://root@slave.mydomain.com:file:///data/test      faulty
> test2     ssh://root@slave.mydomain.com:file:///data/test2     OK
>
>
>
> NFS auth allow
> --------------
> Even if I set nfs.rpc-auth-allow on a volume to restrict access to some IPs,
> I can still mount it via NFS. How does this work?
>
> Sample:
>
> #gluster volume set test nfs.rpc-auth-allow 10.0.0.10
> #gluster volume info test
> Options Reconfigured:
> nfs.rpc-auth-allow: 10.0.0.10
>
> My client with IP 192.168.10.25 can still mount the test volume via NFS with:
> mount -t nfs -o vers=3 glusterserveur:/test /mnt/test
>
>
> thanks for your help,
>
>
> best regards,
>
>
> --
>
> Cédric Lagneau
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>