[Gluster-users] Geo rep fail
anthony garnier
sokar6012 at hotmail.com
Tue Jul 31 11:43:13 UTC 2012
Hi Vijay,
I used the tarball from here: http://download.gluster.org/pub/gluster/glusterfs/LATEST/
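For reference, this is roughly how I confirm the installed build on each node (assuming the gluster binaries are on the PATH):

# glusterfs --version
# gluster --version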
> Date: Tue, 31 Jul 2012 07:39:51 -0400
> From: vkoppad at redhat.com
> To: sokar6012 at hotmail.com
> CC: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Geo rep fail
>
> Hi Anthony,
>
> By GlusterFS 3.3, do you mean this RPM:
> http://bits.gluster.com/pub/gluster/glusterfs/3.3.0/
> Or, if you are working from the git repo, can you give me the branch and HEAD?
>
> -Vijaykumar
>
> ----- Original Message -----
> From: "anthony garnier" <sokar6012 at hotmail.com>
> To: gluster-users at gluster.org
> Sent: Tuesday, July 31, 2012 2:47:40 PM
> Subject: [Gluster-users] Geo rep fail
>
>
>
> Hello everyone,
>
> I'm using GlusterFS 3.3 and I'm having difficulty setting up geo-replication over SSH.
>
> # gluster volume geo-replication test status
> MASTER    SLAVE                                   STATUS
> --------------------------------------------------------------------------------
> test      ssh://sshux@yval1020:/users/geo-rep     faulty
> test      file:///users/geo-rep                   OK
>
> As you can see, the session to the local folder works fine.
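> In case it helps, the settings of the faulty session can be dumped with the config subcommand (with no option name it prints everything, if I remember the 3.3 CLI correctly):
>
> # gluster volume geo-replication test ssh://sshux@yval1020:/users/geo-rep config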
>
> This is my config:
>
> Volume Name: test
> Type: Replicate
> Volume ID: 2f0b0eff-6166-4601-8667-6530561eea1c
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: yval1010:/users/exp
> Brick2: yval1020:/users/exp
> Options Reconfigured:
> geo-replication.indexing: on
> cluster.eager-lock: on
> performance.cache-refresh-timeout: 60
> network.ping-timeout: 10
> performance.cache-size: 512MB
> performance.write-behind-window-size: 256MB
> features.quota-timeout: 30
> features.limit-usage: /:20GB,/kernel:5GB,/toto:2GB,/troll:1GB
> features.quota: on
> nfs.port: 2049
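> Most of the non-default options above were applied with the usual volume set command, e.g.:
>
> # gluster volume set test network.ping-timeout 10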
>
>
> This is the log:
>
> [2012-07-31 11:10:38.711314] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
> [2012-07-31 11:10:38.844959] I [gsyncd:354:main_i] <top>: syncing: gluster://localhost:test -> ssh://sshux@yval1020:/users/geo-rep
> [2012-07-31 11:10:44.526469] I [master:284:crawl] GMaster: new master is 2f0b0eff-6166-4601-8667-6530561eea1c
> [2012-07-31 11:10:44.527038] I [master:288:crawl] GMaster: primary master with volume id 2f0b0eff-6166-4601-8667-6530561eea1c ...
> [2012-07-31 11:10:44.644319] E [repce:188:__call__] RepceClient: call 10810:140268954724096:1343725844.53 (xtime) failed on peer with OSError
> [2012-07-31 11:10:44.644629] E [syncdutils:184:log_raise_exception] <top>: FAIL:
> Traceback (most recent call last):
> File "/soft/GLUSTERFS//libexec/glusterfs/python/syncdaemon/gsyncd.py", line 115, in main
> main_i()
> File "/soft/GLUSTERFS//libexec/glusterfs/python/syncdaemon/gsyncd.py", line 365, in main_i
> local.service_loop(*[r for r in [remote] if r])
> File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/resource.py", line 756, in service_loop
> GMaster(self, args[0]).crawl_loop()
> File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py", line 143, in crawl_loop
> self.crawl()
> File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py", line 308, in crawl
> xtr0 = self.xtime(path, self.slave)
> File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py", line 74, in xtime
> xt = rsc.server.xtime(path, self.uuid)
> File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/repce.py", line 204, in __call__
> return self.ins(self.meth, *a)
> File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/repce.py", line 189, in __call__
> raise res
> OSError: [Errno 95] Operation not supported
>
>
> Apparently there are some errors with xtime, even though I have extended attributes activated.
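> For what it's worth, this is roughly how I check xattr support on the slave side (on yval1020, as root, assuming setfattr/getfattr from the attr package are installed; the xattr name is just a throwaway test key):
>
> # touch /users/geo-rep/.xattr-test
> # setfattr -n trusted.glusterfs.test -v check /users/geo-rep/.xattr-test
> # getfattr -n trusted.glusterfs.test /users/geo-rep/.xattr-test
> # rm /users/geo-rep/.xattr-test
>
> If the setfattr call itself fails with "Operation not supported", the filesystem under /users/geo-rep is mounted without xattr support.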
> Any help would be greatly appreciated.
>
> Anthony
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users