[Gluster-users] issues with geo-replication

Greg Swift gregswift at gmail.com
Thu Mar 22 16:07:47 UTC 2012


Thanks... the last item on that page was the resolution.

"If GlusterFS 3.2 or higher is not installed in the default location
(in slave) and has been prefixed to be installed in a custom location,
configure the remote-gsyncd-command for it to point to the exact place
where gsyncd is located. "

But since my target is a remote directory, not a gluster volume, I did
not install gluster software on the remote side.
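
For reference, if gsyncd is installed on the slave in a non-default
location, I believe it can be pointed at with the remote-gsyncd config
option, something along these lines (the /usr/local path below is just a
placeholder for wherever gsyncd actually lives):

# tell the master-side session where the slave's gsyncd binary is
gluster volume geo-replication myvol ssh://root@remoteip:/data/path \
    config remote-gsyncd /usr/local/libexec/glusterfs/gsyncd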

That being said, I think the prerequisites could use some clarification.

http://docs.redhat.com/docs/en-US/Red_Hat_Storage/2/html/User_Guide/chap-User_Guide-Geo_Rep-Preparation-Minimum_Reqs.html

States:
Before deploying Geo-replication, you must ensure that both Master and
Slave are Red Hat Storage instances.

I realize that in a strictly literal sense this tells you that, but it
would make more sense to state it explicitly: a geo-replication target
that is just a plain directory (not a gluster volume) only needs the
glusterfs-{core,geo-replication} packages, not a full RH Storage instance.
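
Put another way, prepping a plain-directory slave should be as simple as
installing those two packages (a sketch assuming the stock RPM names and
a yum-based distro):

# slave side: core bits plus the geo-replication module, nothing else
yum install glusterfs-core glusterfs-geo-replication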

-greg


On Thu, Mar 22, 2012 at 10:00, Venky Shankar <vshankar at redhat.com> wrote:
> Hey Greg,
>
> Have a look at this: http://docs.redhat.com/docs/en-US/Red_Hat_Storage/2/html/User_Guide/ch15s02s05.html
>
> Can you make sure you have everything set up as per the points mentioned in the doc?
>
> Thanks,
> -Venky
>
> ----- Original Message -----
> From: "Greg Swift" <gregswift at gmail.com>
> To: "gluster-users" <gluster-users at gluster.org>
> Sent: Thursday, March 22, 2012 6:34:39 PM
> Subject: Re: [Gluster-users] issues with geo-replication
>
> On Tue, Mar 20, 2012 at 14:34, Greg Swift <gregswift at gmail.com> wrote:
>> Hi all.  I'm looking to see if anyone can tell me this is already
>> working for them or if they wouldn't mind performing a quick test.
>>
>> I'm trying to set up a geo-replication instance on 3.2.5 from a local
>> volume to a remote directory.  This is the command I am using:
>>
>> gluster volume geo-replication myvol ssh://root@remoteip:/data/path start
>>
>> I am able to perform a geo-replication from a local volume to a remote
>> volume with no problem using the following command:
>>
>> gluster volume geo-replication myvol ssh://root@remoteip::remotevol start
>>
>> The steps I am using to implement this:
>>
>> 1: Create key pair for geo-replication in
>> /etc/glusterd/geo-replication/secret.pem and secret.pem.pub (see the
>> key-generation example after these steps)
>> 2: Add pub key to ~root/.ssh/authorized_keys on target systems
>> 3: Verify key works (using geo-replication's ssh syntax):
>> [root at myboxen ~]# ssh -oPasswordAuthentication=no
>> -oStrictHostKeyChecking=no -i /etc/glusterd/geo-replication/secret.pem
>> root at remoteip "ls -l /data"
>> drwxr-xr-x 2 root root 4096 Mar 15 11:53 path
>>
>> 4: Run the geo-replication command
>> gluster volume geo-replication myvol ssh://root@remoteip:/data/path start
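>>
>> For step 1, the key pair was generated roughly like this (the empty
>> passphrase is deliberate so gsyncd can use the key non-interactively):
>>
>> # creates secret.pem and secret.pem.pub in the geo-replication directory
>> ssh-keygen -f /etc/glusterd/geo-replication/secret.pem -N ''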
>>
>>
>> I'd expect to get:
>>
>> [root at myboxen ~]# gluster volume geo-replication myvol
>> ssh://root@remoteip:/data/path start
>> Starting geo-replication session between myvol &
>> ssh://root@remoteip:/data/path has been successful
>>
>> [root at myboxen ~]# gluster volume geo-replication status
>> MASTER               SLAVE                                              STATUS
>> --------------------------------------------------------------------------------
>> myvol             ssh://root@remoteip:file:///data/path OK
>>
>> Instead I get:
>>
>> [root at myboxen ~]# gluster volume geo-replication myvol
>> ssh://root@remoteip:/data/path start
>> geo-replication start failed for myvol ssh://root@remoteip:/data/path
>> geo-replication command failed
>>
>> [root at myboxen ~]# gluster volume geo-replication status
>> MASTER               SLAVE                                              STATUS
>> --------------------------------------------------------------------------------
>> myvol             ssh://root@remoteip:file:///data/path    corrupt
>>
>>
>> I was not getting any logs about this either.
>>
>> I then set log-level to DEBUG:
>> gluster volume geo-replication myvol ssh://root@remoteip:/data/path
>> config log-level debug
>>
>> My responses started being a bit different, but it still failed:
>>
>> [root at myboxen ecfcerts]# gluster volume geo-replication myvol
>> ssh://root@remoteip:/data/path start
>> Starting geo-replication session between myvol &
>> ssh://root@remoteip:/data/path has been successful
>> [root at myboxen ecfcerts]# gluster volume geo-replication status
>> MASTER               SLAVE                                              STATUS
>> --------------------------------------------------------------------------------
>> myvol             ssh://root@remoteip:file:///data/path    faulty
>>
>> At which point I do start getting logs.  I've attached it.
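>>
>> For anyone comparing against their own setup, I believe the log path for
>> a session can be pulled from the same config interface (option name from
>> memory, so worth double-checking):
>>
>> # print the configured log file path for this geo-replication session
>> gluster volume geo-replication myvol ssh://root@remoteip:/data/path \
>>     config log-file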
>
> So no one is doing geo-replication to a remote directory?
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


