[Gluster-users] 3.6.2 geo-replication session created but no files are copied

Sander Zijlstra sander.zijlstra at surfsara.nl
Wed Apr 8 07:18:41 UTC 2015


Aravinda,

I wasn’t really referring to the SSH command itself, but to the error thrown by gsyncd.py:

>> error: incorrect number of arguments
>> 
>> Usage: gsyncd.py [options...] <master> <slave>

The status is still like this:

# gluster volume geo-replication gv0 v39-app-01::gv0 status

MASTER NODE    MASTER VOL    MASTER BRICK                      SLAVE              STATUS     CHECKPOINT STATUS    CRAWL STATUS
------------------------------------------------------------------------------------------------------------------------------
s35-06         gv0           /glusterfs/bricks/brick1/brick    v39-app-05::gv0    Active     N/A                  Changelog Crawl
s35-07         gv0           /glusterfs/bricks/brick1/brick    v39-app-02::gv0    Passive    N/A                  N/A
s35-08         gv0           /glusterfs/bricks/brick1/brick    v39-app-01::gv0    Active     N/A                  Changelog Crawl
s35-09         gv0           /glusterfs/bricks/brick1/brick    v39-app-04::gv0    Passive    N/A                  N/A
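For reference, the checks Aravinda suggests amount to roughly the following on a master node. This is a sketch, not an exact recipe: the log directory name is filled in from the <MASTERVOL_SLAVEHOST_SLAVEVOL> template using this session's names, and the authorized_keys path assumes a default root-based 3.6.x setup.

```shell
# List the geo-rep session configuration, including the ssh_command
# prefix that gsyncd arguments get appended to at run time:
gluster volume geo-replication gv0 v39-app-01::gv0 config

# Look for errors in the master-side geo-replication logs
# (directory name assumed from the <MASTERVOL_SLAVEHOST_SLAVEVOL> template):
grep -iE 'error|fail' /var/log/glusterfs/geo-replication/gv0_v39-app-01_gv0/*.log

# On the slave (v39-app-01), the geo-rep key is typically installed with a
# forced command= entry, so a bare interactive ssh makes gsyncd run with no
# <master> <slave> arguments and print its usage line; that usage error on
# its own is therefore not necessarily the fault:
grep gsyncd /root/.ssh/authorized_keys
```

Note that the second and third commands only illustrate where to look; the actual error pointing at the stalled sync should be in those master-side logs.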


Met vriendelijke groet / kind regards,

Sander Zijlstra

| Linux Engineer | SURFsara | Science Park 140 | 1098XG Amsterdam | T +31 (0)6 43 99 12 47 | sander.zijlstra at surfsara.nl | www.surfsara.nl |

Regular day off on Friday

> On 08 Apr 2015, at 05:39, Aravinda <avishwan at redhat.com> wrote:
> 
> I don't see any issue with the ssh command here. Geo-rep uses this command as a prefix and adds additional parameters when it runs.
> 
> Please let us know the current status and any errors in the log files (/var/log/glusterfs/geo-replication/<MASTERVOL_SLAVEHOST_SLAVEVOL>/).
> 
> --
> regards
> Aravinda
> 
> On 04/08/2015 12:18 AM, Sander Zijlstra wrote:
>> LS,
>> 
>> Last week I configured geo-replication between two GlusterFS clusters, both running version 3.6.2, and everything looked fine:
>> 
>> [root at s35-06 gv0]# gluster volume geo-replication gv0 status
>> 
>> 
>> MASTER NODE    MASTER VOL    MASTER BRICK                      SLAVE              STATUS     CHECKPOINT STATUS    CRAWL STATUS
>> ------------------------------------------------------------------------------------------------------------------------------
>> s35-06         gv0           /glusterfs/bricks/brick1/brick    v39-app-05::gv0    Active     N/A                  Changelog Crawl
>> s35-07         gv0           /glusterfs/bricks/brick1/brick    v39-app-02::gv0    Passive    N/A                  N/A
>> s35-09         gv0           /glusterfs/bricks/brick1/brick    v39-app-04::gv0    Passive    N/A                  N/A
>> s35-08         gv0           /glusterfs/bricks/brick1/brick    v39-app-01::gv0    Active     N/A                  Changelog Crawl
>> 
>> I started the replication at the end of the day, hoping that all 40 TB would be copied by the next day or so, but I discovered that not a single bit had been copied.
>> 
>> When looking at the volume config settings I found the “ssh command” being used, so I tried it by hand and ran into the following issue between my master and slave clusters:
>> 
>> [root at s35-06 gv0]# ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem v39-app-01
>> [2015-04-07 14:43:43.613130] I [cli.c:593:cli_rpc_init] 0-cli: Connecting to remote glusterd at localhost
>> [2015-04-07 14:43:43.613178] D [rpc-clnt.c:972:rpc_clnt_connection_init] 0-glusterfs: defaulting frame-timeout to 30mins
>> [2015-04-07 14:43:43.613187] D [rpc-clnt.c:986:rpc_clnt_connection_init] 0-glusterfs: disable ping-timeout
>> [2015-04-07 14:43:43.613202] D [rpc-transport.c:188:rpc_transport_load] 0-rpc-transport: missing 'option transport-type'. defaulting to "socket"
>> [2015-04-07 14:43:43.613211] D [rpc-transport.c:262:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/3.6.2/rpc-transport/socket.so
>> [2015-04-07 14:43:43.615501] T [options.c:87:xlator_option_validate_int] 0-glusterfs: no range check required for 'option remote-port 24007'
>> [2015-04-07 14:43:43.615528] D [socket.c:3799:socket_init] 0-glusterfs: SSL support on the I/O path is NOT enabled
>> [2015-04-07 14:43:43.615537] D [socket.c:3802:socket_init] 0-glusterfs: SSL support for glusterd is NOT enabled
>> [2015-04-07 14:43:43.615543] D [socket.c:3819:socket_init] 0-glusterfs: using system polling thread
>> 
>> ——%<———
>> 
>> [2015-04-07 14:43:43.733052] I [cli-rpc-ops.c:5386:gf_cli_getwd_cbk] 0-cli: Received resp to getwd
>> [2015-04-07 14:43:43.733085] D [cli-cmd.c:384:cli_cmd_submit] 0-cli: Returning 0
>> [2015-04-07 14:43:43.733097] D [cli-rpc-ops.c:5415:gf_cli_getwd] 0-cli: Returning 0
>> [2015-04-07 14:43:43.733104] I [input.c:36:cli_batch] 0-: Exiting with: 0
>> error: incorrect number of arguments
>> 
>> Usage: gsyncd.py [options...] <master> <slave>
>> 
>> Connection to v39-app-01 closed.
>> 
>> Can somebody point me to how to fix this “gsyncd” issue? I didn’t find any updated packages from CentOS for my release (6.6), so I expect this should be a working setup.
>> 
>> Any help would be appreciated.
>> 
>> Met vriendelijke groet / kind regards,
>> 
>> *Sander Zijlstra*
>> 
>> | Linux Engineer | SURFsara | Science Park 140 | 1098XG Amsterdam | T +31 (0)6 43 99 12 47 | sander.zijlstra at surfsara.nl <mailto:sander.zijlstra at surfsara.nl> | www.surfsara.nl <http://www.surfsara.nl> |
>> 
>> /Regular day off on friday/
>> 
>> 
>> 
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
> 


