[Gluster-users] georeplication woes

Maarten van Baarsel mrten_glusterusers at ii.nl
Mon Jul 23 10:07:15 UTC 2018


Hi Sunny,

>> Can't run that command on the slave, it doesn't know the gl0 volume:
>>
>> root at gluster-4:/home/mrten# gluster volume geo-rep gl0 ssh://georep@gluster-4.glstr::glbackup config

> please do not use ssh://
> gluster volume geo-rep gl0 georep at gluster-4.glstr::glbackup config
> just use it like config command.

Sorry, that doesn't make a difference on the geo-rep slave side:

root at gluster-4:/home/mrten# gluster volume geo-rep gl0 georep at gluster-4.glstr::glbackup config
Volume gl0 does not exist
geo-replication command failed


It does work on the master side, but the output is the same as before:

root at gluster-3:/home/mrten# gluster volume geo-rep gl0 georep at gluster-4.glstr::glbackup config
access_mount:false
allow_network:
change_detector:changelog
change_interval:5
changelog_archive_format:%Y%m
changelog_batch_size:727040
changelog_log_file:/var/log/glusterfs/geo-replication/gl0_gluster-4.glstr_glbackup/changes-${local_id}.log
changelog_log_level:INFO
checkpoint:0
chnagelog_archive_format:%Y%m
cli_log_file:/var/log/glusterfs/geo-replication/cli.log
cli_log_level:INFO
connection_timeout:60
georep_session_working_dir:/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/
gluster_cli_options:
gluster_command:gluster
gluster_command_dir:/usr/sbin/
gluster_log_file:/var/log/glusterfs/geo-replication/gl0_gluster-4.glstr_glbackup/mnt-${local_id}.log
gluster_log_level:INFO
gluster_logdir:/var/log/glusterfs
gluster_params:aux-gfid-mount acl
gluster_rundir:/var/run/gluster
glusterd_workdir:/var/lib/glusterd
gsyncd_miscdir:/var/lib/misc/gluster/gsyncd
ignore_deletes:false
isolated_slaves:
log_file:/var/log/glusterfs/geo-replication/gl0_gluster-4.glstr_glbackup/gsyncd.log
log_level:INFO
log_rsync_performance:false
master_disperse_count:1
master_replica_count:1
max_rsync_retries:10
meta_volume_mnt:/var/run/gluster/shared_storage
pid_file:/var/run/gluster/gsyncd-gl0-gluster-4.glstr-glbackup.pid
remote_gsyncd:
replica_failover_interval:1
rsync_command:rsync
rsync_opt_existing:
rsync_opt_ignore_missing_args:
rsync_options:
rsync_ssh_options:
slave_access_mount:false
slave_gluster_command_dir:/usr/sbin/
slave_gluster_log_file:/var/log/glusterfs/geo-replication-slaves/gl0_gluster-4.glstr_glbackup/mnt-${master_node}-${master_brick_id}.log
slave_gluster_log_file_mbr:/var/log/glusterfs/geo-replication-slaves/gl0_gluster-4.glstr_glbackup/mnt-mbr-${master_node}-${master_brick_id}.log
slave_gluster_log_level:INFO
slave_gluster_params:aux-gfid-mount acl
slave_log_file:/var/log/glusterfs/geo-replication-slaves/gl0_gluster-4.glstr_glbackup/gsyncd.log
slave_log_level:INFO
slave_timeout:120
special_sync_mode:
ssh_command:ssh
ssh_options:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
ssh_options_tar:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
ssh_port:22
state_file:/var/lib/glusterd/geo-replication/gl0_gluster-4.glstr_glbackup/monitor.status
state_socket_unencoded:
stime_xattr_prefix:trusted.glusterfs.4054e7ad-7eb9-41fe-94cf-b52a690bb655.f7ce9a54-0ce4-4056-9958-4fa3f1630154
sync_acls:true
sync_jobs:3
sync_xattrs:true
tar_command:tar
use_meta_volume:true
use_rsync_xattrs:false
use_tarssh:false
working_dir:/var/lib/misc/gluster/gsyncd/gl0_gluster-4.glstr_glbackup/
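
For what it's worth: every session-specific path in that dump uses the name
gl0_gluster-4.glstr_glbackup. If I understand gsyncd right, that is just
<mastervol>_<slavehost>_<slavevol>, and the directory under
/var/lib/glusterd/geo-replication/ only exists on the master cluster — which
would explain why the slave answers "Volume gl0 does not exist". A rough
sketch of how that name is composed (the variable names are mine, not
gsyncd's):

```shell
# Assumption for illustration: gsyncd derives the session directory name
# from the master volume, slave host and slave volume. The values below
# are the ones from this session's config dump.
mastervol="gl0"
slavehost="gluster-4.glstr"
slavevol="glbackup"

# Matches georep_session_working_dir in the config output above:
session_dir="/var/lib/glusterd/geo-replication/${mastervol}_${slavehost}_${slavevol}"
echo "$session_dir"
```
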


This is set up as non-root geo-replication; should the following work at all?

georep at gluster-4:~$ /usr/sbin/gluster volume status all
Connection failed. Please check if gluster daemon is operational.

(run as the geo-replication user on the slave side)
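
My understanding of a non-root (mountbroker) setup is that the unprivileged
user never talks to glusterd directly — it only goes through gsyncd and the
mountbroker — so maybe the "Connection failed" above is even expected? For
reference, I believe the slave's glusterd.vol needs something like the
fragment below (values taken from the docs and adapted to my user/volume
names; the geogroup group name is just an example, not from my config):

```
volume management
    type mgmt/glusterd
    option mountbroker-root /var/mountbroker-root
    option mountbroker-geo-replication.georep glbackup
    option geo-replication-log-group geogroup
    option rpc-auth-allow-insecure on
end-volume
```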


Can I test something else? Is the command normally run in a jail?



Maarten.

