[Bugs] [Bug 1652887] Geo-rep help looks to have a typo.
bugzilla at redhat.com
Tue Nov 27 13:19:30 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1652887
--- Comment #2 from Kotresh HR <khiremat at redhat.com> ---
Description of problem:
When I run:
[root@dell-per730-01-priv ~]# gluster v geo-replication help
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n]
[[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause
[force]|resume [force]|config|status [detail]|delete [reset-sync-time]}
[options...]
In the wrapped usage I see:
[force]|config|status
and:
[detail]|delete
which reads as though "detail" modifies "delete". I think the grouping should instead be:
config
status [detail]
delete [force]
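For reference, here is the printed usage reflowed with one subcommand per line (my own reading of the grouping, not output from the tool):
volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {
        create [[ssh-port n] [[no-verify]|[push-pem]]] [force]
      | start [force]
      | stop [force]
      | pause [force]
      | resume [force]
      | config
      | status [detail]
      | delete [reset-sync-time]
} [options...]
Read this way, "detail" belongs to "status", and the only modifier documented for "delete" is "reset-sync-time".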
For example when I run:
[root@dell-per730-01-priv ~]# gluster v geo-replication data
root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol config
special_sync_mode: partial
gluster_log_file:
/var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/secret.pem
change_detector: changelog
use_meta_volume: true
session_owner: 71be0011-6af3-4250-8028-65eb6563d820
state_file:
/var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/monitor.status
gluster_params: aux-gfid-mount acl
remote_gsyncd: /nonexistent/gsyncd
working_dir:
/var/lib/misc/glusterfsd/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol
state_detail_file:
/var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol-detail.status
gluster_command_dir: /usr/sbin/
pid_file:
/var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/monitor.pid
georep_session_working_dir:
/var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/
ssh_command_tar: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i
/var/lib/glusterd/geo-replication/tar_ssh.pem
master.stime_xattr_name:
trusted.glusterfs.71be0011-6af3-4250-8028-65eb6563d820.91505f86-9440-47e1-a2d0-8fb817778f71.stime
changelog_log_file:
/var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol-changes.log
socketdir: /var/run/gluster
volume_id: 71be0011-6af3-4250-8028-65eb6563d820
ignore_deletes: false
state_socket_unencoded:
/var/lib/glusterd/geo-replication/data_gqas015.sbu.lab.eng.bos.redhat.com_georep-vol/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.socket
log_file:
/var/log/glusterfs/geo-replication/data/ssh%3A%2F%2Froot%4010.16.156.42%3Agluster%3A%2F%2F127.0.0.1%3Ageorep-vol.log
It succeeds. But when I append "detail" to "config", I get:
[root@dell-per730-01-priv ~]# gluster v geo-replication data
root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol config detail
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n]
[[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause
[force]|resume [force]|config|status [detail]|delete [reset-sync-time]}
[options...]
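For comparison, "config" does accept a single option name to query just that value; a sketch, using the use_meta_volume option from the dump above (exact output formatting may differ):
[root@dell-per730-01-priv ~]# gluster v geo-replication data root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol config use_meta_volume
true
So "config" takes an option name, but not "detail".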
Also when I run:
[root@dell-per730-01-priv ~]# gluster v geo-replication data
root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol status
MASTER NODE     MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                                             SLAVE NODE                            STATUS     CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.50.1    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A
192.168.50.6    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A
192.168.50.2    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A
192.168.50.3    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A
192.168.50.5    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A
192.168.50.4    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A
As well as:
[root@dell-per730-01-priv ~]# gluster v geo-replication data
root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol status detail
MASTER NODE     MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                                             SLAVE NODE                            STATUS     CRAWL STATUS    LAST_SYNCED    ENTRY    DATA    META    FAILURES    CHECKPOINT TIME    CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
192.168.50.1    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A            0        431     0       0           N/A                N/A                     N/A
192.168.50.5    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A
192.168.50.6    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A
192.168.50.4    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas015.sbu.lab.eng.bos.redhat.com    Active     Hybrid Crawl    N/A            0        792     0       0           N/A                N/A                     N/A
192.168.50.2    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas011.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A
192.168.50.3    data          /rhgs/brick2/data    root          gqas015.sbu.lab.eng.bos.redhat.com::georep-vol    gqas014.sbu.lab.eng.bos.redhat.com    Passive    N/A             N/A            N/A      N/A     N/A     N/A         N/A                N/A                     N/A
And when I run:
[root@dell-per730-01-priv ~]# gluster v geo-replication data
root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol delete detail
Usage: volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n]
[[no-verify]|[push-pem]]] [force]|start [force]|stop [force]|pause
[force]|resume [force]|config|status [detail]|delete [reset-sync-time]}
[options...]
It fails. I didn't want to delete my session, so I didn't run:
[root@dell-per730-01-priv ~]# gluster v geo-replication data
root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol delete force
But I feel "force" is the modifier that applies here, not "detail".
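For what it's worth, the only modifier the printed usage documents for "delete" is "reset-sync-time", so I would expect this to be the accepted form (untested here, since it would also remove the session):
[root@dell-per730-01-priv ~]# gluster v geo-replication data root@gqas015.sbu.lab.eng.bos.redhat.com::georep-vol delete reset-sync-time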
Version-Release number of selected component (if applicable):
[root@dell-per730-01-priv ~]# rpm -q glusterfs
glusterfs-3.8.4-18.4.el7rhgs.x86_64
How reproducible:
Every time.
Steps to Reproduce:
1. Run gluster v geo-rep help
2. Look at the config / status / delete subcommands and check whether force / detail apply.
Actual results:
There is a typo in the usage string, as described above.
Expected results:
A correctly grouped usage string.