[Bugs] [Bug 1660225] New: geo-rep does not replicate mv or rename of file
bugzilla at redhat.com
Mon Dec 17 21:36:55 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1660225
Bug ID: 1660225
Summary: geo-rep does not replicate mv or rename of file
Product: GlusterFS
Version: 4.1
Hardware: aarch64
OS: Linux
Status: NEW
Component: geo-replication
Severity: high
Assignee: bugs at gluster.org
Reporter: perplexed767 at outlook.com
CC: bugs at gluster.org
Target Milestone: ---
Classification: Community
Description of problem:
gluster 4.1.6 geo-replication is not replicating renames.

Our application is basically a mail server. Mail is stored on several
(configurable) mount points, each with its own gluster volume. The volumes
are geo-replicated to a mirror server that is set up identically on the other
site, the domain name being essentially the only difference.

The mail application writes to a spool file, which is renamed once spooling
completes (a generic sketch of the pattern follows this description). On the
geo-replication slave the spool file still exists under its original name;
the rename is never replicated.

On the slave site the volumes are set to read-only until we want to fail over
to them.

To simplify the issue: I create a txt file on the root of the volume and
immediately rename it. Only the original name is replicated; the rename never
happens on the slave.
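For illustration, the delivery pattern in question is the classic
write-then-rename, sketched here generically (directory and file names are
hypothetical, not our actual mail code):

#!/bin/sh
# Generic spool-then-rename delivery (illustrative sketch only).
SPOOL=/glusterVol/.test/mfs_opco1_int_17/spool   # hypothetical spool directory
tmp="$SPOOL/msg.$$.tmp"
cat > "$tmp"               # spool the incoming message from stdin
mv "$tmp" "$SPOOL/msg.$$"  # the rename marks delivery complete;
                           # this is the step geo-rep fails to replicate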
Version-Release number of selected component (if applicable):
4.1.6
How reproducible:
Easy.
Steps to Reproduce:
1. Create a file on the root mount of the volume.
2. Rename (mv) the file.
3. Verify on the slave that the file has been renamed (a scripted reproducer
   is sketched below).
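A minimal scripted reproducer, assuming the mount path shown in the output
below; <slave-host> stands in for the slave-side server:

#!/bin/sh
# Create a file on the master-side FUSE mount and rename it immediately.
MNT=/glusterVol/.test/mfs_opco1_int_17
cd "$MNT" || exit 1
echo "test 123" >> test4.txt
mv test4.txt test4_renamed.txt
# Give geo-rep a changelog cycle or two to sync, then check the slave.
sleep 60
ssh <slave-host> "ls -l $MNT/test4.txt $MNT/test4_renamed.txt"
# Expected: only test4_renamed.txt exists on the slave.
# Observed: only test4.txt is there; the rename never propagates.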
Actual results:
[vfeuk][xbesite1][fs-5][root at xbevmio01el-opco2-fs-25]:
/glusterVol/.test/mfs_opco1_int_17 # echo "test 123" >> test4.txt; mv test4.txt test4_renamed.txt; ls -l test4.txt test4_renamed.txt
ls: cannot access 'test4.txt': No such file or directory
-rw-r----- 1 root root 9 Dec 17 22:15 test4_renamed.txt
[vfeuk][xbesite1][fs-5][root at xbevmio01el-opco2-fs-25]:
/glusterVol/.test/mfs_opco1_int_17 # findmnt .
TARGET                             SOURCE                 FSTYPE         OPTIONS
/glusterVol/.test/mfs_opco1_int_17 fs-5:/mfs_opco1_int_17 fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
Slave server:
[vfeuk][xmssite2][fs-5][root at xmsvmio02el-opco2-fs-25]:
/glusterVol/.test/mfs_opco1_int_17 # ls -l test4.txt test4_renamed.txt
ls: cannot access 'test4_renamed.txt': No such file or directory
-rw-r----- 1 root root 9 Dec 17 22:15 test4.txt
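To confirm the divergence in one shot (bash process substitution; the slave
hostname is taken from the prompt above, and the mount path is assumed to be
identical on both sites):

# Run from the master-side server; any output lines show files that
# differ between the master and slave views of the volume.
diff <(ls -lR /glusterVol/.test/mfs_opco1_int_17) \
     <(ssh xmsvmio02el-opco2-fs-25 "ls -lR /glusterVol/.test/mfs_opco1_int_17")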
Expected results:
The file is renamed on the slave as well.
Additional info:
gluster v geo mfs_opco1_int_17 fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se::mfs_opco1_int_17 status

MASTER NODE:   fs-5-b.vfeuk.xbesite1.sero.gic.ericsson.se
MASTER VOL:    mfs_opco1_int_17
MASTER BRICK:  /exportg/mfs_opco1_int_17_b
SLAVE USER:    root
SLAVE:         fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se::mfs_opco1_int_17
SLAVE NODE:    fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se
STATUS:        Active
CRAWL STATUS:  Changelog Crawl
LAST_SYNCED:   2018-12-17 22:18:32

MASTER NODE:   fs-6-b.vfeuk.xbesite1.sero.gic.ericsson.se
MASTER VOL:    mfs_opco1_int_17
MASTER BRICK:  /exportg/mfs_opco1_int_17_b
SLAVE USER:    root
SLAVE:         fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se::mfs_opco1_int_17
SLAVE NODE:    fs-6-b.vfeuk.xmssite2.sero.gic.ericsson.se
STATUS:        Passive
CRAWL STATUS:  N/A
LAST_SYNCED:   N/A
/exportg/mfs_opco1_int_17_b/internal # gluster v geo mfs_opco1_int_17 fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se::mfs_opco1_int_17 config
access_mount:false
allow_network:
change_detector:changelog
change_interval:5
changelog_archive_format:%Y%m
changelog_batch_size:727040
changelog_log_file:/var/log/glusterfs/geo-replication/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/changes-${local_id}.log
changelog_log_level:WARNING
checkpoint:1545081336
chnagelog_archive_format:%Y%m
cli_log_file:/var/log/glusterfs/geo-replication/cli.log
cli_log_level:INFO
connection_timeout:60
georep_session_working_dir:/var/lib/glusterd/geo-replication/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/
gluster_cli_options:
gluster_command:gluster
gluster_command_dir:/usr/sbin
gluster_log_file:/var/log/glusterfs/geo-replication/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/mnt-${local_id}.log
gluster_log_level:WARNING
gluster_logdir:/var/log/glusterfs
gluster_params:aux-gfid-mount acl
gluster_rundir:/var/run/gluster
glusterd_workdir:/var/lib/glusterd
gsyncd_miscdir:/var/lib/misc/gluster/gsyncd
ignore_deletes:false
isolated_slaves:
log_file:/var/log/glusterfs/geo-replication/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/gsyncd.log
log_level:WARNING
log_rsync_performance:true
master_disperse_count:1
master_replica_count:1
max_rsync_retries:10
meta_volume_mnt:/var/run/gluster/shared_storage
pid_file:/var/run/gluster/gsyncd-mfs_opco1_int_17-fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se-mfs_opco1_int_17.pid
remote_gsyncd:
replica_failover_interval:1
rsync_command:rsync
rsync_opt_existing:true
rsync_opt_ignore_missing_args:true
rsync_options:
rsync_ssh_options:
slave_access_mount:false
slave_gluster_command_dir:/usr/sbin
slave_gluster_log_file:/var/log/glusterfs/geo-replication-slaves/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/mnt-${master_node}-${master_brick_id}.log
slave_gluster_log_file_mbr:/var/log/glusterfs/geo-replication-slaves/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/mnt-mbr-${master_node}-${master_brick_id}.log
slave_gluster_log_level:INFO
slave_gluster_params:aux-gfid-mount acl
slave_log_file:/var/log/glusterfs/geo-replication-slaves/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/gsyncd.log
slave_log_level:INFO
slave_timeout:120
special_sync_mode:
ssh_command:ssh
ssh_options:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
ssh_options_tar:-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
ssh_port:22
state_file:/var/lib/glusterd/geo-replication/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/monitor.status
state_socket_unencoded:
stime_xattr_prefix:trusted.glusterfs.9762dbff-67fc-41fa-b326-a327476869be.cf526ed9-a7a2-4f56-a94a-ba9e0d70d166
sync_acls:true
sync_jobs:1
sync_xattrs:true
tar_command:tar
use_meta_volume:true
use_rsync_xattrs:false
use_tarssh:false
working_dir:/var/lib/misc/gluster/gsyncd/mfs_opco1_int_17_fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se_mfs_opco1_int_17/
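Two details from the config dump above may help when digging further: the
checkpoint value is a plain epoch timestamp, and gsyncd is logging at
WARNING, which would hide any rename-processing messages. The config
invocation below assumes the same syntax as the config command above, with
the key name taken verbatim from the dump:

date -u -d @1545081336
# -> Mon Dec 17 21:15:36 UTC 2018 (the checkpoint set for this session)

gluster v geo mfs_opco1_int_17 fs-5-b.vfeuk.xmssite2.sero.gic.ericsson.se::mfs_opco1_int_17 config log_level DEBUG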
[vfeuk][xbesite1][fs-5][root at xbevmio01el-opco2-fs-25]:
/exportg/mfs_opco1_int_17_b/internal # gluster v info mfs_opco1_int_17
Volume Name: mfs_opco1_int_17
Type: Replicate
Volume ID: 9762dbff-67fc-41fa-b326-a327476869be
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: fs-5-b.vfeuk.xbesite1.sero.gic.ericsson.se:/exportg/mfs_opco1_int_17_b
Brick2: fs-6-b.vfeuk.xbesite1.sero.gic.ericsson.se:/exportg/mfs_opco1_int_17_b
Options Reconfigured:
features.read-only: off
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
diagnostics.client-log-level: WARNING
diagnostics.brick-log-level: WARNING
network.ping-timeout: 5
performance.write-behind-window-size: 64MB
server.allow-insecure: on
server.event-threads: 3
client.event-threads: 8
network.inode-lru-limit: 200000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.parallel-readdir: on
cluster.lookup-optimize: on
cluster.favorite-child-policy: mtime
performance.io-thread-count: 64
performance.readdir-ahead: on
performance.cache-size: 512MB
nfs.disable: on
cluster.enable-shared-storage: enable
[vfeuk][xbesite1][fs-5][root at xbevmio01el-opco2-fs-25]:
/exportg/mfs_opco1_int_17_b/internal # gluster v status mfs_opco1_int_17
Status of volume: mfs_opco1_int_17
Gluster process                                                               TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick fs-5-b.vfeuk.xbesite1.sero.gic.ericsson.se:/exportg/mfs_opco1_int_17_b  49159     0          Y       28908
Brick fs-6-b.vfeuk.xbesite1.sero.gic.ericsson.se:/exportg/mfs_opco1_int_17_b  49159     0          Y       3611
Self-heal Daemon on localhost                                                 N/A       N/A        Y       22663
Self-heal Daemon on fs-6-b.vfeuk.xbesite1.sero.gic.ericsson.se                N/A       N/A        Y       14227
Task Status of Volume mfs_opco1_int_17
------------------------------------------------------------------------------
There are no active volume tasks
[vfeuk][xbesite1][fs-5][root at xbevmio01el-opco2-fs-25]:
/exportg/mfs_opco1_int_17_b/internal # gluster peer status
Number of Peers: 1
Hostname: fs-6-b.vfeuk.xbesite1.sero.gic.ericsson.se
Uuid: 5c8b7596-840a-48e9-a3fe-e9ced3a48df0
State: Peer in Cluster (Connected)
gluster volume statedump mfs_opco1_int_17
Segmentation fault (core dumped)
xfs_info /exportg/mfs_opco1_int_17_b
meta-data=/dev/mapper/glustervg-lvm_mfs_opco1_int_17_b isize=1024 agcount=16, agsize=163808 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2620928, imaxpct=33
         =                       sunit=32     swidth=32 blks
naming   =version 2              bsize=8192   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=32 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
getfattr -d -m. -ehex /exportg/mfs_opco1_int_17_b
getfattr: Removing leading '/' from absolute path names
# file: exportg/mfs_opco1_int_17_b
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.9762dbff-67fc-41fa-b326-a327476869be.cf526ed9-a7a2-4f56-a94a-ba9e0d70d166.entry_stime=0x5c18140100000000
trusted.glusterfs.9762dbff-67fc-41fa-b326-a327476869be.cf526ed9-a7a2-4f56-a94a-ba9e0d70d166.stime=0x5c18140100000000
trusted.glusterfs.9762dbff-67fc-41fa-b326-a327476869be.xtime=0x5c18140e000c9211
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0x9762dbff67fc41fab326a327476869be
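If I read the on-disk layout right (an assumption on my part, not documented
output), the stime/xtime values above are a pair of big-endian 32-bit
integers, seconds then nanoseconds, so they decode to sensible timestamps:

printf '%d\n' 0x5c181401   # seconds half of entry_stime/stime -> 1545081857
date -u -d @1545081857     # -> Mon Dec 17 21:24:17 UTC 2018
printf '%d\n' 0x5c18140e   # seconds half of xtime -> 1545081870 (21:24:30 UTC)

That is, the brick's xtime is only 13 seconds ahead of the synced stime at
the moment of capture, which looks like normal lag rather than a stalled
session.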
[vfeuk][xbesite1][fs-5][root at xbevmio01el-opco2-fs-25]:
/exportg/mfs_opco1_int_17_b/internal # uname -r; cat /etc/issue
4.4.103-6.38-default
Welcome to SUSE Linux Enterprise Server 12 SP3 (x86_64) - Kernel \r (\l).
[vfeuk][xmssite2][fs-5][root at xmsvmio02el-opco2-fs-25]:
/glusterVol/.test/mfs_opco1_int_17 # df -Th
Filesystem                                              Type            Size  Used  Avail Use%  Mounted on
devtmpfs                                                devtmpfs        9.8G   12K  9.8G    1%  /dev
tmpfs                                                   tmpfs           9.9G  4.0K  9.9G    1%  /dev/shm
tmpfs                                                   tmpfs           9.9G  483M  9.4G    5%  /run
tmpfs                                                   tmpfs           9.9G     0  9.9G    0%  /sys/fs/cgroup
/dev/sda2                                               btrfs            32G  1.6G   29G    6%  /
/dev/sda1                                               ext3            259M   46M  200M   19%  /boot
tmpfs                                                   tmpfs           2.0G     0  2.0G    0%  /run/user/0
/dev/mapper/glustervg-lvm_glusterVarLog                 xfs              50G  232M   50G    1%  /exportg/gluster_var_log
/dev/mapper/glustervg-lvm_mfs_opco1_int_10_b            xfs              10G  4.7G  5.4G   47%  /exportg/mfs_opco1_int_10_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_11_b            xfs              10G  446M  9.6G    5%  /exportg/mfs_opco1_int_11_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_12_b            xfs              10G  4.6G  5.5G   46%  /exportg/mfs_opco1_int_12_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_13_b            xfs              10G  1.1G  9.0G   11%  /exportg/mfs_opco1_int_13_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_14_b            xfs              10G  518M  9.5G    6%  /exportg/mfs_opco1_int_14_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_15_b            xfs              10G  1.2G  8.8G   12%  /exportg/mfs_opco1_int_15_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_16_b            xfs              10G  637M  9.4G    7%  /exportg/mfs_opco1_int_16_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_17_b            xfs              10G  2.6G  7.5G   26%  /exportg/mfs_opco1_int_17_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_18_b            xfs              10G   38M   10G    1%  /exportg/mfs_opco1_int_18_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_19_b            xfs              10G   38M   10G    1%  /exportg/mfs_opco1_int_19_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_1a_b            xfs              10G   38M   10G    1%  /exportg/mfs_opco1_int_1a_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_1b_b            xfs              10G   38M   10G    1%  /exportg/mfs_opco1_int_1b_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_1c_b            xfs              10G   38M   10G    1%  /exportg/mfs_opco1_int_1c_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_1d_b            xfs              10G   42M   10G    1%  /exportg/mfs_opco1_int_1d_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_1e_b            xfs              10G   42M   10G    1%  /exportg/mfs_opco1_int_1e_b
/dev/mapper/glustervg-lvm_mfs_opco1_int_1f_b            xfs              10G   38M   10G    1%  /exportg/mfs_opco1_int_1f_b
10.221.81.224:/gluster_shared_storage                   fuse.glusterfs   32G  2.1G   29G    7%  /run/gluster/shared_storage
om-1-b.vfeuk.xmssite2.sero.gic.ericsson.se:/gcluster    fuse.glusterfs   25G  957M   25G    4%  /cluster
fs-5:/gluster_shared_storage                            fuse.glusterfs   32G  2.1G   29G    7%  /glusterVol/.test/gluster_shared_storage
fs-5:/mfs_opco1_int_10                                  fuse.glusterfs   10G  4.8G  5.3G   48%  /glusterVol/.test/mfs_opco1_int_10
fs-5:/mfs_opco1_int_11                                  fuse.glusterfs   10G  548M  9.5G    6%  /glusterVol/.test/mfs_opco1_int_11
fs-5:/mfs_opco1_int_12                                  fuse.glusterfs   10G  4.7G  5.4G   47%  /glusterVol/.test/mfs_opco1_int_12
fs-5:/mfs_opco1_int_13                                  fuse.glusterfs   10G  1.2G  8.9G   12%  /glusterVol/.test/mfs_opco1_int_13
fs-5:/mfs_opco1_int_14                                  fuse.glusterfs   10G  620M  9.4G    7%  /glusterVol/.test/mfs_opco1_int_14
fs-5:/mfs_opco1_int_15                                  fuse.glusterfs   10G  1.3G  8.7G   13%  /glusterVol/.test/mfs_opco1_int_15
fs-5:/mfs_opco1_int_16                                  fuse.glusterfs   10G  740M  9.3G    8%  /glusterVol/.test/mfs_opco1_int_16
fs-5:/mfs_opco1_int_17                                  fuse.glusterfs   10G  2.7G  7.4G   27%  /glusterVol/.test/mfs_opco1_int_17
fs-5:/mfs_opco1_int_18                                  fuse.glusterfs   10G  140M  9.9G    2%  /glusterVol/.test/mfs_opco1_int_18
fs-5:/mfs_opco1_int_19                                  fuse.glusterfs   10G  144M  9.9G    2%  /glusterVol/.test/mfs_opco1_int_19
fs-5:/mfs_opco1_int_1a                                  fuse.glusterfs   10G  144M  9.9G    2%  /glusterVol/.test/mfs_opco1_int_1a
fs-5:/mfs_opco1_int_1b                                  fuse.glusterfs   10G  144M  9.9G    2%  /glusterVol/.test/mfs_opco1_int_1b
fs-5:/mfs_opco1_int_1c                                  fuse.glusterfs   10G  144M  9.9G    2%  /glusterVol/.test/mfs_opco1_int_1c
fs-5:/mfs_opco1_int_1d                                  fuse.glusterfs   10G  144M  9.9G    2%  /glusterVol/.test/mfs_opco1_int_1d
fs-5:/mfs_opco1_int_1e                                  fuse.glusterfs   10G  144M  9.9G    2%  /glusterVol/.test/mfs_opco1_int_1e
fs-5:/mfs_opco1_int_1f                                  fuse.glusterfs   10G  144M  9.9G    2%  /glusterVol/.test/mfs_opco1_int_1f
fs-1-b.vfeuk.xmssite2.sero.gic.ericsson.se:/perf_opco1  fuse.glusterfs   16G  625M   16G    4%  /glusterVol/perf_opco1
fs-1-b.vfeuk.xmssite2.sero.gic.ericsson.se:/cps_opco1   fuse.glusterfs  2.0G   64M  2.0G    4%  /opt/global/cps
fs-1-b.vfeuk.xmssite2.sero.gic.ericsson.se:/logs_opco1  fuse.glusterfs   50G  2.1G   48G    5%  /glusterVol/logs_opco1
==================
If any logs are required, please ask.