[Bugs] [Bug 1760399] WORMed files couldn't be migrated during rebalancing

bugzilla at redhat.com bugzilla at redhat.com
Wed Oct 16 08:37:58 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1760399



--- Comment #7 from Mohit Agrawal <moagrawa at redhat.com> ---
Hi,

Yes, WORMed files should be migrated to the newly added bricks.
I have tried to reproduce the issue on the following version:

glusterfs-libs-5.5-1.el7.x86_64
glusterfs-fuse-5.5-1.el7.x86_64
glusterfs-devel-5.5-1.el7.x86_64
glusterfs-rdma-5.5-1.el7.x86_64
glusterfs-5.5-1.el7.x86_64
glusterfs-cli-5.5-1.el7.x86_64
glusterfs-api-devel-5.5-1.el7.x86_64
glusterfs-cloudsync-plugins-5.5-1.el7.x86_64
glusterfs-client-xlators-5.5-1.el7.x86_64
glusterfs-server-5.5-1.el7.x86_64
glusterfs-events-5.5-1.el7.x86_64
glusterfs-debuginfo-5.5-1.el7.x86_64
glusterfs-api-5.5-1.el7.x86_64
glusterfs-extra-xlators-5.5-1.el7.x86_64
glusterfs-geo-replication-5.5-1.el7.x86_64

Reproducer Steps:
 1) gluster v create test1 replica 3 10.74.251.224:/dist1/b{0..2} force
 2) gluster v set test1 features.worm-file-level on
 3) Mount the volume on /mnt
 4) Write the data
    time for (( i=0 ; i<=10 ; i++ )); do
        dd if=/dev/urandom of=/mnt/file$i bs=1M count=100
        mkdir -p /mnt/dir$i/dir1/dir2/dir3/dir4/dir5/
    done
 5) Run add-brick
    gluster v add-brick test1 10.74.251.224:/dist2/b{0..2}
 6) Start rebalance
    gluster v rebalance test1 start
    5 files were successfully migrated to /dist2/b{0..2}; see the verification below.
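
A quick way to verify the result after rebalance (a sketch; the brick paths
match the reproducer above, and the append is only expected to fail once the
file has actually transitioned to the WORM state):

    gluster v rebalance test1 status
    # migrated files should now appear on the newly added bricks
    ls -l /dist2/b0 /dist2/b1 /dist2/b2
    # appending to a WORMed file from the mount should fail with "Read-only file system"
    echo test >> /mnt/file0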

I am not able to reproduce the issue; please correct me if I have missed any
steps in the reproducer.
Could you please share the rebalance logs and confirm the reproducer steps?
For most of the fops, the worm xlator checks whether the request has come from
an internal client and, if so, winds the fop straight to the next xlator. For
some fops it does not perform this check, so the rebalance logs along with the
reproducer steps are needed to confirm which case is being hit.
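
In case it helps, the rebalance log for the volume normally lives under
/var/log/glusterfs on the node that ran the migration (assuming the default
log directory), so something like the following should show whether any
migration failed on the WORMed files:

    # default rebalance log for volume "test1"
    less /var/log/glusterfs/test1-rebalance.log
    # look for failed migrations, e.g. read-only or permission errors
    grep -iE "migrate|failed|read-only" /var/log/glusterfs/test1-rebalance.log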

Regards,
Mohit Agrawal

-- 
You are receiving this mail because:
You are on the CC list for the bug.

