[Bugs] [Bug 1203739] Self-heal of sparse image files on 3-way replica "unsparsifies" the image

bugzilla at redhat.com bugzilla at redhat.com
Tue Apr 7 17:40:33 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1203739

SATHEESARAN <sasundar at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |sasundar at redhat.com



--- Comment #5 from SATHEESARAN <sasundar at redhat.com> ---
I have found the same behaviour with the latest glusterfs-3.7 nightly build,
without involving oVirt in the picture.

These are the steps that I used to reproduce this issue:

1. Installed 3 RHEL servers with latest glusterfs-3.7 nightly build
2. Created a gluster cluster ( Trusted Storage Pool )
3. Created a replica 3 volume
4. Fuse mounted the volume on another RHEL 6.6 server
5. Created a sparse file
(i.e) dd if=/dev/urandom of=vm.img bs=1024 count=0 seek=24M
6. Performed fallocate on that file for 5G
(i.e) fallocate -l5G vm.img
7. Reduced the replica count to 2
(i.e) gluster volume remove-brick <vol-name> replica 2 <brick3>
8. Added a new brick
(i.e) gluster volume add-brick <vol-name> replica 3 <new-brick>
9. Triggered self-heal
(i.e) gluster volume heal <vol-name> full

10. Waited till the self-heal completed
11. Checked the file size across all the bricks
(i.e) du -sh <file> - gives the actual disk usage
      du -sh --apparent-size <file> - gives the apparent (sparse) file size
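The sparseness check in step 11 can be sketched locally with a small self-contained example (the file name and the 100M size are illustrative assumptions, not taken from the bug):

```shell
#!/bin/sh
# Minimal sketch of the du-based sparseness check from step 11.
# File name and size are illustrative, not from the bug report.
set -e
tmpdir=$(mktemp -d)

# Create a sparse file: 100M apparent size, zero blocks allocated
dd if=/dev/zero of="$tmpdir/sparse.img" bs=1 count=0 seek=100M 2>/dev/null

apparent=$(du -sh --apparent-size "$tmpdir/sparse.img" | cut -f1)
actual=$(du -sh "$tmpdir/sparse.img" | cut -f1)

# For a healthy sparse file the two differ wildly; after a heal that
# "unsparsifies" the image, du -sh on the affected brick would report
# something close to the apparent size instead.
echo "apparent=$apparent actual=$actual"

rm -rf "$tmpdir"
```

On a freshly created sparse file this prints an apparent size of 100M with near-zero actual usage; the bug is that after a full self-heal the copy on the newly added brick consumes close to the apparent size on disk.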

From the above test, I could see that on one of the servers the file's disk
usage had grown to 24G, which proves the existence of this problem.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
