[Bugs] [Bug 1187547] New: self-heal-algorithm with option "full" doesn't heal sparse files correctly
bugzilla at redhat.com
Fri Jan 30 12:08:07 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1187547
Bug ID: 1187547
Summary: self-heal-algorithm with option "full" doesn't heal
sparse files correctly
Product: GlusterFS
Version: 3.6.1
Component: replicate
Keywords: Triaged
Assignee: bugs at gluster.org
Reporter: ravishankar at redhat.com
CC: bugs at gluster.org, gluster-bugs at redhat.com,
lindsay.mathieson at gmail.com, pkarampu at redhat.com,
ravishankar at redhat.com
Depends On: 1166020
Blocks: 1167012, 1179563
+++ This bug was initially created as a clone of Bug #1166020 +++
Description of problem:
Here is Lindsay Mathieson's email on gluster-users describing the problems she faced.
On 11/18/2014 05:35 PM, Lindsay Mathieson wrote:
>
> I have a VM image which is a sparse file - 512GB allocated, but only 32GB used.
>
> root at vnb:~# ls -lh /mnt/gluster-brick1/datastore/images/100
>
> total 31G
> -rw------- 2 root root 513G Nov 18 19:57 vm-100-disk-1.qcow2
>
> I switched to full sync and rebooted.
>
> heal was started on the image and it seemed to be just transferring the full file from node vnb to vng. iftop showed bandwidth at 500 Mb/s
>
> Eventually the cumulative transfer got to 140GB, which seemed odd, as the real file size was 31G. I logged onto the second node (vng) and the *real* file size was up to 191GB.
>
> It looks like the heal is not handling sparse files; rather, it is transferring empty bytes to make up the allocated size. That's a serious problem for the common practice of overcommitting your disk space with VM images, not to mention the inefficiency.
Ah! This problem doesn't exist in diff self-heal :-(, because the checksums of
the files will match in the sparse regions. In full self-heal it just reads
from the source file and writes to the sink file. What we can change there is:
if the file is a sparse file and the data that is read is all zeros (a read
returns all zeros in a sparse region), then read the stale file and check
whether it is also all zeros. If both are zeros, skip the write. I also
checked that if the sparse file is created while the other brick is down, the
heal still preserves the holes (i.e. sparse regions). This problem only
appears when both files exist in their full size on both bricks and a full
self-heal is done, like here :-(.
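The skip-zero-write idea described above can be sketched as follows. This is a hypothetical Python illustration (the actual shd code is C inside GlusterFS); the chunk size and function name are made up:

```python
CHUNK = 128 * 1024  # illustrative heal window; the real shd uses its own size


def full_heal_skip_zeros(source_path, sink_path):
    """Full heal: read from source, write to sink, but skip any range that
    reads back as all zeros on both sides. Skipping the write keeps existing
    holes in the sink intact. (Sketch of the idea, not GlusterFS code.)"""
    with open(source_path, "rb") as src, open(sink_path, "r+b") as snk:
        offset = 0
        while True:
            src_chunk = src.read(CHUNK)
            if not src_chunk:
                break
            snk.seek(offset)
            snk_chunk = snk.read(len(src_chunk))
            zeros = bytes(len(src_chunk))
            # Only write when at least one side has non-zero data in this range.
            if not (src_chunk == zeros and snk_chunk == zeros):
                snk.seek(offset)
                snk.write(src_chunk)
            offset += len(src_chunk)
        snk.truncate(offset)
```

The key point is that the write is *skipped* rather than performed with zeros: on most filesystems even a write of zeros allocates blocks, so only the skip preserves the hole.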
Thanks for your valuable inputs. You have essentially found two issues, so I
will raise two bugs, one for each. I can CC you on the bugzilla entries so
that you can see the updates once they are fixed. Do you want to be CCed?
Pranith
>
> thanks,
>
> --
> Lindsay
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
Version-Release number of selected component (if applicable):
Reported on 3.5.2, but the issue exists in all versions.
How reproducible:
always
Steps to Reproduce:
1. Create a plain or distributed-replicate volume.
2. Create a sparse VM image on the volume.
3. Set cluster.data-self-heal-algorithm to 'full' on the volume.
4. Bring a brick down and modify data in the VM.
5. Bring the brick back up.
6. Self-heal writes zeros into the sparse regions of the sink file,
destroying its sparseness and nullifying the space savings of sparse VM
images.
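The reproducer comes down to the difference between a file's apparent size and its allocated size. The effect can be demonstrated locally, without a gluster volume, by comparing st_blocks before and after a naive byte-for-byte copy, which is effectively what the 'full' heal performed. A hypothetical Python illustration (paths and sizes are made up):

```python
import os
import shutil
import tempfile


def allocated_bytes(path):
    # st_blocks counts 512-byte units actually allocated on disk.
    return os.stat(path).st_blocks * 512


d = tempfile.mkdtemp()

# A sparse "VM image": 64 MiB apparent size, (almost) nothing allocated.
sparse = os.path.join(d, "vm-disk.raw")
with open(sparse, "wb") as f:
    f.truncate(64 * 1024 * 1024)

# A naive byte-for-byte copy reads the holes back as zeros and writes them
# out, so the destination allocates the full apparent size.
dense = os.path.join(d, "healed-copy.raw")
with open(sparse, "rb") as s, open(dense, "wb") as t:
    shutil.copyfileobj(s, t)

print(os.path.getsize(sparse), allocated_bytes(sparse))  # big apparent size, tiny usage
print(os.path.getsize(dense), allocated_bytes(dense))    # same apparent size, full usage
```

This mirrors the report above: the apparent sizes match, but the copy's on-disk usage balloons to the full allocated size.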
--- Additional comment from Anand Avati on 2015-01-23 00:52:55 EST ---
REVIEW: http://review.gluster.org/9480 (afr: Don't write to sparse regions of
sink.) posted (#1) for review on master by Ravishankar N
(ravishankar at redhat.com)
--- Additional comment from Anand Avati on 2015-01-28 07:03:52 EST ---
REVIEW: http://review.gluster.org/9480 (afr: Don't write to sparse regions of
sink.) posted (#2) for review on master by Ravishankar N
(ravishankar at redhat.com)
--- Additional comment from Anand Avati on 2015-01-29 08:05:33 EST ---
REVIEW: http://review.gluster.org/9480 (afr: Don't write to sparse regions of
sink.) posted (#3) for review on master by Ravishankar N
(ravishankar at redhat.com)
--- Additional comment from Anand Avati on 2015-01-30 02:01:10 EST ---
REVIEW: http://review.gluster.org/9480 (afr: Don't write to sparse regions of
sink.) posted (#4) for review on master by Pranith Kumar Karampuri
(pkarampu at redhat.com)
--- Additional comment from Anand Avati on 2015-01-30 07:02:49 EST ---
COMMIT: http://review.gluster.org/9480 committed in master by Pranith Kumar
Karampuri (pkarampu at redhat.com)
------
commit 0f84f8e8048367737a2dd6ddf0c57403e757441d
Author: Ravishankar N <ravishankar at redhat.com>
Date: Fri Jan 23 11:12:54 2015 +0530
afr: Don't write to sparse regions of sink.
Problem:
When data-self-heal-algorithm is set to 'full', shd just reads from
source and writes to sink. If source file happened to be sparse (VM
workloads), we end up actually writing 0s to the corresponding regions
of the sink causing it to lose its sparseness.
Fix:
If the source file is sparse, and the data read from source and sink are
both zeros for that range, skip writing that range to the sink.
Change-Id: I787b06a553803247f43a40c00139cb483a22f9ca
BUG: 1166020
Signed-off-by: Ravishankar N <ravishankar at redhat.com>
Reviewed-on: http://review.gluster.org/9480
Tested-by: Gluster Build System <jenkins at build.gluster.com>
Reviewed-by: Pranith Kumar Karampuri <pkarampu at redhat.com>
Tested-by: Pranith Kumar Karampuri <pkarampu at redhat.com>
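The "if the source file is sparse" precondition in the fix above can be approximated in userspace by comparing allocated blocks against apparent size. A hypothetical sketch (an illustrative helper, not taken from the GlusterFS patch):

```python
import os


def looks_sparse(path):
    """Heuristic sparseness check: fewer bytes allocated on disk
    (st_blocks * 512) than the apparent size reports."""
    st = os.stat(path)
    return st.st_blocks * 512 < st.st_size
```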
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1166020
[Bug 1166020] self-heal-algorithm with option "full" doesn't heal sparse
files correctly
https://bugzilla.redhat.com/show_bug.cgi?id=1167012
[Bug 1167012] self-heal-algorithm with option "full" doesn't heal sparse
files correctly
https://bugzilla.redhat.com/show_bug.cgi?id=1179563
[Bug 1179563] self-heal-algorithm with option "full" doesn't heal sparse
files correctly