[Gluster-users] Sparse Files and Heal

Pranith Kumar Karampuri pkarampu at redhat.com
Sat Nov 22 16:58:50 UTC 2014


On 11/22/2014 09:59 PM, Adrian Kan wrote:
> Thanks a lot!  Just one more question.
> I know the file sizes are different.  However, I ran an md5sum against
> the original and the one after the reheal (the one with the smaller
> file size), and they are the same.
> I would like to know if there is any side effect to keeping them at
> different file sizes on different bricks until the bug has been fixed?
No. To an application, a sparse region is nothing but a region filled 
with '0' bytes. Because of this bug, those sparse regions get filled 
with actual '0' data, which is equivalent in content but increases the 
disk usage. That is why the md5sum matches.

Pranith.
>
> Fortunately, when I mount the healed VM image (the one with the smaller
> file size) using losetup, kpartx, etc., I can still see the files in
> the VM file system.
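>
> For reference, the rough sequence I used (with /mnt/datastore1/vm.img
> standing in for the actual image path) was something like:
>
>     losetup -f --show /mnt/datastore1/vm.img   # attach image; prints e.g. /dev/loop0
>     kpartx -av /dev/loop0                      # map partitions to /dev/mapper/loop0p*
>     mount /dev/mapper/loop0p1 /mnt/vmroot      # mount the first partition
>     # ... inspect the files, then tear down:
>     umount /mnt/vmroot
>     kpartx -dv /dev/loop0
>     losetup -d /dev/loop0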
>
>
> Thanks,
> Adrian
>
> -----Original Message-----
> From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
> Sent: Sunday, November 23, 2014 12:22 AM
> To: Adrian Kan; 'Lindsay Mathieson'; gluster-users at gluster.org
> Subject: Re: [Gluster-users] Sparse Files and Heal
>
>
> On 11/22/2014 09:46 PM, Adrian Kan wrote:
>> I would like to have 3.4.x fixed if possible.  I have a plan to
>> upgrade, but I have to review the procedure first, for example:
> Okay, done, https://bugzilla.redhat.com/show_bug.cgi?id=1167012
>> 1) The sequence - should I upgrade the clients first or the bricks first?
>> 2) Can one brick be taken down for the upgrade and brought back up so
>> that everything is in sync between 3.4.x and 3.5.x before I upgrade
>> the next brick to 3.5.x?
> This fix will be in the client, so you will have downtime anyway. Do a
> normal upgrade: stop the VMs, unmount the mounts, and stop the volume.
> Upgrade both clients and servers, then start the volume, remount, and
> start the VMs again.
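>
> Roughly, with datastore1 as the volume name and /mnt/vms as a stand-in
> for the client mount point (adjust to your setup):
>
>     # on each hypervisor: shut down the VMs, then unmount the client
>     umount /mnt/vms
>     # on one of the servers: stop the volume
>     gluster volume stop datastore1
>     # upgrade the glusterfs packages on all servers and clients,
>     # then start the volume and remount
>     gluster volume start datastore1
>     mount -t glusterfs server1:/datastore1 /mnt/vms
>     # finally, start the VMs again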
>
> Pranith
>>
>> Thanks,
>> Adrian
>>
>> -----Original Message-----
>> From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
>> Sent: Sunday, November 23, 2014 12:11 AM
>> To: Adrian Kan; 'Lindsay Mathieson'; gluster-users at gluster.org
>> Subject: Re: [Gluster-users] Sparse Files and Heal
>>
>>
>> On 11/22/2014 09:31 PM, Adrian Kan wrote:
>>> I'm currently using 3.4.2
>> Do you mind upgrading to 3.5.x, or do you want to stay with 3.4.x?
>>
>> Pranith
>>> Thanks,
>>> Adrian
>>>
>>> -----Original Message-----
>>> From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
>>> Sent: Saturday, November 22, 2014 11:57 PM
>>> To: Adrian Kan; 'Lindsay Mathieson'; gluster-users at gluster.org
>>> Subject: Re: [Gluster-users] Sparse Files and Heal
>>>
>>>
>>> On 11/22/2014 09:25 PM, Adrian Kan wrote:
>>>> Thanks a lot, Pranith.  Could you CC me on the bug as well, because
>>>> I am very interested in its status.
>>>> I have been hitting the same issue since the middle of this year
>>>> (http://gluster.org/pipermail/gluster-users.old/2014-March/016322.html)
>>>> so I hope this can be fixed.
>>> Are you using 3.4.x or 3.5.x? There will be different bugs (clones)
>>> for the two releases; I will CC you on the right one based on that.
>>>
>>> Pranith
>>>> Thanks,
>>>> Adrian
>>>>
>>>> -----Original Message-----
>>>> From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
>>>> Sent: Saturday, November 22, 2014 11:49 PM
>>>> To: Adrian Kan; 'Lindsay Mathieson'; gluster-users at gluster.org
>>>> Subject: Re: [Gluster-users] Sparse Files and Heal
>>>>
>>>>
>>>> On 11/22/2014 01:17 PM, Adrian Kan wrote:
>>>>> Pranith,
>>>>>
>>>>> I'm wondering if this is a better method to take down a brick for
>>>>> maintenance purposes and reheal:
>>>>>
>>>>> 1) Detach the brick from the cluster - gluster volume remove-brick
>>>>> datastore1 replica 1 brick1:/mnt/datastore1
>>>>> 2) Take brick1 down
>>>>> 3) Do whatever maintenance is needed on brick1
>>>>> 4) Turn brick1 back on
>>>>> 5) I'm pretty sure glusterfs would not allow brick1 to be
>>>>> re-attached to the cluster because of the attributes set on the
>>>>> brick for the volume.  The only way is to remove everything in it.
>>>>> 6) Re-attach brick1 after emptying its directory -
>>>>> gluster volume add-brick datastore1 replica 2 brick1:/mnt/datastore1
>>>>> 7) Initiate a full heal
>>>> The best method is just 2), 3), 4). The only bug preventing that
>>>> right now is the 'full' heal filling in the sparse regions of the
>>>> file, which will be fixed shortly; we have even identified the fix.
>>>>
>>>> Pranith
>>>>> Thanks,
>>>>> Adrian
>>>>>
>>>>> -----Original Message-----
>>>>> From: gluster-users-bounces at gluster.org
>>>>> [mailto:gluster-users-bounces at gluster.org] On Behalf Of Lindsay
>>>>> Mathieson
>>>>> Sent: Saturday, November 22, 2014 3:35 PM
>>>>> To: gluster-users at gluster.org
>>>>> Subject: Re: [Gluster-users] Sparse Files and Heal
>>>>>
>>>>> On Sat, 22 Nov 2014 12:54:48 PM you wrote:
>>>>>> Lindsay,
>>>>>>            You said you restored it from some backup. How did you
>>>>>> do that?
>>>>>> If you copy the VM image from the backup directly to the location
>>>>>> on the brick where you deleted it, then the VM hypervisor still
>>>>>> doesn't write to the new file that was copied. Basically, we need
>>>>>> to make the mount close the old fd that was opened on the VM (now
>>>>>> deleted on one of the bricks).
>>>>>
>>>>>
>>>>> I stopped the VM, and the restore creates an image with a new
>>>>> name, so it should be fine.
>>>>>
>>>>> thanks,
>>>>> --
>>>>> Lindsay
>>>>>
>>>>> _______________________________________________
>>>>> Gluster-users mailing list
>>>>> Gluster-users at gluster.org
>>>>> http://supercolony.gluster.org/mailman/listinfo/gluster-users


