[Gluster-users] 3.7.13, index healing broken?
Dmitry Melekhov
dm at belkam.com
Wed Jul 13 04:11:49 UTC 2016
On 13.07.2016 07:46, Pranith Kumar Karampuri wrote:
>
>
> On Tue, Jul 12, 2016 at 9:27 PM, Dmitry Melekhov <dm at belkam.com> wrote:
>
>
>
> On 12.07.2016 17:39, Pranith Kumar Karampuri wrote:
>> Wow, what are the steps to recreate the problem?
>
> Just set the file length to zero directly on the brick; always reproducible.
>
>
> Changing things on the brick directly, i.e. not through a gluster volume
> mount, is not something you want to do. In the worst case (I have seen
> this only once in the last 5 years, though) it can even lead to data
> loss. So please be aware of it.
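>
> If you do need to empty the file, a safer sketch is to do it through a
> client mount instead (the /mnt/pool mount point here is an assumption,
> not from this thread):
>
> # on a client, against the FUSE mount rather than the brick
> truncate -s 0 /mnt/pool/gstatus-0.64-3.el7.x86_64.rpm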
Data replication with gluster is a way to avoid data loss, right? Or not?
If not, why use gluster then?
I thought that gluster self-healing would heal, or at least report, missing
files or files with the wrong length, i.e. corruption visible just by
reading the brick's directory,
not by comparing data the way bit-rot detection does...
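
(Indeed the mismatch is visible with a plain stat on each brick; a crude
manual check, assuming ssh access between the nodes and using the
hostnames and brick path from the output below:)

for h in father son spirit; do
    # file size as each brick sees it; 0 on the truncated replica
    ssh $h stat -c '%s' /wall/pool/brick/gstatus-0.64-3.el7.x86_64.rpm
done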
If this is not a bug, then gluster is not what I expected :-(
Thank you!
>
>
>
>>
>> On Tue, Jul 12, 2016 at 3:09 PM, Dmitry Melekhov <dm at belkam.com> wrote:
>>
>> On 12.07.2016 13:33, Pranith Kumar Karampuri wrote:
>>> What was "gluster volume heal <volname> info" showing when
>>> you saw this issue?
>>
>> just reproduced:
>>
>>
>> [root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
>>
>> [root@father brick]# gluster volume heal pool
>> Launching heal operation to perform index self heal on volume pool has been successful
>> Use heal info commands to check status
>> [root@father brick]# gluster volume heal pool info
>> Brick father:/wall/pool/brick
>> Status: Connected
>> Number of entries: 0
>>
>> Brick son:/wall/pool/brick
>> Status: Connected
>> Number of entries: 0
>>
>> Brick spirit:/wall/pool/brick
>> Status: Connected
>> Number of entries: 0
>>
>> [root@father brick]#
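>>
>> (For context: per the self-heal-daemon doc linked below, index heal only
>> crawls the entries under .glusterfs/indices/xattrop on each brick, and
>> those entries are created by writes that go through gluster; a truncate
>> done directly on the brick never adds one, which would explain heal info
>> staying empty. A quick way to look, path relative to the brick above:)
>>
>> [root@father brick]# ls .glusterfs/indices/xattrop/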
>>
>>
>>
>>>
>>> On Mon, Jul 11, 2016 at 3:28 PM, Dmitry Melekhov <dm at belkam.com> wrote:
>>>
>>> Hello!
>>>
>>> 3.7.13, 3 bricks volume.
>>>
>>> inside one of bricks:
>>>
>>> [root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
>>> -rw-r--r-- 2 root root 52268 июл 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
>>> [root@father brick]#
>>>
>>>
>>> [root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
>>> [root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
>>> -rw-r--r-- 2 root root 0 июл 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
>>> [root@father brick]#
>>>
>>> So now the file has 0 length.
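>>>
>>> (One way to check whether the replicas mark each other dirty is to
>>> dump the trusted.afr changelog xattrs on the brick; a sketch, assuming
>>> getfattr from the attr package is installed. Since the truncate
>>> bypassed gluster, these will most likely still look clean here:)
>>>
>>> [root@father brick]# getfattr -d -m . -e hex gstatus-0.64-3.el7.x86_64.rpm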
>>>
>>> try to heal:
>>>
>>>
>>>
>>> [root@father brick]# gluster volume heal pool
>>> Launching heal operation to perform index self heal on volume pool has been successful
>>> Use heal info commands to check status
>>> [root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
>>> -rw-r--r-- 2 root root 0 июл 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
>>> [root@father brick]#
>>>
>>>
>>> nothing!
>>>
>>> [root@father brick]# gluster volume heal pool full
>>> Launching heal operation to perform full self heal on volume pool has been successful
>>> Use heal info commands to check status
>>> [root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
>>> -rw-r--r-- 2 root root 52268 июл 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
>>> [root@father brick]#
>>>
>>>
>>> full heal is OK.
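>>>
>>> (Full heal crawls the whole volume tree from the root instead of just
>>> the xattrop index, which would explain why it catches this. One way to
>>> watch its progress; a sketch, assuming the heal statistics subcommand
>>> is available in this build:)
>>>
>>> [root@father brick]# gluster volume heal pool statistics heal-count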
>>>
>>> But self-heal is doing an index heal, according to
>>>
>>> http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/afr-self-heal-daemon/
>>>
>>> Is this a bug?
>>>
>>>
>>> As far as I remember, it worked in 3.7.10...
>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Pranith
>>
>>
>>
>>
>> --
>> Pranith
>
>
>
>
> --
> Pranith