[Gluster-users] 3.7.13, index healing broken?
Pranith Kumar Karampuri
pkarampu at redhat.com
Wed Jul 13 03:46:56 UTC 2016
On Tue, Jul 12, 2016 at 9:27 PM, Dmitry Melekhov <dm at belkam.com> wrote:
>
>
> 12.07.2016 17:39, Pranith Kumar Karampuri wrote:
>
> Wow, what are the steps to recreate the problem?
>
>
> Just set the file length to zero; it's always reproducible.
>
Changing things directly on the brick, i.e. not through the gluster volume
mount, is not something you want to do. In the worst case (I have seen this
only once in the last 5 years, though) it can lead to data loss as well. So
please be aware of it.
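
To make the difference concrete, a rough sketch (the mount point /mnt/pool
below is only an example, not something from your setup):

# Truncating the file directly on the brick bypasses glusterfs entirely, so
# no AFR changelog xattrs or index entries get updated, and index heal
# ("gluster volume heal pool") finds nothing to do:
[root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
[root@father brick]# getfattr -d -m trusted.afr -e hex gstatus-0.64-3.el7.x86_64.rpm

# The supported way is to change the file through a glusterfs mount of the
# volume, so all replicas are updated together and nothing needs healing:
[root@father ~]# mount -t glusterfs father:/pool /mnt/pool
[root@father ~]# > /mnt/pool/gstatus-0.64-3.el7.x86_64.rpm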
>
>
>
> On Tue, Jul 12, 2016 at 3:09 PM, Dmitry Melekhov <dm at belkam.com> wrote:
>
>> 12.07.2016 13:33, Pranith Kumar Karampuri wrote:
>>
>> What was "gluster volume heal <volname> info" showing when you saw this
>> issue?
>>
>>
>> Just reproduced:
>>
>>
>> [root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
>>
>> [root@father brick]# gluster volume heal pool
>> Launching heal operation to perform index self heal on volume pool has
>> been successful
>> Use heal info commands to check status
>> [root@father brick]# gluster volume heal pool info
>> Brick father:/wall/pool/brick
>> Status: Connected
>> Number of entries: 0
>>
>> Brick son:/wall/pool/brick
>> Status: Connected
>> Number of entries: 0
>>
>> Brick spirit:/wall/pool/brick
>> Status: Connected
>> Number of entries: 0
>>
>> [root@father brick]#
>>
>>
>>
>>
>> On Mon, Jul 11, 2016 at 3:28 PM, Dmitry Melekhov <dm at belkam.com> wrote:
>>
>>> Hello!
>>>
>>> 3.7.13, 3-brick volume.
>>>
>>> Inside one of the bricks:
>>>
>>> [root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
>>> -rw-r--r-- 2 root root 52268 Jul 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
>>> [root@father brick]#
>>>
>>>
>>> [root@father brick]# > gstatus-0.64-3.el7.x86_64.rpm
>>> [root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
>>> -rw-r--r-- 2 root root 0 Jul 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
>>> [root@father brick]#
>>>
>>> So now the file has 0 length.
>>>
>>> Trying to heal:
>>>
>>>
>>>
>>> [root@father brick]# gluster volume heal pool
>>> Launching heal operation to perform index self heal on volume pool has
>>> been successful
>>> Use heal info commands to check status
>>> [root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
>>> -rw-r--r-- 2 root root 0 Jul 11 13:54 gstatus-0.64-3.el7.x86_64.rpm
>>> [root@father brick]#
>>>
>>>
>>> nothing!
>>>
>>> [root@father brick]# gluster volume heal pool full
>>> Launching heal operation to perform full self heal on volume pool has
>>> been successful
>>> Use heal info commands to check status
>>> [root@father brick]# ls -l gstatus-0.64-3.el7.x86_64.rpm
>>> -rw-r--r-- 2 root root 52268 Jul 11 13:00 gstatus-0.64-3.el7.x86_64.rpm
>>> [root@father brick]#
>>>
>>>
>>> Full heal is OK.
>>>
>>> But self-heal does an index heal, according to
>>>
>>>
>>> http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Developer-guide/afr-self-heal-daemon/
>>>
>>> Is this a bug?
>>>
>>>
>>> As far as I remember, it worked in 3.7.10...
>>>
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>>
>> --
>> Pranith
>>
>>
>>
>
>
> --
> Pranith
>
>
>
--
Pranith