[Gluster-users] Dir split brain resolution
Alex K
rightkicktech at gmail.com
Mon Feb 5 13:42:02 UTC 2018
Hi Karthik,
I tried to delete one file on one node, and that is probably the reason.
After several deletes it seems I removed some files that I shouldn't have,
and the oVirt engine hosted on this volume was no longer able to start.
Now I am setting up the engine from scratch...
In case I see this kind of split-brain again, I will get back before I start
deleting :)
Alex
On Mon, Feb 5, 2018 at 2:34 PM, Karthik Subrahmanya <ksubrahm at redhat.com>
wrote:
> Hi,
>
> I am wondering why the other brick is not showing any entry in the heal
> info split-brain output.
> Can you give the output of stat and getfattr -d -m . -e hex
> <file-path-on-brick> from both bricks?
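>
> For example, a sketch of those commands on gluster0 (the gfid-based path
> under .glusterfs is an assumption based on the standard brick layout and
> the gfid reported by heal info; use the real file path if you know it):
>
> stat /gluster/engine/brick/.glusterfs/bb/67/bb675ea6-0622-4852-9e59-27a4c93ac0f8
> getfattr -d -m . -e hex /gluster/engine/brick/.glusterfs/bb/67/bb675ea6-0622-4852-9e59-27a4c93ac0f8
>
> Then run the same two commands against the gluster1 brick.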
>
> Regards,
> Karthik
>
> On Mon, Feb 5, 2018 at 5:03 PM, Alex K <rightkicktech at gmail.com> wrote:
>
>> After stopping/starting the volume I have:
>>
>> gluster volume heal engine info split-brain
>> Brick gluster0:/gluster/engine/brick
>> <gfid:bb675ea6-0622-4852-9e59-27a4c93ac0f8>
>> Status: Connected
>> Number of entries in split-brain: 1
>>
>> Brick gluster1:/gluster/engine/brick
>> Status: Connected
>> Number of entries in split-brain: 0
>>
>> gluster volume heal engine split-brain latest-mtime
>> gfid:bb675ea6-0622-4852-9e59-27a4c93ac0f8
>> Healing gfid:bb675ea6-0622-4852-9e59-27a4c93ac0f8 failed:Operation not
>> permitted.
>> Volume heal failed.
>>
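>> One alternative I could try (just a sketch, assuming gluster1 still holds
>> the good copy) would be to pick a source brick explicitly:
>>
>> gluster volume heal engine split-brain source-brick \
>>     gluster1:/gluster/engine/brick \
>>     gfid:bb675ea6-0622-4852-9e59-27a4c93ac0f8
>>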
>> I would appreciate any help.
>> Thanx,
>> Alex
>>
>> On Mon, Feb 5, 2018 at 1:11 PM, Alex K <rightkicktech at gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> I have a split-brain issue; the situation is as follows:
>>>
>>> gluster volume heal engine info split-brain
>>>
>>> Brick gluster0:/gluster/engine/brick
>>> /ad1f38d7-36df-4cee-a092-ab0ce1f98ce9/ha_agent
>>> Status: Connected
>>> Number of entries in split-brain: 1
>>>
>>> Brick gluster1:/gluster/engine/brick
>>> Status: Connected
>>> Number of entries in split-brain: 0
>>>
>>> cd ha_agent/
>>> [root@v0 ha_agent]# ls -al
>>> ls: cannot access hosted-engine.metadata: Input/output error
>>> ls: cannot access hosted-engine.lockspace: Input/output error
>>> total 8
>>> drwxrwx--- 2 vdsm kvm 4096 Feb 5 10:52 .
>>> drwxr-xr-x 5 vdsm kvm 4096 Jan 18 01:17 ..
>>> l????????? ? ? ? ? ? hosted-engine.lockspace
>>> l????????? ? ? ? ? ? hosted-engine.metadata
>>>
>>> I tried to delete the directory from one node, but it gives an
>>> Input/output error.
>>> How would one proceed to resolve this?
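>>>
>>> Would comparing the gfid of that directory on the two bricks help? E.g.
>>> (just a sketch; the full brick path is assumed from the output above):
>>>
>>> getfattr -n trusted.gfid -e hex \
>>>     /gluster/engine/brick/ad1f38d7-36df-4cee-a092-ab0ce1f98ce9/ha_agent
>>>
>>> (run on both gluster0 and gluster1 and compare; the two entries inside
>>> could be checked the same way)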
>>>
>>> Thanx,
>>> Alex
>>>
>>>
>>