[Gluster-users] / - is in split-brain
Pablo Schandin
schandinp at gmail.com
Wed Mar 20 14:38:15 UTC 2019
Here is the output:
root at gluster-gu1:~# gluster volume info gv1
>
> Volume Name: gv1
> Type: Replicate
> Volume ID: 3bb5023c-93bb-433e-8b95-56cfca82b68a
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gluster-gu1.xcade.net:/mnt/gv_gu1/brick
> Brick2: gluster-gu2.xcade.net:/mnt/gv_gu1/brick
> Options Reconfigured:
> performance.client-io-threads: off
> transport.address-family: inet
> nfs.disable: on
> performance.readdir-ahead: on
> diagnostics.brick-log-level: WARNING
> diagnostics.client-log-level: WARNING
On Wed, 20 Mar 2019 at 00:16, Nithya Balachandran (<nbalacha at redhat.com>)
wrote:
> Hi,
>
> What is the output of gluster volume info?
>
> Thanks,
> Nithya
>
> On Wed, 20 Mar 2019 at 01:58, Pablo Schandin <schandinp at gmail.com> wrote:
>
>> Hello all!
>>
>> I had a volume with only a local brick running VMs and recently added a
>> second (remote) brick to the volume. After adding the brick, the heal
>> command reported the following:
>>
>> root at gluster-gu1:~# gluster volume heal gv1 info
>>> Brick gluster-gu1:/mnt/gv_gu1/brick
>>> / - Is in split-brain
>>> Status: Connected
>>> Number of entries: 1
>>> Brick gluster-gu2:/mnt/gv_gu1/brick
>>> Status: Connected
>>> Number of entries: 0
>>
>>
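Gluster can also report only the entries it considers to be in actual split-brain, which narrows the output above down to the problem entry; a minimal sketch, assuming the volume name gv1 from this thread:

```shell
# List only the entries the self-heal daemon flags as split-brain
# (as opposed to entries merely pending heal).
gluster volume heal gv1 info split-brain
```
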
>> All other files healed correctly. I noticed that the XFS filesystem of the
>> brick contains a directory named localadmin, but when I ls the gluster
>> volume mountpoint I get an error and a row of question marks:
>>
>> root at gluster-gu1:/var/lib/vmImages_gu1# ll
>>> ls: cannot access 'localadmin': No data available
>>> d????????? ? ? ? ? ? localadmin/
>>
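One way to see why the mount shows question marks is to compare the extended attributes of each brick's copy of the directory, since AFR tracks heal state and identity there; a sketch using the brick paths from this thread (run on each server, as root):

```shell
# Dump all trusted.* extended attributes in hex. A trusted.gfid value that
# differs between bricks, or non-zero trusted.afr.* changelog counters,
# shows which brick blames which.
getfattr -d -m . -e hex /mnt/gv_gu1/brick/localadmin

# The heal info output flags "/", so check the brick root as well.
getfattr -d -m . -e hex /mnt/gv_gu1/brick
```
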
>>
>> This happens on both servers that have the volume gv1 mounted: both see
>> the directory like that, while on the XFS brick
>> /mnt/gv_gu1/brick/localadmin is an accessible directory.
>>
>> root at gluster-gu1:/mnt/gv_gu1/brick/localadmin# ll
>>> total 4
>>> drwxr-xr-x 2 localadmin root 6 Mar 7 09:40 ./
>>> drwxr-xr-x 6 root root 4096 Mar 7 09:40 ../
>>
>>
>> When I added the second brick to the volume, this localadmin folder was
>> not replicated there, I imagine because of this strange behavior.
>>
>> Can someone help me with this?
>> Thanks!
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
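For the split-brain entry itself, gluster's CLI can be told which brick to treat as the source copy; a hedged sketch, assuming gluster-gu1's brick (the one that still holds the localadmin directory) is the good copy. Note this CLI path handles data/metadata split-brain; a gfid mismatch between the bricks may still require manual xattr surgery per the upstream split-brain documentation.

```shell
# Pick gluster-gu1's brick as the source for the entry "/"
# that heal info reported as split-brain.
gluster volume heal gv1 split-brain source-brick \
    gluster-gu1.xcade.net:/mnt/gv_gu1/brick /

# Re-check: the split-brain entry count should drop to zero on both bricks.
gluster volume heal gv1 info
```
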