[Gluster-users] broken gluster config

Diego Remolina dijuremo at gmail.com
Thu May 10 09:31:27 UTC 2018


https://docs.gluster.org/en/v3/Troubleshooting/resolving-splitbrain/

Hopefully the link above will help you fix it.
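
In short, the CLI route on that page boils down to picking a "winner" for each
split-brain entry and letting the self-heal daemon overwrite the other copies.
Roughly (a sketch from the docs, so double check the exact syntax against your
gluster version):

  # keep the copy with the newest mtime
  gluster volume heal gv0 split-brain latest-mtime <FILE>

  # or keep the bigger copy
  gluster volume heal gv0 split-brain bigger-file <FILE>

  # or explicitly pick one brick's copy as the source
  gluster volume heal gv0 split-brain source-brick glusterp2:/bricks/brick1/gv0 <FILE>

where <FILE> is, per the docs, either the path as seen from the root of the
volume or the gfid-string form of the entry shown in heal info.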

Diego

On Wed, May 9, 2018, 21:53 Thing <thing.thing at gmail.com> wrote:

> I'm trying to read up on this, but I can't understand what is wrong.
>
> [root@glusterp1 gv0]# gluster volume heal gv0 info
> Brick glusterp1:/bricks/brick1/gv0
> <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
>
> Status: Connected
> Number of entries: 1
>
> Brick glusterp2:/bricks/brick1/gv0
> <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
>
> Status: Connected
> Number of entries: 1
>
> Brick glusterp3:/bricks/brick1/gv0
> <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
>
> Status: Connected
> Number of entries: 1
>
> [root@glusterp1 gv0]# getfattr -d -m . -e hex /bricks/brick1/gv0
> getfattr: Removing leading '/' from absolute path names
> # file: bricks/brick1/gv0
> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> trusted.gfid=0x00000000000000000000000000000001
> trusted.glusterfs.dht=0x000000010000000000000000ffffffff
> trusted.glusterfs.volume-id=0xcfceb3535f0e4cf18b533ccfb1f091d3
>
> [root@glusterp1 gv0]# gluster volume info vol
> Volume vol does not exist
> [root@glusterp1 gv0]# gluster volume info gv0
>
> Volume Name: gv0
> Type: Replicate
> Volume ID: cfceb353-5f0e-4cf1-8b53-3ccfb1f091d3
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: glusterp1:/bricks/brick1/gv0
> Brick2: glusterp2:/bricks/brick1/gv0
> Brick3: glusterp3:/bricks/brick1/gv0
> Options Reconfigured:
> performance.client-io-threads: off
> nfs.disable: on
> transport.address-family: inet
> [root@glusterp1 gv0]#
>
>
> ================
>
> [root@glusterp2 gv0]# getfattr -d -m . -e hex /bricks/brick1/gv0
> getfattr: Removing leading '/' from absolute path names
> # file: bricks/brick1/gv0
> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> trusted.gfid=0x00000000000000000000000000000001
> trusted.glusterfs.dht=0x000000010000000000000000ffffffff
> trusted.glusterfs.volume-id=0xcfceb3535f0e4cf18b533ccfb1f091d3
>
> [root@glusterp2 gv0]#
>
> ================
>
> [root@glusterp3 isos]# getfattr -d -m . -e hex /bricks/brick1/gv0
> getfattr: Removing leading '/' from absolute path names
> # file: bricks/brick1/gv0
> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> trusted.gfid=0x00000000000000000000000000000001
> trusted.glusterfs.dht=0x000000010000000000000000ffffffff
> trusted.glusterfs.volume-id=0xcfceb3535f0e4cf18b533ccfb1f091d3
>
> [root@glusterp3 isos]#
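>
> Presumably the split-brain entry itself needs to be inspected via its gfid
> path under .glusterfs on each brick (not just the brick root as above) to
> see the trusted.afr.* changelog xattrs, something like this if I have the
> layout right:
>
> getfattr -d -m . -e hex \
>     /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693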
>
>
>
>
>
>
> On 10 May 2018 at 13:22, Thing <thing.thing at gmail.com> wrote:
>
>> Whatever repair happened has now finished, but I still have this:
>>
>> I can't find anything so far telling me how to fix it. Looking at
>>
>>
>> http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/heal-info-and-split-brain-resolution/
>>
>> I can't determine which file or directory (gv0 itself?) is actually the issue.
>>
>> [root@glusterp1 gv0]# gluster volume heal gv0 info split-brain
>> Brick glusterp1:/bricks/brick1/gv0
>> <gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
>> Status: Connected
>> Number of entries in split-brain: 1
>>
>> Brick glusterp2:/bricks/brick1/gv0
>> <gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
>> Status: Connected
>> Number of entries in split-brain: 1
>>
>> Brick glusterp3:/bricks/brick1/gv0
>> <gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
>> Status: Connected
>> Number of entries in split-brain: 1
>>
>> [root@glusterp1 gv0]#
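>>
>> If I understand the layout right, that gfid should map back to a real path
>> via its hard link under .glusterfs, so something like this on one of the
>> bricks ought to name the actual file (untested):
>>
>> find /bricks/brick1/gv0 -samefile \
>>     /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693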
>>
>>
>> On 10 May 2018 at 12:22, Thing <thing.thing at gmail.com> wrote:
>>
>>> Also, I have this "split-brain"?
>>>
>>> [root@glusterp1 gv0]# gluster volume heal gv0 info
>>> Brick glusterp1:/bricks/brick1/gv0
>>> <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
>>>
>>> Status: Connected
>>> Number of entries: 1
>>>
>>> Brick glusterp2:/bricks/brick1/gv0
>>> <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
>>>
>>> /glusterp1/images/centos-server-001.qcow2
>>> /glusterp1/images/kubernetes-template.qcow2
>>> /glusterp1/images/kworker01.qcow2
>>> /glusterp1/images/kworker02.qcow2
>>> Status: Connected
>>> Number of entries: 5
>>>
>>> Brick glusterp3:/bricks/brick1/gv0
>>> <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
>>>
>>> /glusterp1/images/centos-server-001.qcow2
>>> /glusterp1/images/kubernetes-template.qcow2
>>> /glusterp1/images/kworker01.qcow2
>>> /glusterp1/images/kworker02.qcow2
>>> Status: Connected
>>> Number of entries: 5
>>>
>>> [root@glusterp1 gv0]#
>>>
>>> On 10 May 2018 at 12:20, Thing <thing.thing at gmail.com> wrote:
>>>
>>>> [root@glusterp1 gv0]# !737
>>>> gluster v status
>>>> Status of volume: gv0
>>>> Gluster process                             TCP Port  RDMA Port  Online  Pid
>>>> ------------------------------------------------------------------------------
>>>> Brick glusterp1:/bricks/brick1/gv0          49152     0          Y       5229
>>>> Brick glusterp2:/bricks/brick1/gv0          49152     0          Y       2054
>>>> Brick glusterp3:/bricks/brick1/gv0          49152     0          Y       2110
>>>> Self-heal Daemon on localhost               N/A       N/A        Y       5219
>>>> Self-heal Daemon on glusterp2               N/A       N/A        Y       1943
>>>> Self-heal Daemon on glusterp3               N/A       N/A        Y       2067
>>>>
>>>> Task Status of Volume gv0
>>>> ------------------------------------------------------------------------------
>>>> There are no active volume tasks
>>>>
>>>> [root@glusterp1 gv0]# ls -l glusterp1/images/
>>>> total 2877064
>>>> -rw-------. 2 root root 107390828544 May 10 12:18 centos-server-001.qcow2
>>>> -rw-r--r--. 2 root root            0 May  8 14:32 file1
>>>> -rw-r--r--. 2 root root            0 May  9 14:41 file1-1
>>>> -rw-------. 2 root root  85912715264 May 10 12:18 kubernetes-template.qcow2
>>>> -rw-------. 2 root root            0 May 10 12:08 kworker01.qcow2
>>>> -rw-------. 2 root root            0 May 10 12:08 kworker02.qcow2
>>>> [root@glusterp1 gv0]#
>>>>
>>>>
>>>> while,
>>>>
>>>> [root@glusterp2 gv0]# ls -l glusterp1/images/
>>>> total 11209084
>>>> -rw-------. 2 root root 107390828544 May  9 14:45 centos-server-001.qcow2
>>>> -rw-r--r--. 2 root root            0 May  8 14:32 file1
>>>> -rw-r--r--. 2 root root            0 May  9 14:41 file1-1
>>>> -rw-------. 2 root root  85912715264 May  9 15:59 kubernetes-template.qcow2
>>>> -rw-------. 2 root root   3792371712 May  9 16:15 kworker01.qcow2
>>>> -rw-------. 2 root root   3792371712 May 10 11:20 kworker02.qcow2
>>>> [root@glusterp2 gv0]#
>>>>
>>>> So some files have re-synced, but not the kworker machines, and network
>>>> activity has stopped.
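>>>>
>>>> I guess I can try nudging it along and watching progress with something
>>>> like this, if I have the commands right:
>>>>
>>>> gluster volume heal gv0 full
>>>> gluster volume heal gv0 statistics heal-count
>>>> gluster volume heal gv0 info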
>>>>
>>>>
>>>>
>>>> On 10 May 2018 at 12:05, Diego Remolina <dijuremo at gmail.com> wrote:
>>>>
>>>>> Show us output from: gluster v status
>>>>>
>>>>> It should be easy to fix. Stop the gluster daemon on that node, mount the
>>>>> brick, then start the gluster daemon again.
>>>>>
>>>>> Check: gluster v status
>>>>>
>>>>> Does it show the brick up?
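>>>>>
>>>>> Roughly, on glusterp1, something like the below. The mount point is a
>>>>> guess, so use whatever is in your fstab, and note that stopping glusterd
>>>>> alone does not stop the brick processes, so kill any leftover glusterfsd
>>>>> for that brick as well:
>>>>>
>>>>> systemctl stop glusterd
>>>>> mount /bricks/brick1
>>>>> systemctl start glusterd
>>>>> gluster v status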
>>>>>
>>>>> HTH,
>>>>>
>>>>> Diego
>>>>>
>>>>>
>>>>> On Wed, May 9, 2018, 20:01 Thing <thing.thing at gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I have 3 CentOS 7.4 machines set up as a 3-way replica (RAID 1 style).
>>>>>>
>>>>>> Due to an oopsie on my part, glusterp1's /bricks/brick1/gv0 didn't mount
>>>>>> on boot, and as a result it's empty.
>>>>>>
>>>>>> Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3
>>>>>> /bricks/brick1/gv0 as expected.
>>>>>>
>>>>>> Is there a way to get glusterp1's gv0 to sync off the other 2? There must
>>>>>> be, but I have looked at the gluster docs and I can't find anything about
>>>>>> repairing or resyncing.
>>>>>>
>>>>>> Where am I meant to look for such info?
>>>>>>
>>>>>> thanks
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users