[Gluster-users] Shards and heal question post move-brick

David Gossage dgossage at carouselchecks.com
Thu Jul 21 03:32:20 UTC 2016


Resolved the shards that wouldn't heal by stopping the VM (to be on the safe
side) and comparing shard sizes between the 3 nodes.  The copy on the new node
was always 0MB with an older date, so I deleted that file plus its
corresponding entry under .glusterfs.  Then I did a stat/ls on the disk image
over the gluster mount and waited for the heal to finish.

The split-brain entries for the .shard directory disappeared once that was
done as well.
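
For the archives, on the brick holding the stale copy the fix boiled down to
roughly this (shard name and mount path are placeholders, not the real ones):

# compare size/mtime of the same shard on each brick
ls -l /gluster1/BRICK1/1/.shard/<image-gfid>.<N>

# on the brick with the stale 0MB copy, note its gfid, then remove both the
# shard file and its hard link under .glusterfs/<first-2-hex>/<next-2-hex>/
getfattr -n trusted.gfid -e hex /gluster1/BRICK1/1/.shard/<image-gfid>.<N>
rm /gluster1/BRICK1/1/.shard/<image-gfid>.<N>
rm /gluster1/BRICK1/1/.glusterfs/<aa>/<bb>/<full-gfid>

# then stat the disk image from a fuse mount to kick off the heal
stat /mnt/<gluster-mount>/<path-to-disk-image>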

Guess the next adventure is troubleshooting whether I still have issues with
3.7.12/13.

*David Gossage*
*Carousel Checks Inc. | System Administrator*
*Office* 708.613.2284

On Wed, Jul 20, 2016 at 8:49 AM, David Gossage <dgossage at carouselchecks.com>
wrote:

> Another possibility: I've read a few email chains saying a full heal
> should be run from the node with the highest UUID, which after checking
> would have been my new, empty node.  Yet I ran it from the first node.
> Could that have caused the issues?  If so, can I just kick off a new full
> heal from the new node, or do I need to kill the old heal process somehow?
> Or at this point would it be too late?
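>
> (For reference, what I compared, and what I'd presumably re-run if a second
> full heal is safe:)
>
> # each peer's UUID
> gluster system:: uuid get        # or: grep UUID /var/lib/glusterd/glusterd.info
> # kick off the full heal again
> gluster volume heal GLUSTER1 full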
>
> *David Gossage*
> *Carousel Checks Inc. | System Administrator*
> *Office* 708.613.2284
>
> On Wed, Jul 20, 2016 at 4:53 AM, David Gossage <
> dgossage at carouselchecks.com> wrote:
>
>> If I read this correctly
>>
>> [2016-07-20 09:06:39.072114] E [MSGID: 108008]
>> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
>> 0-GLUSTER1-replicate-0: Gfid mismatch detected for
>> <be318638-e8a0-4c6d-977d-7a937aa84806/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217>,
>> bc2b3e90-8efe-4d75-acda-9c4cc9cf2800 on GLUSTER1-client-2 and
>> a5cf59d7-c38a-4612-835d-3a294e70084d on GLUSTER1-client-0. Skipping
>> conservative merge on the file.
>>
>> the mismatch is between brick1 (ccgl1, client-0) and the new brick (ccgl4,
>> client-2).
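>>
>> (To confirm which brick each client-# refers to, the fuse volfile should
>> spell it out, assuming the usual /var/lib/glusterd layout:)
>>
>> grep -E 'volume .*-client-|remote-host|remote-subvolume' \
>>     /var/lib/glusterd/vols/GLUSTER1/*fuse.vol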
>>
>> Can I just tell it to use a specific brick as the correct one?  I'd expect
>> client-0/ccgl1, which had the correct data before I kicked off the
>> post-move-brick heal, to still be the good copy.  Or will this still
>> involve powering off the VM and resetting the
>> trusted.afr.GLUSTER1-client-# values?  That method leaves me puzzled
>> unless I am looking at old docs, as I would expect to see 3 lines of
>> trusted.afr.GLUSTER1-client-#.
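>>
>> (There is also the source-brick CLI, something along these lines, though
>> I'm not sure it handles gfid mismatches as opposed to plain data/metadata
>> split-brain:)
>>
>> gluster volume heal GLUSTER1 split-brain source-brick \
>>     ccgl1.gl.local:/gluster1/BRICK1/1 \
>>     /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217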
>>
>> *David Gossage*
>> *Carousel Checks Inc. | System Administrator*
>> *Office* 708.613.2284
>>
>> On Wed, Jul 20, 2016 at 4:29 AM, David Gossage <
>> dgossage at carouselchecks.com> wrote:
>>
>>> Picking one shard at random and trying to follow
>>> https://github.com/gluster/glusterfs/blob/master/doc/debugging/split-brain.md
>>> to see if I can figure out the procedure.
>>> Shouldn't each shard have a trusted.afr.GLUSTER1-client-# for each of the
>>> 3 bricks?  The first 2 servers have just 1, and the 3rd, new server has
>>> 2, so I'm sort of at a loss how to interpret that at the moment.
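>>>
>>> (If I'm reading the afr docs right, the hex value is three 32-bit
>>> counters, the pending data, metadata and entry ops blamed on that client,
>>> and a missing trusted.afr.*-client-# means the same as all zeros.  So for
>>> example:)
>>>
>>> getfattr -n trusted.afr.GLUSTER1-client-2 -e hex \
>>>     /gluster1/BRICK1/1/.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>> # -> 0x 00001be3 00000001 00000000
>>> #       data     metadata entry    (pending ops blamed on client-2)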
>>>
>>> [root at ccgl1 ~]# getfattr -d -m . -e hex
>>> /gluster1/BRICK1/1/.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>> getfattr: Removing leading '/' from absolute path names
>>> # file: gluster1/BRICK1/1/.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>>
>>> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
>>> trusted.afr.GLUSTER1-client-2=0x00001be30000000100000000
>>> trusted.afr.dirty=0x000000000000000000000000
>>> trusted.bit-rot.version=0x020000000000000057813fe9000a51b4
>>> trusted.gfid=0xa5cf59d7c38a4612835d3a294e70084d
>>>
>>> [root at ccgl2 ~]# getfattr -d -m . -e hex
>>> /gluster1/BRICK1/1/.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>> getfattr: Removing leading '/' from absolute path names
>>> # file: gluster1/BRICK1/1/.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>>
>>> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
>>> trusted.afr.GLUSTER1-client-2=0x00001be30000000100000000
>>> trusted.afr.dirty=0x000000000000000000000000
>>> trusted.bit-rot.version=0x020000000000000057813fe9000b206f
>>> trusted.gfid=0xa5cf59d7c38a4612835d3a294e70084d
>>>
>>> [root at ccgl4 ~]# getfattr -d -m . -e hex
>>> /gluster1/BRICK1/1/.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>> getfattr: Removing leading '/' from absolute path names
>>> # file: gluster1/BRICK1/1/.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>>
>>> security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
>>> trusted.afr.GLUSTER1-client-0=0x000000010000000100000000
>>> trusted.afr.GLUSTER1-client-1=0x000000010000000100000000
>>> trusted.gfid=0xbc2b3e908efe4d75acda9c4cc9cf2800
>>>
>>>
>>>
>>> *David Gossage*
>>> *Carousel Checks Inc. | System Administrator*
>>> *Office* 708.613.2284
>>>
>>> On Wed, Jul 20, 2016 at 4:13 AM, David Gossage <
>>> dgossage at carouselchecks.com> wrote:
>>>
>>>> Plenty of these in glustershd.log
>>>>
>>>> [2016-07-20 09:06:39.072114] E [MSGID: 108008]
>>>> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
>>>> 0-GLUSTER1-replicate-0: Gfid mismatch detected for
>>>> <be318638-e8a0-4c6d-977d-7a937aa84806/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217>,
>>>> bc2b3e90-8efe-4d75-acda-9c4cc9cf2800 on GLUSTER1-client-2 and
>>>> a5cf59d7-c38a-4612-835d-3a294e70084d on GLUSTER1-client-0. Skipping
>>>> conservative merge on the file.
>>>> [2016-07-20 09:06:41.546543] E [MSGID: 108008]
>>>> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
>>>> 0-GLUSTER1-replicate-0: Gfid mismatch detected for
>>>> <be318638-e8a0-4c6d-977d-7a937aa84806/18843fb4-e31c-4fc3-b519-cc6e5e947813.192>,
>>>> 45ac549c-25e1-4842-b9ed-ca3205101465 on GLUSTER1-client-2 and
>>>> 4c44b432-0b4d-4304-b030-e7a35d0dafc3 on GLUSTER1-client-0. Skipping
>>>> conservative merge on the file.
>>>> [2016-07-20 09:06:42.484537] E [MSGID: 108008]
>>>> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
>>>> 0-GLUSTER1-replicate-0: Gfid mismatch detected for
>>>> <be318638-e8a0-4c6d-977d-7a937aa84806/91e12c0a-2557-4410-bf7d-834604f221f0.95>,
>>>> dea6f79c-56e2-4831-9542-57f034ad2afb on GLUSTER1-client-2 and
>>>> 6626519c-67f6-4e11-92a9-bf3c1b67cafd on GLUSTER1-client-0. Skipping
>>>> conservative merge on the file.
>>>> [2016-07-20 09:06:43.799848] E [MSGID: 108008]
>>>> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
>>>> 0-GLUSTER1-replicate-0: Gfid mismatch detected for
>>>> <be318638-e8a0-4c6d-977d-7a937aa84806/d5a328be-03d0-42f7-a443-248290849e7d.48>,
>>>> 844384f5-4443-445f-b4ea-b0641d1c9fae on GLUSTER1-client-2 and
>>>> 97ac4b05-ac40-411d-981b-092556021aad on GLUSTER1-client-0. Skipping
>>>> conservative merge on the file.
>>>> [2016-07-20 09:06:44.572868] E [MSGID: 108008]
>>>> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
>>>> 0-GLUSTER1-replicate-0: Gfid mismatch detected for
>>>> <be318638-e8a0-4c6d-977d-7a937aa84806/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.10>,
>>>> 59894562-d896-40dd-aa36-e612e74bac43 on GLUSTER1-client-2 and
>>>> bcb2ab91-3bf6-477e-89db-66a69fac650e on GLUSTER1-client-0. Skipping
>>>> conservative merge on the file.
>>>> [2016-07-20 09:06:47.279036] E [MSGID: 108008]
>>>> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
>>>> 0-GLUSTER1-replicate-0: Gfid mismatch detected for
>>>> <be318638-e8a0-4c6d-977d-7a937aa84806/585c8361-f6a9-48d4-9295-260ec657de1e.48>,
>>>> 6fcab9a8-add1-4e05-affb-612b96457351 on GLUSTER1-client-2 and
>>>> 0675dde6-e978-4d9d-aae9-eb8e3d11596e on GLUSTER1-client-0. Skipping
>>>> conservative merge on the file.
>>>> [2016-07-20 09:06:47.706885] E [MSGID: 108008]
>>>> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
>>>> 0-GLUSTER1-replicate-0: Gfid mismatch detected for
>>>> <be318638-e8a0-4c6d-977d-7a937aa84806/18843fb4-e31c-4fc3-b519-cc6e5e947813.226>,
>>>> 4314c4ad-d2a6-4352-b7f1-37cd3381fdf7 on GLUSTER1-client-2 and
>>>> 5540f9cd-0f91-44e7-bfaa-74870a908030 on GLUSTER1-client-0. Skipping
>>>> conservative merge on the file.
>>>>
>>>> Also, as opposed to earlier, I now see a result in split-brain info:
>>>>
>>>> Brick ccgl1.gl.local:/gluster1/BRICK1/1
>>>> /.shard
>>>> Status: Connected
>>>> Number of entries in split-brain: 1
>>>>
>>>> Brick ccgl2.gl.local:/gluster1/BRICK1/1
>>>> /.shard
>>>> Status: Connected
>>>> Number of entries in split-brain: 1
>>>>
>>>> Brick ccgl4.gl.local:/gluster1/BRICK1/1
>>>> Status: Connected
>>>> Number of entries in split-brain: 0
>>>>
>>>>
>>>>
>>>>
>>>> *David Gossage*
>>>> *Carousel Checks Inc. | System Administrator*
>>>> *Office* 708.613.2284
>>>>
>>>> On Wed, Jul 20, 2016 at 1:29 AM, David Gossage <
>>>> dgossage at carouselchecks.com> wrote:
>>>>
>>>>> So I have enabled sharding on 3.7.11, moved all VM images off and back
>>>>> on, and everything seemed fine, no issues.
>>>>>
>>>>> Volume Name: GLUSTER1
>>>>> Type: Replicate
>>>>> Volume ID: 167b8e57-28c3-447a-95cc-8410cbdf3f7f
>>>>> Status: Started
>>>>> Number of Bricks: 1 x 3 = 3
>>>>> Transport-type: tcp
>>>>> Bricks:
>>>>> Brick1: ccgl1.gl.local:/gluster1/BRICK1/1
>>>>> Brick2: ccgl2.gl.local:/gluster1/BRICK1/1
>>>>> Brick3: ccgl4.gl.local:/gluster1/BRICK1/1
>>>>> Options Reconfigured:
>>>>> nfs.enable-ino32: off
>>>>> nfs.addr-namelookup: off
>>>>> nfs.disable: on
>>>>> performance.strict-write-ordering: off
>>>>> cluster.background-self-heal-count: 16
>>>>> cluster.self-heal-window-size: 1024
>>>>> server.allow-insecure: on
>>>>> cluster.server-quorum-type: server
>>>>> cluster.quorum-type: auto
>>>>> network.remote-dio: enable
>>>>> cluster.eager-lock: enable
>>>>> performance.stat-prefetch: on
>>>>> performance.io-cache: off
>>>>> performance.read-ahead: off
>>>>> performance.quick-read: off
>>>>> storage.owner-gid: 36
>>>>> storage.owner-uid: 36
>>>>> performance.readdir-ahead: on
>>>>> features.shard: on
>>>>> features.shard-block-size: 64MB
>>>>> diagnostics.brick-log-level: WARNING
>>>>>
>>>>>
>>>>> I added a new server (ccgl4) to replace one whose NIC died.  Probed it,
>>>>> moved the brick, then kicked off a heal.
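>>>>>
>>>>> (Roughly the sequence, with the dead node's hostname replaced by a
>>>>> placeholder:)
>>>>>
>>>>> gluster peer probe ccgl4.gl.local
>>>>> gluster volume replace-brick GLUSTER1 \
>>>>>     <old-node>:/gluster1/BRICK1/1 ccgl4.gl.local:/gluster1/BRICK1/1 \
>>>>>     commit force
>>>>> gluster volume heal GLUSTER1 full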
>>>>>
>>>>> When I did so on the smaller volume that just hosts my hosted engine,
>>>>> it completed so fast that the engine didn't even hiccup.
>>>>>
>>>>> When I did the same procedure on the larger volume, all VMs paused.
>>>>> I'm guessing just too many shards healing at once, so maybe no biggie.
>>>>> I let the heal go and went to bed for a few hours while it healed 600GB
>>>>> of shards.
>>>>>
>>>>> This morning it looks like I may have some split-brain heals going on.
>>>>> All VMs started just fine after I unpaused/restarted them as needed.
>>>>> So far they have been up, services seem ok, nothing seems read-only,
>>>>> etc.
>>>>>
>>>>> But I have shards that still seem to be healing after a while.
>>>>> split-brain info currently shows 0 entries on all 3 nodes (1 example
>>>>> below):
>>>>> [root at ccgl1 ~]# gluster volume heal GLUSTER1 info split-brain
>>>>> Brick ccgl1.gl.local:/gluster1/BRICK1/1
>>>>> Status: Connected
>>>>> Number of entries in split-brain: 0
>>>>>
>>>>> Brick ccgl2.gl.local:/gluster1/BRICK1/1
>>>>> Status: Connected
>>>>> Number of entries in split-brain: 0
>>>>>
>>>>> Brick ccgl4.gl.local:/gluster1/BRICK1/1
>>>>> Status: Connected
>>>>> Number of entries in split-brain: 0
>>>>>
>>>>>
>>>>> However, I get different results running heal info from each node (and
>>>>> sorry for the copy/paste storm incoming):
>>>>>
>>>>> [root at ccgl1 ~]# gluster volume heal GLUSTER1 info
>>>>> Brick ccgl1.gl.local:/gluster1/BRICK1/1
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.201
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.49
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.209
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.48
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.1
>>>>> /.shard/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.10
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.47
>>>>> /.shard/5b708220-7e27-4a27-a2ac-98a6ae7c693d.226
>>>>> /.shard/f9a7f3c5-4c13-4020-b560-1f4f7b1e3c42.47
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.226
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.192
>>>>> /.shard - Possibly undergoing heal
>>>>>
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.274
>>>>> /.shard/91e12c0a-2557-4410-bf7d-834604f221f0.95
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.218
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.32
>>>>> /.shard/4c7d44fc-a0c1-413b-8dc4-2abbbe1d4d4f.49
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.48
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.49
>>>>> /.shard - Possibly undergoing heal
>>>>>
>>>>> Status: Connected
>>>>> Number of entries: 21
>>>>>
>>>>> Brick ccgl2.gl.local:/gluster1/BRICK1/1
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.209
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.49
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.201
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.48
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.274
>>>>> /.shard - Possibly undergoing heal
>>>>>
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.192
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.226
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.47
>>>>> /.shard/f9a7f3c5-4c13-4020-b560-1f4f7b1e3c42.47
>>>>> /.shard/5b708220-7e27-4a27-a2ac-98a6ae7c693d.226
>>>>> /.shard/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.10
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.1
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.49
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.48
>>>>> /.shard/4c7d44fc-a0c1-413b-8dc4-2abbbe1d4d4f.49
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.32
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.218
>>>>> /.shard/91e12c0a-2557-4410-bf7d-834604f221f0.95
>>>>> /.shard - Possibly undergoing heal
>>>>>
>>>>> Status: Connected
>>>>> Number of entries: 21
>>>>>
>>>>> Brick ccgl4.gl.local:/gluster1/BRICK1/1
>>>>> /.shard/4c7d44fc-a0c1-413b-8dc4-2abbbe1d4d4f.49
>>>>> /.shard/91e12c0a-2557-4410-bf7d-834604f221f0.95
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.274
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.48
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.1
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.49
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.49
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.192
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.47
>>>>> /.shard/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.10
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.201
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.209
>>>>> /.shard/f9a7f3c5-4c13-4020-b560-1f4f7b1e3c42.47
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.48
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.226
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>>>> /.shard/5b708220-7e27-4a27-a2ac-98a6ae7c693d.226
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.218
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.32
>>>>> Status: Connected
>>>>> Number of entries: 19
>>>>>
>>>>> [root at ccgl2 ~]# gluster volume heal GLUSTER1 info
>>>>> Brick ccgl1.gl.local:/gluster1/BRICK1/1
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.201
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.49
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.209
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.48
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.1
>>>>> /.shard/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.10
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.47
>>>>> /.shard/5b708220-7e27-4a27-a2ac-98a6ae7c693d.226
>>>>> /.shard/f9a7f3c5-4c13-4020-b560-1f4f7b1e3c42.47
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.226
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.192
>>>>> /.shard - Possibly undergoing heal
>>>>>
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.274
>>>>> /.shard/91e12c0a-2557-4410-bf7d-834604f221f0.95
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.218
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.32
>>>>> /.shard/4c7d44fc-a0c1-413b-8dc4-2abbbe1d4d4f.49
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.48
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.49
>>>>> /.shard/1cec1185-ad25-414b-9ae3-c17b5ef0d064.3
>>>>> /.shard - Possibly undergoing heal
>>>>>
>>>>> Status: Connected
>>>>> Number of entries: 22
>>>>>
>>>>> Brick ccgl2.gl.local:/gluster1/BRICK1/1
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.209
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.49
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.201
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.48
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.274
>>>>> /.shard - Possibly undergoing heal
>>>>>
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.192
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.226
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.47
>>>>> /.shard/f9a7f3c5-4c13-4020-b560-1f4f7b1e3c42.47
>>>>> /.shard/5b708220-7e27-4a27-a2ac-98a6ae7c693d.226
>>>>> /.shard/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.10
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.1
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.49
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.48
>>>>> /.shard/4c7d44fc-a0c1-413b-8dc4-2abbbe1d4d4f.49
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.32
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.218
>>>>> /.shard/91e12c0a-2557-4410-bf7d-834604f221f0.95
>>>>> /.shard - Possibly undergoing heal
>>>>>
>>>>> /.shard/900421ff-b10d-404a-bbe7-173af11a69dd.377
>>>>> /.shard/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.248
>>>>> Status: Connected
>>>>> Number of entries: 23
>>>>>
>>>>> Brick ccgl4.gl.local:/gluster1/BRICK1/1
>>>>> /.shard/4c7d44fc-a0c1-413b-8dc4-2abbbe1d4d4f.49
>>>>> /.shard/91e12c0a-2557-4410-bf7d-834604f221f0.95
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.274
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.48
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.1
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.49
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.49
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.192
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.47
>>>>> /.shard/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.10
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.201
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.209
>>>>> /.shard/f9a7f3c5-4c13-4020-b560-1f4f7b1e3c42.47
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.48
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.226
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>>>> /.shard/5b708220-7e27-4a27-a2ac-98a6ae7c693d.226
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.218
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.32
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.45
>>>>> /.shard/996ba563-c7c6-4448-9d94-2dee6c90b8c3.200
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.47
>>>>> /.shard/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.47
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.311
>>>>> Status: Connected
>>>>> Number of entries: 24
>>>>>
>>>>>
>>>>>
>>>>> [root at ccgl4 ~]# gluster volume heal GLUSTER1 info
>>>>> Brick ccgl1.gl.local:/gluster1/BRICK1/1
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.201
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.49
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.209
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.48
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.1
>>>>> /.shard/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.10
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.47
>>>>> /.shard/5b708220-7e27-4a27-a2ac-98a6ae7c693d.226
>>>>> /.shard/f9a7f3c5-4c13-4020-b560-1f4f7b1e3c42.47
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.226
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.192
>>>>> /.shard - Possibly undergoing heal
>>>>>
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.274
>>>>> /.shard/91e12c0a-2557-4410-bf7d-834604f221f0.95
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.218
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.32
>>>>> /.shard/4c7d44fc-a0c1-413b-8dc4-2abbbe1d4d4f.49
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.48
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.49
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.123
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.45
>>>>> /.shard - Possibly undergoing heal
>>>>>
>>>>> Status: Connected
>>>>> Number of entries: 23
>>>>>
>>>>> Brick ccgl2.gl.local:/gluster1/BRICK1/1
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.209
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.49
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.201
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.48
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.274
>>>>> /.shard - Possibly undergoing heal
>>>>>
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.192
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.226
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.47
>>>>> /.shard/f9a7f3c5-4c13-4020-b560-1f4f7b1e3c42.47
>>>>> /.shard/5b708220-7e27-4a27-a2ac-98a6ae7c693d.226
>>>>> /.shard/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.10
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.1
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.49
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.48
>>>>> /.shard/4c7d44fc-a0c1-413b-8dc4-2abbbe1d4d4f.49
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.32
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.218
>>>>> /.shard/91e12c0a-2557-4410-bf7d-834604f221f0.95
>>>>> /.shard/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.2
>>>>> /.shard - Possibly undergoing heal
>>>>>
>>>>> /.shard/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.248
>>>>> Status: Connected
>>>>> Number of entries: 23
>>>>>
>>>>> Brick ccgl4.gl.local:/gluster1/BRICK1/1
>>>>> /.shard/4c7d44fc-a0c1-413b-8dc4-2abbbe1d4d4f.49
>>>>> /.shard/91e12c0a-2557-4410-bf7d-834604f221f0.95
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.274
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.48
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.1
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.49
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.49
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.192
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.47
>>>>> /.shard/76b7fe00-a2b1-4b77-b129-c5643e0cffa7.10
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.201
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.209
>>>>> /.shard/f9a7f3c5-4c13-4020-b560-1f4f7b1e3c42.47
>>>>> /.shard/d5a328be-03d0-42f7-a443-248290849e7d.48
>>>>> /.shard/18843fb4-e31c-4fc3-b519-cc6e5e947813.226
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.217
>>>>> /.shard/5b708220-7e27-4a27-a2ac-98a6ae7c693d.226
>>>>> /.shard/241a55ed-f0d5-4dbc-a6ce-ab784a0ba6ff.218
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.32
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.45
>>>>> /.shard/996ba563-c7c6-4448-9d94-2dee6c90b8c3.200
>>>>> /.shard/996ba563-c7c6-4448-9d94-2dee6c90b8c3.160
>>>>> /.shard/585c8361-f6a9-48d4-9295-260ec657de1e.46
>>>>> Status: Connected
>>>>> Number of entries: 23
>>>>>
>>>>>
>>>>> Should I just let it keep going?
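>>>>>
>>>>> (I can keep an eye on the pending counts in the meantime with something
>>>>> like:)
>>>>>
>>>>> watch -n 60 "gluster volume heal GLUSTER1 info | grep -c '^/'"
>>>>> # or, if available: gluster volume heal GLUSTER1 statistics heal-count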
>>>>>
>>>>> *David Gossage*
>>>>> *Carousel Checks Inc. | System Administrator*
>>>>> *Office* 708.613.2284
>>>>>
>>>>
>>>>
>>>
>>
>