[Gluster-users] Replacing a third data node with an arbiter one
Ravishankar N
ravishankar at redhat.com
Mon Jan 29 16:17:08 UTC 2018
On 01/29/2018 08:56 PM, Hoggins! wrote:
> Thank you for that; however, I have a problem.
>
> On 26/01/2018 at 02:35, Ravishankar N wrote:
>> Yes, you would need to reduce it to replica 2 and then convert it to
>> arbiter.
>> 1. Ensure there are no pending heals, i.e. heal info shows zero entries.
>> 2. gluster volume remove-brick thedude replica 2
>> ngluster-3.network.hoggins.fr:/export/brick/thedude force
>> 3. gluster volume add-brick thedude replica 3 arbiter 1 <IP:brick path
>> of the new arbiter brick>
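(For step 1, assuming 'thedude' is the volume in question, the check would look
something like:

    gluster volume heal thedude info

and each brick should report "Number of entries: 0" before you remove anything.)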
> Removing the current third brick was OK with
> volume remove-brick thedude replica 2
> ngluster-3.network.hoggins.fr:/export/brick/thedude force
>
> But while adding the arbiter brick with
> volume add-brick thedude replica 3 arbiter 1
> arbiter-1.network.hoggins.fr:/gluster/thedude force
>
> ... I got this:
> "volume add-brick: failed: Commit failed on
> arbiter-1.network.hoggins.fr. Please check log file for details."
>
> On the arbiter-1 brick, the log says:
>
> [2018-01-29 15:15:52.999698] I [run.c:190:runner_log]
> (-->/usr/lib64/glusterfs/3.12.5/xlator/mgmt/glusterd.so(+0x3744b)
> [0x7fcd49fef44b]
> -->/usr/lib64/glusterfs/3.12.5/xlator/mgmt/glusterd.so(+0xd252c)
> [0x7fcd4a08a52c] -->/lib64/libglusterfs.so.0(runner_log+0x105)
> [0x7fcd4f48d0b5] ) 0-management: Ran script:
> /var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh
> --volname=thedude --version=1 --volume-op=add-brick
> --gd-workdir=/var/lib/glusterd
> [2018-01-29 15:15:52.999816] I [MSGID: 106578]
> [glusterd-brick-ops.c:1354:glusterd_op_perform_add_bricks]
> 0-management: replica-count is set 3
> [2018-01-29 15:15:52.999849] I [MSGID: 106578]
> [glusterd-brick-ops.c:1359:glusterd_op_perform_add_bricks]
> 0-management: arbiter-count is set 1
> [2018-01-29 15:15:52.999862] I [MSGID: 106578]
> [glusterd-brick-ops.c:1364:glusterd_op_perform_add_bricks]
> 0-management: type is set 0, need to change it
> [2018-01-29 15:15:55.140751] I
> [glusterd-utils.c:5941:glusterd_brick_start] 0-management: starting
> a fresh brick process for brick /gluster/thedude
> [2018-01-29 15:15:55.194678] E [MSGID: 106005]
> [glusterd-utils.c:5947:glusterd_brick_start] 0-management: Unable to
> start brick arbiter-1.network.hoggins.fr:/gluster/thedude
You need to find out why this is so. What does the arbiter brick log say?
Does gluster volume status show the brick as up and running?
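For example (the brick log path below is an assumption based on your brick path and
the default log location; adjust for your setup):

    # on arbiter-1: the brick log is named after the brick path
    less /var/log/glusterfs/bricks/gluster-thedude.log

    # from any node: does Brick3 show Online "Y"?
    gluster volume status thedude

If the brick process is not online, glusterd could not start it, which is why the
add-brick commit failed on that node; the brick log should contain the actual error.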
-Ravi
> [2018-01-29 15:15:55.194823] E [MSGID: 106074]
> [glusterd-brick-ops.c:2590:glusterd_op_add_brick] 0-glusterd: Unable
> to add bricks
> [2018-01-29 15:15:55.194854] E [MSGID: 106123]
> [glusterd-mgmt.c:312:gd_mgmt_v3_commit_fn] 0-management: Add-brick
> commit failed.
> [2018-01-29 15:15:55.194868] E [MSGID: 106123]
> [glusterd-mgmt-handler.c:603:glusterd_handle_commit_fn]
> 0-management: commit failed on operation Add brick
>
>
> However, when I query again:
>
> gluster> volume info thedude
>
> Volume Name: thedude
> Type: Replicate
> Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ngluster-1.network.hoggins.fr:/export/brick/thedude
> Brick2: ngluster-2.network.hoggins.fr:/export/brick/thedude
> Brick3: arbiter-1.network.hoggins.fr:/gluster/thedude (arbiter)
> Options Reconfigured:
> cluster.server-quorum-type: server
> transport.address-family: inet
> nfs.disable: on
> performance.readdir-ahead: on
> client.event-threads: 8
> server.event-threads: 15
>
>
> ... I can see that the arbiter has been taken into account.
>
> So is it, or is it not? How can I make sure?
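Note that volume info only shows the committed configuration, not whether the brick
process is actually running, so the arbiter brick can appear here even though glusterd
failed to start it. One way to retry, sketched with this volume's name and not a fix
for the underlying cause:

    gluster volume start thedude force    # re-attempts starting any brick processes that are down

then check gluster volume status thedude again to see whether the arbiter brick comes
online.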
>
> Thanks!
>
> Hoggins!
>