[Gluster-devel] 0-management: Commit failed for operation Start on local node

TomK tomkcpr at mdevsys.com
Wed Sep 25 10:48:57 UTC 2019


Attached.
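For anyone following the thread: glusterd writes each brick's log under /var/log/glusterfs/bricks/, naming the file after the brick path. A minimal sketch of deriving that file name for the brick in this thread (the naming rule — drop the leading slash, replace '/' with '-', append '.log' — is an assumption based on common GlusterFS builds):

```shell
# Derive the brick log file name from the brick path (assumed naming
# convention: strip leading '/', replace '/' with '-', append '.log').
brick=/mnt/p01-d01/glusterv01
log="/var/log/glusterfs/bricks/$(printf '%s' "${brick#/}" | tr '/' '-').log"
echo "$log"   # /var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log
```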


On 9/25/2019 5:08 AM, Sanju Rakonde wrote:
> Hi, The errors below indicate that the brick process failed to start.
> Please attach the brick log.
> 
> [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a
> fresh brick process for brick /mnt/p01-d01/glusterv01
> [2019-09-25 05:17:26.722717] E [MSGID: 106005]
> [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to
> start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
> [2019-09-25 05:17:26.722960] D [MSGID: 0]
> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning -107
> [2019-09-25 05:17:26.723006] E [MSGID: 106122]
> [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start
> commit failed.
> 
> On Wed, Sep 25, 2019 at 11:00 AM TomK <tomkcpr at mdevsys.com> wrote:
> 
>     Hey All,
> 
>     I'm getting the below error when trying to start a 2-node Gluster
>     cluster.
> 
>     I had quorum enabled when I was on version 3.12. However, this
>     version needed quorum disabled, so I disabled it, but now I see
>     the error in the subject line.
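(Editor's sketch.) The quorum change described above would typically be done with `gluster volume set`; here is a dry run that only prints the commands rather than executing them, assuming the volume name mdsgv01 from this thread and that both quorum options were reset:

```shell
# Print (rather than execute) the volume-set commands that disable both
# quorum options; run them directly only on a real gluster node.
vol=mdsgv01
for opt in cluster.server-quorum-type cluster.quorum-type; do
  echo "gluster volume set $vol $opt none"
done
```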
> 
>     Any ideas what I could try next?
> 
>     -- 
>     Thx,
>     TK.
> 
> 
>     [2019-09-25 05:17:26.615203] D [MSGID: 0]
>     [glusterd-utils.c:1136:glusterd_resolve_brick] 0-management: Returning 0
>     [2019-09-25 05:17:26.615555] D [MSGID: 0]
>     [glusterd-mgmt.c:243:gd_mgmt_v3_pre_validate_fn] 0-management: OP = 5.
>     Returning 0
>     [2019-09-25 05:17:26.616271] D [MSGID: 0]
>     [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
>     mdsgv01 found
>     [2019-09-25 05:17:26.616305] D [MSGID: 0]
>     [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
>     [2019-09-25 05:17:26.616327] D [MSGID: 0]
>     [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning 0
>     [2019-09-25 05:17:26.617056] I
>     [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a
>     fresh brick process for brick /mnt/p01-d01/glusterv01
>     [2019-09-25 05:17:26.722717] E [MSGID: 106005]
>     [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to
>     start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
>     [2019-09-25 05:17:26.722960] D [MSGID: 0]
>     [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning
>     -107
>     [2019-09-25 05:17:26.723006] E [MSGID: 106122]
>     [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start
>     commit failed.
>     [2019-09-25 05:17:26.723027] D [MSGID: 0]
>     [glusterd-mgmt.c:444:gd_mgmt_v3_commit_fn] 0-management: OP = 5.
>     Returning -107
>     [2019-09-25 05:17:26.723045] E [MSGID: 106122]
>     [glusterd-mgmt.c:1696:glusterd_mgmt_v3_commit] 0-management: Commit
>     failed for operation Start on local node
>     [2019-09-25 05:17:26.723073] D [MSGID: 0]
>     [glusterd-op-sm.c:5106:glusterd_op_modify_op_ctx] 0-management: op_ctx
>     modification not required
>     [2019-09-25 05:17:26.723141] E [MSGID: 106122]
>     [glusterd-mgmt.c:2466:glusterd_mgmt_v3_initiate_all_phases]
>     0-management: Commit Op Failed
>     [2019-09-25 05:17:26.723204] D [MSGID: 0]
>     [glusterd-locks.c:797:glusterd_mgmt_v3_unlock] 0-management: Trying to
>     release lock of vol mdsgv01 for f7336db6-22b4-497d-8c2f-04c833a28546 as
>     mdsgv01_vol
>     [2019-09-25 05:17:26.723239] D [MSGID: 0]
>     [glusterd-locks.c:846:glusterd_mgmt_v3_unlock] 0-management: Lock for
>     vol mdsgv01 successfully released
>     [2019-09-25 05:17:26.723273] D [MSGID: 0]
>     [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
>     mdsgv01 found
>     [2019-09-25 05:17:26.723326] D [MSGID: 0]
>     [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management: Returning 0
>     [2019-09-25 05:17:26.723360] D [MSGID: 0]
>     [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] 0-management:
>     Returning 0
> 
>     ==> /var/log/glusterfs/cmd_history.log <==
>     [2019-09-25 05:17:26.723390]  : volume start mdsgv01 : FAILED : Commit
>     failed on localhost. Please check log file for details.
> 
>     ==> /var/log/glusterfs/glusterd.log <==
>     [2019-09-25 05:17:26.723479] D [MSGID: 0]
>     [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management:
>     Returning 0
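A note on the -107 return value in the log above: assuming glusterd propagates negated errno values here (a common pattern in its codebase, though not guaranteed for every call site), 107 decodes to ENOTCONN, which is consistent with glusterd being unable to reach a brick process that never came up:

```shell
# Decode errno 107 (assumes a Linux host, as in this thread).
python3 -c 'import errno, os; print(errno.errorcode[107], "-", os.strerror(107))'
# ENOTCONN - Transport endpoint is not connected
```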
> 
> 
> 
>     [root@mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol
>     volume management
>           type mgmt/glusterd
>           option working-directory /var/lib/glusterd
>           option transport-type socket,rdma
>           option transport.socket.keepalive-time 10
>           option transport.socket.keepalive-interval 2
>           option transport.socket.read-fail-log off
>           option ping-timeout 0
>           option event-threads 1
>           option rpc-auth-allow-insecure on
>           # option cluster.server-quorum-type server
>           # option cluster.quorum-type auto
>           option server.event-threads 8
>           option client.event-threads 8
>           option performance.write-behind-window-size 8MB
>           option performance.io-thread-count 16
>           option performance.cache-size 1GB
>           option nfs.trusted-sync on
>           option storage.owner-uid 36
>           option storage.owner-uid 36
>           option cluster.data-self-heal-algorithm full
>           option performance.low-prio-threads 32
>           option features.shard-block-size 512MB
>           option features.shard on
>     end-volume
>     [root@mdskvm-p01 glusterfs]#
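Since the config above pins storage.owner-uid to 36 (vdsm:kvm on oVirt hosts — an assumption here), one quick sanity check is that the brick directory on disk actually carries that ownership. A sketch using a temporary directory as a stand-in for the real brick path:

```shell
# Stand-in for /mnt/p01-d01/glusterv01; on a real node, stat the brick
# path itself and compare against the configured owner uid/gid (36:36).
brickdir=$(mktemp -d)
stat -c '%u:%g' "$brickdir"
rm -r "$brickdir"
```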
> 
> 
>     [root@mdskvm-p01 glusterfs]# gluster volume info
> 
>     Volume Name: mdsgv01
>     Type: Replicate
>     Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0
>     Status: Stopped
>     Snapshot Count: 0
>     Number of Bricks: 1 x 2 = 2
>     Transport-type: tcp
>     Bricks:
>     Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02
>     Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
>     Options Reconfigured:
>     storage.owner-gid: 36
>     cluster.data-self-heal-algorithm: full
>     performance.low-prio-threads: 32
>     features.shard-block-size: 512MB
>     features.shard: on
>     storage.owner-uid: 36
>     cluster.server-quorum-type: none
>     cluster.quorum-type: none
>     server.event-threads: 8
>     client.event-threads: 8
>     performance.write-behind-window-size: 8MB
>     performance.io-thread-count: 16
>     performance.cache-size: 1GB
>     nfs.trusted-sync: on
>     server.allow-insecure: on
>     performance.readdir-ahead: on
>     diagnostics.brick-log-level: DEBUG
>     diagnostics.brick-sys-log-level: INFO
>     diagnostics.client-log-level: DEBUG
>     [root@mdskvm-p01 glusterfs]#
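With the brick log level already at DEBUG, grepping for error-level records (the " E " severity tag in gluster's log format) narrows the noise quickly. A self-contained sketch over an excerpt from this thread:

```shell
# Keep only error-level lines (" E [" severity tag) from a gluster log excerpt.
grep ' E \[' <<'EOF'
[2019-09-25 05:17:26.617056] I [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a fresh brick process
[2019-09-25 05:17:26.722717] E [MSGID: 106005] [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to start brick
EOF
```

On a real node the same filter would be applied to the brick log file instead of a heredoc.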
> 
> 
>     _______________________________________________
> 
>     Community Meeting Calendar:
> 
>     APAC Schedule -
>     Every 2nd and 4th Tuesday at 11:30 AM IST
>     Bridge: https://bluejeans.com/118564314
> 
>     NA/EMEA Schedule -
>     Every 1st and 3rd Tuesday at 01:00 PM EDT
>     Bridge: https://bluejeans.com/118564314
> 
>     Gluster-devel mailing list
>     Gluster-devel at gluster.org
>     https://lists.gluster.org/mailman/listinfo/gluster-devel
> 
> 
> 
> -- 
> Thanks,
> Sanju


-- 
Thx,
TK.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: glusterd-logs.tar.gz
Type: application/x-gzip
Size: 683318 bytes
Desc: not available
URL: <http://lists.gluster.org/pipermail/gluster-devel/attachments/20190925/f17f9809/attachment-0001.gz>

