[Gluster-devel] 0-management: Commit failed for operation Start on local node
TomK
tomkcpr at mdevsys.com
Wed Sep 25 10:56:32 UTC 2019
Brick log for the specific gluster start command attempt (full log attached):
[2019-09-25 10:53:37.847426] I [MSGID: 100030] [glusterfsd.c:2847:main]
0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 6.5
(args: /usr/sbin/glusterfsd -s mdskvm-p01.nix.mds.xyz --volfile-id
mdsgv01.mdskvm-p01.nix.mds.xyz.mnt-p01-d01-glusterv01 -p
/var/run/gluster/vols/mdsgv01/mdskvm-p01.nix.mds.xyz-mnt-p01-d01-glusterv01.pid
-S /var/run/gluster/defbdb699838d53b.socket --brick-name
/mnt/p01-d01/glusterv01 -l
/var/log/glusterfs/bricks/mnt-p01-d01-glusterv01.log --xlator-option
*-posix.glusterd-uuid=f7336db6-22b4-497d-8c2f-04c833a28546
--process-name brick --brick-port 49155 --xlator-option
mdsgv01-server.listen-port=49155)
[2019-09-25 10:53:37.848508] I [glusterfsd.c:2556:daemonize]
0-glusterfs: Pid of current running process is 23133
[2019-09-25 10:53:37.858381] I [socket.c:902:__socket_server_bind]
0-socket.glusterfsd: closing (AF_UNIX) reuse check socket 9
[2019-09-25 10:53:37.865940] I [MSGID: 101190]
[event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 0
[2019-09-25 10:53:37.866054] I [glusterfsd-mgmt.c:2443:mgmt_rpc_notify]
0-glusterfsd-mgmt: disconnected from remote-host: mdskvm-p01.nix.mds.xyz
[2019-09-25 10:53:37.866043] I [MSGID: 101190]
[event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2019-09-25 10:53:37.866083] I [glusterfsd-mgmt.c:2463:mgmt_rpc_notify]
0-glusterfsd-mgmt: Exhausted all volfile servers
[2019-09-25 10:53:37.866454] W [glusterfsd.c:1570:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3]
-->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef]
-->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-:
received signum (1), shutting down
[2019-09-25 10:53:37.872399] I
[socket.c:3754:socket_submit_outgoing_msg] 0-glusterfs: not connected
(priv->connected = 0)
[2019-09-25 10:53:37.872445] W [rpc-clnt.c:1704:rpc_clnt_submit]
0-glusterfs: failed to submit rpc-request (unique: 0, XID: 0x2 Program:
Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport (glusterfs)
[2019-09-25 10:53:37.872534] W [glusterfsd.c:1570:cleanup_and_exit]
(-->/lib64/libgfrpc.so.0(+0xf1d3) [0x7f9680ee91d3]
-->/usr/sbin/glusterfsd(+0x12fef) [0x55ca25710fef]
-->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x55ca2570901b] ) 0-:
received signum (1), shutting down
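
Reading the above: the brick process starts, immediately loses its
management connection ("disconnected from remote-host"), reports
"Exhausted all volfile servers", and exits on signum 1 before it can
register its port. Sanity checks I can run next on the management
side (assuming systemd and iproute2 are available; substitute
netstat -tlnp on older boxes):

    systemctl status glusterd
    ss -tlnp | grep 24007        # glusterd's management port
    gluster --version
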
On 9/25/2019 6:48 AM, TomK wrote:
> Attached.
>
>
> On 9/25/2019 5:08 AM, Sanju Rakonde wrote:
>> Hi, the errors below indicate that the brick process failed to
>> start. Please attach the brick log.
>>
>> [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a
>> fresh brick process for brick /mnt/p01-d01/glusterv01
>> [2019-09-25 05:17:26.722717] E [MSGID: 106005]
>> [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to
>> start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
>> [2019-09-25 05:17:26.722960] D [MSGID: 0]
>> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning -107
>> [2019-09-25 05:17:26.723006] E [MSGID: 106122]
>> [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start
>> commit failed.
>>
>> On Wed, Sep 25, 2019 at 11:00 AM TomK <tomkcpr at mdevsys.com> wrote:
>>
>> Hey All,
>>
>> I'm getting the below error when trying to start a two-node Gluster
>> cluster.
>>
>> I had quorum enabled when I was on version 3.12. However, this
>> version needed quorum disabled, so I disabled it, and now I see the
>> subject error.
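>>
>> (For reference, I disabled it with roughly the following; the
>> resulting "none" values show up in the volume info further down:
>>
>>      gluster volume set mdsgv01 cluster.server-quorum-type none
>>      gluster volume set mdsgv01 cluster.quorum-type none
>> )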
>>
>> Any ideas what I could try next?
>>
>> --
>> Thx,
>> TK.
>>
>>
>> [2019-09-25 05:17:26.615203] D [MSGID: 0]
>> [glusterd-utils.c:1136:glusterd_resolve_brick] 0-management:
>> Returning 0
>> [2019-09-25 05:17:26.615555] D [MSGID: 0]
>> [glusterd-mgmt.c:243:gd_mgmt_v3_pre_validate_fn] 0-management: OP
>> = 5.
>> Returning 0
>> [2019-09-25 05:17:26.616271] D [MSGID: 0]
>> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
>> mdsgv01 found
>> [2019-09-25 05:17:26.616305] D [MSGID: 0]
>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management:
>> Returning 0
>> [2019-09-25 05:17:26.616327] D [MSGID: 0]
>> [glusterd-utils.c:6327:glusterd_brick_start] 0-management:
>> returning 0
>> [2019-09-25 05:17:26.617056] I
>> [glusterd-utils.c:6312:glusterd_brick_start] 0-management: starting a
>> fresh brick process for brick /mnt/p01-d01/glusterv01
>> [2019-09-25 05:17:26.722717] E [MSGID: 106005]
>> [glusterd-utils.c:6317:glusterd_brick_start] 0-management: Unable to
>> start brick mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
>> [2019-09-25 05:17:26.722960] D [MSGID: 0]
>> [glusterd-utils.c:6327:glusterd_brick_start] 0-management: returning
>> -107
>> [2019-09-25 05:17:26.723006] E [MSGID: 106122]
>> [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start
>> commit failed.
>> [2019-09-25 05:17:26.723027] D [MSGID: 0]
>> [glusterd-mgmt.c:444:gd_mgmt_v3_commit_fn] 0-management: OP = 5.
>> Returning -107
>> [2019-09-25 05:17:26.723045] E [MSGID: 106122]
>> [glusterd-mgmt.c:1696:glusterd_mgmt_v3_commit] 0-management: Commit
>> failed for operation Start on local node
>> [2019-09-25 05:17:26.723073] D [MSGID: 0]
>> [glusterd-op-sm.c:5106:glusterd_op_modify_op_ctx] 0-management:
>> op_ctx
>> modification not required
>> [2019-09-25 05:17:26.723141] E [MSGID: 106122]
>> [glusterd-mgmt.c:2466:glusterd_mgmt_v3_initiate_all_phases]
>> 0-management: Commit Op Failed
>> [2019-09-25 05:17:26.723204] D [MSGID: 0]
>> [glusterd-locks.c:797:glusterd_mgmt_v3_unlock] 0-management:
>> Trying to
>> release lock of vol mdsgv01 for
>> f7336db6-22b4-497d-8c2f-04c833a28546 as
>> mdsgv01_vol
>> [2019-09-25 05:17:26.723239] D [MSGID: 0]
>> [glusterd-locks.c:846:glusterd_mgmt_v3_unlock] 0-management: Lock for
>> vol mdsgv01 successfully released
>> [2019-09-25 05:17:26.723273] D [MSGID: 0]
>> [glusterd-utils.c:1767:glusterd_volinfo_find] 0-management: Volume
>> mdsgv01 found
>> [2019-09-25 05:17:26.723326] D [MSGID: 0]
>> [glusterd-utils.c:1774:glusterd_volinfo_find] 0-management:
>> Returning 0
>> [2019-09-25 05:17:26.723360] D [MSGID: 0]
>> [glusterd-locks.c:464:glusterd_multiple_mgmt_v3_unlock] 0-management:
>> Returning 0
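>>
>> (Side note: the -107 returned by glusterd_brick_start above maps to
>> ENOTCONN in the kernel headers:
>>
>>      grep -w 107 /usr/include/asm-generic/errno.h
>>      #define ENOTCONN        107     /* Transport endpoint is not connected */
>>
>> i.e. the brick process never connected back to glusterd before the
>> start commit gave up.)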
>>
>> ==> /var/log/glusterfs/cmd_history.log <==
>> [2019-09-25 05:17:26.723390] : volume start mdsgv01 : FAILED :
>> Commit
>> failed on localhost. Please check log file for details.
>>
>> ==> /var/log/glusterfs/glusterd.log <==
>> [2019-09-25 05:17:26.723479] D [MSGID: 0]
>> [glusterd-rpc-ops.c:199:glusterd_op_send_cli_response] 0-management:
>> Returning 0
>>
>>
>>
>> [root at mdskvm-p01 glusterfs]# cat /etc/glusterfs/glusterd.vol
>> volume management
>> type mgmt/glusterd
>> option working-directory /var/lib/glusterd
>> option transport-type socket,rdma
>> option transport.socket.keepalive-time 10
>> option transport.socket.keepalive-interval 2
>> option transport.socket.read-fail-log off
>> option ping-timeout 0
>> option event-threads 1
>> option rpc-auth-allow-insecure on
>> # option cluster.server-quorum-type server
>> # option cluster.quorum-type auto
>> option server.event-threads 8
>> option client.event-threads 8
>> option performance.write-behind-window-size 8MB
>> option performance.io-thread-count 16
>> option performance.cache-size 1GB
>> option nfs.trusted-sync on
>> option storage.owner-uid 36
>>          option storage.owner-gid 36
>> option cluster.data-self-heal-algorithm full
>> option performance.low-prio-threads 32
>> option features.shard-block-size 512MB
>> option features.shard on
>> end-volume
>> [root at mdskvm-p01 glusterfs]#
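>>
>> (Note: glusterd only reads glusterd.vol at startup, so after
>> commenting out the two quorum lines above I restarted it before
>> retrying, roughly:
>>
>>      systemctl restart glusterd
>>      gluster volume start mdsgv01
>> )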
>>
>>
>> [root at mdskvm-p01 glusterfs]# gluster volume info
>>
>> Volume Name: mdsgv01
>> Type: Replicate
>> Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0
>> Status: Stopped
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: mdskvm-p02.nix.mds.xyz:/mnt/p02-d01/glusterv02
>> Brick2: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
>> Options Reconfigured:
>> storage.owner-gid: 36
>> cluster.data-self-heal-algorithm: full
>> performance.low-prio-threads: 32
>> features.shard-block-size: 512MB
>> features.shard: on
>> storage.owner-uid: 36
>> cluster.server-quorum-type: none
>> cluster.quorum-type: none
>> server.event-threads: 8
>> client.event-threads: 8
>> performance.write-behind-window-size: 8MB
>> performance.io-thread-count: 16
>> performance.cache-size: 1GB
>> nfs.trusted-sync: on
>> server.allow-insecure: on
>> performance.readdir-ahead: on
>> diagnostics.brick-log-level: DEBUG
>> diagnostics.brick-sys-log-level: INFO
>> diagnostics.client-log-level: DEBUG
>> [root at mdskvm-p01 glusterfs]#
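>>
>> (Next I plan to verify the brick backend is still mounted and
>> tagged with its volume-id xattr, something like the following,
>> assuming the attr package provides getfattr:
>>
>>      mount | grep p01-d01
>>      getfattr -n trusted.glusterfs.volume-id -e hex \
>>          /mnt/p01-d01/glusterv01
>> )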
>>
>>
>> --
>> Thanks,
>> Sanju
>
>
>
--
Thx,
TK.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: glusterd-brick.tar.gz
Type: application/x-gzip
Size: 28548 bytes
Desc: not available
URL: <http://lists.gluster.org/pipermail/gluster-devel/attachments/20190925/89d439af/attachment-0001.gz>