[Gluster-users] gluster volume status -> commit failed
Christophe TREFOIS
christophe.trefois at uni.lu
Wed Dec 30 10:20:06 UTC 2015
Hi Atin,
Thank you for the fast responses!
So, your fix worked:
[root at highlander live]# gluster volume status
Status of volume: live
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick stor104:/zfs/brick0/brick 49152 0 Y 16702
Brick stor104:/zfs/brick1/brick 49153 0 Y 16720
Brick stor104:/zfs/brick2/brick 49154 0 Y 16738
Brick stor106:/zfs/brick0/brick 49161 0 Y 16309
Brick stor106:/zfs/brick1/brick 49162 0 Y 16327
Brick stor106:/zfs/brick2/brick 49163 0 Y 16345
Brick stor105:/zfs/brick0/brick 49152 0 Y 16184
Brick stor105:/zfs/brick1/brick 49153 0 Y 16202
Brick stor105:/zfs/brick2/brick 49154 0 Y 16220
Task Status of Volume live
------------------------------------------------------------------------------
There are no active volume tasks
I also had a look inside the files, and on the controller the node_state.info file contained this:
[root at highlander live]# cat node_state.info
rebalance_status=0
status=0
rebalance_op=21
rebalance-id=ab4f05dd-bb96-4eb5-b0e2-46702a436355
On the actual storage nodes, rebalance_op was 0.
I have run many rebalance operations in the past, and as far as I know the last one actually succeeded.
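In case it helps anyone else hitting this, the quickest way to compare that file between the controller and the nodes is something like the following (a rough sketch, assuming the default /var/lib/glusterd layout and the same pdsh group used elsewhere in this thread):

[root at highlander ~]# grep rebalance_op /var/lib/glusterd/vols/live/node_state.info
[root at highlander ~]# pdsh -g live 'grep rebalance_op /var/lib/glusterd/vols/live/node_state.info'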
Thanks a lot for your help!
—
Christophe
Dr Christophe Trefois, Dipl.-Ing.
Technical Specialist / Post-Doc
UNIVERSITÉ DU LUXEMBOURG
LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
Campus Belval | House of Biomedicine
6, avenue du Swing
L-4367 Belvaux
T: +352 46 66 44 6124
F: +352 46 66 44 6949
http://www.uni.lu/lcsb
----
This message is confidential and may contain privileged information.
It is intended for the named recipient only.
If you receive it in error please notify me and permanently delete the original message and any copies.
----
On 29 Dec 2015, at 17:59, Atin Mukherjee <atin.mukherjee83 at gmail.com> wrote:
I'll dive into it in detail in some time. Could you provide the cli, cmd_history and glusterd log files for further debugging?
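Something along these lines from each node should be enough (just a sketch, assuming the default log locations under /var/log/glusterfs shown later in this thread):

# tar czf glusterd-logs-$(hostname).tar.gz \
    /var/log/glusterfs/cli.log \
    /var/log/glusterfs/cmd_history.log \
    /var/log/glusterfs/etc-glusterfs-glusterd.vol.log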
-Atin
Sent from one plus one
On Dec 29, 2015 9:53 PM, "Christophe TREFOIS" <christophe.trefois at uni.lu> wrote:
Hi Atin,
Same issue. I restarted glusterd and glusterfsd everywhere, and it seems the opCode is still STATEDUMP.
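Concretely, the restart was done roughly like this on every node (assuming the glusterd systemd unit that ships with the CentOS 7 packages):

[root at highlander ~]# pdsh -g live 'systemctl restart glusterd'
[root at highlander ~]# systemctl restart glusterd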
Any other pointers?
Kind regards,
—
Christophe
> On 29 Dec 2015, at 16:19, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
>
> On 12/29/2015 07:09 PM, Christophe TREFOIS wrote:
>> Hi,
>>
>>
>>> On 29 Dec 2015, at 14:27, Atin Mukherjee <amukherj at redhat.com> wrote:
>>>
>>> It seems like your opCode is STATEDUMP instead of STATUS which is weird.
>>> Are you running a heterogeneous cluster?
>>
>> What does that mean? In principle no.
> This means that not all the nodes in the cluster are running the same bits.
>>
>>> What is the last version you
>>> were running with?
>>
>> I think it was 3.7.3 or 3.7.something. I’m not sure, but it was in the 3.7 repo.
>>
>>> What's the current cluster op-version?
>>
>> [root at highlander ~]# gluster volume get live cluster.op-version
>> Option Value
>> ------ -----
>> cluster.op-version 30600
> Can you bump up the op-version with 'gluster volume set all
> cluster.op-version 30706' and see if the problem goes away? But you can
> only bump it up if all the nodes are upgraded to 3.7.6.
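> For example, assuming all the nodes already report the 3.7.6 packages:
>
> # gluster volume set all cluster.op-version 30706
> # gluster volume get live cluster.op-version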
>>
>> Thank you,
>>
>>>
>>> Thanks,
>>> Atin
>>>
>>> On 12/29/2015 06:31 PM, Christophe TREFOIS wrote:
>>>> Dear all,
>>>>
>>>> I have a 3-node distribute setup with a GlusterFS controller node, and
>>>> today I upgraded to 3.7.6 and to CentOS 7.2.
>>>>
>>>> After the upgrade (reboot), I can start the volume fine and see the
>>>> mounted volume on the controller as well.
>>>>
>>>> However, a gluster volume status <volname> results in an
>>>>
>>>> [root at stor104 glusterfs]# gluster volume status live
>>>> Commit failed on localhost. Please check the log file for more details.
>>>>
>>>> error.
>>>>
>>>> Below is some information along with log extracts.
>>>>
>>>> Thank you for any hints on where to start fixing this,
>>>>
>>>> Kind regards,
>>>>
>>>> —
>>>> Christophe
>>>>
>>>> ——————
>>>>
>>>> Here is the gluster info command for the volume.
>>>>
>>>> [root at stor104 glusterfs]# gluster volume info live
>>>>
>>>> Volume Name: live
>>>> Type: Distribute
>>>> Volume ID: 1328637d-7730-4627-8945-bbe43626d527
>>>> Status: Started
>>>> Number of Bricks: 9
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: stor104:/zfs/brick0/brick
>>>> Brick2: stor104:/zfs/brick1/brick
>>>> Brick3: stor104:/zfs/brick2/brick
>>>> Brick4: stor106:/zfs/brick0/brick
>>>> Brick5: stor106:/zfs/brick1/brick
>>>> Brick6: stor106:/zfs/brick2/brick
>>>> Brick7: stor105:/zfs/brick0/brick
>>>> Brick8: stor105:/zfs/brick1/brick
>>>> Brick9: stor105:/zfs/brick2/brick
>>>> Options Reconfigured:
>>>> performance.io-thread-count: 8
>>>> nfs.disable: on
>>>> performance.write-behind-window-size: 4MB
>>>> performance.client-io-threads: on
>>>> performance.cache-size: 1GB
>>>> performance.cache-refresh-timeout: 60
>>>> performance.cache-max-file-size: 4MB
>>>> cluster.data-self-heal-algorithm: full
>>>> diagnostics.client-log-level: ERROR
>>>> diagnostics.brick-log-level: ERROR
>>>> cluster.min-free-disk: 1%
>>>> server.allow-insecure: on
>>>>
>>>> Relevant log parts when carrying out the command:
>>>>
>>>> ==> /var/log/glusterfs/cli.log <==
>>>> [2015-12-29 12:51:03.216998] I [cli.c:721:main] 0-cli: Started running
>>>> gluster with version 3.7.6
>>>> [2015-12-29 12:51:03.226123] I
>>>> [cli-cmd-volume.c:1926:cli_check_gsync_present] 0-: geo-replication not
>>>> installed
>>>> [2015-12-29 12:51:03.226623] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
>>>> with index 1
>>>> [2015-12-29 12:51:03.226723] I [socket.c:2355:socket_event_handler]
>>>> 0-transport: disconnecting now
>>>>
>>>> ==> /var/log/glusterfs/cmd_history.log <==
>>>> [2015-12-29 12:51:03.236182] : volume status : SUCCESS
>>>>
>>>> ==> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log <==
>>>> [2015-12-29 12:51:03.238562] I [MSGID: 106499]
>>>> [glusterd-handler.c:4267:__glusterd_handle_status_volume] 0-management:
>>>> Received status volume req for volume live
>>>> [2015-12-29 12:51:03.246043] E [MSGID: 106396]
>>>> [glusterd-op-sm.c:2985:_add_task_to_dict] 0-management: Statedump
>>>> operation doesn't have a task_id
>>>> [2015-12-29 12:51:03.246081] E [MSGID: 106060]
>>>> [glusterd-op-sm.c:3055:glusterd_aggregate_task_status] 0-management:
>>>> Failed to add task details to dict
>>>> [2015-12-29 12:51:03.246098] E [MSGID: 106123]
>>>> [glusterd-syncop.c:1404:gd_commit_op_phase] 0-management: Commit of
>>>> operation 'Volume Status' failed on localhost
>>>>
>>>> ==> /var/log/glusterfs/cmd_history.log <==
>>>> [2015-12-29 12:51:03.249364] : volume status : FAILED : Commit failed
>>>> on localhost. Please check the log file for more details.
>>>>
>>>> ==> /var/log/glusterfs/cli.log <==
>>>> [2015-12-29 12:51:03.249647] I [input.c:36:cli_batch] 0-: Exiting with: 0
>>>>
>>>> All bricks are up
>>>>
>>>> [root at highlander glusterfs]# pdsh -g live 'df -h | grep brick*'
>>>> stor106: brick0 50T 33T 18T 66% /zfs/brick0
>>>> stor106: brick1 50T 33T 18T 66% /zfs/brick1
>>>> stor106: brick2 50T 33T 18T 66% /zfs/brick2
>>>> stor105: brick0 40T 23T 18T 57% /zfs/brick0
>>>> stor105: brick1 40T 23T 18T 57% /zfs/brick1
>>>> stor105: brick2 40T 23T 18T 57% /zfs/brick2
>>>> stor104: brick0 40T 23T 18T 57% /zfs/brick0
>>>> stor104: brick1 40T 23T 18T 57% /zfs/brick1
>>>> stor104: brick2 40T 23T 18T 57% /zfs/brick2
>>>>
>>>> Package details
>>>>
>>>> [root at highlander glusterfs]# rpm -qa | grep gluster
>>>> glusterfs-client-xlators-3.7.6-1.el7.x86_64
>>>> glusterfs-server-3.7.6-1.el7.x86_64
>>>> samba-vfs-glusterfs-4.2.3-10.el7.x86_64
>>>> glusterfs-3.7.6-1.el7.x86_64
>>>> glusterfs-cli-3.7.6-1.el7.x86_64
>>>> glusterfs-libs-3.7.6-1.el7.x86_64
>>>> glusterfs-fuse-3.7.6-1.el7.x86_64
>>>> glusterfs-api-3.7.6-1.el7.x86_64
>>>>
>>>> [root at highlander glusterfs]# pdsh -g live 'rpm -qa | grep gluster'
>>>> stor105: glusterfs-libs-3.7.6-1.el7.x86_64
>>>> stor105: glusterfs-api-3.7.6-1.el7.x86_64
>>>> stor105: glusterfs-3.7.6-1.el7.x86_64
>>>> stor105: glusterfs-fuse-3.7.6-1.el7.x86_64
>>>> stor105: glusterfs-cli-3.7.6-1.el7.x86_64
>>>> stor105: glusterfs-client-xlators-3.7.6-1.el7.x86_64
>>>> stor105: glusterfs-server-3.7.6-1.el7.x86_64
>>>>
>>>> stor104: glusterfs-server-3.7.6-1.el7.x86_64
>>>> stor104: glusterfs-libs-3.7.6-1.el7.x86_64
>>>> stor104: glusterfs-api-3.7.6-1.el7.x86_64
>>>> stor104: glusterfs-3.7.6-1.el7.x86_64
>>>> stor104: glusterfs-fuse-3.7.6-1.el7.x86_64
>>>> stor104: glusterfs-cli-3.7.6-1.el7.x86_64
>>>> stor104: glusterfs-client-xlators-3.7.6-1.el7.x86_64
>>>>
>>>> stor106: glusterfs-3.7.6-1.el7.x86_64
>>>> stor106: glusterfs-cli-3.7.6-1.el7.x86_64
>>>> stor106: glusterfs-server-3.7.6-1.el7.x86_64
>>>> stor106: glusterfs-libs-3.7.6-1.el7.x86_64
>>>> stor106: glusterfs-client-xlators-3.7.6-1.el7.x86_64
>>>> stor106: glusterfs-api-3.7.6-1.el7.x86_64
>>>> stor106: glusterfs-fuse-3.7.6-1.el7.x86_64
>>>>
>>>> More detailed logs:
>>>>
>>>> ==> /var/log/glusterfs/cli.log <==
>>>> [2015-12-29 12:57:23.520821] I [cli.c:721:main] 0-cli: Started running
>>>> gluster with version 3.7.6
>>>> [2015-12-29 12:57:23.530898] I
>>>> [cli-cmd-volume.c:1926:cli_check_gsync_present] 0-: geo-replication not
>>>> installed
>>>> [2015-12-29 12:57:23.531844] I [MSGID: 101190]
>>>> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
>>>> with index 1
>>>> [2015-12-29 12:57:23.532004] I [socket.c:2355:socket_event_handler]
>>>> 0-transport: disconnecting now
>>>>
>>>> ==> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log <==
>>>> [2015-12-29 12:57:23.534830] D [MSGID: 0]
>>>> [glusterd-op-sm.c:214:glusterd_generate_txn_id] 0-management:
>>>> Transaction_id = c2886398-03c1-4d32-924a-9f92367be85c
>>>> [2015-12-29 12:57:23.534946] D [MSGID: 0]
>>>> [glusterd-op-sm.c:311:glusterd_set_txn_opinfo] 0-management:
>>>> Successfully set opinfo for transaction ID :
>>>> c2886398-03c1-4d32-924a-9f92367be85c
>>>> [2015-12-29 12:57:23.534975] D [MSGID: 0]
>>>> [glusterd-op-sm.c:318:glusterd_set_txn_opinfo] 0-management: Returning 0
>>>> [2015-12-29 12:57:23.535001] D [MSGID: 0]
>>>> [glusterd-syncop.c:1767:gd_sync_task_begin] 0-management: Transaction ID
>>>> : c2886398-03c1-4d32-924a-9f92367be85c
>>>> [2015-12-29 12:57:23.535031] D [MSGID: 0]
>>>> [glusterd-syncop.c:1807:gd_sync_task_begin] 0-glusterd: Failed to get
>>>> volume name
>>>> [2015-12-29 12:57:23.535078] D [MSGID: 0]
>>>> [glusterd-op-sm.c:1695:glusterd_op_stage_status_volume] 0-management:
>>>> Returning: 0
>>>> [2015-12-29 12:57:23.535103] D [MSGID: 0]
>>>> [glusterd-op-sm.c:5515:glusterd_op_stage_validate] 0-management: OP =
>>>> 18. Returning 0
>>>> [2015-12-29 12:57:23.535555] D
>>>> [rpc-clnt-ping.c:98:rpc_clnt_remove_ping_timer_locked] (-->
>>>> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f410a9d5a82] (-->
>>>> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7f410a7a587b]
>>>> (--> /lib64/libgfrpc.so.0(+0x13e74)[0x7f410a7a5e74] (-->
>>>> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x34f)[0x7f410a7a1c4f] (-->
>>>> /usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(gd_syncop_submit_request+0x1a5)[0x7f40ff5d24b5]
>>>> ))))) 0-: 192.168.123.105:24007: ping timer event already removed
>>>> [2015-12-29 12:57:23.535935] D
>>>> [rpc-clnt-ping.c:98:rpc_clnt_remove_ping_timer_locked] (-->
>>>> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f410a9d5a82] (-->
>>>> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7f410a7a587b]
>>>> (--> /lib64/libgfrpc.so.0(+0x13e74)[0x7f410a7a5e74] (-->
>>>> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x34f)[0x7f410a7a1c4f] (-->
>>>> /usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(gd_syncop_submit_request+0x1a5)[0x7f40ff5d24b5]
>>>> ))))) 0-: 192.168.123.1:24007: ping timer event already removed
>>>> [2015-12-29 12:57:23.536279] D
>>>> [rpc-clnt-ping.c:98:rpc_clnt_remove_ping_timer_locked] (-->
>>>> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f410a9d5a82] (-->
>>>> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7f410a7a587b]
>>>> (--> /lib64/libgfrpc.so.0(+0x13e74)[0x7f410a7a5e74] (-->
>>>> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x34f)[0x7f410a7a1c4f] (-->
>>>> /usr/lib64/glusterfs/3.7.6/xlator/mgmt/glusterd.so(gd_syncop_submit_request+0x1a5)[0x7f40ff5d24b5]
>>>> ))))) 0-: 192.168.123.106:24007: ping timer event already removed
>>>> [2015-12-29 12:57:23.536358] D [MSGID: 0]
>>>> [glusterd-syncop.c:1312:gd_stage_op_phase] 0-management: Sent stage op
>>>> req for 'Volume Status' to 3 peers
>>>> [2015-12-29 12:57:23.538064] D [MSGID: 0]
>>>> [glusterd-peer-utils.c:200:glusterd_peerinfo_find_by_uuid] 0-management:
>>>> Friend found... state: Peer in Cluster
>>>> [2015-12-29 12:57:23.539059] D [logging.c:1952:_gf_msg_internal]
>>>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>>>> About to flush least recently used log message to disk
>>>> The message "D [MSGID: 0]
>>>> [glusterd-peer-utils.c:200:glusterd_peerinfo_find_by_uuid] 0-management:
>>>> Friend found... state: Peer in Cluster" repeated 2 times between
>>>> [2015-12-29 12:57:23.538064] and [2015-12-29 12:57:23.538833]
>>>> [2015-12-29 12:57:23.539057] D [MSGID: 0]
>>>> [glusterd-op-sm.c:7010:glusterd_op_bricks_select] 0-management: Returning 0
>>>> [2015-12-29 12:57:23.539163] D [MSGID: 0]
>>>> [glusterd-syncop.c:1709:gd_brick_op_phase] 0-management: Sent op req to
>>>> 0 bricks
>>>> [2015-12-29 12:57:23.539213] D [MSGID: 0]
>>>> [glusterd-op-sm.c:3360:glusterd_op_status_volume] 0-management: Returning 0
>>>> [2015-12-29 12:57:23.539237] D [MSGID: 0]
>>>> [glusterd-op-sm.c:5642:glusterd_op_commit_perform] 0-management: Returning 0
>>>> [2015-12-29 12:57:23.541960] D [MSGID: 0]
>>>> [glusterd-peer-utils.c:200:glusterd_peerinfo_find_by_uuid] 0-management:
>>>> Friend found... state: Peer in Cluster
>>>> [2015-12-29 12:57:23.542525] D [logging.c:1952:_gf_msg_internal]
>>>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>>>> About to flush least recently used log message to disk
>>>> The message "D [MSGID: 0]
>>>> [glusterd-peer-utils.c:200:glusterd_peerinfo_find_by_uuid] 0-management:
>>>> Friend found... state: Peer in Cluster" repeated 2 times between
>>>> [2015-12-29 12:57:23.541960] and [2015-12-29 12:57:23.542186]
>>>> [2015-12-29 12:57:23.542523] D [MSGID: 0]
>>>> [glusterd-syncop.c:1449:gd_commit_op_phase] 0-management: Sent commit op
>>>> req for 'Volume Status' to 3 peers
>>>> [2015-12-29 12:57:23.542643] D [MSGID: 0]
>>>> [glusterd-op-sm.c:4487:glusterd_op_modify_op_ctx] 0-management: op_ctx
>>>> modification not required for status operation being performed
>>>> [2015-12-29 12:57:23.542680] D [MSGID: 0]
>>>> [glusterd-op-sm.c:255:glusterd_get_txn_opinfo] 0-management:
>>>> Successfully got opinfo for transaction ID :
>>>> c2886398-03c1-4d32-924a-9f92367be85c
>>>> [2015-12-29 12:57:23.542704] D [MSGID: 0]
>>>> [glusterd-op-sm.c:259:glusterd_get_txn_opinfo] 0-management: Returning 0
>>>> [2015-12-29 12:57:23.542732] D [MSGID: 0]
>>>> [glusterd-op-sm.c:360:glusterd_clear_txn_opinfo] 0-management:
>>>> Successfully cleared opinfo for transaction ID :
>>>> c2886398-03c1-4d32-924a-9f92367be85c
>>>> [2015-12-29 12:57:23.542756] D [MSGID: 0]
>>>> [glusterd-op-sm.c:364:glusterd_clear_txn_opinfo] 0-management: Returning 0
>>>>
>>>> ==> /var/log/glusterfs/cmd_history.log <==
>>>> [2015-12-29 12:57:23.542781] : volume status : SUCCESS
>>>>
>>>> ==> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log <==
>>>> [2015-12-29 12:57:23.542847] D [MSGID: 0]
>>>> [glusterd-rpc-ops.c:205:glusterd_op_send_cli_response] 0-management:
>>>> Returning 0
>>>> [2015-12-29 12:57:23.545697] I [MSGID: 106499]
>>>> [glusterd-handler.c:4267:__glusterd_handle_status_volume] 0-management:
>>>> Received status volume req for volume live
>>>> [2015-12-29 12:57:23.545818] D [MSGID: 0]
>>>> [glusterd-op-sm.c:214:glusterd_generate_txn_id] 0-management:
>>>> Transaction_id = fd3671e4-fa7a-4913-a6dd-b37884a3a715
>>>> [2015-12-29 12:57:23.545872] D [MSGID: 0]
>>>> [glusterd-op-sm.c:311:glusterd_set_txn_opinfo] 0-management:
>>>> Successfully set opinfo for transaction ID :
>>>> fd3671e4-fa7a-4913-a6dd-b37884a3a715
>>>> [2015-12-29 12:57:23.545905] D [MSGID: 0]
>>>> [glusterd-op-sm.c:318:glusterd_set_txn_opinfo] 0-management: Returning 0
>>>> [2015-12-29 12:57:23.545926] D [MSGID: 0]
>>>> [glusterd-syncop.c:1767:gd_sync_task_begin] 0-management: Transaction ID
>>>> : fd3671e4-fa7a-4913-a6dd-b37884a3a715
>>>> [2015-12-29 12:57:23.545978] D [MSGID: 0]
>>>> [glusterd-locks.c:562:glusterd_mgmt_v3_lock] 0-management: Trying to
>>>> acquire lock of vol live for 305c0f00-0f11-4da3-a470-50b6e6c14976 as
>>>> live_vol
>>>> [2015-12-29 12:57:23.546388] D [MSGID: 0]
>>>> [glusterd-locks.c:618:glusterd_mgmt_v3_lock] 0-management: Lock for vol
>>>> live successfully held by 305c0f00-0f11-4da3-a470-50b6e6c14976
>>>> [2015-12-29 12:57:23.546478] D [MSGID: 0]
>>>> [glusterd-syncop.c:411:gd_syncop_mgmt_v3_lock] 0-glusterd: Returning 0
>>>> [2015-12-29 12:57:23.549943] D [logging.c:1952:_gf_msg_internal]
>>>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>>>> About to flush least recently used log message to disk
>>>> The message "D [MSGID: 0] [glusterd-syncop.c:411:gd_syncop_mgmt_v3_lock]
>>>> 0-glusterd: Returning 0" repeated 2 times between [2015-12-29
>>>> 12:57:23.546478] and [2015-12-29 12:57:23.546565]
>>>> [2015-12-29 12:57:23.549941] D [MSGID: 0]
>>>> [glusterd-syncop.c:1205:gd_lock_op_phase] 0-management: Sent lock op req
>>>> for 'Volume Status' to 3 peers. Returning 0
>>>> [2015-12-29 12:57:23.550063] D [MSGID: 0]
>>>> [glusterd-utils.c:1424:glusterd_volinfo_find] 0-management: Volume live
>>>> found
>>>> [2015-12-29 12:57:23.550087] D [MSGID: 0]
>>>> [glusterd-utils.c:1431:glusterd_volinfo_find] 0-management: Returning 0
>>>> [2015-12-29 12:57:23.550121] D [MSGID: 0]
>>>> [glusterd-utils.c:1424:glusterd_volinfo_find] 0-management: Volume live
>>>> found
>>>> [2015-12-29 12:57:23.550140] D [MSGID: 0]
>>>> [glusterd-utils.c:1431:glusterd_volinfo_find] 0-management: Returning 0
>>>> [2015-12-29 12:57:23.550167] D [MSGID: 0]
>>>> [glusterd-op-sm.c:1695:glusterd_op_stage_status_volume] 0-management:
>>>> Returning: 0
>>>> [2015-12-29 12:57:23.550192] D [MSGID: 0]
>>>> [glusterd-op-sm.c:5515:glusterd_op_stage_validate] 0-management: OP =
>>>> 18. Returning 0
>>>> [2015-12-29 12:57:23.550309] D [MSGID: 0]
>>>> [glusterd-syncop.c:1312:gd_stage_op_phase] 0-management: Sent stage op
>>>> req for 'Volume Status' to 3 peers
>>>> [2015-12-29 12:57:23.552791] D [MSGID: 0]
>>>> [glusterd-peer-utils.c:200:glusterd_peerinfo_find_by_uuid] 0-management:
>>>> Friend found... state: Peer in Cluster
>>>> [2015-12-29 12:57:23.553395] D [logging.c:1952:_gf_msg_internal]
>>>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>>>> About to flush least recently used log message to disk
>>>> The message "D [MSGID: 0]
>>>> [glusterd-peer-utils.c:200:glusterd_peerinfo_find_by_uuid] 0-management:
>>>> Friend found... state: Peer in Cluster" repeated 2 times between
>>>> [2015-12-29 12:57:23.552791] and [2015-12-29 12:57:23.553087]
>>>> [2015-12-29 12:57:23.553394] D [MSGID: 0]
>>>> [glusterd-op-sm.c:7010:glusterd_op_bricks_select] 0-management: Returning 0
>>>> [2015-12-29 12:57:23.553499] D [MSGID: 0]
>>>> [glusterd-syncop.c:1709:gd_brick_op_phase] 0-management: Sent op req to
>>>> 0 bricks
>>>> [2015-12-29 12:57:23.553535] D [MSGID: 0]
>>>> [glusterd-utils.c:1424:glusterd_volinfo_find] 0-management: Volume live
>>>> found
>>>> [2015-12-29 12:57:23.553556] D [MSGID: 0]
>>>> [glusterd-utils.c:1431:glusterd_volinfo_find] 0-management: Returning 0
>>>> [2015-12-29 12:57:23.553786] D [MSGID: 0]
>>>> [glusterd-snapshot-utils.c:3595:glusterd_is_snapd_enabled] 0-management:
>>>> Key features.uss not present in the dict for volume live
>>>> [2015-12-29 12:57:23.554065] E [MSGID: 106396]
>>>> [glusterd-op-sm.c:2985:_add_task_to_dict] 0-management: Statedump
>>>> operation doesn't have a task_id
>>>> [2015-12-29 12:57:23.554097] E [MSGID: 106060]
>>>> [glusterd-op-sm.c:3055:glusterd_aggregate_task_status] 0-management:
>>>> Failed to add task details to dict
>>>> [2015-12-29 12:57:23.554117] D [MSGID: 0]
>>>> [glusterd-op-sm.c:3360:glusterd_op_status_volume] 0-management: Returning -1
>>>> [2015-12-29 12:57:23.554135] D [MSGID: 0]
>>>> [glusterd-op-sm.c:5642:glusterd_op_commit_perform] 0-management:
>>>> Returning -1
>>>> [2015-12-29 12:57:23.554155] E [MSGID: 106123]
>>>> [glusterd-syncop.c:1404:gd_commit_op_phase] 0-management: Commit of
>>>> operation 'Volume Status' failed on localhost
>>>> [2015-12-29 12:57:23.554240] D [MSGID: 0]
>>>> [glusterd-syncop.c:510:gd_syncop_mgmt_v3_unlock] 0-glusterd: Returning 0
>>>> [2015-12-29 12:57:23.557147] D [logging.c:1952:_gf_msg_internal]
>>>> 0-logging-infra: Buffer overflow of a buffer whose size limit is 5.
>>>> About to flush least recently used log message to disk
>>>> The message "D [MSGID: 0]
>>>> [glusterd-syncop.c:510:gd_syncop_mgmt_v3_unlock] 0-glusterd: Returning
>>>> 0" repeated 2 times between [2015-12-29 12:57:23.554240] and [2015-12-29
>>>> 12:57:23.554319]
>>>> [2015-12-29 12:57:23.557146] D [MSGID: 0]
>>>> [glusterd-syncop.c:1558:gd_unlock_op_phase] 0-management: Sent unlock op
>>>> req for 'Volume Status' to 3 peers. Returning 0
>>>> [2015-12-29 12:57:23.557255] D [MSGID: 0]
>>>> [glusterd-locks.c:669:glusterd_mgmt_v3_unlock] 0-management: Trying to
>>>> release lock of vol live for 305c0f00-0f11-4da3-a470-50b6e6c14976 as
>>>> live_vol
>>>> [2015-12-29 12:57:23.557286] D [MSGID: 0]
>>>> [glusterd-locks.c:714:glusterd_mgmt_v3_unlock] 0-management: Lock for
>>>> vol live successfully released
>>>> [2015-12-29 12:57:23.557312] D [MSGID: 0]
>>>> [glusterd-op-sm.c:255:glusterd_get_txn_opinfo] 0-management:
>>>> Successfully got opinfo for transaction ID :
>>>> fd3671e4-fa7a-4913-a6dd-b37884a3a715
>>>> [2015-12-29 12:57:23.557332] D [MSGID: 0]
>>>> [glusterd-op-sm.c:259:glusterd_get_txn_opinfo] 0-management: Returning 0
>>>> [2015-12-29 12:57:23.557356] D [MSGID: 0]
>>>> [glusterd-op-sm.c:360:glusterd_clear_txn_opinfo] 0-management:
>>>> Successfully cleared opinfo for transaction ID :
>>>> fd3671e4-fa7a-4913-a6dd-b37884a3a715
>>>> [2015-12-29 12:57:23.557420] D [MSGID: 0]
>>>> [glusterd-op-sm.c:364:glusterd_clear_txn_opinfo] 0-management: Returning 0
>>>>
>>>> ==> /var/log/glusterfs/cmd_history.log <==
>>>> [2015-12-29 12:57:23.557447] : volume status : FAILED : Commit failed
>>>> on localhost. Please check the log file for more details.
>>>>
>>>> Dr Christophe Trefois, Dipl.-Ing.
>>>> Technical Specialist / Post-Doc
>>>>
>>>> UNIVERSITÉ DU LUXEMBOURG
>>>>
>>>> LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE
>>>> Campus Belval | House of Biomedicine
>>>> 6, avenue du Swing
>>>> L-4367 Belvaux
>>>> T: +352 46 66 44 6124
>>>> F: +352 46 66 44 6949
>>>> http://www.uni.lu/lcsb
>>>>
>>>> Facebook <https://www.facebook.com/trefex> Twitter
>>>> <https://twitter.com/Trefex> Google Plus
>>>> <https://plus.google.com/+ChristopheTrefois/> Linkedin
>>>> <https://www.linkedin.com/in/trefoischristophe> skype
>>>> <http://skype:Trefex?call>
>>>>
>>>> ----
>>>> This message is confidential and may contain privileged information.
>>>> It is intended for the named recipient only.
>>>> If you receive it in error please notify me and permanently delete the
>>>> original message and any copies.
>>>> ----
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org
>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>>
>>
>>
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users