[Gluster-devel] Volume start fails on recent git

Anand Avati anand.avati at gmail.com
Wed Sep 26 15:53:57 UTC 2012


I checked just now and the same test seems to work fine for me. Have you
verified that both glusterds are on the same commit id?
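One quick way to compare is to collect the build version from each node and diff the strings. A minimal sketch; the hostnames are taken from the report below, the version values shown are hypothetical, and you would obtain them with something like `ssh <host> 'glusterfs --version | head -n1'` (or `git describe` in each build tree):

```shell
# Hypothetical version strings, as if collected from each peer,
# e.g. via: ssh mozart 'glusterfs --version | head -n1'
v_mozart="v3.3.0qa39-457-g5ad96fb"
v_bach="v3.3.0qa39-457-g5ad96fb"

# A mismatch here would explain peers disagreeing on the op ctx format.
if [ "$v_mozart" = "$v_bach" ]; then
    echo "commit ids match"
else
    echo "commit ids differ: $v_mozart vs $v_bach"
fi
```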

Avati


On Wed, Sep 26, 2012 at 1:06 AM, Jan Engelhardt <jengelh at inai.de> wrote:

>
>
> With glusterfs v3.3.0qa39-457-g5ad96fb ("master" branch), starting a
> volume fails. The start command, however, works with v3.3.1qa3
> ("release-3.3" branch).
>
>
> # gluster volume create d0 replica 2 transport tcp \
> mozart:/sync/.gluster-store bach:/sync/.gluster-store
>
> 09:35 mozart:/tmp/glu # gluster volume info
>
> Volume Name: d0
> Type: Replicate
> Volume ID: 09386acc-7149-4c9c-b8f2-e6ed4104435b
> Status: Created
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: mozart:/sync/.gluster-store
> Brick2: bach:/sync/.gluster-store
> 09:35 mozart:/tmp/glu # gluster volume status
> Volume d0 is not started
>
> 09:35 mozart:/tmp/glu # gluster volume start d0
> volume start: d0: failed
> 09:35 mozart:/tmp/glu # tail /var/log/messages
> Sep 26 09:35:50 mozart GlusterFS[19401]: [2012-09-26 07:35:50.067710] C
> [glusterd-op-sm.c:1923:glusterd_op_build_payload] 0-management: volname
> is not present in operation ctx
>
> When glusterd runs with --debug, on `volume start`, it prints:
>
> [2012-09-26 07:49:41.234138] I
> [glusterd-volume-ops.c:261:glusterd_handle_cli_start_volume] 0-management:
> Received start vol req for volume d0
> [2012-09-26 07:49:41.234179] I [glusterd-utils.c:274:glusterd_lock]
> 0-glusterd: Cluster lock held by 93820285-fa3b-4f9e-8510-93da28df5bfd
> [2012-09-26 07:49:41.234191] I
> [glusterd-handler.c:440:glusterd_op_txn_begin] 0-management: Acquired local
> lock
> [2012-09-26 07:49:41.234203] D
> [glusterd-op-sm.c:4521:glusterd_op_sm_inject_event] 0-glusterd: Enqueue
> event: 'GD_OP_EVENT_START_LOCK'
> [2012-09-26 07:49:41.234214] D
> [glusterd-handler.c:458:glusterd_op_txn_begin] 0-management: Returning 0
> [2012-09-26 07:49:41.234224] D [glusterd-op-sm.c:4593:glusterd_op_sm] 0-:
> Dequeued event of type: 'GD_OP_EVENT_START_LOCK'
> [2012-09-26 07:49:41.234297] D
> [glusterd-rpc-ops.c:1629:glusterd_cluster_lock] 0-glusterd: Returning 0
> [2012-09-26 07:49:41.234311] D
> [glusterd-op-sm.c:1618:glusterd_op_ac_send_lock] 0-: Returning with 0
> [2012-09-26 07:49:41.234320] D
> [glusterd-utils.c:4765:glusterd_sm_tr_log_transition_add] 0-glusterd:
> Transitioning from 'Default' to 'Lock sent' due to event
> 'GD_OP_EVENT_START_LOCK'
> [2012-09-26 07:49:41.234332] D
> [glusterd-utils.c:4767:glusterd_sm_tr_log_transition_add] 0-: returning 0
> [2012-09-26 07:49:41.235284] I
> [glusterd-rpc-ops.c:537:glusterd_cluster_lock_cbk] 0-glusterd: Received ACC
> from uuid: 4ea16432-3d76-40e0-b1d2-a4676c60a2b4
> [2012-09-26 07:49:41.235310] D
> [glusterd-utils.c:4125:glusterd_friend_find_by_uuid] 0-glusterd: Friend
> found... state: Peer in Cluster
> [2012-09-26 07:49:41.235323] D
> [glusterd-op-sm.c:4521:glusterd_op_sm_inject_event] 0-glusterd: Enqueue
> event: 'GD_OP_EVENT_RCVD_ACC'
> [2012-09-26 07:49:41.235334] D [glusterd-op-sm.c:4593:glusterd_op_sm] 0-:
> Dequeued event of type: 'GD_OP_EVENT_RCVD_ACC'
> [2012-09-26 07:49:41.235343] D
> [glusterd-op-sm.c:4521:glusterd_op_sm_inject_event] 0-glusterd: Enqueue
> event: 'GD_OP_EVENT_ALL_ACC'
> [2012-09-26 07:49:41.235353] D
> [glusterd-op-sm.c:1767:glusterd_op_ac_rcvd_lock_acc] 0-: Returning 0
> [2012-09-26 07:49:41.235362] D
> [glusterd-utils.c:4765:glusterd_sm_tr_log_transition_add] 0-glusterd:
> Transitioning from 'Lock sent' to 'Lock sent' due to event
> 'GD_OP_EVENT_RCVD_ACC'
> [2012-09-26 07:49:41.235372] D
> [glusterd-utils.c:4767:glusterd_sm_tr_log_transition_add] 0-: returning 0
> [2012-09-26 07:49:41.235381] D [glusterd-op-sm.c:4593:glusterd_op_sm] 0-:
> Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
> [2012-09-26 07:49:41.235391] C
> [glusterd-op-sm.c:1923:glusterd_op_build_payload] 0-management: volname is
> not present in operation ctx
> [2012-09-26 07:49:41.235466] E
> [glusterd-op-sm.c:1968:glusterd_op_ac_send_stage_op] 0-management: Building
> payload failed
> [2012-09-26 07:49:41.235479] D
> [glusterd-op-sm.c:4521:glusterd_op_sm_inject_event] 0-glusterd: Enqueue
> event: 'GD_OP_EVENT_RCVD_RJT'
> [2012-09-26 07:49:41.235489] I
> [glusterd-op-sm.c:2016:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op
> req to 0 peers
> [2012-09-26 07:49:41.235499] D
> [glusterd-op-sm.c:4521:glusterd_op_sm_inject_event] 0-glusterd: Enqueue
> event: 'GD_OP_EVENT_ALL_ACC'
> [2012-09-26 07:49:41.235508] D
> [glusterd-op-sm.c:134:glusterd_op_sm_inject_all_acc] 0-: Returning 0
> [2012-09-26 07:49:41.235516] D
> [glusterd-op-sm.c:2021:glusterd_op_ac_send_stage_op] 0-: Returning with 0
> [2012-09-26 07:49:41.235525] D
> [glusterd-utils.c:4765:glusterd_sm_tr_log_transition_add] 0-glusterd:
> Transitioning from 'Lock sent' to 'Stage op sent' due to event
> 'GD_OP_EVENT_ALL_ACC'
> [2012-09-26 07:49:41.235535] D
> [glusterd-utils.c:4767:glusterd_sm_tr_log_transition_add] 0-: returning 0
> [2012-09-26 07:49:41.235543] D [glusterd-op-sm.c:4593:glusterd_op_sm] 0-:
> Dequeued event of type: 'GD_OP_EVENT_RCVD_RJT'
> [2012-09-26 07:49:41.235552] D
> [glusterd-op-sm.c:4521:glusterd_op_sm_inject_event] 0-glusterd: Enqueue
> event: 'GD_OP_EVENT_ALL_ACK'
> [2012-09-26 07:49:41.235561] D
> [glusterd-op-sm.c:2425:glusterd_op_ac_stage_op_failed] 0-: Returning 0
> [2012-09-26 07:49:41.235570] D
> [glusterd-utils.c:4765:glusterd_sm_tr_log_transition_add] 0-glusterd:
> Transitioning from 'Stage op sent' to 'Stage op failed' due to event
> 'GD_OP_EVENT_RCVD_RJT'
> [2012-09-26 07:49:41.235579] D
> [glusterd-utils.c:4767:glusterd_sm_tr_log_transition_add] 0-: returning 0
> [2012-09-26 07:49:41.235588] D [glusterd-op-sm.c:4593:glusterd_op_sm] 0-:
> Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
> [2012-09-26 07:49:41.235601] D [glusterd-op-sm.c:1577:glusterd_op_ac_none]
> 0-: Returning with 0
> [2012-09-26 07:49:41.235605] D
> [glusterd-utils.c:4765:glusterd_sm_tr_log_transition_add] 0-glusterd:
> Transitioning from 'Stage op failed' to 'Stage op failed' due to event
> 'GD_OP_EVENT_ALL_ACC'
> [2012-09-26 07:49:41.235609] D
> [glusterd-utils.c:4767:glusterd_sm_tr_log_transition_add] 0-: returning 0
> [2012-09-26 07:49:41.235613] D [glusterd-op-sm.c:4593:glusterd_op_sm] 0-:
> Dequeued event of type: 'GD_OP_EVENT_ALL_ACK'
> [2012-09-26 07:49:41.235635] D
> [glusterd-rpc-ops.c:1663:glusterd_cluster_unlock] 0-glusterd: Returning 0
> [2012-09-26 07:49:41.235668] D
> [glusterd-op-sm.c:1665:glusterd_op_ac_send_unlock] 0-: Returning with 0
> [2012-09-26 07:49:41.235673] D
> [glusterd-utils.c:4765:glusterd_sm_tr_log_transition_add] 0-glusterd:
> Transitioning from 'Stage op failed' to 'Unlock sent' due to event
> 'GD_OP_EVENT_ALL_ACK'
> [2012-09-26 07:49:41.235685] D
> [glusterd-utils.c:4767:glusterd_sm_tr_log_transition_add] 0-: returning 0
> [2012-09-26 07:49:41.236502] I
> [glusterd-rpc-ops.c:596:glusterd_cluster_unlock_cbk] 0-glusterd: Received
> ACC from uuid: 4ea16432-3d76-40e0-b1d2-a4676c60a2b4
> [2012-09-26 07:49:41.236528] D
> [glusterd-utils.c:4125:glusterd_friend_find_by_uuid] 0-glusterd: Friend
> found... state: Peer in Cluster
> [2012-09-26 07:49:41.236540] D
> [glusterd-op-sm.c:4521:glusterd_op_sm_inject_event] 0-glusterd: Enqueue
> event: 'GD_OP_EVENT_RCVD_ACC'
> [2012-09-26 07:49:41.236551] D [glusterd-op-sm.c:4593:glusterd_op_sm] 0-:
> Dequeued event of type: 'GD_OP_EVENT_RCVD_ACC'
> [2012-09-26 07:49:41.236560] D
> [glusterd-op-sm.c:4521:glusterd_op_sm_inject_event] 0-glusterd: Enqueue
> event: 'GD_OP_EVENT_ALL_ACC'
> [2012-09-26 07:49:41.236569] D
> [glusterd-op-sm.c:2562:glusterd_op_ac_rcvd_unlock_acc] 0-: Returning 0
> [2012-09-26 07:49:41.236578] D
> [glusterd-utils.c:4765:glusterd_sm_tr_log_transition_add] 0-glusterd:
> Transitioning from 'Unlock sent' to 'Unlock sent' due to event
> 'GD_OP_EVENT_RCVD_ACC'
> [2012-09-26 07:49:41.236588] D
> [glusterd-utils.c:4767:glusterd_sm_tr_log_transition_add] 0-: returning 0
> [2012-09-26 07:49:41.236597] D [glusterd-op-sm.c:4593:glusterd_op_sm] 0-:
> Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
> [2012-09-26 07:49:41.236619] I
> [glusterd-op-sm.c:2630:glusterd_op_txn_complete] 0-glusterd: Cleared local
> lock
> [2012-09-26 07:49:41.236632] E [glusterd-utils.c:5732:glusterd_to_cli]
> 0-glusterd: Failed to get command string
> [2012-09-26 07:49:41.236685] D
> [glusterd-rpc-ops.c:180:glusterd_op_send_cli_response] 0-: Returning 0
> [2012-09-26 07:49:41.236698] D
> [glusterd-op-sm.c:2648:glusterd_op_txn_complete] 0-glusterd: Returning 0
> [2012-09-26 07:49:41.236709] D
> [glusterd-op-sm.c:2661:glusterd_op_ac_unlocked_all] 0-: Returning 0
> volume start: d0: failed
> [2012-09-26 07:49:41.236717] D
> [glusterd-utils.c:4765:glusterd_sm_tr_log_transition_add] 0-glusterd:
> Transitioning from 'Unlock sent' to 'Default' due to event
> 'GD_OP_EVENT_ALL_ACC'
> [2012-09-26 07:49:41.236829] D
> [glusterd-utils.c:4767:glusterd_sm_tr_log_transition_add] 0-: returning 0
> [2012-09-26 07:49:41.237278] D [socket.c:373:__socket_rwv]
> 0-socket.management: EOF on socket
> [2012-09-26 07:49:41.237320] W [socket.c:399:__socket_rwv]
> 0-socket.management: readv failed (No data available)
> [2012-09-26 07:49:41.237339] D [socket.c:2104:socket_event_handler]
> 0-transport: disconnecting now
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>