[Gluster-users] State: Peer Rejected (Connected)
符永涛
yongtaofu at gmail.com
Sat Jan 12 00:39:43 UTC 2013
Peer Rejected is usually caused by inconsistent volume files or
configuration files between the current host and the rest of the
cluster (your log shows exactly this: 'Cksums of volume puppet-bucket
differ').
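A quick way to confirm this is to compare the stored volume
definitions host by host (a rough sketch, not a definitive check;
/var/lib/glusterd/vols/<VOLNAME>/info is the usual glusterd store
location, adjust if your layout differs):

  # run on every peer and compare the sums volume by volume
  md5sum /var/lib/glusterd/vols/*/info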
So which case are you in:
1. you only upgraded gluster on an existing peer?
or
2. you upgraded some other host and tried to add it to the current cluster?
If 1 is true, an easier approach to restore a peer that got messed up
is to keep only the /var/lib/glusterd/glusterd.info file and the
/var/lib/glusterd/peers directory, and make sure those files are
valid. (The nfs and vols directories can be deleted, but make a backup
copy of all the configuration files first.) Then start glusterd on the
current host; nothing else needs to be done, as the gluster volume
definitions will be synced back to it from the other peers. A rough
sketch of these steps follows.
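Something like this (a sketch only; the init script name is the usual
one on RHEL-family systems, and the backup step is not optional):

  service glusterd stop
  cp -a /var/lib/glusterd /var/lib/glusterd.bak   # backup first
  cd /var/lib/glusterd
  # keep only glusterd.info and peers/, remove the rest (vols, nfs, ...)
  ls | grep -vE '^(glusterd\.info|peers)$' | xargs rm -rf
  service glusterd start
  gluster peer status   # volume definitions should sync back over

If the host re-handshakes cleanly, peer status should return to
"Peer in Cluster (Connected)".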
I'm not quite sure which problem you have run into: is one peer messed
up, or the whole cluster?
If 2 is true, then you first have to probe the server into the current
cluster, and the server must not be part of another cluster before you
probe it.
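For example (run from a host that is already in the target cluster;
if the new server was ever part of another pool, detach it from that
pool first with 'gluster peer detach'):

  gluster peer probe <new-server>
  gluster peer status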
2013/1/12, YANG ChengFu <youngseph at gmail.com>:
> furthermore, when I stopped gluster and restarted glusterfs, I have
> the following in the log:
>
> ==> etc-glusterfs-glusterd.vol.log <==
> [2013-01-11 16:39:55.438506] I [glusterfsd.c:1666:main]
> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.3.1
> [2013-01-11 16:39:55.440098] I [glusterd.c:807:init] 0-management: Using
> /var/lib/glusterd as working directory
> [2013-01-11 16:39:55.440797] C [rdma.c:4102:gf_rdma_init]
> 0-rpc-transport/rdma: Failed to get IB devices
> [2013-01-11 16:39:55.440859] E [rdma.c:4993:init] 0-rdma.management: Failed
> to initialize IB Device
> [2013-01-11 16:39:55.440881] E [rpc-transport.c:316:rpc_transport_load]
> 0-rpc-transport: 'rdma' initialization failed
> [2013-01-11 16:39:55.440901] W [rpcsvc.c:1356:rpcsvc_transport_create]
> 0-rpc-service: cannot create listener, initing the transport failed
> [2013-01-11 16:39:55.440992] I [glusterd.c:95:glusterd_uuid_init]
> 0-glusterd: retrieved UUID: eece061b-1cd0-4f30-ad17-61809297aba9
> [2013-01-11 16:39:56.050996] E
> [glusterd-store.c:2080:glusterd_store_retrieve_volume] 0-: Unknown key:
> brick-0
> [2013-01-11 16:39:56.051041] E
> [glusterd-store.c:2080:glusterd_store_retrieve_volume] 0-: Unknown key:
> brick-1
> [2013-01-11 16:39:56.235444] E
> [glusterd-store.c:2080:glusterd_store_retrieve_volume] 0-: Unknown key:
> brick-0
> [2013-01-11 16:39:56.235482] E
> [glusterd-store.c:2080:glusterd_store_retrieve_volume] 0-: Unknown key:
> brick-1
> [2013-01-11 16:39:56.235810] E
> [glusterd-store.c:2080:glusterd_store_retrieve_volume] 0-: Unknown key:
> brick-0
> [2013-01-11 16:39:56.235831] E
> [glusterd-store.c:2080:glusterd_store_retrieve_volume] 0-: Unknown key:
> brick-1
> [2013-01-11 16:39:56.236277] I [rpc-clnt.c:968:rpc_clnt_connection_init]
> 0-management: setting frame-timeout to 600
> [2013-01-11 16:39:56.236843] I
> [glusterd-handler.c:2227:glusterd_friend_add] 0-management: connect
> returned 0
> [2013-01-11 16:39:56.241266] E
> [glusterd-store.c:2586:glusterd_resolve_all_bricks] 0-glusterd: resolve
> brick failed in restore
> [2013-01-11 16:39:56.243958] E [glusterd-utils.c:3418:glusterd_brick_start]
> 0-glusterd: cannot resolve brick: irene.mdc:/opt/gluster-data/puppet/ssl
> [2013-01-11 16:39:56.247827] E [glusterd-utils.c:3418:glusterd_brick_start]
> 0-glusterd: cannot resolve brick: irene.mdc:/opt/gluster-data/puppet/dist
> [2013-01-11 16:39:56.251832] E [glusterd-utils.c:3418:glusterd_brick_start]
> 0-glusterd: cannot resolve brick: irene.mdc:/opt/gluster-data/puppet/bucket
> [2013-01-11 16:39:56.258909] I [rpc-clnt.c:968:rpc_clnt_connection_init]
> 0-management: setting frame-timeout to 600
>
> ==> nfs.log.1 <==
> [2013-01-11 16:39:56.259055] W [socket.c:410:__socket_keepalive] 0-socket:
> failed to set keep idle on socket 7
> [2013-01-11 16:39:56.259108] W [socket.c:1876:socket_server_event_handler]
> 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
> [2013-01-11 16:39:56.259154] W [socket.c:410:__socket_keepalive] 0-socket:
> failed to set keep idle on socket 8
> [2013-01-11 16:39:56.259172] W [socket.c:1876:socket_server_event_handler]
> 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>
> ==> etc-glusterfs-glusterd.vol.log <==
> [2013-01-11 16:39:56.266390] I [rpc-clnt.c:968:rpc_clnt_connection_init]
> 0-management: setting frame-timeout to 600
>
> ==> glustershd.log.1 <==
> [2013-01-11 16:39:56.266520] W [socket.c:410:__socket_keepalive] 0-socket:
> failed to set keep idle on socket 7
> [2013-01-11 16:39:56.266562] W [socket.c:1876:socket_server_event_handler]
> 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>
> ==> etc-glusterfs-glusterd.vol.log <==
> Given volfile:
> +------------------------------------------------------------------------------+
> 1: volume management
> 2: type mgmt/glusterd
> 3: option working-directory /var/lib/glusterd
> 4: option transport-type socket,rdma
> 5: option transport.socket.keepalive-time 10
> 6: option transport.socket.keepalive-interval 2
> 7: option transport.socket.read-fail-log off
> 8: end-volume
>
> +------------------------------------------------------------------------------+
>
> ==> glustershd.log.1 <==
> [2013-01-11 16:39:56.266610] W [socket.c:410:__socket_keepalive] 0-socket:
> failed to set keep idle on socket 8
> [2013-01-11 16:39:56.266624] W [socket.c:1876:socket_server_event_handler]
> 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
>
> ==> etc-glusterfs-glusterd.vol.log <==
> [2013-01-11 16:39:56.267030] I
> [glusterd-handshake.c:397:glusterd_set_clnt_mgmt_program] 0-: Using Program
> glusterd mgmt, Num (1238433), Version (2)
> [2013-01-11 16:39:56.267053] I
> [glusterd-handshake.c:403:glusterd_set_clnt_mgmt_program] 0-: Using Program
> Peer mgmt, Num (1238437), Version (2)
>
> ==> nfs.log.1 <==
> [2013-01-11 16:39:58.148908] W [nfs.c:735:nfs_init_state] 1-nfs:
> /sbin/rpc.statd not found. Disabling NLM
>
> ==> etc-glusterfs-glusterd.vol.log <==
> [2013-01-11 16:39:58.149702] I
> [glusterd-handler.c:1486:glusterd_handle_incoming_friend_req] 0-glusterd:
> Received probe from uuid: 184a81f4-ff0f-48d6-adb8-798b98957b1a
> [2013-01-11 16:39:58.149818] E
> [glusterd-utils.c:1926:glusterd_compare_friend_volume] 0-: Cksums of volume
> puppet-bucket differ. local cksum = 1273524870, remote cksum = 1932840611
> [2013-01-11 16:39:58.149858] I
> [glusterd-handler.c:2395:glusterd_xfer_friend_add_resp] 0-glusterd:
> Responded to bastille.mdc (0), ret: 0
>
> ==> nfs.log.1 <==
> [2013-01-11 16:39:58.179450] E [socket.c:333:__socket_server_bind]
> 1-socket.nfs-server: binding to failed: Address already in use
> [2013-01-11 16:39:58.179512] E [socket.c:336:__socket_server_bind]
> 1-socket.nfs-server: Port is already in use
> [2013-01-11 16:39:58.179535] W [rpcsvc.c:1363:rpcsvc_transport_create]
> 1-rpc-service: listening on transport failed
> [2013-01-11 16:39:58.179663] E
> [rpcsvc.c:1135:rpcsvc_program_register_portmap] 1-rpc-service: Could not
> register with portmap
> [2013-01-11 16:39:58.179710] E [socket.c:333:__socket_server_bind]
> 1-socket.nfs-server: binding to failed: Address already in use
> [2013-01-11 16:39:58.179727] E [socket.c:336:__socket_server_bind]
> 1-socket.nfs-server: Port is already in use
> [2013-01-11 16:39:58.179743] W [rpcsvc.c:1363:rpcsvc_transport_create]
> 1-rpc-service: listening on transport failed
> [2013-01-11 16:39:58.179815] E
> [rpcsvc.c:1135:rpcsvc_program_register_portmap] 1-rpc-service: Could not
> register with portmap
> [2013-01-11 16:39:58.180193] E [socket.c:333:__socket_server_bind]
> 1-socket.nfs-server: binding to failed: Address already in use
> [2013-01-11 16:39:58.180214] E [socket.c:336:__socket_server_bind]
> 1-socket.nfs-server: Port is already in use
> [2013-01-11 16:39:58.180230] W [rpcsvc.c:1363:rpcsvc_transport_create]
> 1-rpc-service: listening on transport failed
> [2013-01-11 16:39:58.180300] E
> [rpcsvc.c:1135:rpcsvc_program_register_portmap] 1-rpc-service: Could not
> register with portmap
> [2013-01-11 16:39:58.180319] I [nfs.c:821:init] 1-nfs: NFS service started
> [2013-01-11 16:39:58.186245] W [graph.c:316:_log_if_unknown_option]
> 1-nfs-server: option 'rpc-auth.auth-glusterfs' is not recognized
> [2013-01-11 16:39:58.186346] W [graph.c:316:_log_if_unknown_option]
> 1-nfs-server: option 'rpc-auth-allow-insecure' is not recognized
> [2013-01-11 16:39:58.186366] W [graph.c:316:_log_if_unknown_option]
> 1-nfs-server: option 'transport-type' is not recognized
> [2013-01-11 16:39:58.186400] I [client.c:2142:notify]
> 1-puppet-ssl-client-0: parent translators are ready, attempting connect on
> transport
> [2013-01-11 16:39:58.187286] I [client.c:2142:notify]
> 1-puppet-ssl-client-1: parent translators are ready, attempting connect on
> transport
> [2013-01-11 16:39:58.188173] I [client.c:2142:notify]
> 1-puppet-dist-client-0: parent translators are ready, attempting connect on
> transport
> [2013-01-11 16:39:58.189031] I [client.c:2142:notify]
> 1-puppet-dist-client-1: parent translators are ready, attempting connect on
> transport
> [2013-01-11 16:39:58.189703] I [client.c:2142:notify]
> 1-puppet-bucket-client-0: parent translators are ready, attempting connect
> on transport
> [2013-01-11 16:39:58.190559] I [client.c:2142:notify]
> 1-puppet-bucket-client-1: parent translators are ready, attempting connect
> on transport
> Given volfile:
> +------------------------------------------------------------------------------+
> 1: volume puppet-bucket-client-0
> 2: type protocol/client
> 3: option remote-host sandy.mdc
> 4: option remote-subvolume /opt/gluster-data/snake-puppet/bucket
> 5: option transport-type tcp
> 6: end-volume
> 7:
> 8: volume puppet-bucket-client-1
> 9: type protocol/client
> 10: option remote-host irene.mdc
> 11: option remote-subvolume /opt/gluster-data/puppet/bucket
> 12: option transport-type tcp
> 13: end-volume
> 14:
> 15: volume puppet-bucket-replicate-0
> 16: type cluster/replicate
> 17: subvolumes puppet-bucket-client-0 puppet-bucket-client-1
> 18: end-volume
> 19:
> 20: volume puppet-bucket
> 21: type debug/io-stats
> 22: option latency-measurement off
> 23: option count-fop-hits off
> 24: subvolumes puppet-bucket-replicate-0
> 25: end-volume
> 26:
> 27: volume puppet-dist-client-0
> 28: type protocol/client
> 29: option remote-host sandy.mdc
> 30: option remote-subvolume /opt/gluster-data/snake-puppet/dist
> 31: option transport-type tcp
> 32: end-volume
> 33:
> 34: volume puppet-dist-client-1
> 35: type protocol/client
> 36: option remote-host irene.mdc
> 37: option remote-subvolume /opt/gluster-data/puppet/dist
> 38: option transport-type tcp
> 39: end-volume
> 40:
> 41: volume puppet-dist-replicate-0
> 42: type cluster/replicate
> 43: option data-self-heal-algorithm full
> 44: subvolumes puppet-dist-client-0 puppet-dist-client-1
> 45: end-volume
> 46:
> 47: volume puppet-dist
> 48: type debug/io-stats
> 49: option latency-measurement off
> 50: option count-fop-hits off
> 51: subvolumes puppet-dist-replicate-0
> 52: end-volume
> 53:
> 54: volume puppet-ssl-client-0
> 55: type protocol/client
> 56: option remote-host sandy.mdc
> 57: option remote-subvolume /opt/gluster-data/snake-puppet/ssl
> 58: option transport-type tcp
> 59: end-volume
> 60:
> 61: volume puppet-ssl-client-1
> 62: type protocol/client
> 63: option remote-host irene.mdc
> 64: option remote-subvolume /opt/gluster-data/puppet/ssl
> 65: option transport-type tcp
> 66: end-volume
> 67:
> 68: volume puppet-ssl-replicate-0
> 69: type cluster/replicate
> 70: option metadata-change-log on
> 71: option data-self-heal-algorithm full
> 72: subvolumes puppet-ssl-client-0 puppet-ssl-client-1
> 73: end-volume
> 74:
> 75: volume puppet-ssl
> 76: type debug/io-stats
> 77: option latency-measurement off
> 78: option count-fop-hits off
> 79: subvolumes puppet-ssl-replicate-0
> 80: end-volume
> 81:
> 82: volume nfs-server
> 83: type nfs/server
> 84: option nfs.dynamic-volumes on
> 85: option nfs.nlm on
> 86: option rpc-auth.addr.puppet-ssl.allow *
> 87: option nfs3.puppet-ssl.volume-id
> bb2ffdd5-f00c-4016-ab07-301a6ede3042
> 88: option rpc-auth.addr.puppet-dist.allow *
> 89: option nfs3.puppet-dist.volume-id
> 376220d6-dcdd-4f3f-9809-397046a78f5a
> 90: option rpc-auth.addr.puppet-bucket.allow *
> 91: option nfs3.puppet-bucket.volume-id
> 3a7e146c-7c37-41ea-baa5-5262c79b1232
> 92: subvolumes puppet-ssl puppet-dist puppet-bucket
> 93: end-volume
>
> +------------------------------------------------------------------------------+
> [2013-01-11 16:39:58.191727] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
> 1-puppet-ssl-client-1: changing port to 24010 (from 0)
> [2013-01-11 16:39:58.191806] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
> 1-puppet-dist-client-1: changing port to 24012 (from 0)
> [2013-01-11 16:39:58.191844] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
> 1-puppet-bucket-client-1: changing port to 24014 (from 0)
> [2013-01-11 16:39:58.191881] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
> 1-puppet-ssl-client-0: changing port to 24012 (from 0)
> [2013-01-11 16:39:58.191974] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
> 1-puppet-dist-client-0: changing port to 24010 (from 0)
> [2013-01-11 16:39:58.192024] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
> 1-puppet-bucket-client-0: changing port to 24014 (from 0)
>
> ==> glustershd.log.1 <==
> [2013-01-11 16:39:58.381647] I [graph.c:241:gf_add_cmdline_options]
> 0-puppet-ssl-replicate-0: adding option 'node-uuid' for volume
> 'puppet-ssl-replicate-0' with value 'eece061b-1cd0-4f30-ad17-61809297aba9'
> [2013-01-11 16:39:58.381673] I [graph.c:241:gf_add_cmdline_options]
> 0-puppet-dist-replicate-0: adding option 'node-uuid' for volume
> 'puppet-dist-replicate-0' with value 'eece061b-1cd0-4f30-ad17-61809297aba9'
> [2013-01-11 16:39:58.381686] I [graph.c:241:gf_add_cmdline_options]
> 0-puppet-bucket-replicate-0: adding option 'node-uuid' for volume
> 'puppet-bucket-replicate-0' with value
> 'eece061b-1cd0-4f30-ad17-61809297aba9'
> [2013-01-11 16:39:58.390396] I [client.c:2142:notify]
> 1-puppet-ssl-client-0: parent translators are ready, attempting connect on
> transport
> [2013-01-11 16:39:58.391487] I [client.c:2142:notify]
> 1-puppet-ssl-client-1: parent translators are ready, attempting connect on
> transport
> [2013-01-11 16:39:58.392209] I [client.c:2142:notify]
> 1-puppet-dist-client-0: parent translators are ready, attempting connect on
> transport
> [2013-01-11 16:39:58.392995] I [client.c:2142:notify]
> 1-puppet-dist-client-1: parent translators are ready, attempting connect on
> transport
> [2013-01-11 16:39:58.393804] I [client.c:2142:notify]
> 1-puppet-bucket-client-0: parent translators are ready, attempting connect
> on transport
> [2013-01-11 16:39:58.394598] I [client.c:2142:notify]
> 1-puppet-bucket-client-1: parent translators are ready, attempting connect
> on transport
> Given volfile:
> +------------------------------------------------------------------------------+
> 1: volume puppet-bucket-client-0
> 2: type protocol/client
> 3: option remote-host sandy.mdc
> 4: option remote-subvolume /opt/gluster-data/snake-puppet/bucket
> 5: option transport-type tcp
> 6: end-volume
> 7:
> 8: volume puppet-bucket-client-1
> 9: type protocol/client
> 10: option remote-host irene.mdc
> 11: option remote-subvolume /opt/gluster-data/puppet/bucket
> 12: option transport-type tcp
> 13: end-volume
> 14:
> 15: volume puppet-bucket-replicate-0
> 16: type cluster/replicate
> 17: option background-self-heal-count 0
> 18: option metadata-self-heal on
> 19: option data-self-heal on
> 20: option entry-self-heal on
> 21: option self-heal-daemon on
> 22: option iam-self-heal-daemon yes
> 23: subvolumes puppet-bucket-client-0 puppet-bucket-client-1
> 24: end-volume
> 25:
> 26: volume puppet-dist-client-0
> 27: type protocol/client
> 28: option remote-host sandy.mdc
> 29: option remote-subvolume /opt/gluster-data/snake-puppet/dist
> 30: option transport-type tcp
> 31: end-volume
> 32:
> 33: volume puppet-dist-client-1
> 34: type protocol/client
> 35: option remote-host irene.mdc
> 36: option remote-subvolume /opt/gluster-data/puppet/dist
> 37: option transport-type tcp
> 38: end-volume
> 39:
> 40: volume puppet-dist-replicate-0
> 41: type cluster/replicate
> 42: option background-self-heal-count 0
> 43: option metadata-self-heal on
> 44: option data-self-heal on
> 45: option entry-self-heal on
> 46: option self-heal-daemon on
> 47: option data-self-heal-algorithm full
> 48: option iam-self-heal-daemon yes
> 49: subvolumes puppet-dist-client-0 puppet-dist-client-1
> 50: end-volume
> 51:
> 52: volume puppet-ssl-client-0
> 53: type protocol/client
> 54: option remote-host sandy.mdc
> 55: option remote-subvolume /opt/gluster-data/snake-puppet/ssl
> 56: option transport-type tcp
> 57: end-volume
> 58:
> 59: volume puppet-ssl-client-1
> 60: type protocol/client
> 61: option remote-host irene.mdc
> 62: option remote-subvolume /opt/gluster-data/puppet/ssl
> 63: option transport-type tcp
> 64: end-volume
> 65:
> 66: volume puppet-ssl-replicate-0
> 67: type cluster/replicate
> 68: option background-self-heal-count 0
> 69: option metadata-self-heal on
> 70: option data-self-heal on
> 71: option entry-self-heal on
> 72: option self-heal-daemon on
> 73: option metadata-change-log on
> 74: option data-self-heal-algorithm full
> 75: option iam-self-heal-daemon yes
> 76: subvolumes puppet-ssl-client-0 puppet-ssl-client-1
> 77: end-volume
> 78:
> 79: volume glustershd
> 80: type debug/io-stats
> 81: subvolumes puppet-ssl-replicate-0 puppet-dist-replicate-0
> puppet-bucket-replicate-0
> 82: end-volume
>
> +------------------------------------------------------------------------------+
> [2013-01-11 16:39:58.395877] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
> 1-puppet-ssl-client-1: changing port to 24010 (from 0)
> [2013-01-11 16:39:58.395978] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
> 1-puppet-bucket-client-0: changing port to 24014 (from 0)
> [2013-01-11 16:39:58.396048] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
> 1-puppet-dist-client-1: changing port to 24012 (from 0)
> [2013-01-11 16:39:58.396106] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
> 1-puppet-bucket-client-1: changing port to 24014 (from 0)
> [2013-01-11 16:39:58.396161] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
> 1-puppet-ssl-client-0: changing port to 24012 (from 0)
> [2013-01-11 16:39:58.396223] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
> 1-puppet-dist-client-0: changing port to 24010 (from 0)
>
> ==> nfs.log.1 <==
> [2013-01-11 16:40:02.148931] I
> [client-handshake.c:1636:select_server_supported_programs]
> 1-puppet-ssl-client-1: Using Program GlusterFS 3.3.1, Num (1298437),
> Version (330)
> [2013-01-11 16:40:02.149212] I
> [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-ssl-client-1:
> Connected to 10.136.200.16:24010, attached to remote volume
> '/opt/gluster-data/puppet/ssl'.
> [2013-01-11 16:40:02.149238] I
> [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-ssl-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
> [2013-01-11 16:40:02.149289] I [afr-common.c:3628:afr_notify]
> 1-puppet-ssl-replicate-0: Subvolume 'puppet-ssl-client-1' came back up;
> going online.
> [2013-01-11 16:40:02.149382] I
> [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-ssl-client-1:
> Server lk version = 1
> [2013-01-11 16:40:02.149711] I
> [client-handshake.c:1636:select_server_supported_programs]
> 1-puppet-dist-client-1: Using Program GlusterFS 3.3.1, Num (1298437),
> Version (330)
> [2013-01-11 16:40:02.149931] I
> [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-dist-client-1:
> Connected to 10.136.200.16:24012, attached to remote volume
> '/opt/gluster-data/puppet/dist'.
> [2013-01-11 16:40:02.149951] I
> [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-dist-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
> [2013-01-11 16:40:02.149995] I [afr-common.c:3628:afr_notify]
> 1-puppet-dist-replicate-0: Subvolume 'puppet-dist-client-1' came back up;
> going online.
> [2013-01-11 16:40:02.150086] I
> [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-dist-client-1:
> Server lk version = 1
> [2013-01-11 16:40:02.150727] I
> [client-handshake.c:1636:select_server_supported_programs]
> 1-puppet-bucket-client-1: Using Program GlusterFS 3.3.1, Num (1298437),
> Version (330)
> [2013-01-11 16:40:02.151013] I
> [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-bucket-client-1:
> Connected to 10.136.200.16:24014, attached to remote volume
> '/opt/gluster-data/puppet/bucket'.
> [2013-01-11 16:40:02.151042] I
> [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-bucket-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
> [2013-01-11 16:40:02.151091] I [afr-common.c:3628:afr_notify]
> 1-puppet-bucket-replicate-0: Subvolume 'puppet-bucket-client-1' came back
> up; going online.
> [2013-01-11 16:40:02.151187] I
> [client-handshake.c:453:client_set_lk_version_cbk]
> 1-puppet-bucket-client-1: Server lk version = 1
> [2013-01-11 16:40:02.151623] I
> [client-handshake.c:1636:select_server_supported_programs]
> 1-puppet-ssl-client-0: Using Program GlusterFS 3.3.1, Num (1298437),
> Version (330)
> [2013-01-11 16:40:02.151924] I
> [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-ssl-client-0:
> Connected to 10.136.200.27:24012, attached to remote volume
> '/opt/gluster-data/snake-puppet/ssl'.
> [2013-01-11 16:40:02.151950] I
> [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-ssl-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
> [2013-01-11 16:40:02.152166] I
> [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-ssl-client-0:
> Server lk version = 1
> [2013-01-11 16:40:02.152472] I
> [afr-common.c:1965:afr_set_root_inode_on_first_lookup]
> 1-puppet-ssl-replicate-0: added root inode
> [2013-01-11 16:40:02.152566] I
> [client-handshake.c:1636:select_server_supported_programs]
> 1-puppet-dist-client-0: Using Program GlusterFS 3.3.1, Num (1298437),
> Version (330)
> [2013-01-11 16:40:02.152807] I
> [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-dist-client-0:
> Connected to 10.136.200.27:24010, attached to remote volume
> '/opt/gluster-data/snake-puppet/dist'.
> [2013-01-11 16:40:02.152827] I
> [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-dist-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
> [2013-01-11 16:40:02.152991] I
> [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-dist-client-0:
> Server lk version = 1
> [2013-01-11 16:40:02.153187] I
> [afr-common.c:1965:afr_set_root_inode_on_first_lookup]
> 1-puppet-dist-replicate-0: added root inode
> [2013-01-11 16:40:02.153403] I
> [client-handshake.c:1636:select_server_supported_programs]
> 1-puppet-bucket-client-0: Using Program GlusterFS 3.3.1, Num (1298437),
> Version (330)
> [2013-01-11 16:40:02.153644] I
> [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-bucket-client-0:
> Connected to 10.136.200.27:24014, attached to remote volume
> '/opt/gluster-data/snake-puppet/bucket'.
> [2013-01-11 16:40:02.153665] I
> [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-bucket-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
> [2013-01-11 16:40:02.153797] I
> [client-handshake.c:453:client_set_lk_version_cbk]
> 1-puppet-bucket-client-0: Server lk version = 1
> [2013-01-11 16:40:02.154054] I
> [afr-common.c:1965:afr_set_root_inode_on_first_lookup]
> 1-puppet-bucket-replicate-0: added root inode
>
> ==> glustershd.log.1 <==
> [2013-01-11 16:40:02.381825] I
> [client-handshake.c:1636:select_server_supported_programs]
> 1-puppet-ssl-client-1: Using Program GlusterFS 3.3.1, Num (1298437),
> Version (330)
> [2013-01-11 16:40:02.382098] I
> [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-ssl-client-1:
> Connected to 10.136.200.16:24010, attached to remote volume
> '/opt/gluster-data/puppet/ssl'.
> [2013-01-11 16:40:02.382119] I
> [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-ssl-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
> [2013-01-11 16:40:02.382203] I [afr-common.c:3628:afr_notify]
> 1-puppet-ssl-replicate-0: Subvolume 'puppet-ssl-client-1' came back up;
> going online.
> [2013-01-11 16:40:02.382321] I
> [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-ssl-client-1:
> Server lk version = 1
> [2013-01-11 16:40:02.382889] I
> [client-handshake.c:1636:select_server_supported_programs]
> 1-puppet-bucket-client-0: Using Program GlusterFS 3.3.1, Num (1298437),
> Version (330)
> [2013-01-11 16:40:02.383190] I
> [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-bucket-client-0:
> Connected to 10.136.200.27:24014, attached to remote volume
> '/opt/gluster-data/snake-puppet/bucket'.
> [2013-01-11 16:40:02.383213] I
> [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-bucket-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
> [2013-01-11 16:40:02.383284] I [afr-common.c:3628:afr_notify]
> 1-puppet-bucket-replicate-0: Subvolume 'puppet-bucket-client-0' came back
> up; going online.
> [2013-01-11 16:40:02.384825] I
> [client-handshake.c:453:client_set_lk_version_cbk]
> 1-puppet-bucket-client-0: Server lk version = 1
> [2013-01-11 16:40:02.384999] I
> [client-handshake.c:1636:select_server_supported_programs]
> 1-puppet-dist-client-1: Using Program GlusterFS 3.3.1, Num (1298437),
> Version (330)
> [2013-01-11 16:40:02.385614] I
> [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-dist-client-1:
> Connected to 10.136.200.16:24012, attached to remote volume
> '/opt/gluster-data/puppet/dist'.
> [2013-01-11 16:40:02.385646] I
> [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-dist-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
> [2013-01-11 16:40:02.385725] I [afr-common.c:3628:afr_notify]
> 1-puppet-dist-replicate-0: Subvolume 'puppet-dist-client-1' came back up;
> going online.
> [2013-01-11 16:40:02.386268] I
> [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-dist-client-1:
> Server lk version = 1
> [2013-01-11 16:40:02.386381] I
> [client-handshake.c:1636:select_server_supported_programs]
> 1-puppet-bucket-client-1: Using Program GlusterFS 3.3.1, Num (1298437),
> Version (330)
> [2013-01-11 16:40:02.386710] I
> [client-handshake.c:1636:select_server_supported_programs]
> 1-puppet-ssl-client-0: Using Program GlusterFS 3.3.1, Num (1298437),
> Version (330)
> [2013-01-11 16:40:02.386817] I
> [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-bucket-client-1:
> Connected to 10.136.200.16:24014, attached to remote volume
> '/opt/gluster-data/puppet/bucket'.
> [2013-01-11 16:40:02.386842] I
> [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-bucket-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
> [2013-01-11 16:40:02.387051] I
> [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-ssl-client-0:
> Connected to 10.136.200.27:24012, attached to remote volume
> '/opt/gluster-data/snake-puppet/ssl'.
> [2013-01-11 16:40:02.387087] I
> [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-ssl-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
> [2013-01-11 16:40:02.387222] I
> [client-handshake.c:453:client_set_lk_version_cbk]
> 1-puppet-bucket-client-1: Server lk version = 1
> [2013-01-11 16:40:02.387345] I
> [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-ssl-client-0:
> Server lk version = 1
> [2013-01-11 16:40:02.387427] I
> [client-handshake.c:1636:select_server_supported_programs]
> 1-puppet-dist-client-0: Using Program GlusterFS 3.3.1, Num (1298437),
> Version (330)
> [2013-01-11 16:40:02.388029] I
> [client-handshake.c:1433:client_setvolume_cbk] 1-puppet-dist-client-0:
> Connected to 10.136.200.27:24010, attached to remote volume
> '/opt/gluster-data/snake-puppet/dist'.
> [2013-01-11 16:40:02.388058] I
> [client-handshake.c:1445:client_setvolume_cbk] 1-puppet-dist-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
> [2013-01-11 16:40:02.389682] I
> [client-handshake.c:453:client_set_lk_version_cbk] 1-puppet-dist-client-0:
> Server lk version = 1
> ^C
>
>
> --
> Yang
> Orange Key: 35745318S1
>
>
> On Fri, Jan 11, 2013 at 11:00 AM, YANG ChengFu <youngseph at gmail.com> wrote:
>
>> Hello Fu Yong Tao,
>>
>> thanks for your suggestion; after I did your steps, I got the following:
>>
>> gluster> volume sync new-host
>> please delete all the volumes before full sync
>> gluster> peer status
>> Number of Peers: 1
>>
>> Hostname: 10.136.200.27
>> Uuid: 184a81f4-ff0f-48d6-adb8-798b98957b1a
>> State: Accepted peer request (Connected)
>>
>> I still cannot put the server in the trusted pool!
>>
>> --
>> Yang
>> Orange Key: 35745318S1
>>
>>
>> On Fri, Jan 11, 2013 at 5:22 AM, 符永涛 <yongtaofu at gmail.com> wrote:
>>
>>> Reinstalling or upgrading a gluster server is a dangerous task;
>>> before doing it, it's better to back up /etc/glusterfs and
>>> /var/lib/glusterd.
>>>
>>> /var/lib/glusterd/glusterd.info contains the UUID of the current
>>> server, and /var/lib/glusterd/peers contains its peers. Make sure
>>> both of those are correct.
>>>
>>> If the other servers' status is fine, then with only the above
>>> configuration files in place you can start the current host, and
>>> the gluster volume files will automatically sync to it.
>>>
>>> Always remember to back up.
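>>> For example, a minimal sketch (the archive path and name are
>>> arbitrary; ideally run it while glusterd is stopped):
>>>
>>>   tar czf /root/gluster-config-$(date +%F).tar.gz \
>>>       /etc/glusterfs /var/lib/glusterd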
>>>
>>> 2013/1/11, YANG ChengFu <youngseph at gmail.com>:
>>> > Hello,
>>> >
>>> > I upgraded glusterfs from 3.0.5 to 3.3.1. Before I did it, I had
>>> > two other 3.3.1 hosts (new-host) ready and had made a cluster of
>>> > them.
>>> >
>>> > After I upgraded the old hosts, I tried to add them to the
>>> > cluster and got State: Peer Rejected (Connected). It is probably
>>> > related to the volumes that already existed on the old hosts,
>>> > but I have tried stopping glusterd, removing everything from the
>>> > old host (such as /etc/glusterd, /etc/glusterfs and
>>> > /var/lib/glusterd/), starting glusterd again, and re-adding it
>>> > to the cluster; the problem is still there.
>>> >
>>> > I also tried 'volume sync', but it failed with the following
>>> > error message:
>>> >
>>> > gluster> volume sync new-hosts
>>> > please delete all the volumes before full sync
>>> >
>>> > I cannot do that, or I will lose all my data!
>>> >
>>> > The funniest thing I found: even though the peer status is
>>> > rejected, I can still mount the volume from the old host.
>>> >
>>> > Any ideas?
>>> >
>>> > --
>>> > Yang
>>> > Orange Key: 35745318S1
>>> >
>>>
>>>
>>> --
>>> 符永涛
>>>
>>
>>
>
--
符永涛