[Gluster-users] [ovirt-users] Centos 7.1 failed to start glusterd after upgrading to ovirt 3.6

Stefano Danzi s.danzi at hawai.it
Fri Nov 6 08:27:53 UTC 2015


Hi!
I have only one node (a test system), I didn't change any IP address, and 
the entry is in /etc/hosts.
I think that now glusterd starts before the network is up.
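
If that's the cause, one possible workaround (only a sketch, assuming the
stock glusterd.service unit on CentOS 7 with NetworkManager) would be a
systemd drop-in that delays glusterd until the network is really online:

    # /etc/systemd/system/glusterd.service.d/wait-online.conf
    # (the file name is just an example; any *.conf in this directory is read)
    [Unit]
    Wants=network-online.target
    After=network-online.target

and then making network-online.target actually wait for connectivity:

    systemctl enable NetworkManager-wait-online.service
    systemctl daemon-reload

I haven't tried this yet, so it may need adjusting.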

On 06/11/2015 06:32, Atin Mukherjee wrote:
>>> [glusterd-store.c:4243:glusterd_resolve_all_bricks] 0-glusterd:
>>> resolve brick failed in restore
> The above log is the culprit here. Generally this function fails when
> GlusterD fails to resolve the associated host of a brick. Have any of the
> nodes undergone an IP change during the upgrade process?
>
> ~Atin
>
> On 11/06/2015 09:59 AM, Sahina Bose wrote:
>> Did you upgrade all the nodes too?
>> Are some of your nodes unreachable?
>>
>> Adding gluster-users for the glusterd error.
>>
>> On 11/06/2015 12:00 AM, Stefano Danzi wrote:
>>> After upgrading oVirt from 3.5 to 3.6, glusterd fail to start when the
>>> host boot.
>>> Manual start of service after boot works fine.
>>>
>>> gluster log:
>>>
>>> [2015-11-04 13:37:55.360876] I [MSGID: 100030]
>>> [glusterfsd.c:2318:main] 0-/usr/sbin/glusterd: Started running
>>> /usr/sbin/glusterd version 3.7.5 (args: /usr/sbin/glusterd -p
>>> /var/run/glusterd.pid)
>>> [2015-11-04 13:37:55.447413] I [MSGID: 106478] [glusterd.c:1350:init]
>>> 0-management: Maximum allowed open file descriptors set to 65536
>>> [2015-11-04 13:37:55.447477] I [MSGID: 106479] [glusterd.c:1399:init]
>>> 0-management: Using /var/lib/glusterd as working directory
>>> [2015-11-04 13:37:55.464540] W [MSGID: 103071]
>>> [rdma.c:4592:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event
>>> channel creation failed [No matching device]
>>> [2015-11-04 13:37:55.464559] W [MSGID: 103055] [rdma.c:4899:init]
>>> 0-rdma.management: Failed to initialize IB Device
>>> [2015-11-04 13:37:55.464566] W
>>> [rpc-transport.c:359:rpc_transport_load] 0-rpc-transport: 'rdma'
>>> initialization failed
>>> [2015-11-04 13:37:55.464616] W [rpcsvc.c:1597:rpcsvc_transport_create]
>>> 0-rpc-service: cannot create listener, initing the transport failed
>>> [2015-11-04 13:37:55.464624] E [MSGID: 106243] [glusterd.c:1623:init]
>>> 0-management: creation of 1 listeners failed, continuing with
>>> succeeded transport
>>> [2015-11-04 13:37:57.663862] I [MSGID: 106513]
>>> [glusterd-store.c:2036:glusterd_restore_op_version] 0-glusterd:
>>> retrieved op-version: 30600
>>> [2015-11-04 13:37:58.284522] I [MSGID: 106194]
>>> [glusterd-store.c:3465:glusterd_store_retrieve_missed_snaps_list]
>>> 0-management: No missed snaps list.
>>> [2015-11-04 13:37:58.287477] E [MSGID: 106187]
>>> [glusterd-store.c:4243:glusterd_resolve_all_bricks] 0-glusterd:
>>> resolve brick failed in restore
>>> [2015-11-04 13:37:58.287505] E [MSGID: 101019]
>>> [xlator.c:428:xlator_init] 0-management: Initialization of volume
>>> 'management' failed, review your volfile again
>>> [2015-11-04 13:37:58.287513] E [graph.c:322:glusterfs_graph_init]
>>> 0-management: initializing translator failed
>>> [2015-11-04 13:37:58.287518] E [graph.c:661:glusterfs_graph_activate]
>>> 0-graph: init failed
>>> [2015-11-04 13:37:58.287799] W [glusterfsd.c:1236:cleanup_and_exit]
>>> (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xfd) [0x7f29b876524d]
>>> -->/usr/sbin/glusterd(glusterfs_process_volfp+0x126) [0x7f29b87650f6]
>>> -->/usr/sbin/glusterd(cleanup_and_exit+0x69) [0x7f29b87646d9] ) 0-:
>>> received signum (0), shutting down
>>>
>>>
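
For the record, a quick way to check the resolution point Atin raises above
(a rough sketch, assuming the standard /var/lib/glusterd layout, where each
brick file carries a hostname= line) is to confirm that every recorded brick
host resolves with getent:

    # list the hostnames glusterd stored for its bricks and try to resolve each
    for h in $(awk -F= '/^hostname=/{print $2}' /var/lib/glusterd/vols/*/bricks/*); do
        getent hosts "$h" || echo "cannot resolve: $h"
    done

In my case the name is in /etc/hosts and a manual start after boot works,
so the failure only shows up at boot time.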

