[Gluster-users] Strange problems with my Gluster Cluster

Fedele Stabile fedele.stabile at fis.unical.it
Sat Jan 14 17:30:24 UTC 2017


Hello,

I have a 32-node cluster; each node has 2 bricks of 1 TB each, and I
configured a single distributed volume using all 64 bricks.

I expected to have 64 TB of disk online, but I can see only 36 TB!
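As a quick sanity check, the expected raw capacity can be computed from the layout described above (assuming 1 TB usable per brick; in practice filesystem overhead makes the visible total slightly smaller, but nowhere near the 28 TB gap seen here):

```shell
# Expected raw capacity of the distributed volume:
# 32 nodes x 2 bricks per node x 1 TB per brick
nodes=32
bricks_per_node=2
tb_per_brick=1
echo "$(( nodes * bricks_per_node * tb_per_brick )) TB expected"
```

Comparing this figure against the sum of df output on each brick mount point (paths depend on your setup) would show whether the shortfall is on the brick side or on the client/volume side.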

All bricks are online, but the output of a volume status command says
that a rebalance operation failed.
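A few standard Gluster CLI commands can narrow this down (a diagnostic sketch; the volume name "scratch" is taken from the logs below, and these must be run on one of the cluster nodes):

```shell
# Show how far the last rebalance got and on which nodes it failed:
gluster volume rebalance scratch status

# Per-brick details, including the size glusterd sees for each brick.
# A brick accidentally sitting on the root filesystem instead of its
# dedicated 1 TB disk is a common cause of "missing" capacity:
gluster volume status scratch detail

# If the rebalance process simply died, it can be started again:
# gluster volume rebalance scratch start
```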

In glusterd.vol.log I can see:

glusterd_volume_rebalance_use_rsp_dict] 0-glusterd: failed to get index
repeated 32 times between [2017-01-14 17:04:18.727159] and [2017-01-14
17:04:23.796719]

W [socket.c:588:__socket_rwv] 0-management: readv on
/var/run/gluster/gluster-rebalance-fc6f18b6-a06c-4fdf-ac08-23e9b4f8053e.sock
failed (No data available)

I [MSGID: 106007] [glusterd-rebalance.c:162:__glusterd_defrag_notify]
0-management: Rebalance process for volume scratch has disconnected.

[2017-01-14 17:05:43.502366] I [MSGID: 101053]
[mem-pool.c:616:mem_pool_destroy] 0-management: size=588 max=0 total=0

[2017-01-14 17:05:43.502378] I [MSGID: 101053]
[mem-pool.c:616:mem_pool_destroy] 0-management: size=124 max=0 total=0

[2017-01-14 17:11:21.827125] I [MSGID: 106499]
[glusterd-handler.c:4329:__glusterd_handle_status_volume] 0-management:
Received status volume req for volume scratch

[2017-01-14 17:11:21.859989] W [MSGID: 106217]
[glusterd-op-sm.c:4592:glusterd_op_modify_op_ctx] 0-management: Failed uuid
to hostname conversion

[2017-01-14 17:11:21.860009] W [MSGID: 106387]
[glusterd-op-sm.c:4696:glusterd_op_modify_op_ctx] 0-management: op_ctx
modification failed

[2017-01-14 17:11:33.694377] I [MSGID: 106488]
[glusterd-handler.c:1533:__glusterd_handle_cli_get_volume] 0-glusterd:
Received get vol req

 

Can anyone help me with this problem?

My GlusterFS version is 3.7.8.

 

Thank you in advance

Fedele Stabile


