<div dir="ltr">Hi,<div><br></div><div>You need to check the rebalance logs (<span style="font-size:12.8px">glu_linux_dr2_oracle-rebalance.log) on </span><a href="http://glustoretst03.net.dr.dk/" rel="noreferrer" target="_blank" style="font-size:12.8px">glustoretst03.net.dr.dk</a><span style="font-size:12.8px"> and </span><a href="http://glustoretst04.net.dr.dk/" rel="noreferrer" target="_blank" style="font-size:12.8px">glustoretst04.net.dr.dk</a><span style="font-size:12.8px"> to see what went wrong.</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Regards,</span></div><div><span style="font-size:12.8px">Nithya</span></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 4 May 2017 at 11:46, Jesper Led Lauridsen TS Infra server <span dir="ltr"><<a href="mailto:JLY@dr.dk" target="_blank">JLY@dr.dk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi<br>
<br>
I'm trying to remove 2 bricks from a Distributed-Replicate volume without losing data, but the rebalance fails.<br>
<br>
Any help is appreciated...<br>
<br>
What I do:<br>
# gluster volume remove-brick glu_linux_dr2_oracle replica 2 glustoretst03.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle glustoretst04.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle start<br>
volume remove-brick start: success<br>
ID: c2549eb4-e37a-4f0d-9273-3f7c580e9e80<br>
# gluster volume remove-brick glu_linux_dr2_oracle replica 2 glustoretst03.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle glustoretst04.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle status<br>
Node                     Rebalanced-files   size     scanned   failures   skipped   status   run time in secs<br>
-----------------------  ----------------   ------   -------   --------   -------   ------   ----------------<br>
glustoretst04.net.dr.dk  0                  0Bytes   0         0          0         failed   0.00<br>
glustoretst03.net.dr.dk  0                  0Bytes   0         0          0         failed   0.00<br>
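For reference, the usual remove-brick lifecycle is start, then status, then commit, and commit should only be run once status reports "completed" on every node; committing after a failed rebalance can leave un-migrated files behind on the removed bricks. A sketch of the sequence, using the volume and brick names from this thread (a cluster-side fragment, not runnable standalone):<br>
<br>
```shell
# Sketch of the standard remove-brick sequence (names taken from this
# thread; run against the live cluster only).
gluster volume remove-brick glu_linux_dr2_oracle replica 2 \
  glustoretst03.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle \
  glustoretst04.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle start

# Poll until every node reports "completed". Here both nodes report
# "failed", so do NOT commit -- investigate the rebalance logs first.
gluster volume remove-brick glu_linux_dr2_oracle replica 2 \
  glustoretst03.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle \
  glustoretst04.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle status

# Only after a clean completion:
gluster volume remove-brick glu_linux_dr2_oracle replica 2 \
  glustoretst03.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle \
  glustoretst04.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle commit
```
<br>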
<br>
******** log output *******<br>
# cat etc-glusterfs-glusterd.vol.log<br>
[2017-05-03 12:18:59.423867] I [glusterd-handler.c:1296:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req<br>
[2017-05-03 12:20:21.024213] I [glusterd-handler.c:3836:__glusterd_handle_status_volume] 0-management: Received status volume req for volume glu_int_dr2_dalet<br>
[2017-05-03 12:21:10.813956] I [glusterd-handler.c:1296:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req<br>
[2017-05-03 12:22:45.298742] I [glusterd-brick-ops.c:676:__glusterd_handle_remove_brick] 0-management: Received rem brick req<br>
[2017-05-03 12:22:45.298807] I [glusterd-brick-ops.c:722:__glusterd_handle_remove_brick] 0-management: request to change replica-count to 2<br>
[2017-05-03 12:22:45.311705] I [glusterd-utils.c:11549:glusterd_generate_and_set_task_id] 0-management: Generated task-id c2549eb4-e37a-4f0d-9273-3f7c580e9e80 for key remove-brick-id<br>
[2017-05-03 12:22:45.312296] I [glusterd-op-sm.c:5105:glusterd_bricks_select_remove_brick] 0-management: force flag is not set<br>
[2017-05-03 12:22:46.414038] I [glusterd-volgen.c:1177:get_vol_nfs_transport_type] 0-glusterd: The default transport type for tcp,rdma volume is tcp if option is not defined by the user<br>
[... the message above repeated 17 more times between 12:22:46.419778 and 12:22:46.578645 ...]<br>
[2017-05-03 12:22:47.663980] I [glusterd-utils.c:6316:glusterd_nfs_pmap_deregister] 0-: De-registered MOUNTV3 successfully<br>
[2017-05-03 12:22:47.664372] I [glusterd-utils.c:6321:glusterd_nfs_pmap_deregister] 0-: De-registered MOUNTV1 successfully<br>
[2017-05-03 12:22:47.664786] I [glusterd-utils.c:6326:glusterd_nfs_pmap_deregister] 0-: De-registered NFSV3 successfully<br>
[2017-05-03 12:22:47.665175] I [glusterd-utils.c:6331:glusterd_nfs_pmap_deregister] 0-: De-registered NLM v4 successfully<br>
[2017-05-03 12:22:47.665559] I [glusterd-utils.c:6336:glusterd_nfs_pmap_deregister] 0-: De-registered NLM v1 successfully<br>
[2017-05-03 12:22:47.665943] I [glusterd-utils.c:6341:glusterd_nfs_pmap_deregister] 0-: De-registered ACL v3 successfully<br>
[2017-05-03 12:22:47.674503] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600<br>
[2017-05-03 12:22:47.674655] W [socket.c:3004:socket_connect] 0-management: Ignore failed connection attempt on , (No such file or directory)<br>
[2017-05-03 12:22:48.703206] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600<br>
[2017-05-03 12:22:48.703345] W [socket.c:3004:socket_connect] 0-management: Ignore failed connection attempt on , (No such file or directory)<br>
[2017-05-03 12:22:49.458391] I [mem-pool.c:545:mem_pool_destroy] 0-management: size=588 max=0 total=0<br>
[2017-05-03 12:22:49.458429] I [mem-pool.c:545:mem_pool_destroy] 0-management: size=124 max=0 total=0<br>
[2017-05-03 12:22:49.470431] W [socket.c:620:__socket_rwv] 0-socket.management: writev on 127.0.0.1:985 failed (Broken pipe)<br>
[2017-05-03 12:22:49.470450] I [socket.c:2353:socket_event_handler] 0-transport: disconnecting now<br>
[2017-05-03 12:22:49.470929] W [socket.c:620:__socket_rwv] 0-socket.management: writev on 127.0.0.1:988 failed (Broken pipe)<br>
[2017-05-03 12:22:49.470945] I [socket.c:2353:socket_event_handler] 0-transport: disconnecting now<br>
[2017-05-03 12:22:49.473855] W [socket.c:620:__socket_rwv] 0-management: readv on /var/run/b10c65c880e831b5c91cf638e1c0e0e4.socket failed (Invalid argument)<br>
[2017-05-03 12:22:49.473880] I [MSGID: 106006] [glusterd-handler.c:4290:__glusterd_nodesvc_rpc_notify] 0-management: nfs has disconnected from glusterd.<br>
[2017-05-03 12:22:49.473907] I [mem-pool.c:545:mem_pool_destroy] 0-management: size=588 max=0 total=0<br>
[2017-05-03 12:22:49.473930] I [mem-pool.c:545:mem_pool_destroy] 0-management: size=124 max=0 total=0<br>
[2017-05-03 12:22:49.473986] W [socket.c:620:__socket_rwv] 0-management: readv on /var/run/6a75793fc0c76a2c9e9403f63ff38d99.socket failed (Invalid argument)<br>
[2017-05-03 12:22:49.474003] I [MSGID: 106006] [glusterd-handler.c:4290:__glusterd_nodesvc_rpc_notify] 0-management: glustershd has disconnected from glusterd.<br>
[2017-05-03 12:23:23.106811] E [glusterd-op-sm.c:3603:glusterd_op_ac_send_stage_op] 0-management: Staging of operation 'Volume Rebalance' failed on localhost : remove-brick not started.<br>
[2017-05-03 12:29:22.407630] I [glusterd-handler.c:1296:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req<br>
[2017-05-03 12:29:44.157973] I [glusterd-handler.c:1296:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req<br>
[2017-05-03 12:30:23.522501] I [glusterd-handler.c:1296:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req<br>
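Note that glusterd's staging error above ("remove-brick not started") only says the rebalance never got going; the underlying cause lands in the per-volume rebalance log on each of the two nodes being removed. A small sketch for pulling out just the warning and error entries (the log file name follows the volume name; the /var/log/glusterfs directory is an assumption based on the usual default, so verify it on your install):<br>
<br>
```shell
#!/bin/sh
# Print the last warning (W) and error (E) entries from the rebalance log.
# The path below is an assumption (default glusterfs log directory);
# pass a different path as the first argument if yours differs.
LOG=${1:-/var/log/glusterfs/glu_linux_dr2_oracle-rebalance.log}
if [ -r "$LOG" ]; then
    # glusterfs log lines look like: [timestamp] LEVEL [file:line:func] ...
    grep -E '\] [EW] \[' "$LOG" | tail -n 20
else
    echo "log not found: $LOG"
fi
```

Run it on both glustoretst03 and glustoretst04, since either node's rebalance process may be the one that failed.<br>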
<br>
******** Volume information *******<br>
# gluster volume info glu_linux_dr2_oracle<br>
Volume Name: glu_linux_dr2_oracle<br>
Type: Distributed-Replicate<br>
Volume ID: 3aef9266-0736-45b0-93bb-74248e18e85d<br>
Status: Started<br>
Number of Bricks: 3 x 2 = 6<br>
Transport-type: tcp,rdma<br>
Bricks:<br>
Brick1: glustoretst01.net.dr.dk:/bricks/brick2/glu_linux_dr2_oracle<br>
Brick2: glustoretst02.net.dr.dk:/bricks/brick2/glu_linux_dr2_oracle<br>
Brick3: glustoretst01.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle<br>
Brick4: glustoretst02.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle<br>
Brick5: glustoretst03.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle<br>
Brick6: glustoretst04.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle<br>
Options Reconfigured:<br>
features.quota: off<br>
storage.owner-gid: 0<br>
storage.owner-uid: 0<br>
cluster.server-quorum-type: server<br>
cluster.quorum-type: none<br>
performance.stat-prefetch: off<br>
performance.io-cache: off<br>
performance.read-ahead: off<br>
performance.quick-read: off<br>
auth.allow: 10.101.*<br>
user.cifs: disable<br>
nfs.disable: on<br>
cluster.server-quorum-ratio: 50%<br>
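One thing worth double-checking before a remove-brick: gluster groups bricks into replica sets in listing order, so with replica 2 each consecutive pair in the list above is one mirror. The pairing can be sketched with a purely illustrative one-liner:<br>
<br>
```shell
# Group the brick list into consecutive pairs (replica 2): `paste - -`
# joins every two stdin lines into one row, i.e. one replica set per row.
printf '%s\n' \
  glustoretst01.net.dr.dk:/bricks/brick2/glu_linux_dr2_oracle \
  glustoretst02.net.dr.dk:/bricks/brick2/glu_linux_dr2_oracle \
  glustoretst01.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle \
  glustoretst02.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle \
  glustoretst03.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle \
  glustoretst04.net.dr.dk:/bricks/brick1/glu_linux_dr2_oracle \
  | paste - -
```

The last row is the glustoretst03/glustoretst04 pair, so removing those two bricks together removes one complete replica set, which is the correct grouping for this operation.<br>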
<br>
******** Gluster Version *******<br>
# rpm -qa | grep glusterfs<br>
glusterfs-3.6.9-1.el6.x86_64<br>
[root@glustertst01 glusterfs]# rpm -qa | grep glusterfs<br>
glusterfs-fuse-3.6.9-1.el6.x86_64<br>
glusterfs-server-3.6.9-1.el6.x86_64<br>
glusterfs-libs-3.6.9-1.el6.x86_64<br>
glusterfs-cli-3.6.9-1.el6.x86_64<br>
glusterfs-api-3.6.9-1.el6.x86_64<br>
glusterfs-3.6.9-1.el6.x86_64<br>
glusterfs-geo-replication-3.6.9-1.el6.x86_64<br>
<br>
Regards<br>
Jesper<br>
<br>
______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div><br></div>