[Bugs] [Bug 1447392] [Brick MUX] : Rebalance fails.

bugzilla at redhat.com bugzilla at redhat.com
Tue May 2 16:09:33 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1447392



--- Comment #1 from Nithya Balachandran <nbalacha at redhat.com> ---
+++ This bug was initially created as a clone of Bug #1446107 +++

Description of problem:
------------------------

Created an EC volume. Enabled brick multiplexing. Added bricks. Triggered
rebalance.

Rebalance failed.

[root@server1 glusterfs]# gluster v rebalance butcher status
                                    Node  Rebalanced-files     size  scanned  failures  skipped     status  run time in h:m:s
                               ---------  ----------------  -------  -------  --------  -------  ---------  -----------------
                               localhost                 0   0Bytes        0         0        0  completed            0:00:00
      server2.sbu.lab.eng.bos.redhat.com                 0   0Bytes        0         1        0     failed            0:00:02
      server3.sbu.lab.eng.bos.redhat.com                 0   0Bytes        0         1        0     failed            0:00:02
volume rebalance: butcher: success
[root@gqas009 glusterfs]#
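
The reproduction steps from the description can be sketched as the following CLI sequence. Hostnames and brick paths are taken from the "gluster v info" output below; the exact split between the initial create and the later add-brick is an assumption, since the report does not say which subvolume was added.

```shell
# Sketch of the reproduction steps; assumes the first 4+2 subvolume
# existed before add-brick (not stated explicitly in the report).

# 1. Create and start a 4+2 dispersed (EC) volume
gluster volume create butcher disperse-data 4 redundancy 2 \
    server1:/bricks2/e1 server2:/bricks2/e1 server3:/bricks2/e1 \
    server1:/bricks1/e1 server2:/bricks1/e1 server3:/bricks1/e1
gluster volume start butcher

# 2. Enable brick multiplexing (a cluster-wide option)
gluster volume set all cluster.brick-multiplex enable

# 3. Add a second 4+2 subvolume, making the volume 2 x (4 + 2)
gluster volume add-brick butcher \
    server1:/bricks6/A1 server2:/bricks6/A1 server3:/bricks6/A1 \
    server1:/bricks8/A1 server2:/bricks8/A1 server3:/bricks8/A1

# 4. Trigger rebalance and check its status
gluster volume rebalance butcher start
gluster volume rebalance butcher status
```

These commands require a running GlusterFS trusted storage pool, so they are shown as a command fragment rather than a standalone script.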


Version-Release number of selected component (if applicable):
-------------------------------------------------------------

mainline

How reproducible:
-----------------

2/2


Actual results:
--------------

Rebalance fails on the non-local nodes.

Expected results:
-----------------

Rebalance should complete on all nodes without failures.

Additional info:
----------------

# gluster v info

Volume Name: butcher
Type: Distributed-Disperse
Volume ID: 98d7434c-0466-4ff3-879b-3ee8c211c7b2
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: server1:/bricks2/e1
Brick2: server2:/bricks2/e1
Brick3: server3:/bricks2/e1
Brick4: server1:/bricks1/e1
Brick5: server2:/bricks1/e1
Brick6: server3:/bricks1/e1
Brick7: server1:/bricks6/A1
Brick8: server2:/bricks6/A1
Brick9: server3:/bricks6/A1
Brick10: server1:/bricks8/A1
Brick11: server2:/bricks8/A1
Brick12: server3:/bricks8/A1
Options Reconfigured:
cluster.lookup-optimize: on
transport.address-family: inet
nfs.disable: on
cluster.brick-multiplex: enable
