[Bugs] [Bug 1566067] Volume status inode is broken with brickmux

bugzilla at redhat.com bugzilla at redhat.com
Wed Apr 11 13:03:08 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1566067

hari gowtham <hgowtham at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |ASSIGNED
           Assignee|bugs at gluster.org            |hgowtham at redhat.com



--- Comment #1 from hari gowtham <hgowtham at redhat.com> ---
Description of problem:

The 'gluster volume status <volname> inode' command fails on subsequently created volumes when brick multiplexing is enabled.
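
The failure shows up with brick multiplexing enabled. A minimal sketch of
turning it on cluster-wide (this step is implied by the bug title rather than
spelled out in the steps below):

    gluster volume set all cluster.brick-multiplex on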


Version-Release number of selected component (if applicable):

3.8.4-54-2

How reproducible:

Every time

Steps to Reproduce:
1. Create a 3-node cluster (n1, n2, n3).
2. Create 2 replica-3 volumes (v1, v2).
3. Mount the 2 volumes on two different clients (c1, c2).
4. Start running I/O in parallel on the two mount points.
5. While the I/O is running, keep executing 'gluster volume status v1 inode' and
'gluster volume status v1 fd' frequently, with some time gap.
6. In the same way, run the volume status inode command for v2.
7. Then create a new distributed-replicated volume v3.
8. Run "gluster volume status v3 inode" and "gluster volume status v3 fd" on
node n1 (a command-line sketch of these steps follows the list).
9. The 'gluster volume status inode' and 'gluster volume status fd' commands
fail for the newly created volume.
10. The bricks of volume v3 on node n1 go offline.
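
A rough command-line sketch of the steps above (hostnames, brick paths and the
exact volume layout are placeholders, not the ones from the original setup):

    gluster volume create v1 replica 3 n1:/bricks/b1/v1 n2:/bricks/b1/v1 n3:/bricks/b1/v1
    gluster volume start v1
    gluster volume create v2 replica 3 n1:/bricks/b2/v2 n2:/bricks/b2/v2 n3:/bricks/b2/v2
    gluster volume start v2
    # mount v1 and v2 on clients c1 and c2, keep I/O running, then from a cluster node:
    gluster volume status v1 inode
    gluster volume status v1 fd
    gluster volume status v2 inode
    # distributed-replicated volume v3 (2 x 3 bricks), queried from n1
    gluster volume create v3 replica 3 \
        n1:/bricks/b3/v3 n2:/bricks/b3/v3 n3:/bricks/b3/v3 \
        n1:/bricks/b4/v3 n2:/bricks/b4/v3 n3:/bricks/b4/v3
    gluster volume start v3
    gluster volume status v3 inode
    gluster volume status v3 fd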

Actual results:

[root@dhcp37-113 home]# gluster vol status rp1 fd
Error : Request timed out
[root@dhcp37-113 home]# gluster vol status drp1 inode
Error : Request timed out

gluster vol status drp1
Status of volume: drp1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.113:/bricks/brick1/drp1      N/A       N/A        N       N/A  
Brick 10.70.37.157:/bricks/brick1/drp1      49152     0          Y       2125 
Brick 10.70.37.174:/bricks/brick1/drp1      49152     0          Y       2306 
Brick 10.70.37.113:/bricks/brick2/drp1      N/A       N/A        N       N/A  
Brick 10.70.37.157:/bricks/brick2/drp1      49152     0          Y       2125 
Brick 10.70.37.174:/bricks/brick2/drp1      49152     0          Y       2306 
Self-heal Daemon on localhost               N/A       N/A        Y       4507 
Self-heal Daemon on 10.70.37.157            N/A       N/A        Y       4006 
Self-heal Daemon on 10.70.37.174            N/A       N/A        Y       4111 

Task Status of Volume drp1



Expected results:

Bricks should not go offline, and the 'gluster volume status inode' and
'gluster volume status fd' commands should execute successfully.
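
For reference, a quick way to verify the expected behaviour on n1 once v3 is
created (same placeholder volume name as in the sketch above):

    gluster volume status v3          # every brick should report Online 'Y'
    gluster volume status v3 inode    # should print per-brick inode info instead of timing out
    gluster volume status v3 fd       # should print per-brick fd info instead of timing out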

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
