[Bugs] [Bug 1221941] glusterfsd: bricks crash while executing ls on nfs-ganesha vers=3

bugzilla at redhat.com
Fri May 15 10:03:35 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1221941



--- Comment #1 from Saurabh <saujain at redhat.com> ---
[root@nfs3 ~]# gluster volume status
Status of volume: gluster_shared_storage
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.148:/rhs/brick1/d1r1-share   49156     0          Y       3549 
Brick 10.70.37.77:/rhs/brick1/d1r2-share    49155     0          Y       3329 
Brick 10.70.37.76:/rhs/brick1/d2r1-share    49155     0          Y       3081 
Brick 10.70.37.69:/rhs/brick1/d2r2-share    49155     0          Y       3346 
Brick 10.70.37.148:/rhs/brick1/d3r1-share   49157     0          Y       3566 
Brick 10.70.37.77:/rhs/brick1/d3r2-share    49156     0          Y       3346 
Brick 10.70.37.76:/rhs/brick1/d4r1-share    49156     0          Y       3098 
Brick 10.70.37.69:/rhs/brick1/d4r2-share    49156     0          Y       3363 
Brick 10.70.37.148:/rhs/brick1/d5r1-share   49158     0          Y       3583 
Brick 10.70.37.77:/rhs/brick1/d5r2-share    49157     0          Y       3363 
Brick 10.70.37.76:/rhs/brick1/d6r1-share    49157     0          Y       3115 
Brick 10.70.37.69:/rhs/brick1/d6r2-share    49157     0          Y       3380 
Self-heal Daemon on localhost               N/A       N/A        Y       28389
Self-heal Daemon on 10.70.37.148            N/A       N/A        Y       22717
Self-heal Daemon on 10.70.37.77             N/A       N/A        Y       4784 
Self-heal Daemon on 10.70.37.76             N/A       N/A        Y       25893

Task Status of Volume gluster_shared_storage
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vol2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.148:/rhs/brick1/d1r1         49153     0          Y       22219
Brick 10.70.37.77:/rhs/brick1/d1r2          49152     0          Y       4321 
Brick 10.70.37.76:/rhs/brick1/d2r1          N/A       N/A        N       25654
Brick 10.70.37.69:/rhs/brick1/d2r2          49152     0          Y       27914
Brick 10.70.37.148:/rhs/brick1/d3r1         49154     0          Y       18842
Brick 10.70.37.77:/rhs/brick1/d3r2          49153     0          Y       4343 
Brick 10.70.37.76:/rhs/brick1/d4r1          N/A       N/A        N       25856
Brick 10.70.37.69:/rhs/brick1/d4r2          N/A       N/A        N       27934
Brick 10.70.37.148:/rhs/brick1/d5r1         49155     0          Y       22237
Brick 10.70.37.77:/rhs/brick1/d5r2          49154     0          Y       4361 
Brick 10.70.37.76:/rhs/brick1/d6r1          N/A       N/A        N       25874
Brick 10.70.37.69:/rhs/brick1/d6r2          N/A       N/A        N       27952
Self-heal Daemon on localhost               N/A       N/A        Y       28389
Self-heal Daemon on 10.70.37.77             N/A       N/A        Y       4784 
Self-heal Daemon on 10.70.37.148            N/A       N/A        Y       22717
Self-heal Daemon on 10.70.37.76             N/A       N/A        Y       25893

Task Status of Volume vol2
------------------------------------------------------------------------------
There are no active volume tasks
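
(For context: every brick that shows Online "N" above sits on 10.70.37.76 or
10.70.37.69, which matches the crash described in the summary. A minimal
sketch of the reproduction and recovery steps follows; it assumes that
10.70.37.148 answers NFS for this export and that /mnt/vol2 exists, neither
of which is taken from this report:

# Reproduce per the summary: mount the export over NFSv3 and list it.
mount -t nfs -o vers=3 10.70.37.148:/vol2 /mnt/vol2
ls /mnt/vol2

# Crashed bricks stay Online=N until restarted; "start force" respawns
# only the dead brick processes and leaves the running ones alone.
gluster volume start vol2 force
gluster volume status vol2

# The crash backtrace should be at the end of the brick log (default log
# location; the file is named after the brick path with '/' replaced by
# '-'), e.g. for d2r1 on 10.70.37.76:
less /var/log/glusterfs/bricks/rhs-brick1-d2r1.log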


[root@nfs3 ~]# cat /etc/ganesha/exports/export.vol2.conf
# WARNING : Using Gluster CLI will overwrite manual
# changes made to this file. To avoid it, edit the
# file, copy it over to all the NFS-Ganesha nodes
# and run ganesha-ha.sh --refresh-config.
EXPORT {
      Export_Id = 2;
      Path = "/vol2";
      FSAL {
           name = GLUSTER;
           hostname = "localhost";
           volume = "vol2";
      }
      Access_type = RW;
      Squash = "No_root_squash";
      Pseudo = "/vol2";
      Protocols = "3", "4";
      Transports = "UDP", "TCP";
      SecType = "sys";
      Disable_ACL = True;
}
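
As the warning at the top of the file says, manual edits have to be copied to
every NFS-Ganesha node and re-read with ganesha-ha.sh --refresh-config. A
minimal sketch of that workflow, assuming the usual script location and using
the peer addresses from the status output above; the exact script path and
argument order are assumptions and vary by release:

# Push the edited export file to the other NFS-Ganesha nodes.
for node in 10.70.37.77 10.70.37.76 10.70.37.69; do
    scp /etc/ganesha/exports/export.vol2.conf root@${node}:/etc/ganesha/exports/
done

# Re-read the export definition across the cluster.
/usr/libexec/ganesha/ganesha-ha.sh --refresh-config /etc/ganesha vol2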
