[Bugs] [Bug 1714895] New: Glusterfs(fuse) client crash

bugzilla at redhat.com bugzilla at redhat.com
Wed May 29 06:39:49 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1714895

            Bug ID: 1714895
           Summary: Glusterfs(fuse) client crash
           Product: GlusterFS
           Version: 6
          Hardware: x86_64
                OS: Linux
            Status: NEW
         Component: libglusterfsclient
          Assignee: bugs at gluster.org
          Reporter: maybeonly at gmail.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community



Created attachment 1574617
  --> https://bugzilla.redhat.com/attachment.cgi?id=1574617&action=edit
Copied log from /var/log/glusterfs/mount-point.log of the client when crashing

Description of problem:
One of the GlusterFS (FUSE) clients crashes occasionally.

Version-Release number of selected component (if applicable):
6.1 (from yum)

How reproducible:
about once a week

Steps to Reproduce:
I'm sorry, I don't know

Actual results:
The client crashed. A core file appears to have been generated, but it could not
be written to the root directory, so it was lost.
The volume also seems to be in a bad state that self-heal cannot repair.
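To keep the next core dump from being lost, the kernel can be pointed at a
writable directory before the crash recurs. A minimal sketch (the
`/var/crash` path is an arbitrary choice, not anything from the report):

```shell
# Check where the kernel currently writes core files
sysctl kernel.core_pattern

# Point core files at a directory the glusterfs process can write to
# (run as root; /var/crash is a hypothetical location)
mkdir -p /var/crash
sysctl -w kernel.core_pattern=/var/crash/core.%e.%p

# Make sure core dumps are not size-limited for the client process
ulimit -c unlimited
```

With a core file in hand, `gdb /usr/sbin/glusterfs core.<pid>` plus a `bt`
backtrace would pin down the crashing frame much more precisely than the
mount log alone.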

Expected results:
The client should run without crashing.

Additional info:
# gluster volume info datavolume3

Volume Name: datavolume3
Type: Replicate
Volume ID: 675d3435-e60e-424d-9eb6-dfd7427defdd
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 185***:/***/bricks/datavolume3
Brick2: 237***:/***/bricks/datavolume3
Brick3: 208***:/***/bricks/datavolume3
Options Reconfigured:
features.locks-revocation-max-blocked: 3
features.locks-revocation-clear-all: true
cluster.entry-self-heal: on
cluster.data-self-heal: on
cluster.metadata-self-heal: on
storage.owner-gid: ****
storage.owner-uid: ****
auth.allow: *********
nfs.disable: on
transport.address-family: inet

The attachment is copied from /var/log/glusterfs/mount-point.log of the client.
I also have a statedump file, but I don't know which section is relevant.
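For reference, a statedump can be regenerated on demand, which makes it easier
to compare a healthy dump against one taken near a crash. A sketch, assuming
the dump lands in the default /var/run/gluster directory and the client
process name matches the `pgrep` pattern shown:

```shell
# Server side: dump state for all bricks of the volume
gluster volume statedump datavolume3

# FUSE client side: SIGUSR1 makes the glusterfs client write its own statedump
# (the pgrep pattern here is illustrative; match your actual mount process)
kill -USR1 "$(pgrep -f 'glusterfs.*mount-point')"
```

In the resulting dump, the [global.callpool] section (pending call frames) and
the per-xlator sections are usually the parts developers ask for after a
crash or hang.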

The volume(s) were created with GlusterFS v3.8 on CentOS 6. I then replaced the
servers with new machines running GlusterFS v6.0 on CentOS 7, upgraded them to
v6.1, and set cluster.op-version to 60000.
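The op-version actually in effect can be confirmed from the CLI, which is worth
checking after a rolling upgrade like the one described above. A sketch using
the standard gluster commands:

```shell
# Show the cluster-wide operating version currently in effect
gluster volume get all cluster.op-version

# Raise it to the GlusterFS 6 level (only needed if the value above is lower)
gluster volume set all cluster.op-version 60000
```

A mismatch between the installed version (6.1) and a stale op-version left over
from the 3.8 era would be relevant information for this bug.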

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

