[Bugs] [Bug 1784013] New: Pending self-heal when bricks of a volume are full

bugzilla at redhat.com bugzilla at redhat.com
Mon Dec 16 13:52:08 UTC 2019


            Bug ID: 1784013
           Summary: Pending self-heal when bricks of a volume are full
           Product: GlusterFS
           Version: 5
            Status: NEW
         Component: selfheal
          Assignee: bugs at gluster.org
          Reporter: david.spisla at iternity.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community

Created attachment 1645596
  --> https://bugzilla.redhat.com/attachment.cgi?id=1645596&action=edit
Gluster vol info and status, df -hT, heal info, logs of glfsheal and all bricks

Description of problem:

Setup: 3-node VMware cluster (2 storage nodes and 1 arbiter node),
Distribute-Replicate 2 volume with 1 arbiter brick per replica tuple (see the
attached file for the detailed configuration).

Access to the volume is provided via Samba (samba-vfs-glusterfs plugin) and CTDB.

After the bricks reach the storage.reserve limit, there is a pending self-heal
that is never resolved automatically.

Version-Release number of selected component (if applicable):
GlusterFS v5.10

How reproducible:
Steps to Reproduce:
1. Mount the volume from a Win10 client via SMB.
2. Copy a lot of small files (between 50-1000 KB each) recursively to the share.
3. Continue copying until the volume is full and the bricks have reached the
storage.reserve limit (we use the default of 1%).
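The reproduction above can be condensed into the Gluster commands used to check the reserve limit and observe the stuck heal (a sketch; the volume name `myvol` and brick mount path are placeholders for this setup):

```shell
# Show the reserve limit currently in effect (default: 1%).
gluster volume get myvol storage.reserve

# While filling the volume over SMB, watch brick usage approach the limit.
df -hT /gluster/brick1

# Once the bricks hit the reserve limit, list entries pending self-heal;
# in this bug they remain listed indefinitely instead of being healed.
gluster volume heal myvol info
gluster volume heal myvol info summary
```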

During the copy process, all nodes were up and running.

Actual results:
There is a pending self-heal for 1 file.

Expected results:
No pending self-heal.

Additional info:
See attached file

The above scenario was not only reproduced on a VM cluster; we could also
observe it on a real hardware cluster, where the number of files pending
self-heal varies (it can also be up to 7 or 10).
