[Bugs] [Bug 1744950] glusterfs wrong size with total sum of brick.

bugzilla at redhat.com bugzilla at redhat.com
Tue Aug 27 09:48:20 UTC 2019


Nithya Balachandran <nbalacha at redhat.com> changed:

           What    |Removed                     |Added
             Status|NEW                         |ASSIGNED
          Component|core                        |glusterd

--- Comment #11 from Nithya Balachandran <nbalacha at redhat.com> ---
Does this volume have the same problem as the other one? If yes, the problem is
with the volfiles for the bricks on

   option shared-brick-count 2

   option shared-brick-count 2

Both of these have a shared-brick-count value of 2, which causes gluster to
internally halve the available disk size for these bricks. As the bricks are on
different replica sets, and the lowest disk space value among the bricks is
taken as the disk space of the replica set, the reported disk space is halved
for the entire volume.
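The arithmetic can be sketched as follows. This is a simplified model, not
gluster's actual code; the brick sizes and replica-set layout are hypothetical:

```python
def effective_volume_size(replica_sets):
    """Model of the df calculation: each brick's size is divided by its
    shared-brick-count, the minimum across each replica set is taken,
    and the replica sets are summed."""
    total = 0
    for bricks in replica_sets:
        # bricks: list of (size_bytes, shared_brick_count) tuples
        total += min(size // max(count, 1) for size, count in bricks)
    return total

gib = 1024**3
# Two replica sets of 100 GiB bricks; one brick in each set wrongly
# carries shared-brick-count 2, as in this bug:
sets = [
    [(100 * gib, 2), (100 * gib, 1)],  # min(50, 100) GiB -> 50 GiB
    [(100 * gib, 1), (100 * gib, 2)],  # min(100, 50) GiB -> 50 GiB
]
print(effective_volume_size(sets) // gib)  # 100 GiB instead of the real 200 GiB
```

Because the bad bricks sit in different replica sets, every set's minimum is
halved, so the whole volume's reported size is halved.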

This is the same problem reported in

To recover, please do the following:

1. Restart glusterd on each node
2. For each volume, run the following command from any one gluster node:

gluster v set <volname> cluster.min-free-disk 11%

This should regenerate the volfiles with the correct values. Recheck the
shared-brick-count values after doing these steps - the values should be 0 or
1. The df values should also be correct.
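One way to recheck is to scan the brick volfiles (normally under
/var/lib/glusterd/vols/&lt;volname&gt;/ on each node) for the option. A minimal
helper, assuming you read each volfile's text yourself; the sample input below
is illustrative:

```python
import re

def shared_brick_counts(volfile_text):
    """Extract every shared-brick-count value from a brick volfile's text."""
    return [int(n) for n in re.findall(r"option shared-brick-count (\d+)", volfile_text)]

# After regenerating the volfiles, every value found should be 0 or 1:
sample = "option shared-brick-count 1\noption shared-brick-count 0\n"
assert all(c in (0, 1) for c in shared_brick_counts(sample))
```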

Moving this to the glusterd component.
