[Gluster-users] df does not show full volume capacity after update to 3.12.4

Nithya Balachandran nbalacha at redhat.com
Wed Jan 31 06:17:04 UTC 2018


I found this on the mailing list:

    I found the issue. The CentOS 7 RPMs, upon upgrade, modify the .vol
    files. Among other things, they add "option shared-brick-count \d",
    using the number of bricks in the volume. This gives you an average
    free space per brick, instead of the total free space in the volume.
    When I create a new volume, the value of "shared-brick-count" is "1".

        find /var/lib/glusterd/vols -type f | xargs sed -i -e \
            's/option shared-brick-count [0-9]*/option shared-brick-count 1/g'
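
If you want to check whether this is what happened on your nodes, something
like the following (assuming the default glusterd working directory of
/var/lib/glusterd) should show the current values in the generated volfiles:

    grep -r "option shared-brick-count" /var/lib/glusterd/vols/<volname>/

Anything other than 1 there would match the behaviour described above.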



Eva, can you send me the contents of the /var/lib/glusterd/vols/<volname>
folder from any one node so I can confirm whether this is the problem?
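
To collect that in one go, a tarball along these lines should do (the archive
name is just a suggestion, and <volname> is your volume name):

    tar -czf <volname>-glusterd-vols.tar.gz /var/lib/glusterd/vols/<volname>/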

Regards,
Nithya


On 31 January 2018 at 10:47, Nithya Balachandran <nbalacha at redhat.com>
wrote:

> Hi Eva,
>
> One more question. What version of gluster were you running before the
> upgrade?
>
> Thanks,
> Nithya
>
> On 31 January 2018 at 09:52, Nithya Balachandran <nbalacha at redhat.com>
> wrote:
>
>> Hi Eva,
>>
>> Can you send us the following:
>>
>> gluster volume info
>> gluster volume status
>>
>> The log files and tcpdump for df on a fresh mount point for that volume.
>>
>> Thanks,
>> Nithya
>>
>>
>> On 31 January 2018 at 07:17, Freer, Eva B. <freereb at ornl.gov> wrote:
>>
>>> After OS update to CentOS 7.4 or RedHat 6.9 and update to Gluster
>>> 3.12.4, the ‘df’ command shows only part of the available space on the
>>> mount point for multi-brick volumes. All nodes are at 3.12.4. This occurs
>>> on both servers and clients.
>>>
>>>
>>>
>>> We have 2 different server configurations.
>>>
>>>
>>>
>>> Configuration 1: A distributed volume of 8 bricks with 4 on each server.
>>> The initial configuration had 4 bricks of 59TB each with 2 on each server.
>>> Prior to the update to CentOS 7.4 and gluster 3.12.4, ‘df’ correctly showed
>>> the size for the volume as 233TB. After the update, we added 2 bricks with
>>> 1 on each server, but the output of ‘df’ still only listed 233TB for the
>>> volume. We added 2 more bricks, again with 1 on each server. The output of
>>> ‘df’ now shows 350TB, but the aggregate of eight 59TB bricks should be ~466TB.
>>>
>>>
>>>
>>> Configuration 2: A distributed, replicated volume with 9 bricks on each
>>> server for a total of ~350TB of storage. After the server update to RHEL
>>> 6.9 and gluster 3.12.4, the volume now shows as having 50TB with ‘df’. No
>>> changes were made to this volume after the update.
>>>
>>>
>>>
>>> In both cases, examining the bricks shows that the space and files are
>>> still there, just not reported correctly with ‘df’. All machines have been
>>> rebooted and the problem persists.
>>>
>>>
>>>
>>> Any help/advice you can give on this would be greatly appreciated.
>>>
>>>
>>>
>>> Thanks in advance.
>>>
>>> Eva Freer
>>>
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>
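
Regarding the log files and the tcpdump of df on a fresh mount point
requested earlier in the thread, a rough sketch of the capture (server,
volume name, mount point and capture file are placeholders; 24007 is the
glusterd management port and 49152-49251 covers the usual brick port range,
adjust if your setup differs):

    mkdir -p /mnt/glustertest
    tcpdump -i any -s 0 -w /tmp/df-capture.pcap port 24007 or portrange 49152-49251 &
    mount -t glusterfs <server>:/<volname> /mnt/glustertest
    df -h /mnt/glustertest
    kill %1
    umount /mnt/glustertest

The client log for that mount will be under /var/log/glusterfs/ on the client.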