[Gluster-users] Wrong volume size with df

Benjamin Kingston list at nexusnebula.net
Fri Jan 5 02:50:24 UTC 2018


I'm also having this issue with a volume, both before and after I broke it down
from an arbiter volume to a single distribute volume and then rebuilt it as arbiter.
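For context, the conversion I'm describing went roughly like this (the volume
name, hostnames, and brick paths below are placeholders, not my actual layout,
and the exact steps can depend on the gluster version):

  # drop from replica 3 arbiter 1 down to a plain distribute
  gluster volume remove-brick myvol replica 1 \
      host2:/bricks/b1 host3:/bricks/arb1 force

  # later, rebuild the arbiter by adding the bricks back and healing
  gluster volume add-brick myvol replica 3 arbiter 1 \
      host2:/bricks/b1 host3:/bricks/arb1
  gluster volume heal myvol full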

On Tue, Jan 2, 2018 at 1:51 PM, Tom Fite <tomfite at gmail.com> wrote:

> For what it's worth, after I added a hot tier to the pool, df on the volume now
> reports the correct size (all bricks combined) instead of just one brick.
>
> Not sure if that gives you any clues for this... maybe adding another
> brick to the pool would have a similar effect?
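>
> Something along these lines is what I mean (the ssd and brick4 paths below are
> made up, not my real layout):
>
>   # attach a hot tier
>   gluster volume tier gv0 attach replica 2 \
>       pod-sjc1-gluster1:/ssd/brick pod-sjc1-gluster2:/ssd/brick
>
>   # or just grow the pool by another replica pair
>   gluster volume add-brick gv0 \
>       pod-sjc1-gluster1:/data/brick4/gv0 pod-sjc1-gluster2:/data/brick4/gv0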
>
>
> On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite at gmail.com> wrote:
>
>> Sure!
>>
>> > 1 - output of gluster volume heal <volname> info
>>
>> Brick pod-sjc1-gluster1:/data/brick1/gv0
>> Status: Connected
>> Number of entries: 0
>>
>> Brick pod-sjc1-gluster2:/data/brick1/gv0
>> Status: Connected
>> Number of entries: 0
>>
>> Brick pod-sjc1-gluster1:/data/brick2/gv0
>> Status: Connected
>> Number of entries: 0
>>
>> Brick pod-sjc1-gluster2:/data/brick2/gv0
>> Status: Connected
>> Number of entries: 0
>>
>> Brick pod-sjc1-gluster1:/data/brick3/gv0
>> Status: Connected
>> Number of entries: 0
>>
>> Brick pod-sjc1-gluster2:/data/brick3/gv0
>> Status: Connected
>> Number of entries: 0
>>
>> > 2 - /var/log/glusterfs - provide the log file <mountpoint>-<volumename>.log
>>
>> Attached
>>
>> > 3 - output of gluster volume info <volname>
>>
>> [root@pod-sjc1-gluster2 ~]# gluster volume info
>>
>> Volume Name: gv0
>> Type: Distributed-Replicate
>> Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196
>> Status: Started
>> Snapshot Count: 13
>> Number of Bricks: 3 x 2 = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: pod-sjc1-gluster1:/data/brick1/gv0
>> Brick2: pod-sjc1-gluster2:/data/brick1/gv0
>> Brick3: pod-sjc1-gluster1:/data/brick2/gv0
>> Brick4: pod-sjc1-gluster2:/data/brick2/gv0
>> Brick5: pod-sjc1-gluster1:/data/brick3/gv0
>> Brick6: pod-sjc1-gluster2:/data/brick3/gv0
>> Options Reconfigured:
>> performance.cache-refresh-timeout: 60
>> performance.stat-prefetch: on
>> server.allow-insecure: on
>> performance.flush-behind: on
>> performance.rda-cache-limit: 32MB
>> network.tcp-window-size: 1048576
>> performance.nfs.io-threads: on
>> performance.write-behind-window-size: 4MB
>> performance.nfs.write-behind-window-size: 512MB
>> performance.io-cache: on
>> performance.quick-read: on
>> features.cache-invalidation: on
>> features.cache-invalidation-timeout: 600
>> performance.cache-invalidation: on
>> performance.md-cache-timeout: 600
>> network.inode-lru-limit: 90000
>> performance.cache-size: 4GB
>> server.event-threads: 16
>> client.event-threads: 16
>> features.barrier: disable
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: on
>> cluster.lookup-optimize: on
>> server.outstanding-rpc-limit: 1024
>> auto-delete: enable
>>
>> > 4 - output of gluster volume status <volname>
>>
>> [root@pod-sjc1-gluster2 ~]# gluster volume status gv0
>> Status of volume: gv0
>> Gluster process                                     TCP Port  RDMA Port  Online  Pid
>> -------------------------------------------------------------------------------------
>> Brick pod-sjc1-gluster1:/data/brick1/gv0            49152     0          Y       3198
>> Brick pod-sjc1-gluster2:/data/brick1/gv0            49152     0          Y       4018
>> Brick pod-sjc1-gluster1:/data/brick2/gv0            49153     0          Y       3205
>> Brick pod-sjc1-gluster2:/data/brick2/gv0            49153     0          Y       4029
>> Brick pod-sjc1-gluster1:/data/brick3/gv0            49154     0          Y       3213
>> Brick pod-sjc1-gluster2:/data/brick3/gv0            49154     0          Y       4036
>> Self-heal Daemon on localhost                       N/A       N/A        Y       17869
>> Self-heal Daemon on pod-sjc1-gluster1.exavault.com  N/A       N/A        Y       3183
>>
>> Task Status of Volume gv0
>> -------------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>>
>> > 5 - Also, could you try unmounting the volume, mounting it again, and
>> checking the size?
>>
>> I have done this a few times but it doesn't seem to help.
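>>
>> (For completeness, the remount is just the usual sequence, something like:
>>
>>   umount /mnt/gv0
>>   mount -t glusterfs pod-sjc1-gluster1:/gv0 /mnt/gv0
>>   df -h /mnt/gv0
>>
>> where /mnt/gv0 stands in for the actual client mount point.)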
>>
>> On Thu, Dec 21, 2017 at 11:18 AM, Ashish Pandey <aspandey at redhat.com>
>> wrote:
>>
>>>
>>> Could you please provide the following:
>>>
>>> 1 - output of gluster volume heal <volname> info
>>> 2 - /var/log/glusterfs - provide the log file <mountpoint>-<volumename>.log
>>> 3 - output of gluster volume info <volname>
>>> 4 - output of gluster volume status <volname>
>>> 5 - Also, could you try unmounting the volume, mounting it again, and
>>> checking the size?
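>>>
>>> Concretely, that boils down to something like the following (substitute
>>> <volname>, <server>, and your client mount point):
>>>
>>>   gluster volume heal <volname> info
>>>   gluster volume info <volname>
>>>   gluster volume status <volname>
>>>   ls /var/log/glusterfs/      # the client mount log is named after the mount path
>>>   umount <mountpoint>
>>>   mount -t glusterfs <server>:/<volname> <mountpoint>
>>>   df -h <mountpoint>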
>>>
>>> ------------------------------
>>> From: "Teknologeek Teknologeek" <teknologeek06 at gmail.com>
>>> To: gluster-users at gluster.org
>>> Sent: Wednesday, December 20, 2017 2:54:40 AM
>>> Subject: [Gluster-users] Wrong volume size with df
>>>
>>>
>>> I have a GlusterFS setup with a distributed-disperse volume, 5 x (4 + 2).
>>>
>>> After a server crash, "gluster peer status" reports all peers as
>>> connected.
>>>
>>> "gluster volume status detail" shows that all bricks are up and running
>>> with the right size, but when I use df from a client mount point, the size
>>> displayed is about 1/6 of the total size.
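>>>
>>> (For scale: with equal bricks of size B, each 4+2 disperse subvolume should
>>> contribute 4 x B of usable space, so df on the client ought to report roughly
>>> 5 x 4 x B = 20 x B for the whole volume.)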
>>>
>>> When browsing the data, though, everything seems to be OK.
>>>
>>> I need some help understanding what's going on, as I can't delete the
>>> volume and recreate it from scratch.
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>

