[Gluster-users] df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)

Nithya Balachandran nbalacha at redhat.com
Wed Feb 28 04:07:16 UTC 2018


Hi Jose,

There is a known issue with the gluster 3.12.x builds (see [1]), so you may be
running into this.
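
As far as I understand it, the known issue is that glusterd can write a wrong
"shared-brick-count" into the brick volfiles, and the brick divides the disk
space it reports by that value (the option exists so that several bricks
sharing one filesystem are not counted twice), so a wrong count directly
shrinks the totals df sees. As a purely hypothetical illustration (not a
diagnosis, and not real values from your other nodes): if, say, the two
stor3data bricks had been written out with shared-brick-count 2, each would
report only half of its 49.1TB and the volume total would drop by exactly one
brick, which is the pattern in your df output below:

awk 'BEGIN {
    brick = 49.1                      # real size of each brick, in TB
    ok    = brick / 1 + brick / 1     # stor1data + stor2data bricks, count 1
    bad   = brick / 2 + brick / 2     # stor3data bricks if the count were 2
    printf "reported: %.1f TB  (expected: %.1f TB)\n", ok + bad, 4 * brick
}'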

The "shared-brick-count" values seem fine on stor1. Please send us "grep -n
"share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes
so we can check if they are the cause.
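
In case it is quicker, something like this (run from any node that can ssh to
the others; the hostnames are taken from your volume info below) should
collect the output from all three servers in one go:

for h in stor1data stor2data stor3data; do
    echo "### $h"
    ssh "$h" 'grep -n "share" /var/lib/glusterd/vols/volumedisk1/*'
done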


Regards,
Nithya



[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260

On 28 February 2018 at 03:03, Jose V. Carrión <jocarbur at gmail.com> wrote:

> Hi,
>
> A few days ago my glusterfs configuration was working fine. Today I
> noticed that the total size reported by the df command has changed and is
> now smaller than the aggregated capacity of all the bricks in the volume.
>
> I checked that the status of all the volumes is fine, all the glusterd
> daemons are running and there are no errors in the logs; however, df still
> shows a wrong total size.
>
> My configuration for one volume: volumedisk1
> [root@stor1 ~]# gluster volume status volumedisk1 detail
>
> Status of volume: volumedisk1
> ------------------------------------------------------------------------------
> Brick                : Brick stor1data:/mnt/glusterfs/vol1/brick1
> TCP Port             : 49153
> RDMA Port            : 0
> Online               : Y
> Pid                  : 13579
> File System          : xfs
> Device               : /dev/sdc1
> Mount Options        : rw,noatime
> Inode Size           : 512
> Disk Space Free      : 35.0TB
> Total Disk Space     : 49.1TB
> Inode Count          : 5273970048
> Free Inodes          : 5273123069
> ------------------------------------------------------------------------------
> Brick                : Brick stor2data:/mnt/glusterfs/vol1/brick1
> TCP Port             : 49153
> RDMA Port            : 0
> Online               : Y
> Pid                  : 13344
> File System          : xfs
> Device               : /dev/sdc1
> Mount Options        : rw,noatime
> Inode Size           : 512
> Disk Space Free      : 35.0TB
> Total Disk Space     : 49.1TB
> Inode Count          : 5273970048
> Free Inodes          : 5273124718
> ------------------------------------------------------------------------------
> Brick                : Brick stor3data:/mnt/disk_c/glusterfs/vol1/brick1
> TCP Port             : 49154
> RDMA Port            : 0
> Online               : Y
> Pid                  : 17439
> File System          : xfs
> Device               : /dev/sdc1
> Mount Options        : rw,noatime
> Inode Size           : 512
> Disk Space Free      : 35.7TB
> Total Disk Space     : 49.1TB
> Inode Count          : 5273970048
> Free Inodes          : 5273125437
> ------------------------------------------------------------------------------
> Brick                : Brick stor3data:/mnt/disk_d/glusterfs/vol1/brick1
> TCP Port             : 49155
> RDMA Port            : 0
> Online               : Y
> Pid                  : 17459
> File System          : xfs
> Device               : /dev/sdd1
> Mount Options        : rw,noatime
> Inode Size           : 512
> Disk Space Free      : 35.6TB
> Total Disk Space     : 49.1TB
> Inode Count          : 5273970048
> Free Inodes          : 5273127036
>
>
> The total size for volumedisk1 should therefore be 49.1TB + 49.1TB + 49.1TB
> + 49.1TB = 196.4 TB, but df shows:
>
> [root@stor1 ~]# df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda2              48G   21G   25G  46% /
> tmpfs                  32G   80K   32G   1% /dev/shm
> /dev/sda1             190M   62M  119M  35% /boot
> /dev/sda4             395G  251G  124G  68% /data
> /dev/sdb1              26T  601G   25T   3% /mnt/glusterfs/vol0
> /dev/sdc1              50T   15T   36T  29% /mnt/glusterfs/vol1
> stor1data:/volumedisk0
>                        76T  1,6T   74T   3% /volumedisk0
> stor1data:/volumedisk1
>                       148T   42T  106T  29% /volumedisk1
>
> That is exactly one brick missing: 196.4 TB - 49.1 TB = 147.3 TB, roughly
> the 148T that df reports.
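>
> A rough way to cross-check that expected aggregate straight from the status
> output above (a sketch that simply sums the "Total Disk Space" lines and
> assumes they are all reported in TB) is:
>
> gluster volume status volumedisk1 detail | \
>     awk -F: '/Total Disk Space/ {sum += $2} END {printf "%.1f TB\n", sum}'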
>
> It's a production system so I hope you can help me.
>
> Thanks in advance.
>
> Jose V.
>
>
> Below some other data of my configuration:
>
> [root@stor1 ~]# gluster volume info
>
> Volume Name: volumedisk0
> Type: Distribute
> Volume ID: 0ee52d94-1131-4061-bcef-bd8cf898da10
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 4
> Transport-type: tcp
> Bricks:
> Brick1: stor1data:/mnt/glusterfs/vol0/brick1
> Brick2: stor2data:/mnt/glusterfs/vol0/brick1
> Brick3: stor3data:/mnt/disk_b1/glusterfs/vol0/brick1
> Brick4: stor3data:/mnt/disk_b2/glusterfs/vol0/brick1
> Options Reconfigured:
> performance.cache-size: 4GB
> cluster.min-free-disk: 1%
> performance.io-thread-count: 16
> performance.readdir-ahead: on
>
> Volume Name: volumedisk1
> Type: Distribute
> Volume ID: 591b7098-800e-4954-82a9-6b6d81c9e0a2
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 4
> Transport-type: tcp
> Bricks:
> Brick1: stor1data:/mnt/glusterfs/vol1/brick1
> Brick2: stor2data:/mnt/glusterfs/vol1/brick1
> Brick3: stor3data:/mnt/disk_c/glusterfs/vol1/brick1
> Brick4: stor3data:/mnt/disk_d/glusterfs/vol1/brick1
> Options Reconfigured:
> cluster.min-free-inodes: 6%
> performance.cache-size: 4GB
> cluster.min-free-disk: 1%
> performance.io-thread-count: 16
> performance.readdir-ahead: on
>
> [root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0
> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>