[Gluster-users] upgrade to 3.12.1 from 3.10: df returns wrong numbers

Robert Hajime Lanning lanning@lanning.cc
Thu Sep 28 22:02:57 UTC 2017


I found the issue.

Upon upgrade, the CentOS 7 RPMs modify the .vol files. Among other
things, they add "option shared-brick-count \d", set to the number of
bricks in the volume.

As a result, df reports the average free space per brick instead of
the total free space in the volume.
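
You can see the effect in the df output quoted below: both volumes are
8 x 2 over 7.3T bricks, so the mount should show roughly 8 x 7.3T =
58.4T, which the new "test" volume does (59T). "vm-images" instead
shows 7.3T, a single brick's worth, because each brick's statfs result
is divided by its shared-brick-count before DHT sums the subvolumes.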

When I create a new volume, the value of "shared-brick-count" is "1".
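
To see which volfiles carry the option (this is the stock glusterd
state directory on CentOS 7, the same one the fix below edits):

grep -r 'option shared-brick-count' /var/lib/glusterd/vols/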

find /var/lib/glusterd/vols -type f | xargs sed -i \
    -e 's/option shared-brick-count [0-9]*/option shared-brick-count 1/g'
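
A sketch of applying the same edit on every peer (untested; assumes
root ssh to both nodes and that restarting glusterd is enough for the
daemons to re-read the edited volfiles):

for h in st-srv-02 st-srv-03; do
    ssh "$h" "find /var/lib/glusterd/vols -type f | xargs sed -i \
        -e 's/option shared-brick-count [0-9]*/option shared-brick-count 1/g'; \
        systemctl restart glusterd"
done

Keep in mind that glusterd rewrites the volfiles whenever volume
options change (e.g. on a "gluster volume set"), which would undo the
edit, so treat this as a workaround rather than a permanent fix.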

On 09/27/17 17:09, Robert Hajime Lanning wrote:
> Hi,
>
> When I upgraded my cluster, df started returning some odd numbers for 
> my legacy volumes.
>
> On volumes newly created after the upgrade, df works just fine.
>
> I have been researching since Monday and have not found any reference 
> to this symptom.
>
> "vm-images" is the old legacy volume, "test" is the new one.
>
> [root@st-srv-03 ~]# (df -h|grep bricks;ssh st-srv-02 'df -h|grep bricks')|sort
> /dev/sda1                          7.3T  991G  6.4T  14% /bricks/sda1
> /dev/sda1                          7.3T  991G  6.4T  14% /bricks/sda1
> /dev/sdb1                          7.3T  557G  6.8T   8% /bricks/sdb1
> /dev/sdb1                          7.3T  557G  6.8T   8% /bricks/sdb1
> /dev/sdc1                          7.3T  630G  6.7T   9% /bricks/sdc1
> /dev/sdc1                          7.3T  630G  6.7T   9% /bricks/sdc1
> /dev/sdd1                          7.3T  683G  6.7T  10% /bricks/sdd1
> /dev/sdd1                          7.3T  683G  6.7T  10% /bricks/sdd1
> /dev/sde1                          7.3T  657G  6.7T   9% /bricks/sde1
> /dev/sde1                          7.3T  658G  6.7T   9% /bricks/sde1
> /dev/sdf1                          7.3T  711G  6.6T  10% /bricks/sdf1
> /dev/sdf1                          7.3T  711G  6.6T  10% /bricks/sdf1
> /dev/sdg1                          7.3T  756G  6.6T  11% /bricks/sdg1
> /dev/sdg1                          7.3T  756G  6.6T  11% /bricks/sdg1
> /dev/sdh1                          7.3T  753G  6.6T  11% /bricks/sdh1
> /dev/sdh1                          7.3T  753G  6.6T  11% /bricks/sdh1
>
> [root@st-srv-03 ~]# df -h|grep localhost
> localhost:/test                     59T  5.7T   53T  10% /gfs/test
> localhost:/vm-images               7.3T  717G  6.6T  10% /gfs/vm-images
>
> This is on CentOS 7.
>
> Upgrade method was to shut down glusterd/glusterfsd, "yum erase
> centos-release-gluster310", "yum install centos-release-gluster312",
> "yum upgrade -y", then start glusterd.
>
> [root@st-srv-03 ~]# rpm -qa|grep gluster
> glusterfs-cli-3.12.1-1.el7.x86_64
> glusterfs-3.12.1-1.el7.x86_64
> nfs-ganesha-gluster-2.5.2-1.el7.x86_64
> glusterfs-client-xlators-3.12.1-1.el7.x86_64
> glusterfs-server-3.12.1-1.el7.x86_64
> glusterfs-libs-3.12.1-1.el7.x86_64
> glusterfs-api-3.12.1-1.el7.x86_64
> glusterfs-fuse-3.12.1-1.el7.x86_64
> centos-release-gluster312-1.0-1.el7.centos.noarch
>
> [root@st-srv-03 ~]# gluster volume info test
>
> Volume Name: test
> Type: Distributed-Replicate
> Volume ID: b53e0836-575e-46fd-9f86-ab7bf7c07ca9
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 8 x 2 = 16
> Transport-type: tcp
> Bricks:
> Brick1: st-srv-02-stor:/bricks/sda1/test
> Brick2: st-srv-03-stor:/bricks/sda1/test
> Brick3: st-srv-02-stor:/bricks/sdb1/test
> Brick4: st-srv-03-stor:/bricks/sdb1/test
> Brick5: st-srv-02-stor:/bricks/sdc1/test
> Brick6: st-srv-03-stor:/bricks/sdc1/test
> Brick7: st-srv-02-stor:/bricks/sdd1/test
> Brick8: st-srv-03-stor:/bricks/sdd1/test
> Brick9: st-srv-02-stor:/bricks/sde1/test
> Brick10: st-srv-03-stor:/bricks/sde1/test
> Brick11: st-srv-02-stor:/bricks/sdf1/test
> Brick12: st-srv-03-stor:/bricks/sdf1/test
> Brick13: st-srv-02-stor:/bricks/sdg1/test
> Brick14: st-srv-03-stor:/bricks/sdg1/test
> Brick15: st-srv-02-stor:/bricks/sdh1/test
> Brick16: st-srv-03-stor:/bricks/sdh1/test
> Options Reconfigured:
> features.cache-invalidation: on
> server.allow-insecure: on
> auth.allow: 192.168.60.*
> transport.address-family: inet
> nfs.disable: on
> cluster.enable-shared-storage: enable
> nfs-ganesha: disable
> [root@st-srv-03 ~]# gluster volume info vm-images
>
> Volume Name: vm-images
> Type: Distributed-Replicate
> Volume ID: 066a0598-e72e-419f-809e-86fa17f6f81c
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 8 x 2 = 16
> Transport-type: tcp
> Bricks:
> Brick1: st-srv-02-stor:/bricks/sda1/vm-images
> Brick2: st-srv-03-stor:/bricks/sda1/vm-images
> Brick3: st-srv-02-stor:/bricks/sdb1/vm-images
> Brick4: st-srv-03-stor:/bricks/sdb1/vm-images
> Brick5: st-srv-02-stor:/bricks/sdc1/vm-images
> Brick6: st-srv-03-stor:/bricks/sdc1/vm-images
> Brick7: st-srv-02-stor:/bricks/sdd1/vm-images
> Brick8: st-srv-03-stor:/bricks/sdd1/vm-images
> Brick9: st-srv-02-stor:/bricks/sde1/vm-images
> Brick10: st-srv-03-stor:/bricks/sde1/vm-images
> Brick11: st-srv-02-stor:/bricks/sdf1/vm-images
> Brick12: st-srv-03-stor:/bricks/sdf1/vm-images
> Brick13: st-srv-02-stor:/bricks/sdg1/vm-images
> Brick14: st-srv-03-stor:/bricks/sdg1/vm-images
> Brick15: st-srv-02-stor:/bricks/sdh1/vm-images
> Brick16: st-srv-03-stor:/bricks/sdh1/vm-images
> Options Reconfigured:
> features.cache-invalidation: on
> server.allow-insecure: on
> auth.allow: 192.168.60.*
> transport.address-family: inet
> nfs.disable: on
> cluster.enable-shared-storage: enable
> nfs-ganesha: disable
>

-- 
Mr. Flibble
King of the Potato People
http://www.linkedin.com/in/RobertLanning


