<div dir="ltr"><p>Hi, <br>
</p>
<p>A few days ago my GlusterFS setup was working fine. Today I realized that the total size reported by the df command has changed and is now smaller than the aggregated capacity of all the bricks in the volume.</p>
<p>I have checked that the status of all volumes is fine, all the glusterd daemons are running, and there are no errors in the logs; however, df shows the wrong total size.<br>
</p>
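<p>For reference, the checks I ran were roughly the following (the glusterd log path is the default one and may differ on other installs):<br>
</p>
<p>[root@stor1 ~]# gluster volume status<br>
[root@stor1 ~]# pgrep -l glusterd<br>
[root@stor1 ~]# grep " E " /var/log/glusterfs/glusterd.log | tail<br>
</p>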
<p>Here is the detailed status of one of my volumes, volumedisk1:</p>
[root@stor1 ~]# gluster volume status volumedisk1 detail<br>
<p>Status of volume: volumedisk1<br>
------------------------------------------------------------------------------<br>
Brick : Brick stor1data:/mnt/glusterfs/vol1/brick1<br>
TCP Port : 49153 <br>
RDMA Port : 0 <br>
Online : Y <br>
Pid : 13579 <br>
File System : xfs <br>
Device : /dev/sdc1 <br>
Mount Options : rw,noatime <br>
Inode Size : 512 <br>
Disk Space Free : 35.0TB <br>
Total Disk Space : 49.1TB <br>
Inode Count : 5273970048 <br>
Free Inodes : 5273123069 <br>
------------------------------------------------------------------------------<br>
Brick : Brick stor2data:/mnt/glusterfs/vol1/brick1<br>
TCP Port : 49153 <br>
RDMA Port : 0 <br>
Online : Y <br>
Pid : 13344 <br>
File System : xfs <br>
Device : /dev/sdc1 <br>
Mount Options : rw,noatime <br>
Inode Size : 512 <br>
Disk Space Free : 35.0TB <br>
Total Disk Space : 49.1TB <br>
Inode Count : 5273970048 <br>
Free Inodes : 5273124718 <br>
------------------------------------------------------------------------------<br>
Brick : Brick stor3data:/mnt/disk_c/glusterfs/vol1/brick1<br>
TCP Port : 49154 <br>
RDMA Port : 0 <br>
Online : Y <br>
Pid : 17439 <br>
File System : xfs <br>
Device : /dev/sdc1 <br>
Mount Options : rw,noatime <br>
Inode Size : 512 <br>
Disk Space Free : 35.7TB <br>
Total Disk Space : 49.1TB <br>
Inode Count : 5273970048 <br>
Free Inodes : 5273125437 <br>
------------------------------------------------------------------------------<br>
Brick : Brick stor3data:/mnt/disk_d/glusterfs/vol1/brick1<br>
TCP Port : 49155 <br>
RDMA Port : 0 <br>
Online : Y <br>
Pid : 17459 <br>
File System : xfs <br>
Device : /dev/sdd1 <br>
Mount Options : rw,noatime <br>
Inode Size : 512 <br>
Disk Space Free : 35.6TB <br>
Total Disk Space : 49.1TB <br>
Inode Count : 5273970048 <br>
Free Inodes : 5273127036 <br>
</p>
<p>So the total size for volumedisk1 should be 49.1TB + 49.1TB + 49.1TB + 49.1TB = <b>196.4 TB</b>, but df shows:<br>
</p>
<p>[root@stor1 ~]# df -h<br>
Filesystem Size Used Avail Use% Mounted on<br>
/dev/sda2 48G 21G 25G 46% /<br>
tmpfs 32G 80K 32G 1% /dev/shm<br>
/dev/sda1 190M 62M 119M 35% /boot<br>
/dev/sda4 395G 251G 124G 68% /data<br>
/dev/sdb1 26T 601G 25T 3% /mnt/glusterfs/vol0<br>
/dev/sdc1 50T 15T 36T 29% /mnt/glusterfs/vol1<br>
stor1data:/volumedisk0<br>
76T 1,6T 74T 3% /volumedisk0<br>
stor1data:/volumedisk1<br>
<b>148T</b> 42T 106T 29% /volumedisk1</p>
<p>That is almost exactly one brick missing: 196.4 TB - 49.1 TB = 147.3 TB, which matches the 148T that df reports.<br>
</p>
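<p>As a sanity check, the expected total can be computed straight from the status output (a quick awk one-liner; it assumes the "Total Disk Space" lines keep the exact format shown above):<br>
</p>
<p>[root@stor1 ~]# gluster volume status volumedisk1 detail | awk '/Total Disk Space/ {sum += $5} END {print sum " TB"}'<br>
196.4 TB<br>
</p>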
<p>This is a production system, so I hope you can help me.<br>
</p>
<p>Thanks in advance.</p>
<p>Jose V.</p>
<p><br>
</p>
<p>Below is some additional information about my configuration:<br>
</p>
<p>[root@stor1 ~]# gluster volume info<br>
<br>
Volume Name: volumedisk0<br>
Type: Distribute<br>
Volume ID: 0ee52d94-1131-4061-bcef-bd8cf898da10<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 4<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: stor1data:/mnt/glusterfs/vol0/brick1<br>
Brick2: stor2data:/mnt/glusterfs/vol0/brick1<br>
Brick3: stor3data:/mnt/disk_b1/glusterfs/vol0/brick1<br>
Brick4: stor3data:/mnt/disk_b2/glusterfs/vol0/brick1<br>
Options Reconfigured:<br>
performance.cache-size: 4GB<br>
cluster.min-free-disk: 1%<br>
performance.io-thread-count: 16<br>
performance.readdir-ahead: on<br>
<br>
Volume Name: volumedisk1<br>
Type: Distribute<br>
Volume ID: 591b7098-800e-4954-82a9-6b6d81c9e0a2<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 4<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: stor1data:/mnt/glusterfs/vol1/brick1<br>
Brick2: stor2data:/mnt/glusterfs/vol1/brick1<br>
Brick3: stor3data:/mnt/disk_c/glusterfs/vol1/brick1<br>
Brick4: stor3data:/mnt/disk_d/glusterfs/vol1/brick1<br>
Options Reconfigured:<br>
cluster.min-free-inodes: 6%<br>
performance.cache-size: 4GB<br>
cluster.min-free-disk: 1%<br>
performance.io-thread-count: 16<br>
performance.readdir-ahead: on</p>
<p>[root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 1<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 1<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3: option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3: option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3: option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3: option shared-brick-count 0<br>
</p>
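<p>In case it is useful, the same grep can be repeated on every node to compare the values (a quick loop sketch; it assumes passwordless ssh from stor1data to the peers):<br>
</p>
<p>[root@stor1 ~]# for h in stor1data stor2data stor3data; do echo "== $h =="; ssh "$h" 'grep -n shared-brick-count /var/lib/glusterd/vols/volumedisk1/*.vol'; done<br>
</p>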
</div>