<div dir="ltr">Hi Jose,<div><br></div><div>There is a known issue with the gluster 3.12.x builds (see [1]), so you may be running into it.</div><div><br></div><div>The &quot;shared-brick-count&quot; values look fine on stor1. Please send us the output of &quot;grep -n &quot;share&quot; /var/lib/glusterd/vols/volumedisk1/*&quot; from the other nodes so we can check whether they are the cause.</div><div><br></div><div>Regards,</div><div>Nithya</div><div><br></div><div>[1] <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1517260">https://bugzilla.redhat.com/show_bug.cgi?id=1517260</a></div></div><div class="gmail_extra"><br><div class="gmail_quote">On 28 February 2018 at 03:03, Jose V. Carrión <span dir="ltr">&lt;<a href="mailto:jocarbur@gmail.com" target="_blank">jocarbur@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><p>Hi,<br>
</p>
<p>A few days ago my glusterfs configuration was working fine. Today I 
realized that the total size reported by the df command has changed and is 
smaller than the aggregated capacity of all the bricks in the volume.</p>
<p>I checked that all the volume statuses are fine, all the glusterd 
daemons are running, and there are no errors in the logs; however, df shows a wrong 
total size.<br>
</p>
<p>My configuration for one volume: volumedisk1</p>
[root@stor1 ~]# gluster volume status volumedisk1  detail<br>
<p>Status of volume: volumedisk1<br>
------------------------------------------------------------------------------<br>
Brick                : Brick stor1data:/mnt/glusterfs/vol1/brick1<br>
TCP Port             : 49153               <br>
RDMA Port            : 0                   <br>
Online               : Y                   <br>
Pid                  : 13579               <br>
File System          : xfs                 <br>
Device               : /dev/sdc1           <br>
Mount Options        : rw,noatime          <br>
Inode Size           : 512                 <br>
Disk Space Free      : 35.0TB              <br>
Total Disk Space     : 49.1TB              <br>
Inode Count          : 5273970048          <br>
Free Inodes          : 5273123069          <br>
------------------------------------------------------------------------------<br>
Brick                : Brick stor2data:/mnt/glusterfs/vol1/brick1<br>
TCP Port             : 49153               <br>
RDMA Port            : 0                   <br>
Online               : Y                   <br>
Pid                  : 13344               <br>
File System          : xfs                 <br>
Device               : /dev/sdc1           <br>
Mount Options        : rw,noatime          <br>
Inode Size           : 512                 <br>
Disk Space Free      : 35.0TB              <br>
Total Disk Space     : 49.1TB              <br>
Inode Count          : 5273970048          <br>
Free Inodes          : 5273124718          <br>
------------------------------------------------------------------------------<br>
Brick                : Brick stor3data:/mnt/disk_c/glusterfs/vol1/brick1<br>
TCP Port             : 49154               <br>
RDMA Port            : 0                   <br>
Online               : Y                   <br>
Pid                  : 17439               <br>
File System          : xfs                 <br>
Device               : /dev/sdc1           <br>
Mount Options        : rw,noatime          <br>
Inode Size           : 512                 <br>
Disk Space Free      : 35.7TB              <br>
Total Disk Space     : 49.1TB              <br>
Inode Count          : 5273970048          <br>
Free Inodes          : 5273125437          <br>
------------------------------------------------------------------------------<br>
Brick                : Brick stor3data:/mnt/disk_d/glusterfs/vol1/brick1<br>
TCP Port             : 49155               <br>
RDMA Port            : 0                   <br>
Online               : Y                   <br>
Pid                  : 17459               <br>
File System          : xfs                 <br>
Device               : /dev/sdd1           <br>
Mount Options        : rw,noatime          <br>
Inode Size           : 512                 <br>
Disk Space Free      : 35.6TB              <br>
Total Disk Space     : 49.1TB              <br>
Inode Count          : 5273970048          <br>
Free Inodes          : 5273127036          <br>
 </p>
<p>The full size for volumedisk1 should therefore be: 49.1 TB + 49.1 TB + 49.1 TB + 49.1 TB = <b>196.4 TB</b>, but df shows:<br>
</p>
<p>[root@stor1 ~]# df -h<br>
Filesystem            Size  Used Avail Use% Mounted on<br>
/dev/sda2              48G   21G   25G  46% /<br>
tmpfs                  32G   80K   32G   1% /dev/shm<br>
/dev/sda1             190M   62M  119M  35% /boot<br>
/dev/sda4             395G  251G  124G  68% /data<br>
/dev/sdb1              26T  601G   25T   3% /mnt/glusterfs/vol0<br>
/dev/sdc1              50T   15T   36T  29% /mnt/glusterfs/vol1<br>
stor1data:/volumedisk0<br>
                       76T  1,6T   74T   3% /volumedisk0<br>
stor1data:/volumedisk1<br>
                      <b>148T</b>   42T  106T  29% /volumedisk1</p>
<p>That is one brick&#39;s capacity missing: 196.4 TB - 49.1 TB = 147.3 TB, which (after rounding) matches the 148T that df reports.<br>
</p>
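<p>The arithmetic can be double-checked in the shell (brick sizes copied from the status output above; this is plain arithmetic, not a gluster command, and df&#39;s 148T is rounded, so the gap comes out at roughly one brick):</p>

```shell
# Sum the per-brick "Total Disk Space" values (TB) from the status output
# and compare with the aggregate size that df reports for the volume.
expected=$(awk 'BEGIN { printf "%.1f", 49.1 * 4 }')   # four 49.1 TB bricks
reported=148                                          # size df shows for /volumedisk1
missing=$(awk -v e="$expected" -v r="$reported" 'BEGIN { printf "%.1f", e - r }')
echo "expected ${expected} TB, df reports ${reported} TB, missing ${missing} TB"
```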
<p>It&#39;s a production system so I hope you can help me.<br>
</p>
<p>Thanks in advance.</p>
<p>Jose V.</p>
<p><br>
</p>
<p>Below is some more data about my configuration:<br>
</p>
<p>[root@stor1 ~]# gluster volume info<br>
 <br>
Volume Name: volumedisk0<br>
Type: Distribute<br>
Volume ID: 0ee52d94-1131-4061-bcef-bd8cf898da10<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 4<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: stor1data:/mnt/glusterfs/vol0/brick1<br>
Brick2: stor2data:/mnt/glusterfs/vol0/brick1<br>
Brick3: stor3data:/mnt/disk_b1/glusterfs/vol0/brick1<br>
Brick4: stor3data:/mnt/disk_b2/glusterfs/vol0/brick1<br>
Options Reconfigured:<br>
performance.cache-size: 4GB<br>
cluster.min-free-disk: 1%<br>
performance.io-thread-count: 16<br>
performance.readdir-ahead: on<br>
 <br>
Volume Name: volumedisk1<br>
Type: Distribute<br>
Volume ID: 591b7098-800e-4954-82a9-6b6d81c9e0a2<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 4<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: stor1data:/mnt/glusterfs/vol1/brick1<br>
Brick2: stor2data:/mnt/glusterfs/vol1/brick1<br>
Brick3: stor3data:/mnt/disk_c/glusterfs/vol1/brick1<br>
Brick4: stor3data:/mnt/disk_d/glusterfs/vol1/brick1<br>
Options Reconfigured:<br>
cluster.min-free-inodes: 6%<br>
performance.cache-size: 4GB<br>
cluster.min-free-disk: 1%<br>
performance.io-thread-count: 16<br>
performance.readdir-ahead: on</p>
<p>[root@stor1 ~]# grep -n &quot;share&quot; /var/lib/glusterd/vols/volumedisk1/*<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0<br>
</p>
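<p>For context: shared-brick-count tells glusterd how many bricks of the volume live on the same underlying filesystem, and (as I understand the statfs aggregation) each brick contributes device_size / shared-brick-count to the df total, so co-located bricks do not double-count their device. A value of 0 on the stor2data and stor3data bricks is therefore suspicious. A sketch of the intended accounting, with illustrative sizes:</p>

```shell
# Simplified model of the intended accounting: each brick contributes
# device_size / shared-brick-count, so bricks that share one device
# split its capacity instead of double-counting it.
awk 'BEGIN {
  # four bricks on four separate 49.1 TB devices: shared-brick-count = 1 each
  separate = 49.1/1 + 49.1/1 + 49.1/1 + 49.1/1
  # two bricks on ONE 49.1 TB device: shared-brick-count = 2 each
  shared   = 49.1/2 + 49.1/2
  printf "separate devices: %.1f TB\n", separate
  printf "shared device:    %.1f TB\n", shared
}'
```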
<p><br>
</p></div>
<br>_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailman/listinfo/gluster-users</a><br></blockquote></div><br></div>