<div dir="ltr">Hi Jose,<br><div class="gmail_extra"><br><div class="gmail_quote">On 28 February 2018 at 18:28, Jose V. Carrión <span dir="ltr"><<a href="mailto:jocarbur@gmail.com" target="_blank">jocarbur@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Nithya,<div><br></div><div>I applied the workaround for this bug and now df shows the right size:</div><div><br></div></div></blockquote><div>That is good to hear.</div><div><br></div><div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>[root@stor1 ~]# df -h</div><div>Filesystem       Size  Used Avail Use% Mounted on</div><div>/dev/sdb1        26T  1,1T  25T  4% /mnt/glusterfs/vol0</div><div>/dev/sdc1        50T  16T  34T  33% /mnt/glusterfs/vol1</div><div>stor1data:/volumedisk0</div><div>           101T  3,3T  97T  4% /volumedisk0</div><div>stor1data:/volumedisk1</div><div>           197T  61T  136T  31% /volumedisk1</div></div><div><br></div>
<div><div><br></div><div>[root@stor2 ~]# df -h</div><div>Filesystem       Size  Used Avail Use% Mounted on</div><div>/dev/sdb1        26T  1,1T  25T  4% /mnt/glusterfs/vol0</div><div>/dev/sdc1        50T  16T  34T  33% /mnt/glusterfs/vol1</div><div>stor2data:/volumedisk0</div><div>           101T  3,3T  97T  4% /volumedisk0</div><div>stor2data:/volumedisk1</div><div>           197T  61T  136T  31% /volumedisk1</div></div><div><br></div><div><br></div>
<div><div>[root@stor3 ~]# df -h</div><div>Filesystem       Size  Used Avail Use% Mounted on</div><div>/dev/sdb1        25T  638G  24T  3% /mnt/disk_b1/glusterfs/vol0</div><div>/dev/sdb2        25T  654G  24T  3% /mnt/disk_b2/glusterfs/vol0</div><div>/dev/sdc1        50T  15T  35T  30% /mnt/disk_c/glusterfs/vol1</div><div>/dev/sdd1        50T  15T  35T  30% /mnt/disk_d/glusterfs/vol1</div><div>stor3data:/volumedisk0</div><div>           101T  3,3T  97T  4% /volumedisk0</div><div>stor3data:/volumedisk1</div><div>           197T  61T  136T  31% /volumedisk1</div></div><div><br></div><div><br></div>
<div>However, I'm concerned because, as you can see, volumedisk0 on stor3data is composed of 2 bricks on the same disk but on different partitions (/dev/sdb1 and /dev/sdb2).</div><div>After applying the workaround, the shared-brick-count parameter was set to 1 for all the bricks on all the servers (see below). Could this be an issue?</div><div><br></div></div></blockquote><div>No, this is correct. The shared-brick-count will be > 1 only if multiple bricks share the same partition.</div><div><br></div><div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div></div><div>Also, I can see that stor3data is now unbalanced with respect to stor1data and stor2data. The three nodes have bricks of the same size, but the stor3data bricks have used about 1 TB less than those on stor1data and stor2data:</div></div></blockquote><div><br></div><div><br></div><div>This does not necessarily indicate a problem. The distribution need not be exactly equal and depends on the filenames. Can you provide more information on the kind of dataset (how many files, sizes, etc.) on this volume?
Did you create the volume with all 4 bricks or add some later?</div><div><br></div><div>Regards,</div><div>Nithya</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><br></div>
<div><div>stor1data bricks:<br></div><div>/dev/sdb1        26T  1,1T  25T  4% /mnt/glusterfs/vol0</div><div>/dev/sdc1        50T  16T  34T  33% /mnt/glusterfs/vol1</div><div><br></div><div>stor2data bricks:</div><div>/dev/sdb1        26T  1,1T  25T  4% /mnt/glusterfs/vol0</div><div>/dev/sdc1        50T  16T  34T  33% /mnt/glusterfs/vol1</div><div><br></div><div>stor3data bricks:</div><div>/dev/sdb1        25T  638G  24T  3% /mnt/disk_b1/glusterfs/vol0</div><div>/dev/sdb2        25T  654G  24T  3% /mnt/disk_b2/glusterfs/vol0</div><div>/dev/sdc1        50T  15T  35T  30% /mnt/disk_c/glusterfs/vol1</div><div>/dev/sdd1       50T  15T  35T  30% /mnt/disk_d/glusterfs/vol1</div></div>
<div><div><br></div><div><br></div><div>[root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 0</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 0</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 0</div></div><div><br></div>
<div><div>[root@stor2 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 0</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 0</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 0</div></div>
<div><br></div>
<div><div>[root@stor3 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 0</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 0</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 1</div><div>/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 0</div></div>
<div><br></div><div class="gmail_extra">Thanks for your help,</div><div class="gmail_extra">Best regards.</div><span class="HOEnZb"><font color="#888888"><div class="gmail_extra"><br></div><div class="gmail_extra">Jose V.</div></font></span><div><div class="h5"><div class="gmail_extra"><br></div><div class="gmail_extra"><br><div class="gmail_quote">2018-02-28 5:07 GMT+01:00 Nithya Balachandran <span dir="ltr"><<a href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Jose,<div><br></div><div>There is a known issue with gluster 3.12.x builds (see [1]), so you may be running into this.</div><div><br></div><div>The "shared-brick-count" values seem fine on stor1. Please send us the "grep -n "share" /var/lib/glusterd/vols/volumedisk1/*" results for the other nodes so we can check if they are the cause.</div><div><br></div><div><br></div><div>Regards,</div><div>Nithya</div><div><br></div><div><br></div><div><br></div><div>[1] <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1517260" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1517260</a></div></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="m_3174327891829646280h5">On 28 February 2018 at 03:03, Jose V.
Carrión <span dir="ltr"><<a href="mailto:jocarbur@gmail.com" target="_blank">jocarbur@gmail.com</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div class="m_3174327891829646280h5"><div dir="ltr"><p>Hi, <br>
</p>
<p>A few days ago my whole glusterfs configuration was working fine. Today I
realized that the total size reported by the df command had changed and is
now smaller than the aggregated capacity of all the bricks in the volume.</p>
<p>I checked that the status of all the volumes is fine, all the glusterd
daemons are running and there are no errors in the logs; however, df
reports a wrong total size.<br>
</p>
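<p>The checks mentioned above can be scripted. A minimal sketch, assuming a standard GlusterFS log directory and the volume and mount-point names used in this thread:</p>
<pre>
# Sketch only: volume name, mount point and log directory are assumptions taken
# from this thread / common defaults; adjust to the local setup.
pgrep -l glusterd                                  # management daemon (glusterd) running?
gluster volume status volumedisk1                  # all bricks online?
grep " E " /var/log/glusterfs/*.log | tail -n 5    # most recent error-level log lines, if any
df -h /volumedisk1                                 # total size as seen on the client mount
</pre>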
<p>My configuration for one volume: volumedisk1</p>
[root@stor1 ~]# gluster volume status volumedisk1 detail<br>
<p>Status of volume: volumedisk1<br>
------------------------------<wbr>------------------------------<wbr>------------------<br>
Brick               : Brick stor1data:/mnt/glusterfs/vol1/<wbr>brick1<br>
TCP Port            : 49153              <br>
RDMA Port           : 0                  <br>
Online              : Y                  <br>
Pid                 : 13579              <br>
File System         : xfs                <br>
Device              : /dev/sdc1          <br>
Mount Options       : rw,noatime         <br>
Inode Size          : 512                <br>
Disk Space Free     : 35.0TB             <br>
Total Disk Space    : 49.1TB             <br>
Inode Count         : 5273970048         <br>
Free Inodes         : 5273123069         <br>
------------------------------<wbr>------------------------------<wbr>------------------<br>
Brick               : Brick stor2data:/mnt/glusterfs/vol1/<wbr>brick1<br>
TCP Port            : 49153              <br>
RDMA Port           : 0                  <br>
Online              : Y                  <br>
Pid                 : 13344              <br>
File System         : xfs                <br>
Device              : /dev/sdc1          <br>
Mount Options       : rw,noatime         <br>
Inode Size          : 512                <br>
Disk Space Free     : 35.0TB             <br>
Total Disk Space    : 49.1TB             <br>
Inode Count         : 5273970048         <br>
Free Inodes         : 5273124718         <br>
------------------------------<wbr>------------------------------<wbr>------------------<br>
Brick               : Brick stor3data:/mnt/disk_c/glusterf<wbr>s/vol1/brick1<br>
TCP Port            : 49154              <br>
RDMA Port           : 0                  <br>
Online              : Y                  <br>
Pid                 : 17439              <br>
File System         : xfs                <br>
Device              : /dev/sdc1          <br>
Mount Options       : rw,noatime         <br>
Inode Size          : 512                <br>
Disk Space Free     : 35.7TB             <br>
Total Disk Space    : 49.1TB             <br>
Inode Count         : 5273970048         <br>
Free Inodes         : 5273125437         <br>
------------------------------<wbr>------------------------------<wbr>------------------<br>
Brick               : Brick stor3data:/mnt/disk_d/glusterf<wbr>s/vol1/brick1<br>
TCP Port            : 49155              <br>
RDMA Port           : 0                  <br>
Online              : Y                  <br>
Pid                 : 17459              <br>
File System         : xfs                <br>
Device              : /dev/sdd1          <br>
Mount Options       : rw,noatime         <br>
Inode Size          : 512                <br>
Disk Space Free     : 35.6TB             <br>
Total Disk Space    : 49.1TB             <br>
Inode Count         : 5273970048         <br>
Free Inodes         : 5273127036         <br>
 </p>
<p>Then the full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB + 49.1TB = <b>196.4 TB</b>, but df shows:<br>
</p>
<p>[root@stor1 ~]# df -h<br>
Filesystem           Size Used Avail Use% Mounted on<br>
/dev/sda2             48G  21G  25G 46% /<br>
tmpfs                 32G  80K  32G  1% /dev/shm<br>
/dev/sda1            190M  62M 119M 35% /boot<br>
/dev/sda4            395G 251G 124G 68% /data<br>
/dev/sdb1             26T 601G  25T  3% /mnt/glusterfs/vol0<br>
/dev/sdc1             50T  15T  36T 29% /mnt/glusterfs/vol1<br>
stor1data:/volumedisk0<br>
                      76T 1,6T  74T  3% /volumedisk0<br>
stor1data:/volumedisk1<br>
                     <b>148T</b>  42T 106T 29% /volumedisk1</p>
<p>That is exactly one brick less: 196.4 TB - 49.1 TB ≈ 148 TB.<br>
</p>
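<p>A quick way to cross-check this arithmetic is to sum the per-brick "Total Disk Space" values straight from the status output and compare the result with what the client mount reports. A minimal sketch, assuming every brick prints its size in TB as in the output above (the awk field position is an assumption based on that output):</p>
<pre>
# Sketch only: assumes lines of the form "Total Disk Space    : 49.1TB"
gluster volume status volumedisk1 detail \
  | awk '/Total Disk Space/ {sum += $5} END {printf "expected: %.1f TB\n", sum}'
df -h /volumedisk1    # compare with the Size column reported on the client mount
</pre>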
<p>It's a production system so I hope you can help me.<br>
</p>
<p>Thanks in advance.</p>
<p>Jose V.</p>
<p><br>
</p>
<p>Below is some additional data on my configuration:<br>
</p>
<p>[root@stor1 ~]# gluster volume info<br>
 <br>
Volume Name: volumedisk0<br>
Type: Distribute<br>
Volume ID: 0ee52d94-1131-4061-bcef-bd8cf8<wbr>98da10<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 4<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: stor1data:/mnt/glusterfs/vol0/<wbr>brick1<br>
Brick2: stor2data:/mnt/glusterfs/vol0/<wbr>brick1<br>
Brick3: stor3data:/mnt/disk_b1/gluster<wbr>fs/vol0/brick1<br>
Brick4: stor3data:/mnt/disk_b2/gluster<wbr>fs/vol0/brick1<br>
Options Reconfigured:<br>
performance.cache-size: 4GB<br>
cluster.min-free-disk: 1%<br>
performance.io-thread-count: 16<br>
performance.readdir-ahead: on<br>
 <br>
Volume Name: volumedisk1<br>
Type: Distribute<br>
Volume ID: 591b7098-800e-4954-82a9-6b6d81<wbr>c9e0a2<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 4<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: stor1data:/mnt/glusterfs/vol1/<wbr>brick1<br>
Brick2: stor2data:/mnt/glusterfs/vol1/<wbr>brick1<br>
Brick3: stor3data:/mnt/disk_c/glusterf<wbr>s/vol1/brick1<br>
Brick4: stor3data:/mnt/disk_d/glusterf<wbr>s/vol1/brick1<br>
Options Reconfigured:<br>
cluster.min-free-inodes: 6%<br>
performance.cache-size: 4GB<br>
cluster.min-free-disk: 1%<br>
performance.io-thread-count: 16<br>
performance.readdir-ahead: on</p>
<p>[root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 1<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 1<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:   option shared-brick-count 0<br>
/var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:   option shared-brick-count 0<br>
</p>
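<p>For anyone comparing notes with the bug linked earlier in this thread: the notable part of the output above is that several bricks show shared-brick-count 0 even though no two bricks of volumedisk1 sit on the same partition, which matches the symptom of df under-reporting the volume size. A minimal sketch for checking the value on every node at once; it assumes passwordless ssh between the peers and only looks at the active .vol files (the .rpmsave copies are typically leftovers from an RPM upgrade):</p>
<pre>
# Sketch only: hostnames taken from this thread, passwordless ssh assumed
for h in stor1data stor2data stor3data; do
  echo "== $h =="
  ssh "$h" 'grep -H "shared-brick-count" /var/lib/glusterd/vols/volumedisk1/*.vol'
done
# With one brick per partition, every line should read "option shared-brick-count 1"
</pre>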
<p><br>
</p></div>
<br></div></div>______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a><br></blockquote></div><br></div>
</blockquote></div><br></div></div></div></div>
</blockquote></div><br></div></div>