<div dir="ltr"><div>pylon:/var/lib/glusterd/vols/dev_apkmirror_data # ack shared-brick-count</div><div>dev_apkmirror_data.pylon.mnt-pylon_block3-dev_apkmirror_data.vol</div><div>3:  option shared-brick-count 3</div><div><br></div><div>dev_apkmirror_data.pylon.mnt-pylon_block2-dev_apkmirror_data.vol</div><div>3:  option shared-brick-count 3</div><div><br></div><div>dev_apkmirror_data.pylon.mnt-pylon_block1-dev_apkmirror_data.vol</div><div>3:  option shared-brick-count 3</div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><br>Sincerely,<br>Artem<br><br>--<br>Founder, <a href="http://www.androidpolice.com" target="_blank">Android Police</a>, <a href="http://www.apkmirror.com/" style="font-size:12.8000001907349px" target="_blank">APK Mirror</a><span style="font-size:12.8000001907349px">, Illogical Robot LLC</span></div><div dir="ltr"><a href="http://beerpla.net/" target="_blank">beerpla.net</a> | <a href="https://plus.google.com/+ArtemRussakovskii" target="_blank">+ArtemRussakovskii</a> | <a href="http://twitter.com/ArtemR" target="_blank">@ArtemR</a><br></div></div></div></div></div></div></div></div></div></div></div>
<br><div class="gmail_quote">On Mon, Apr 16, 2018 at 9:22 PM, Nithya Balachandran <span dir="ltr"><<a href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Artem,<div><br></div><div>Was the volume size correct before the bricks were expanded?</div><div><br></div><div>This sounds like [1] but that should have been fixed in 4.0.0. Can you let us know the values of <span style="color:rgb(0,0,0);white-space:pre-wrap"><font face="monospace, monospace">shared-brick-count</font></span><span style="font-family:arial,helvetica,sans-serif;color:rgb(0,0,0);white-space:pre-wrap"> in the files in /var/lib/glusterd/vols/</span><span style="font-family:arial,helvetica,sans-serif;font-size:12.8px">dev_<wbr>apkmirror_data/ ?</span></div><div><br></div><div>[1] <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1541880" target="_blank">https://bugzilla.redhat.co<wbr>m/show_bug.cgi?id=1541880</a></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On 17 April 2018 at 05:17, Artem Russakovskii <span dir="ltr"><<a href="mailto:archon810@gmail.com" target="_blank">archon810@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi Nithya,<div><br></div><div>I'm on Gluster 4.0.1. </div><div><br></div><div>I don't think the bricks were smaller before - if they were, maybe 20GB because Linode's minimum is 20GB, then I extended them to 25GB, resized with resize2fs as instructed, and rebooted many times over since. Yet, gluster refuses to see the full disk size.</div><div><br></div><div>Here's the status detail output:</div><div><br></div><div><div>gluster volume status dev_apkmirror_data detail</div><div>Status of volume: dev_apkmirror_data</div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>Brick        : Brick pylon:/mnt/pylon_block1/dev_ap<wbr>kmirror_data</div><div>TCP Port       : 49152        </div><div>RDMA Port      : 0          </div><div>Online        : Y          </div><div>Pid         : 1263         </div><div>File System     : ext4         </div><div>Device        : /dev/sdd       </div><div>Mount Options    : rw,relatime,data=ordered</div><div>Inode Size      : 256         </div><div>Disk Space Free   : 23.0GB        </div><div>Total Disk Space   : 24.5GB        </div><div>Inode Count     : 1638400       </div><div>Free Inodes     : 1625429       </div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>Brick        : Brick pylon:/mnt/pylon_block2/dev_ap<wbr>kmirror_data</div><div>TCP Port       : 49153        </div><div>RDMA Port      : 0          </div><div>Online        : Y          </div><div>Pid         : 1288         </div><div>File System     : ext4         </div><div>Device        : /dev/sdc       </div><div>Mount Options    : rw,relatime,data=ordered</div><div>Inode Size      : 256         </div><div>Disk Space Free   : 24.0GB        </div><div>Total Disk Space   : 25.5GB        </div><div>Inode Count     : 1703936       </div><div>Free Inodes     : 1690965       </div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>Brick        : Brick pylon:/mnt/pylon_block3/dev_ap<wbr>kmirror_data</div><div>TCP Port       : 49154        
</div><div>RDMA Port      : 0          </div><div>Online        : Y          </div><div>Pid         : 1313         </div><div>File System     : ext4         </div><div>Device        : /dev/sde       </div><div>Mount Options    : rw,relatime,data=ordered</div><div>Inode Size      : 256         </div><div>Disk Space Free   : 23.0GB        </div><div>Total Disk Space   : 24.5GB        </div><div>Inode Count     : 1638400       </div><div>Free Inodes     : 1625433  </div></div><div><br></div><div><br></div><div><br></div><div>What's interesting here is that the gluster volume size is exactly 1/3 of the total (8357M * 3 = 25071M). Yet, each block device is separate, and the total storage available is 25071M on each brick.</div><div><br></div><div>The fstab is as follows:</div><div><div>/dev/disk/by-id/scsi-0Linode_V<wbr>olume_pylon_block1 /mnt/pylon_block1 ext4 defaults 0 2</div><div>/dev/disk/by-id/scsi-0Linode_V<wbr>olume_pylon_block2 /mnt/pylon_block2 ext4 defaults 0 2</div><div>/dev/disk/by-id/scsi-0Linode_V<wbr>olume_pylon_block3 /mnt/pylon_block3 ext4 defaults 0 2</div></div><div><br></div><div><div>localhost:/dev_apkmirror_data  /mnt/dev_apkmirror_data1  glusterfs defaults,_netdev,fopen-keep-ca<wbr>che,direct-io-mode=enable 0 0</div><div>localhost:/dev_apkmirror_data  /mnt/dev_apkmirror_data2  glusterfs defaults,_netdev,fopen-keep-ca<wbr>che,direct-io-mode=enable 0 0</div><div>localhost:/dev_apkmirror_data  /mnt/dev_apkmirror_data3  glusterfs defaults,_netdev,fopen-keep-ca<wbr>che,direct-io-mode=enable 0 0</div><div>localhost:/dev_apkmirror_data  /mnt/dev_apkmirror_data_ganesh<wbr>a  nfs4 defaults,_netdev,bg,intr,soft,<wbr>timeo=5,retrans=5,actimeo=10,r<wbr>etry=5 0 0</div></div><div><br></div><div>The latter entry is for an nfs ganesha test, in case it matters (which, btw, fails miserably with all kinds of stability issues about broken pipes).</div><div class="gmail_extra"><br></div><div class="gmail_extra">Note: this is a test server, so all 3 bricks are attached and mounted on the same server.</div><div class="gmail_extra"><span><br clear="all"><div><div class="m_-4067683178268585011m_5953270917478184854m_-1078111554924762131gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><br>Sincerely,<br>Artem<br><br>--<br>Founder, <a href="http://www.androidpolice.com" target="_blank">Android Police</a>, <a href="http://www.apkmirror.com/" style="font-size:12.8000001907349px" target="_blank">APK Mirror</a><span style="font-size:12.8000001907349px">, Illogical Robot LLC</span></div><div dir="ltr"><a href="http://beerpla.net/" target="_blank">beerpla.net</a> | <a href="https://plus.google.com/+ArtemRussakovskii" target="_blank">+ArtemRussakovskii</a> | <a href="http://twitter.com/ArtemR" target="_blank">@ArtemR</a><br></div></div></div></div></div></div></div></div></div></div></div>
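For what it's worth, the 1/3 relationship is exact and matches the shared-brick-count of 3 in the volfiles: the size reported for the FUSE mounts is one brick's capacity divided by that count.

awk 'BEGIN { printf "%.0f\n", 25071 / 3 }'   # prints 8357 - the 1M-block size df reports for localhost:/dev_apkmirror_data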
On Sun, Apr 15, 2018 at 10:56 PM, Nithya Balachandran <nbalacha@redhat.com> wrote:

What version of Gluster are you running? Were the bricks smaller earlier?

Regards,
Nithya

On 15 April 2018 at 00:09, Artem Russakovskii <archon810@gmail.com> wrote:

Hi,

I have a 3-brick replicate volume, but for some reason I can't get it to expand to the size of the bricks. The bricks are 25GB each, but even after multiple gluster restarts and remounts, the volume is only about 8GB.

I believed I could always extend the bricks (we're using Linode block storage, which allows extending block devices after they're created) and that Gluster would see the newly available space and extend to use it.

Multiple Google searches later, I'm still nowhere. Any ideas?

df | ack "block|data"
Filesystem                     1M-blocks   Used  Available  Use%  Mounted on
/dev/sdd                          25071M  1491M     22284M    7%  /mnt/pylon_block1
/dev/sdc                          26079M  1491M     23241M    7%  /mnt/pylon_block2
/dev/sde                          25071M  1491M     22315M    7%  /mnt/pylon_block3
localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data1
localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data2
localhost:/dev_apkmirror_data      8357M   581M      7428M    8%  /mnt/dev_apkmirror_data3

gluster volume info

Volume Name: dev_apkmirror_data
Type: Replicate
Volume ID: cd5621ee-7fab-401b-b720-08863717ed56
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: pylon:/mnt/pylon_block1/dev_apkmirror_data
Brick2: pylon:/mnt/pylon_block2/dev_apkmirror_data
Brick3: pylon:/mnt/pylon_block3/dev_apkmirror_data
Options Reconfigured:
disperse.eager-lock: off
cluster.lookup-unhashed: auto
cluster.read-hash-mode: 0
performance.strict-o-direct: on
cluster.shd-max-threads: 12
performance.nl-cache-timeout: 600
performance.nl-cache: on
cluster.quorum-count: 1
cluster.quorum-type: fixed
network.ping-timeout: 5
network.remote-dio: enable
performance.rda-cache-limit: 256MB
performance.parallel-readdir: on
network.inode-lru-limit: 500000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
performance.io-thread-count: 32
server.event-threads: 4
client.event-threads: 4
performance.read-ahead: off
cluster.lookup-optimize: on
performance.client-io-threads: on
performance.cache-size: 1GB
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
cluster.readdir-optimize: on

Thank you.

Sincerely,
Artem
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users