<div dir="ltr">For what it's worth here, after I added a hot tier to the pool, the brick sizes are now reporting the correct size of all bricks combined instead of just one brick.<div><br></div><div>Not sure if that gives you any clues for this... maybe adding another brick to the pool would have a similar effect?</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <span dir="ltr"><<a href="mailto:tomfite@gmail.com" target="_blank">tomfite@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Sure!<span class=""><div><br></div><div><span style="color:rgb(0,0,0);font-family:"times new roman","new york",times,serif;font-size:16px">> 1 - output of gluster volume heal <volname> info</span></div><div><br></div></span><div><div>Brick pod-sjc1-gluster1:/data/<wbr>brick1/gv0</div><div>Status: Connected</div><div>Number of entries: 0</div><div><br></div><div>Brick pod-sjc1-gluster2:/data/<wbr>brick1/gv0</div><div>Status: Connected</div><div>Number of entries: 0</div><div><br></div><div>Brick pod-sjc1-gluster1:/data/<wbr>brick2/gv0</div><div>Status: Connected</div><div>Number of entries: 0</div><div><br></div><div>Brick pod-sjc1-gluster2:/data/<wbr>brick2/gv0</div><div>Status: Connected</div><div>Number of entries: 0</div><div><br></div><div>Brick pod-sjc1-gluster1:/data/<wbr>brick3/gv0</div><div>Status: Connected</div><div>Number of entries: 0</div><div><br></div><div>Brick pod-sjc1-gluster2:/data/<wbr>brick3/gv0</div><div>Status: Connected</div><div>Number of entries: 0</div></div><span class=""><div><br></div><div><span style="color:rgb(0,0,0);font-family:"times new roman","new york",times,serif;font-size:16px">> 2 - /var/log/glusterfs - provide log file with mountpoint-volumename.log</span></div><div><br></div></span><div>Attached</div><span class=""><div><br style="color:rgb(0,0,0);font-family:"times new roman","new york",times,serif;font-size:16px"><span style="color:rgb(0,0,0);font-family:"times new roman","new york",times,serif;font-size:16px">> 3 - output of gluster volume <volname> info</span></div><div><br></div></span><div><div>[root@pod-sjc1-gluster2 ~]# gluster volume info</div><div> </div><div>Volume Name: gv0</div><div>Type: Distributed-Replicate</div><div>Volume ID: d490a9ec-f9c8-4f10-a7f3-<wbr>e1b6d3ced196</div><div>Status: Started</div><div>Snapshot Count: 13</div><div>Number of Bricks: 3 x 2 = 6</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: pod-sjc1-gluster1:/data/<wbr>brick1/gv0</div><div>Brick2: pod-sjc1-gluster2:/data/<wbr>brick1/gv0</div><div>Brick3: pod-sjc1-gluster1:/data/<wbr>brick2/gv0</div><div>Brick4: pod-sjc1-gluster2:/data/<wbr>brick2/gv0</div><div>Brick5: pod-sjc1-gluster1:/data/<wbr>brick3/gv0</div><div>Brick6: pod-sjc1-gluster2:/data/<wbr>brick3/gv0</div><div>Options Reconfigured:</div><div>performance.cache-refresh-<wbr>timeout: 60</div><div>performance.stat-prefetch: on</div><div>server.allow-insecure: on</div><div>performance.flush-behind: on</div><div>performance.rda-cache-limit: 32MB</div><div>network.tcp-window-size: 1048576</div><div>performance.nfs.io-threads: on</div><div>performance.write-behind-<wbr>window-size: 4MB</div><div>performance.nfs.write-behind-<wbr>window-size: 512MB</div><div>performance.io-cache: on</div><div>performance.quick-read: on</div><div>features.cache-invalidation: on</div><div>features.cache-invalidation-<wbr>timeout: 600</div><div>performance.cache-<wbr>invalidation: on</div><div>performance.md-cache-timeout: 600</div><div>network.inode-lru-limit: 90000</div><div>performance.cache-size: 4GB</div><div>server.event-threads: 16</div><div>client.event-threads: 16</div><div>features.barrier: disable</div><div>transport.address-family: inet</div><div>nfs.disable: on</div><div>performance.client-io-threads: on</div><div>cluster.lookup-optimize: on</div><div>server.outstanding-rpc-limit: 1024</div><div>auto-delete: enable</div><div>You have new mail in /var/spool/mail/root</div></div><span class=""><div><br></div><div><span style="color:rgb(0,0,0);font-family:"times new roman","new york",times,serif;font-size:16px">> 4 - output of gluster volume <volname> status</span></div><div><br></div></span><div><div>[root@pod-sjc1-gluster2 ~]# gluster volume status gv0</div><div>Status of volume: gv0</div><div>Gluster process TCP Port RDMA Port Online Pid</div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>Brick pod-sjc1-gluster1:/data/</div><div>brick1/gv0 49152 0 Y 3198 </div><div>Brick pod-sjc1-gluster2:/data/</div><div>brick1/gv0 49152 0 Y 4018 </div><div>Brick pod-sjc1-gluster1:/data/</div><div>brick2/gv0 49153 0 Y 3205 </div><div>Brick pod-sjc1-gluster2:/data/</div><div>brick2/gv0 49153 0 Y 4029 </div><div>Brick pod-sjc1-gluster1:/data/</div><div>brick3/gv0 49154 0 Y 3213 </div><div>Brick pod-sjc1-gluster2:/data/</div><div>brick3/gv0 49154 0 Y 4036 </div><div>Self-heal Daemon on localhost N/A N/A Y 17869</div><div>Self-heal Daemon on pod-sjc1-gluster1.exava</div><div><a href="http://ult.com" target="_blank">ult.com</a> N/A N/A Y 3183 </div><div> </div><div>Task Status of Volume gv0</div><div>------------------------------<wbr>------------------------------<wbr>------------------</div><div>There are no active volume tasks</div><div> </div></div><span class=""><div><br></div><div><span style="color:rgb(0,0,0);font-family:"times new roman","new york",times,serif;font-size:16px">> 5 - Also, could you try unmount the volume and mount it again and check the size?</span><br></div><div><span style="color:rgb(0,0,0);font-family:"times new roman","new york",times,serif;font-size:16px"><br></span></div></span><div><span style="color:rgb(0,0,0)"><font face="arial, helvetica, sans-serif">I have done this a few times but it doesn't seem to help.</font></span></div></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Dec 21, 2017 at 11:18 AM, Ashish Pandey <span dir="ltr"><<a href="mailto:aspandey@redhat.com" target="_blank">aspandey@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:times new roman,new york,times,serif;font-size:12pt;color:#000000"><br>Could youplease provide following -<br><br>1 - output of gluster volume heal <volname> info<br>2 - /var/log/glusterfs - provide log file with mountpoint-volumename.log<br>3 - output of gluster volume <volname> info<br>4 - output of gluster volume <volname> status<br>5 - Also, could you try unmount the volume and mount it again and check the size?<br><br> <br><br><br><br><br><br><hr id="m_-8037049553253596551m_2186444879813588696zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><b>From: </b>"Teknologeek Teknologeek" <<a href="mailto:teknologeek06@gmail.com" target="_blank">teknologeek06@gmail.com</a>><br><b>To: </b><a href="mailto:gluster-users@gluster.org" target="_blank">gluster-users@gluster.org</a><br><b>Sent: </b>Wednesday, December 20, 2017 2:54:40 AM<br><b>Subject: </b>[Gluster-users] Wrong volume size with df<div><div class="m_-8037049553253596551h5"><br><br><div dir="ltr"><div><div><div><div>I have a glusterfs setup with distributed disperse volumes 5 * ( 4 + 2 ).<br><br></div>After a server crash, "gluster peer status" reports all peers as connected.<br><br></div>"gluster volume status detail" shows that all bricks are up and running with the right size, but when I use df from a client mount point, the size displayed is about 1/6 of the total size.<br><br></div>When browsing the data, they seem to be ok tho.<br><br></div>I need some help to understand what's going on as i can't delete the volume and recreate it from scratch.<br></div>
<br></div></div><span>______________________________<wbr>_________________<br>Gluster-users mailing list<br><a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br><a href="http://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a></span></div><br></div></div><br>______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a><br></blockquote></div><br></div>
</div></div></blockquote></div><br></div>