<div dir="ltr">Sure!<div><br></div><div><span style="color:rgb(0,0,0);font-family:&quot;times new roman&quot;,&quot;new york&quot;,times,serif;font-size:16px">&gt; 1 - output of gluster volume heal &lt;volname&gt; info</span></div><div><br></div><div><div>Brick pod-sjc1-gluster1:/data/brick1/gv0</div><div>Status: Connected</div><div>Number of entries: 0</div><div><br></div><div>Brick pod-sjc1-gluster2:/data/brick1/gv0</div><div>Status: Connected</div><div>Number of entries: 0</div><div><br></div><div>Brick pod-sjc1-gluster1:/data/brick2/gv0</div><div>Status: Connected</div><div>Number of entries: 0</div><div><br></div><div>Brick pod-sjc1-gluster2:/data/brick2/gv0</div><div>Status: Connected</div><div>Number of entries: 0</div><div><br></div><div>Brick pod-sjc1-gluster1:/data/brick3/gv0</div><div>Status: Connected</div><div>Number of entries: 0</div><div><br></div><div>Brick pod-sjc1-gluster2:/data/brick3/gv0</div><div>Status: Connected</div><div>Number of entries: 0</div></div><div><br></div><div><span style="color:rgb(0,0,0);font-family:&quot;times new roman&quot;,&quot;new york&quot;,times,serif;font-size:16px">&gt; 2 - /var/log/glusterfs - provide log file with mountpoint-volumename.log</span></div><div><br></div><div>Attached</div><div><br style="color:rgb(0,0,0);font-family:&quot;times new roman&quot;,&quot;new york&quot;,times,serif;font-size:16px"><span style="color:rgb(0,0,0);font-family:&quot;times new roman&quot;,&quot;new york&quot;,times,serif;font-size:16px">&gt; 3 - output of gluster volume &lt;volname&gt; info</span></div><div><br></div><div><div>[root@pod-sjc1-gluster2 ~]# gluster volume info</div><div> </div><div>Volume Name: gv0</div><div>Type: Distributed-Replicate</div><div>Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196</div><div>Status: Started</div><div>Snapshot Count: 13</div><div>Number of Bricks: 3 x 2 = 6</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: pod-sjc1-gluster1:/data/brick1/gv0</div><div>Brick2: pod-sjc1-gluster2:/data/brick1/gv0</div><div>Brick3: pod-sjc1-gluster1:/data/brick2/gv0</div><div>Brick4: pod-sjc1-gluster2:/data/brick2/gv0</div><div>Brick5: pod-sjc1-gluster1:/data/brick3/gv0</div><div>Brick6: pod-sjc1-gluster2:/data/brick3/gv0</div><div>Options Reconfigured:</div><div>performance.cache-refresh-timeout: 60</div><div>performance.stat-prefetch: on</div><div>server.allow-insecure: on</div><div>performance.flush-behind: on</div><div>performance.rda-cache-limit: 32MB</div><div>network.tcp-window-size: 1048576</div><div>performance.nfs.io-threads: on</div><div>performance.write-behind-window-size: 4MB</div><div>performance.nfs.write-behind-window-size: 512MB</div><div>performance.io-cache: on</div><div>performance.quick-read: on</div><div>features.cache-invalidation: on</div><div>features.cache-invalidation-timeout: 600</div><div>performance.cache-invalidation: on</div><div>performance.md-cache-timeout: 600</div><div>network.inode-lru-limit: 90000</div><div>performance.cache-size: 4GB</div><div>server.event-threads: 16</div><div>client.event-threads: 16</div><div>features.barrier: disable</div><div>transport.address-family: inet</div><div>nfs.disable: on</div><div>performance.client-io-threads: on</div><div>cluster.lookup-optimize: on</div><div>server.outstanding-rpc-limit: 1024</div><div>auto-delete: enable</div><div>You have new mail in /var/spool/mail/root</div></div><div><br></div><div><span style="color:rgb(0,0,0);font-family:&quot;times new roman&quot;,&quot;new york&quot;,times,serif;font-size:16px">&gt; 4 - output of gluster 
> 4 - output of gluster volume <volname> status

[root@pod-sjc1-gluster2 ~]# gluster volume status gv0
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick pod-sjc1-gluster1:/data/brick1/gv0    49152     0          Y       3198
Brick pod-sjc1-gluster2:/data/brick1/gv0    49152     0          Y       4018
Brick pod-sjc1-gluster1:/data/brick2/gv0    49153     0          Y       3205
Brick pod-sjc1-gluster2:/data/brick2/gv0    49153     0          Y       4029
Brick pod-sjc1-gluster1:/data/brick3/gv0    49154     0          Y       3213
Brick pod-sjc1-gluster2:/data/brick3/gv0    49154     0          Y       4036
Self-heal Daemon on localhost               N/A       N/A        Y       17869
Self-heal Daemon on pod-sjc1-gluster1.exavault.com
                                            N/A       N/A        Y       3183

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

> 5 - Also, could you try unmounting the volume and mounting it again, and check the size?

I have done this a few times, but it doesn't seem to help.
On Thu, Dec 21, 2017 at 11:18 AM, Ashish Pandey <aspandey@redhat.com> wrote:

> Could you please provide the following:
>
> 1 - output of gluster volume heal <volname> info
> 2 - /var/log/glusterfs - provide log file with mountpoint-volumename.log
> 3 - output of gluster volume <volname> info
> 4 - output of gluster volume <volname> status
> 5 - Also, could you try unmounting the volume and mounting it again, and check the size?
>
> ----------------------------------------------------------------------
> From: "Teknologeek Teknologeek" <teknologeek06@gmail.com>
> To: gluster-users@gluster.org
> Sent: Wednesday, December 20, 2017 2:54:40 AM
> Subject: [Gluster-users] Wrong volume size with df
>
> I have a glusterfs setup with distributed disperse volumes 5 * (4 + 2).
>
> After a server crash, "gluster peer status" reports all peers as connected.
>
> "gluster volume status detail" shows that all bricks are up and running with the right size, but when I use df from a client mount point, the size displayed is about 1/6 of the total size.
>
> When browsing the data, it seems to be OK, though.
>
> I need some help to understand what's going on, as I can't delete the volume and recreate it from scratch.
_______________________________________________
Gluster-users mailing list
<a href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br></div>