<div dir="ltr"><div>There is no need, but it could happen accidentally, and I think it should either be protected against or not be permitted.<br><br></div> <br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Apr 17, 2017 at 8:36 AM, Atin Mukherjee <span dir="ltr"><<a href="mailto:amukherj@redhat.com" target="_blank">amukherj@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><br></div><div><br><div class="gmail_quote"><div><div class="h5"><div>On Mon, 17 Apr 2017 at 08:23, ABHISHEK PALIWAL <<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div>Hi All,<br><br></div>Here are the steps to reproduce the issue:<br><br><span class="m_4541403935555205702m_-864130623090266048gmail-feeditemtext m_4541403935555205702m_-864130623090266048gmail-cxfeeditemtext"><p>Reproduction steps:</p><p> </p><p>root@128:~# gluster volume create brick 128.224.95.140:/tmp/brick force ----- create the gluster volume</p><p>volume create: brick: success: please start the volume to access data</p><p>root@128:~# gluster volume set brick nfs.disable true</p><p>volume set: success</p><p>root@128:~# gluster volume start brick</p><p>volume start: brick: success</p><p>root@128:~# gluster volume info</p><p>Volume Name: brick</p><p>Type: Distribute</p><p>Volume ID: a59b479a-2b21-426d-962a-79d6d294fee3</p></span></div><div><span class="m_4541403935555205702m_-864130623090266048gmail-feeditemtext m_4541403935555205702m_-864130623090266048gmail-cxfeeditemtext"><p>Status: Started</p><p>Number of Bricks: 1</p><p>Transport-type: tcp</p><p>Bricks:</p></span></div><div><span class="m_4541403935555205702m_-864130623090266048gmail-feeditemtext 
m_4541403935555205702m_-864130623090266048gmail-cxfeeditemtext"><p>Brick1: 128.224.95.140:/tmp/brick</p><p>Options Reconfigured:</p><p>nfs.disable: true</p><p>performance.readdir-ahead: on</p><p>root@128:~# gluster volume status</p><p>Status of volume: brick</p></span></div><div><span class="m_4541403935555205702m_-864130623090266048gmail-feeditemtext m_4541403935555205702m_-864130623090266048gmail-cxfeeditemtext"><p>Gluster process TCP Port RDMA Port Online Pid</p><p>------------------------------<span class="m_4541403935555205702m_-864130623090266048gmail-wbr"></span><wbr>------------------------------<span class="m_4541403935555205702m_-864130623090266048gmail-wbr"></span><wbr>------------------</p></span></div><div><span class="m_4541403935555205702m_-864130623090266048gmail-feeditemtext m_4541403935555205702m_-864130623090266048gmail-cxfeeditemtext"><p>Brick 128.224.95.140:/tmp/brick 49155 0 Y 768</p><p> </p><p>Task Status of Volume brick</p></span></div><div><span class="m_4541403935555205702m_-864130623090266048gmail-feeditemtext m_4541403935555205702m_-864130623090266048gmail-cxfeeditemtext"><p>------------------------------<span class="m_4541403935555205702m_-864130623090266048gmail-wbr"></span><wbr>------------------------------<span class="m_4541403935555205702m_-864130623090266048gmail-wbr"></span><wbr>------------------</p><p>There are no active volume tasks</p><p> </p></span></div><div><span class="m_4541403935555205702m_-864130623090266048gmail-feeditemtext m_4541403935555205702m_-864130623090266048gmail-cxfeeditemtext"><p>root@128:~# mount -t glusterfs 128.224.95.140:/brick gluster/</p><p>root@128:~# cd gluster/</p><p>root@128:~/gluster# du -sh</p><p>0 .</p><p>root@128:~/gluster# mkdir -p test/</p><p>root@128:~/gluster# cp ~/tmp.file gluster/</p><p>root@128:~/gluster# cp tmp.file test</p><p>root@128:~/gluster# cd /tmp/brick</p><p>root@128:/tmp/brick# du -sh *</p><p>768K test</p><p>768K tmp.file</p><p>root@128:/tmp/brick# rm -rf test --------- delete 
the test directory and its data on the server side, which is not reasonable</p><p>root@128:/tmp/brick# ls</p><p>tmp.file</p><p>root@128:/tmp/brick# du -sh *</p><p>768K tmp.file</p><p><b>root@128:/tmp/brick# du -sh (brick dir)</b></p><p><b>1.6M .</b></p><p>root@128:/tmp/brick# cd .glusterfs/</p><p>root@128:/tmp/brick/.glusterfs# du -sh *</p><p>0 00</p><p>0 2a</p><p>0 bb</p><p>768K c8</p><p>0 c9</p><p>0 changelogs</p><p>768K d0</p><p>4.0K health_check</p><p>0 indices</p><p>0 landfill</p><p><b>root@128:/tmp/brick/.glusterfs# du -sh (.glusterfs dir)</b></p><p><b>1.6M .</b></p><p>root@128:/tmp/brick# cd ~/gluster</p><p>root@128:~/gluster# ls</p><p>tmp.file</p><p><b>root@128:~/gluster# du -sh * (Mount dir)</b></p><p><b>768K tmp.file</b></p><p> </p><p>In
the reproduction steps, we delete the test directory on the server side,
not on the client side. I think this delete operation is not reasonable.
Please ask the customer to check whether they perform this unreasonable
operation.</p></span></div></blockquote><div><br></div></div></div><div>What's the need to delete data from the backend (i.e. bricks) directly?<br></div><div><div class="h5"><div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><span class="m_4541403935555205702m_-864130623090266048gmail-feeditemtext m_4541403935555205702m_-864130623090266048gmail-cxfeeditemtext"><p></p><p><br></p><p><span style="color:rgb(255,0,0)"><b>It seems that when deleting the data from the BRICK, the metadata is not deleted from the .glusterfs directory.</b></span></p><p><span style="color:rgb(255,0,0)"><b><br></b></span></p><p><span style="color:rgb(255,0,0)"><b>I don't know whether this is a bug or a limitation; please let us know.</b></span><br></p><p><br></p><p>Regards,<br></p><p>Abhishek<br></p></span><br></div><div class="gmail_extra"></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Apr 13, 2017 at 2:29 PM, Pranith Kumar Karampuri <span><<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><br><div class="gmail_extra"><br><div class="gmail_quote"><span>On Thu, Apr 13, 2017 at 12:19 PM, ABHISHEK PALIWAL <span><<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>Yes, it is ext4. But what is the impact of this?<br></div></blockquote><div><br></div></span><div>Did you have a lot of data before, and then delete all of it? If I remember correctly, ext4 doesn't decrease the size of a directory once it has expanded it. So in ext4, if you create lots and lots of files inside a directory and then delete them all, the directory size increases at creation time but won't decrease after deletion. 
I don't have any system with ext4 at the moment to test this. This is something we faced 5-6 years back, but I'm not sure whether it has been fixed in the latest ext4 releases.<br></div><div><div class="m_4541403935555205702m_-864130623090266048h5"><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div></div><div class="gmail_extra"><div><div class="m_4541403935555205702m_-864130623090266048m_5691273583136462071h5"><br><div class="gmail_quote">On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <span><<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>Yes<br></div><div class="gmail_extra"><div><div class="m_4541403935555205702m_-864130623090266048m_5691273583136462071m_9176244938813743776h5"><br><div class="gmail_quote">On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL <span><<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p>You mean the fs on which this brick has been created?</p><div class="m_4541403935555205702m_-864130623090266048m_5691273583136462071m_9176244938813743776m_-5672094628813993784HOEnZb"><div class="m_4541403935555205702m_-864130623090266048m_5691273583136462071m_9176244938813743776m_-5672094628813993784h5">
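The ext4 behaviour Pranith describes can be probed with a small experiment. This is a hedged sketch (the `dir_size_lifecycle` name is illustrative, and the observed sizes depend on which filesystem backs the temporary directory): it records the directory inode's `st_size` before filling it, after creating many entries, and after deleting them all.

```python
import os
import tempfile

def dir_size_lifecycle(n=2000):
    """Create n empty files in a fresh directory, delete them all,
    and return the directory's st_size at each stage."""
    with tempfile.TemporaryDirectory() as d:
        empty = os.stat(d).st_size          # size of the pristine directory
        names = [os.path.join(d, "f%05d" % i) for i in range(n)]
        for name in names:                  # expand the directory index
            open(name, "w").close()
        full = os.stat(d).st_size
        for name in names:                  # delete every entry again
            os.remove(name)
        emptied = os.stat(d).st_size
        return empty, full, emptied

# On ext4 `emptied` typically stays equal to `full` (the directory index
# is not compacted); other filesystems may shrink it back down.
print(dir_size_lifecycle())
```

Running this on the brick's filesystem would show whether the "directory grows but never shrinks" effect contributes to the `du` numbers seen in the thread.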
<div class="gmail_quote">On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>Is your backend filesystem ext4?<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <span><<a href="mailto:abhishpaliwal@gmail.com" target="_blank">abhishpaliwal@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><p>No, we are not using sharding</p><div class="m_4541403935555205702m_-864130623090266048m_5691273583136462071m_9176244938813743776m_-5672094628813993784m_4681589385748561667m_5122406701403343859HOEnZb"><div class="m_4541403935555205702m_-864130623090266048m_5691273583136462071m_9176244938813743776m_-5672094628813993784m_4681589385748561667m_5122406701403343859h5">
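The backend-filesystem question can be answered on the server without guessing. A quick sketch (the `/tmp` default is only so the commands run anywhere; point `BRICK` at the real brick directory, e.g. `/tmp/brick`):

```shell
# Report the filesystem type backing a brick directory.
# BRICK defaults to /tmp here only so this sketch runs anywhere.
BRICK=${BRICK:-/tmp}
stat -f -c %T "$BRICK"        # GNU stat prints e.g. "ext2/ext3" for ext4
df -T "$BRICK" | tail -n 1    # second column is the filesystem type
```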
<div class="gmail_quote">On Apr 12, 2017 7:29 PM, "Alessandro Briosi" <<a href="mailto:ab1@metalit.com" target="_blank">ab1@metalit.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<div class="m_4541403935555205702m_-864130623090266048m_5691273583136462071m_9176244938813743776m_-5672094628813993784m_4681589385748561667m_5122406701403343859m_479303579361822880m_2617352453806323094moz-cite-prefix">On 12/04/2017 at 14:16, ABHISHEK PALIWAL wrote:<br>
</div>
<blockquote type="cite">I have done more investigation and found out that the brick
dir size is equivalent to the gluster mount point, but .glusterfs
shows a large discrepancy<br>
<br>
</blockquote>
<br>
Are you perhaps using sharding?<br>
<br>
<div class="m_4541403935555205702m_-864130623090266048m_5691273583136462071m_9176244938813743776m_-5672094628813993784m_4681589385748561667m_5122406701403343859m_479303579361822880m_2617352453806323094moz-signature"><br>
<div style="font:10pt Arial;color:#000;display:inline-block">
Best regards.<br>
<i>Alessandro Briosi</i><br>
<br>
<b><span style="color:#418bd4">METAL.it Nord S.r.l.</span></b><br>
Via Maioliche 57/C - 38068 Rovereto (TN)<br>
Tel.+39.0464.430130 - Fax +39.0464.437393<br>
<a class="m_4541403935555205702m_-864130623090266048m_5691273583136462071m_9176244938813743776m_-5672094628813993784m_4681589385748561667m_5122406701403343859m_479303579361822880m_2617352453806323094moz-txt-link-abbreviated" href="http://www.metalit.com" target="_blank">www.metalit.com</a>
</div>
<br>
</div>
</div>
</blockquote></div>
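As background on the `.glusterfs` tree shown in the `du` output earlier in the thread (the two-hex-character directories such as 00, 2a, c8, d0): for every file on a brick, GlusterFS keeps a hard link under `.glusterfs/<first two hex digits of the file's GFID>/<next two>/<full GFID>`, which is why `.glusterfs` accounts for roughly the same space as the data. A small illustrative helper (the `gfid_backend_path` name is hypothetical, and the sample string below merely reuses the thread's volume ID as a GFID-shaped UUID):

```python
import os

def gfid_backend_path(brick_root, gfid):
    """Where a file's GFID hard link lives on the brick:
    .glusterfs/<first two hex digits>/<next two>/<full gfid>."""
    g = gfid.lower()
    return os.path.join(brick_root, ".glusterfs", g[:2], g[2:4], g)

# Sample only: a GFID-shaped string, not a real file's GFID.
print(gfid_backend_path("/tmp/brick", "a59b479a-2b21-426d-962a-79d6d294fee3"))
# /tmp/brick/.glusterfs/a5/9b/a59b479a-2b21-426d-962a-79d6d294fee3
```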
</div></div><br>______________________________<wbr>_________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br><br clear="all"><br>-- <br><div class="m_4541403935555205702m_-864130623090266048m_5691273583136462071m_9176244938813743776m_-5672094628813993784m_4681589385748561667m_5122406701403343859gmail_signature" data-smartmail="gmail_signature"><div>Pranith<br></div></div>
</div>
</blockquote></div>
</div></div></blockquote></div><br><br clear="all"><br></div></div><span class="m_4541403935555205702m_-864130623090266048m_5691273583136462071m_9176244938813743776HOEnZb"><font color="#888888">-- <br><div class="m_4541403935555205702m_-864130623090266048m_5691273583136462071m_9176244938813743776m_-5672094628813993784gmail_signature" data-smartmail="gmail_signature"><div>Pranith<br></div></div>
</font></span></div>
</blockquote></div><br><br clear="all"><br></div></div><span class="m_4541403935555205702m_-864130623090266048m_5691273583136462071HOEnZb"><font color="#888888">-- <br><div class="m_4541403935555205702m_-864130623090266048m_5691273583136462071m_9176244938813743776gmail_signature" data-smartmail="gmail_signature"><div><br><br><br><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</font></span></div>
</blockquote></div></div></div><span class="m_4541403935555205702m_-864130623090266048HOEnZb"><font color="#888888"><br><br clear="all"><br>-- <br><div class="m_4541403935555205702m_-864130623090266048m_5691273583136462071gmail_signature" data-smartmail="gmail_signature"><div>Pranith<br></div></div>
</font></span></div></div>
</blockquote></div><br><br clear="all"><br></div><div class="gmail_extra">-- <br><div class="m_4541403935555205702m_-864130623090266048gmail_signature" data-smartmail="gmail_signature"><div><br><br><br><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</div>
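The space accounting described in this thread is what POSIX hard links do by design: GlusterFS keeps a second directory entry for each file under `.glusterfs`, so an `rm` directly on the brick path removes only one of the names and the blocks stay allocated. A simulation with plain hard links (no GlusterFS involved; `simulate_brick_delete` is an illustrative name):

```python
import os
import tempfile

def simulate_brick_delete():
    """Create a file plus a second hard link (standing in for the
    .glusterfs/<aa>/<bb>/<gfid> link), then delete only the visible
    name, the way `rm -rf` directly on the brick does."""
    with tempfile.TemporaryDirectory() as d:
        data = os.path.join(d, "tmp.file")
        meta = os.path.join(d, "gfid-link")   # stand-in for the GFID link
        with open(data, "w") as f:
            f.write("payload")
        os.link(data, meta)                   # now two names, one inode
        links_before = os.stat(data).st_nlink
        os.remove(data)                       # "delete from the brick"
        survives = os.path.exists(meta)       # inode still reachable
        with open(meta) as f:
            content = f.read()
        return links_before, survives, content

print(simulate_brick_delete())
# (2, True, 'payload')
```

This is why files should be removed through a client mount, where GlusterFS drops the `.glusterfs` link along with the data file.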
</blockquote></div></div></div></div><span class="HOEnZb"><font color="#888888"><div dir="ltr">-- <br></div><div data-smartmail="gmail_signature">- Atin (atinm)</div>
</font></span></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><br><br><br><br>Regards<br>
Abhishek Paliwal<br>
</div></div>
</div>