<div dir="ltr"><div><div><div><div><div><div>Unfortunately you'll need to delete those shards manually from the bricks.<br></div>I am assuming you know how to identify shards that belong to a particular image.<br></div>Since the VM is deleted, no IO will be happening on those remaining shards.<br></div><br></div>You would need to identify the shards, find all hard links associated with every shard,<br></div>and delete the shards and their hard links from the backend.<br><br></div><div>Do you mind raising a bug for this issue? I'll send a patch to move the deletion of the shards<br></div><div>to the background.<br><br><a href="https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS">https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS</a><br><br></div>-Krutika<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 9, 2017 at 12:29 AM, Georgi Mirchev <span dir="ltr"><<a href="mailto:gmirchev@usa.net" target="_blank">gmirchev@usa.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><span class="">
<div class="m_-1325316210594450546moz-cite-prefix">На 03/08/2017 в 03:37 PM, Krutika
Dhananjay написа:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div>Thanks for your feedback.<br>
</div>
<div><br>
</div>
<div>May I know what was the shard-block-size?<br>
</div>
<div><br>
</div>
</div>
</blockquote>
>
> The shard size is 4 MB.
<blockquote type="cite">
<div dir="ltr">
<div>One way to fix this would be to make shard translator
delete only the base file (0th shard) in the IO path and move<br>
</div>
<div>the deletion of the rest of the shards to background. I'll
work on this.<br>
</div>
<div><br>
</div>
</div>
</blockquote></span>
>
> Is there a manual way?
<blockquote type="cite">
<div dir="ltr">
<div>-Krutika<br>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Fri, Mar 3, 2017 at 10:35 PM, GEORGI
MIRCHEV <span dir="ltr"><<a href="mailto:gmirchev@usa.net" target="_blank">gmirchev@usa.net</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
I have deleted two large files (around 1 TB each) via
gluster client (mounted<br>
on /mnt folder). I used a simple rm command, e.g "rm
/mnt/hugefile". This<br>
resulted in hang of the cluster (no io can be done, the VM
hanged). After a<br>
few minutes my ssh connection to the gluster node gets
disconnected - I had to<br>
reconnect, which was very strange, probably some kind of
timeout. Nothing in<br>
dmesg so it's probably the ssh that terminated the
connection.<br>
<br>
After that the cluster works, everything seems fine, the
file is gone in the<br>
client but the space is not reclaimed.<br>
<br>
The deleted file is also gone from bricks, but the shards
are still there and<br>
use up all the space.<br>
<br>
I need to reclaim the space. How do I delete the shards /
other metadata for a<br>
file that no longer exists?<br>
<br>
<br>
>>> Versions:
>>> glusterfs-server-3.8.9-1.el7.x86_64
>>> glusterfs-client-xlators-3.8.9-1.el7.x86_64
>>> glusterfs-geo-replication-3.8.9-1.el7.x86_64
>>> glusterfs-3.8.9-1.el7.x86_64
>>> glusterfs-fuse-3.8.9-1.el7.x86_64
>>> vdsm-gluster-4.19.4-1.el7.centos.noarch
>>> glusterfs-cli-3.8.9-1.el7.x86_64
>>> glusterfs-libs-3.8.9-1.el7.x86_64
>>> glusterfs-api-3.8.9-1.el7.x86_64
>>>
>>> --
>>> Georgi Mirchev
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://lists.gluster.org/mailman/listinfo/gluster-users