[Gluster-users] Deleting huge file from glusterfs hangs the cluster for a while
Krutika Dhananjay
kdhananj at redhat.com
Wed Mar 8 13:37:45 UTC 2017
Thanks for your feedback.
May I know what the shard-block-size was?
One way to fix this would be to make the shard translator delete only the base
file (the 0th shard) in the I/O path and move
the deletion of the remaining shards to the background. I'll work on this.
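Until something like that lands, orphaned shards have to be found by hand. Below is a minimal sketch of the idea, with loud caveats: it assumes the default on-brick layout (shards named <base-gfid>.<n> under the brick's .shard directory, and every live file keeping a gfid link under .glusterfs/<first-2-hex>/<next-2-hex>/<gfid>). The brick path and GFIDs here are fabricated for the demo; on a real brick you would point BRICK at the actual brick directory and only list candidates, verifying each one before removing anything.

```shell
# Sketch: detect shards whose base file's gfid link no longer exists.
# Demonstrated against a throwaway directory standing in for a brick.
BRICK=$(mktemp -d)
mkdir -p "$BRICK/.shard" "$BRICK/.glusterfs/ab/cd"

# A "live" base file: its gfid link exists, so its shard must be kept.
touch "$BRICK/.glusterfs/ab/cd/abcd1111-0000-0000-0000-000000000000"
touch "$BRICK/.shard/abcd1111-0000-0000-0000-000000000000.1"
# A "deleted" base file: no gfid link remains, so its shard is orphaned.
touch "$BRICK/.shard/abcd2222-0000-0000-0000-000000000000.1"

orphans=""
for shard in "$BRICK"/.shard/*; do
    gfid=$(basename "$shard" | cut -d. -f1)   # strip the .<n> shard suffix
    p1=$(printf '%s' "$gfid" | cut -c1-2)     # first two hex chars of gfid
    p2=$(printf '%s' "$gfid" | cut -c3-4)     # next two hex chars of gfid
    if [ ! -e "$BRICK/.glusterfs/$p1/$p2/$gfid" ]; then
        echo "orphan: $shard"                 # verify before removing
        orphans="$orphans $shard"
    fi
done
rm -rf "$BRICK"
```

Run it once per brick; only the shard belonging to the gfid with no .glusterfs link is reported.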
-Krutika
On Fri, Mar 3, 2017 at 10:35 PM, GEORGI MIRCHEV <gmirchev at usa.net> wrote:
> Hi,
>
> I have deleted two large files (around 1 TB each) via the gluster client
> (mounted
> on the /mnt folder). I used a simple rm command, e.g. "rm /mnt/hugefile". This
> caused the cluster to hang (no I/O could be done, and the VM hung). After a
> few minutes my ssh connection to the gluster node was disconnected and I
> had to
> reconnect, which was very strange, probably some kind of timeout. Nothing
> showed up in
> dmesg, so it was probably ssh that terminated the connection.
>
> After that the cluster works and everything seems fine; the file is gone on
> the
> client, but the space is not reclaimed.
>
> The deleted file is also gone from the bricks, but the shards are still
> there
> and use up all the space.
>
> I need to reclaim the space. How do I delete the shards / other metadata
> for a
> file that no longer exists?
>
>
> Versions:
> glusterfs-server-3.8.9-1.el7.x86_64
> glusterfs-client-xlators-3.8.9-1.el7.x86_64
> glusterfs-geo-replication-3.8.9-1.el7.x86_64
> glusterfs-3.8.9-1.el7.x86_64
> glusterfs-fuse-3.8.9-1.el7.x86_64
> vdsm-gluster-4.19.4-1.el7.centos.noarch
> glusterfs-cli-3.8.9-1.el7.x86_64
> glusterfs-libs-3.8.9-1.el7.x86_64
> glusterfs-api-3.8.9-1.el7.x86_64
>
> --
> Georgi Mirchev
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>