<div dir="ltr"><div>If you delete those from the bricks it will start healing them - restoring from other bricks</div><div>I have similar issue with email storage which uses maildir format with millions of small files</div><div><br></div><div>doing delete on the server takes days</div><div><br></div><div>sometimes worth recreating volumes wiping .glusterfs on bricks, deleting files on bricks, creating volumes again and repopulating .glusterfs by querying attr<br></div><div><a href="https://lists.gluster.org/pipermail/gluster-users/2018-July/034310.html">https://lists.gluster.org/pipermail/gluster-users/2018-July/034310.html</a><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jul 4, 2018 at 9:57 AM, hsafe <span dir="ltr"><<a href="mailto:hsafe@devopt.net" target="_blank">hsafe@devopt.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi all,<br>
<br>
I have a rather simple question. Some directories in a 2x replica set, accessed natively from the clients, contain a very large number of small files. Because of the number of files per directory, listing the directory contents from the clients fails.<br>
<br>
If I move or delete those directories directly on the servers (that is, on the bricks), how does GlusterFS converge, or "heal" if you can call it that, so that the directories appear emptied or moved?<br>
<br>
I am running Glusterfs-server and Glusterfs-client version 3.10.12.<br>
<br>
To add more detail: we learned the hard way that our app ships too many small files into these directories, accumulating daily, and they are served by nginx.<br>
<br>
Here is a little more info:<br>
<br>
# gluster volume info<br>
<br>
Volume Name: gv1<br>
Type: Replicate<br>
Volume ID: f1c955a1-7a92-4b1b-acb5-8b72b41aaace<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 1 x 2 = 2<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: IMG-01:/images/storage/brick1<br>
Brick2: IMG-02:/images/storage/brick1<br>
Options Reconfigured:<br>
nfs.disable: true<br>
diagnostics.count-fop-hits: on<br>
diagnostics.latency-measurement: on<br>
server.statedump-path: /tmp<br>
performance.readdir-ahead: on<br>
# gluster volume status<br>
Status of volume: gv1<br>
Gluster process                             TCP Port  RDMA Port  Online  Pid<br>
------------------------------------------------------------------------------<br>
Brick IMG-01:/images/storage/brick1         49152     0          Y       3577<br>
Brick IMG-02:/images/storage/brick1         49152     0          Y       21699<br>
Self-heal Daemon on localhost N/A N/A Y 24813<br>
Self-heal Daemon on IMG-01 N/A N/A Y 3560<br>
<br>
Task Status of Volume gv1<br>
------------------------------------------------------------------------------<br>
There are no active volume tasks<br>
<br>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-users</a></blockquote></div><br></div>