[Gluster-devel] missing files

David F. Robinson david.robinson at corvidtec.com
Wed Feb 11 13:28:09 UTC 2015


My base filesystem has 40 TB on it, and the tar extraction takes 19 minutes. When I copied over 10 TB, the tar extraction went from 1 minute to 7 minutes.
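
If it helps with reproducing, the timing test is essentially just the
following (a rough sketch; the mount point and tarball paths are
placeholders, not the actual ones from my runs):

    #!/usr/bin/env python
    # Sketch: time a tar extraction onto a Gluster mount.
    # MOUNT and TARBALL are placeholder paths, not the real ones.
    import subprocess
    import time

    MOUNT = "/mnt/gluster/scratch"
    TARBALL = "/root/testdata.tar"

    start = time.time()
    subprocess.check_call(["tar", "-xf", TARBALL, "-C", MOUNT])
    print("extraction took %.1f seconds" % (time.time() - start))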

My suspicion is that it is related to the number of files and not necessarily the file sizes. Shyam is looking into reproducing this behavior on a Red Hat system.
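
To check the file-count theory, one could pre-populate an empty volume
with a large number of tiny files and then re-run the extraction timing.
A rough sketch (the mount path and counts below are made up):

    # Sketch: fill a Gluster mount with many small files to test
    # whether file count, rather than total size, hurts performance.
    # MOUNT and the counts are made-up values.
    import os

    MOUNT = "/mnt/gluster/filler"
    NUM_DIRS = 1000
    FILES_PER_DIR = 1000  # one million 1-byte files in total

    for d in range(NUM_DIRS):
        dirpath = os.path.join(MOUNT, "dir%04d" % d)
        os.makedirs(dirpath)
        for f in range(FILES_PER_DIR):
            with open(os.path.join(dirpath, "f%04d" % f), "w") as fh:
                fh.write("x")  # 1-byte file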

David  (Sent from mobile)

===============================
David F. Robinson, Ph.D. 
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310      [cell]
704.799.7974      [fax]
David.Robinson at corvidtec.com
http://www.corvidtechnologies.com

> On Feb 11, 2015, at 7:38 AM, Justin Clift <justin at gluster.org> wrote:
> 
> On 11 Feb 2015, at 12:31, David F. Robinson <david.robinson at corvidtec.com> wrote:
>>> 
>>> Some time ago I had a similar performance problem (with 3.4, if I remember correctly): a just-created volume worked fine at first, but after some time in use the performance got worse. Removing all files from the volume didn't bring the performance back.
>> 
>> I guess my problem is a little better, depending on how you look at it. If I delete the data from the volume, the performance goes back to that of an empty volume. I don't have to delete the .glusterfs entries to regain my performance; I only have to delete the data from the mount point.
> 
> Interesting.  Do you have reasonably accurate stats on how much data (e.g. # of entries, size
> of files) was in the data set that did this?
> 
> Wondering if it's repeatable, so we can replicate the problem and solve it. :)
> 
> + Justin
> 
> --
> GlusterFS - http://www.gluster.org
> 
> An open source, distributed file system scaling to several
> petabytes, and handling thousands of clients.
> 
> My personal twitter: twitter.com/realjustinclift
> 
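
P.S. For the stats question above, something like the following walk of
the mount point would give the entry count and total size (sketch only;
/mnt/gluster is a placeholder path):

    # Sketch: count entries and total bytes under a mount point.
    import os

    total_files = 0
    total_bytes = 0
    for root, dirs, files in os.walk("/mnt/gluster"):
        total_files += len(files)
        for name in files:
            try:
                total_bytes += os.lstat(os.path.join(root, name)).st_size
            except OSError:
                pass  # file vanished mid-walk
    print("%d files, %.1f GB" % (total_files, total_bytes / 1e9))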

