[Gluster-users] Very slow ls
Franco.Broi at iongeo.com
Sat Feb 22 01:25:33 UTC 2014
On 21 Feb 2014 22:03, Vijay Bellur <vbellur at redhat.com> wrote:
> On 02/18/2014 12:42 AM, Franco Broi wrote:
> > On 18 Feb 2014 00:13, Vijay Bellur <vbellur at redhat.com> wrote:
> > >
> > > On 02/17/2014 07:00 AM, Franco Broi wrote:
> > > >
> > > > I mounted the filesystem with trace logging turned on and can see that
> > > > after the last successful READDIRP there are a lot of other connections
> > > > being made by the clients repeatedly, which takes minutes to complete.
> > >
> > > I did not observe anything specific which points to clients repeatedly
> > > reconnecting. Can you point to the appropriate line numbers for this?
> > >
> > > Can you also please describe the directory structure being referred here?
> > >
> > I was tailing the log file while the readdir script was running and
> > could see the corresponding READDIRP call for each readdir. After the
> > last call, everything else in the log file returned nothing but still
> > took minutes to complete. This particular example was a directory
> > containing a number of subdirectories, one for each of the READDIRP
> > calls in the log file.
> One possible tuning that can possibly help:
> volume set <volname> cluster.readdir-optimize on
> Let us know if there is any improvement after enabling this option.
I'll give it a go but I think this is a bug and not a performance issue. I've filed a bug report on bugzilla.
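For anyone following along, this is roughly what trying the suggested tuning looks like (the volume name `data` and mount path `/mnt/data` below are placeholders, not from this thread):

```shell
# Enable the suggested readdir tuning on the volume (volume name is a placeholder).
gluster volume set data cluster.readdir-optimize on

# Verify the option took effect; it should appear under "Options Reconfigured".
gluster volume info data

# Re-run the slow listing on a client mount to compare timings.
time ls /mnt/data/somedir
```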