[Gluster-users] Client hang on find of directories

Mohit Anchlia mohitanchlia at gmail.com
Mon Apr 25 16:19:44 UTC 2011


What's the downside of turning off stat-prefetch? Would self-healing still work?
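
(For reference, a minimal sketch of how stat-prefetch is typically toggled
and checked per volume from the gluster CLI; <volname> is a placeholder for
the actual volume name, and the option name assumes the stock 3.1.x
volume-set framework:)

        # disable the stat-prefetch translator on one volume
        gluster volume set <volname> performance.stat-prefetch off

        # confirm the change shows up under "Options Reconfigured"
        gluster volume info <volname>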

On Mon, Apr 25, 2011 at 5:08 AM, Burnash, James <jburnash at knight.com> wrote:
> Thanks Joe.
>
> I will go through your recommendations and come back with my findings.
>
> Based on your (and other) comments in previous threads, I have already turned off stat-prefetch while I sort through multiple issues here on this GlusterFS namespace.
>
> James Burnash, Unix Engineering
>
> -----Original Message-----
> From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Joe Landman
> Sent: Sunday, April 24, 2011 4:11 PM
> To: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Client hang on find of directories
>
> On 04/24/2011 03:03 PM, Burnash, James wrote:
>> Gluster 3.1.1
>>
>> CentOS 5.5 (servers), CentOS 5.2 (client).
>>
>> /pfs2 is the mount point for a Distributed-Replicate volume across 4 servers.
>>
>> Given this command line executed on the client:
>>
>> root@jc1lnxsamm46:/root # time find /pfs2/online_archive/2010 -type d -print
>>
>> and this output:
>>
>
> [...]
>
>> The client was originally deployed with GlusterFS 3.0.4; that was
>> uninstalled, 3.1.1 was installed, and it was later upgraded to 3.1.3.
>>
>> Any ideas on what is going on here?
>
> Possibly multiple things.  It looks like a slow stat issue, compounded
> by a run-time link issue.  You might need to locate where your GlusterFS
> libraries were installed and make sure that path is listed in a file
> such as /etc/ld.so.conf.d/gluster.conf, containing for example:
>
>        /usr/local/lib
>        /usr/local/lib64
>
> then run
>
>        ldconfig -v
>
> and then restart gluster.
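>
> A quick sanity check that the run-time linker now resolves the GlusterFS
> libraries (a sketch; the client binary path below is an assumption and
> may differ on your install):
>
>        # what the linker cache currently knows about gluster
>        ldconfig -p | grep -i gluster
>
>        # any "not found" lines here point to a remaining link problem
>        ldd /usr/local/sbin/glusterfs | grep -i "not found"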
>
> As for the slowness, large directories that require many stats will take
> a very long time.  At the moment we are turning off stat-prefetch and a
> number of other translators by default in our deployments due to
> breakage.  This hurts stat performance (at least one network round trip
> per stat) and shows up as huge time delays in large directories.
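>
> A rough way to see how much of that find is spent in stat calls is to
> run it under strace and read the syscall summary (a sketch; the stat
> variant reported, e.g. lstat vs. lstat64, depends on the platform):
>
>        # strace prints its summary on stderr, so find's output can be
>        # discarded without hiding the counts
>        strace -c find /pfs2/online_archive/2010 -type d > /dev/null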
>
> It might help to turn up debugging on the servers, and pastebin the logs.
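>
> On 3.1.x the log level can usually be raised per volume (a sketch;
> <volname> is a placeholder, and the option names assume the stock
> diagnostics settings):
>
>        gluster volume set <volname> diagnostics.brick-log-level DEBUG
>        gluster volume set <volname> diagnostics.client-log-level DEBUG
>
> The resulting logs normally land under /var/log/glusterfs/ on each server.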
>
>>
>> Thanks,
>>
>> James Burnash
>>
>> Unix Engineering
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
> --
> Joseph Landman, Ph.D
> Founder and CEO
> Scalable Informatics, Inc.
> email: landman at scalableinformatics.com
> web  : http://scalableinformatics.com
>        http://scalableinformatics.com/sicluster
> phone: +1 734 786 8423 x121
> fax  : +1 866 888 3112
> cell : +1 734 612 4615
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>


