[Gluster-users] Performance for operations like find
Carl Boberg
carl.boberg at memnonnetworks.com
Sat Mar 3 11:42:03 UTC 2012
Or maybe just turn off automatic self-heal, if that is possible? If it is, is
there still some way to do a manual self-heal?
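
For what it's worth, what I have in mind is roughly the following (the volume
name "datavol" is just a placeholder, and I have not verified that these
options are actually settable on our 3.2.x install; "gluster volume set help"
would have to confirm that):

    # assumption: the client-side self-heal toggles exist on this release
    gluster volume set datavol cluster.data-self-heal off
    gluster volume set datavol cluster.metadata-self-heal off
    gluster volume set datavol cluster.entry-self-heal off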
---
Carl Boberg
Operations
Memnon Networks AB
Tegnérgatan 34, SE-113 59 Stockholm
Mobile: +46(0)70 467 27 12
www.memnonnetworks.com
On Sat, Mar 3, 2012 at 12:09, Carl Boberg <carl.boberg at memnonnetworks.com> wrote:
> I'm thinking of maybe trying this as a solution to my needs:
>
> http://community.gluster.org/a/nfs-performance-with-fuse-client-redundancy/
>
>
> Is that something people on this list would recommend for performance in
> my situation, where we use find commands quite a lot on the volumes? Is it
> a "supported"/common solution?
>
> As I understand it, this would let me get NFS performance together with
> Gluster's automatic failover in a replicated setup?
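>
> If I read the article right, the clients would just do a plain NFSv3 mount
> against a floating IP that moves between the two servers, something along
> these lines (the hostname "gluster-vip" and volume name "datavol" are made
> up, and the exact mount options would need checking):
>
>     mount -t nfs -o vers=3,tcp gluster-vip:/datavol /mnt/datavol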
>
>
> Cheers
> ---
> Carl Boberg
> Operations
>
> Memnon Networks AB
> Tegnérgatan 34, SE-113 59 Stockholm
>
> Mobile: +46(0)70 467 27 12
> www.memnonnetworks.com
>
>
>
> On Fri, Mar 2, 2012 at 15:17, Greg Swift <gregswift at gmail.com> wrote:
>
>> Oh, and the majority of that was tested on 3.2.1 on RHEL 5.7 systems. We
>> have recently upgraded to 3.2.5, but the results did not change noticeably.
>>
>> On Fri, Mar 2, 2012 at 08:16, Greg Swift <gregswift at gmail.com> wrote:
>>
>>> I'd like to point out that I've had a similar experience to Carl's, but
>>> without the mtime filter in my finds, and we did try Gluster's NFS. Recently
>>> I threw together a spreadsheet documenting the differences; it also includes
>>> details on the various things I tried as I compared a direct SAN (our
>>> previous environment) to our current Gluster-based system, and to Gluster's
>>> NFS implementation on the same volumes. The most surprising point was that
>>> Gluster NFS did so well.
>>>
>>> It's attached.
>>>
>>> -greg
>>>
>>> On Fri, Mar 2, 2012 at 07:17, Carl Boberg <
>>> carl.boberg at memnonnetworks.com> wrote:
>>>
>>>> There are about 4000 files in the dir.
>>>>
>>>> I ran it again after clearing the caches, and now it took over 3 minutes
>>>> on the Gluster mount and about 4 seconds on the NFS mount (this is not
>>>> Gluster NFS but an old, classic NFS share).
>>>>
>>>> All clients are CentOS 5.6 and the two servers are CentOS 6.2, running a
>>>> Gluster 3.2.5 RPM install with a replicated setup from the docs.
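>>>>
>>>> For reference, the setup is essentially the stock two-node replica from
>>>> the docs, roughly along these lines (the volume and brick names here are
>>>> placeholders, not our real ones):
>>>>
>>>>     gluster volume create datavol replica 2 transport tcp \
>>>>         server1:/export/brick1 server2:/export/brick1
>>>>     gluster volume start datavol
>>>>     # on each client:
>>>>     mount -t glusterfs server1:/datavol /mnt/gluster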
>>>>
>>>> If the self-heal operation is the cause of the slowdown, is there a way to
>>>> avoid triggering it? Or better yet, are there any custom options to add to
>>>> the config to make this kind of find command go a bit quicker?
>>>>
>>>> Our application reads and writes files to the volume, but we also have a
>>>> section for admins in the application that uses find and grep to locate
>>>> specific files by date or content. This tool is vital for problem solving,
>>>> and if such operations take so much more time it is just not usable...
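>>>>
>>>> A typical admin query looks something like this, just to illustrate the
>>>> access pattern (the path and search string are made up):
>>>>
>>>>     find /mnt/gluster/<datadir> -type f -mtime -2 -print0 \
>>>>         | xargs -0 grep -l "some-order-id"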
>>>>
>>>> Ideas?
>>>>
>>>> Cheers
>>>>
>>>> ---
>>>> Carl Boberg
>>>> Operations
>>>>
>>>> Memnon Networks AB
>>>> Tegnérgatan 34, SE-113 59 Stockholm
>>>>
>>>> Mobile: +46(0)70 467 27 12
>>>> www.memnonnetworks.com
>>>>
>>>>
>>>>
>>>> On Fri, Mar 2, 2012 at 11:58, Brian Candler <B.Candler at pobox.com> wrote:
>>>>
>>>>> On Fri, Mar 02, 2012 at 11:43:27AM +0100, Carl Boberg wrote:
>>>>> > time find /mnt/nfs/<datadir> -type f -mtime -2
>>>>> >
>>>>> > real 2m0.067s <--
>>>>> > user 0m0.030s
>>>>> > sys 0m0.252s
>>>>>
>>>>> The -mtime -2 forces Gluster to do a stat() on every file, and this makes
>>>>> Gluster do a self-heal operation where it needs to access the file on
>>>>> both replicas:
>>>>>
>>>>>
>>>>> http://www.gluster.org/community/documentation/index.php/Gluster_3.1:_Triggering_Self-Heal_on_Replicate
>>>>> http://www.youtube.com/watch?v=AsgtE7Ph2_k
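>>>>>
>>>>> The crawl described on that page, which deliberately stat()s every file
>>>>> on the mount to force a heal, is essentially (the mount point here is
>>>>> just an example):
>>>>>
>>>>>     find /mnt/gluster -noleaf -print0 | xargs --null stat >/dev/null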
>>>>>
>>>>> Having said that, 2 minutes seems pretty slow. How many files are
>>>>> there in
>>>>> total, i.e. without the -mtime filter?
>>>>>
>>>>> Is it possible the NFS test had the inode data in cache, so was an
>>>>> unfair
>>>>> comparison? I suggest you do
>>>>> echo 3 >/proc/sys/vm/drop_caches
>>>>> (as root) on both client and server before each test.
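>>>>>
>>>>> In other words, for each run, something like the following (the gluster
>>>>> mount path here is a guess based on your earlier mail):
>>>>>
>>>>>     sync; echo 3 >/proc/sys/vm/drop_caches   # as root, on client and servers
>>>>>     time find /mnt/gluster/<datadir> -type f -mtime -2 >/dev/null
>>>>>     time find /mnt/nfs/<datadir> -type f -mtime -2 >/dev/null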
>>>>>
>>>>> Regards,
>>>>>
>>>>> Brian.
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>
>