[Gluster-users] Does brick fs play a large role on listing files client side?

Anand Avati anand.avati at gmail.com
Wed Dec 5 00:35:39 UTC 2012


Support for READDIRPLUS in FUSE improves directory listing performance
significantly. You will have to hold on until
http://git.kernel.org/?p=linux/kernel/git/mszeredi/fuse.git;a=commit;h=81aecda8796572e490336680530d53a10771c2a9
trickles down into your distro kernel, however.
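
As a rough sketch (the `use-readdirp` mount option name is an assumption about later glusterfs releases, so check your version's mount.glusterfs documentation; "server1" and "myvol" are placeholders), once a READDIRPLUS-capable kernel and client are in place you would enable it at mount time:

```shell
# FUSE READDIRPLUS landed upstream in Linux 3.9; check the running kernel.
uname -r

# Mount the volume with readdirplus enabled on the native FUSE client,
# so each readdir also returns attributes and avoids a lookup per entry.
mount -t glusterfs -o use-readdirp=on server1:/myvol /mnt/gluster
```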

Avati

On Tue, Dec 4, 2012 at 2:43 PM, Kushnir, Michael (NIH/NLM/LHC) [C] <
michael.kushnir at nih.gov> wrote:

>  Our code knows exactly what files it needs and what directories they are
> in, so that's not the problem. I'm interested more in administrative
> functions, like the ability to pull file lists to report things like "we
> have x articles in storage with x images," cleaning out files older than X,
> etc.
>
>
> Thanks,
> Michael
>
>
> From: Bryan Whitehead [mailto:driver at megahappy.net]
> Sent: Tuesday, December 04, 2012 5:36 PM
> To: Kushnir, Michael (NIH/NLM/LHC) [C]
> Cc: Andrew Holway; gluster-users at gluster.org
> Subject: Re: [Gluster-users] Does brick fs play a large role on listing
> files client side?
>
>
>
> I think performance.cache-refresh-timeout *might* cache directory
> listings, so you can try bumping that value up. But someone else on the
> list should clarify whether it actually caches directory entries (it
> might only cache file data).
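
As a sketch (assuming a volume named "myvol"; these are standard gluster volume options, but check `gluster volume set help` on your release), bumping the client-side metadata caching looks like:

```shell
# Keep cached metadata for 10 seconds before revalidating against the bricks.
gluster volume set myvol performance.cache-refresh-timeout 10

# Prefetch stat() information during readdir so a following "ls -l"
# does not have to issue one lookup per file.
gluster volume set myvol performance.stat-prefetch on
```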
>
>
>
> If not, you can write a translator to cache directory listings. A good
> place to start is the code Jeff Darcy wrote:
> https://github.com/jdarcy/negative-lookup
>
>
>
> The best solution would be to directly use the API in your own code - but
> I don't think that will really be available until gluster 3.4. Basically,
> FUSE directory lookups are expensive, so it is best to use them as little
> as possible.
>
>
>
> On Tue, Dec 4, 2012 at 2:30 PM, Kushnir, Michael (NIH/NLM/LHC) [C] <
> michael.kushnir at nih.gov> wrote:
>
> Thanks for the reply,
>
>
> > Are you just using a single brick? Gluster is a scale-out NAS file
> system so is usually used when you want to aggregate the disk performance
> and disk space of many machines into a single Global Name Space.
>
> I currently have one server with 8 bricks. Once I get through evaluation,
> we will expand to multiple servers with 24 bricks each. We are looking to
> have a replica count of 2 for each brick eventually.
>
> On my gluster server, I can run an ls against /export/*/imgs and get file
> listings from each brick in seconds. However, on my client, I run ls
> against the /imgs/ directory on the gluster volume and wait days. Even if I
> mount the gluster volume on the storage server itself, ls takes a very long
> time.
>
> So, what are my options for improving the speed of directory listing on
> gluster clients? Would changing brick FS to ext4 make a difference in the
> time it takes to list on the client? Should I try mounting the volume over
> NFS? Something else?
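
For what it's worth, mounting the volume through Gluster's built-in NFS server (which speaks NFSv3; "server1" and "myvol" below are placeholder names) lets the kernel NFS client cache attributes and directory entries, which often speeds up large listings compared to FUSE:

```shell
# NFSv3 mount of a gluster volume via the built-in gluster NFS server.
# The kernel NFS client caches dentries and attributes, so repeated "ls"
# runs are served locally instead of hitting the server for every entry.
mount -t nfs -o vers=3 server1:/myvol /mnt/imgs-nfs
```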
>
> Thanks,
> Michael
>
>
>
> -----Original Message-----
> From: Andrew Holway [mailto:a.holway at syseleven.de]
> Sent: Tuesday, December 04, 2012 4:47 PM
> To: Kushnir, Michael (NIH/NLM/LHC) [C]
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Does brick fs play a large role on listing
> files client side?
>
>
> On Dec 4, 2012, at 5:30 PM, Kushnir, Michael (NIH/NLM/LHC) [C] wrote:
>
> > My GlusterFS deployment right now is 8 x 512GB OCZ Vertex 4 (no RAID)
> connected to Dell PERC H710, formatted as XFS and put together into a
> distributed volume.
>
> Hi,
>
> Are you just using a single brick? Gluster is a scale-out NAS file system
> so is usually used when you want to aggregate the disk performance and
> disk space of many machines into a single Global Name Space.
>
> OCFS (a cluster filesystem) is more for when you have a single disk volume
> attached via SCSI to many machines. More than one machine cannot, for
> instance, access the same ext4 filesystem concurrently. OCFS provides a
> locking mechanism allowing many systems to access the same SCSI device at
> the same time.
>
> Gluster is to NFS as OCFS is to EXT4 (kinda).
>
> The lag you're getting might be due to FUSE (Filesystem in Userspace). FUSE
> allows weird and wonderful filesystems to be mounted in userspace, meaning
> kernel support is not required. This is typically much slower than in-kernel
> filesystems.
>
> Cheers,
>
> Andrew
>
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>
>

