[Gluster-users] Need help with optimizing GlusterFS for Apache
Robert Krig
robert at bitcaster.de
Tue Oct 18 12:23:49 UTC 2011
On 10/18/2011 01:14 PM, Joe Landman wrote:
>
> On 10/18/2011 06:14 AM, Robert Krig wrote:
>
>> I think I'm going to have to abandon GlusterFS for our Image files. The
>> performance is abysmal. I've tried all sorts of settings, but at some
>> point the httpd process just keeps spawning more and more processes
>> because clients are left waiting: the directory can't be read while
>> glusterfs is busy.
>> We're not even reaching 500 apache requests per second and already
>> apache locks up.
>>
>> I'm pretty sure it can't be the hardware, since we're talking about a 12
>> Core Hyperthreading Xeon CPU, with 48GB of ram and 30TB of storage in a
>> hardware Raid.
>
> In our experience, and please don't take this incorrectly, the vast
> majority of storage users (and for that matter, storage companies)
> don't know how to design their RAIDs to their needs. A "fast" CPU (12
> core Xeon would be X5650 or higher) won't impact small file read speed
> all that much. 48 GB of ram could, if you can cache enough of your
> small files.
>
> What you need, for your small random file reads, is an SSD or Flash
> cache. It has to be large enough that it's relevant for your use case.
> I am not sure what your working set size is for your images, but such
> units are available from small 300GB models up to several tens of TB.
> Small random file performance is extremely good, and you can put
> gluster atop it as a file system if you wish to run the images off the
> cache ... or you can use it as a block level cache, which you then
> need to warm up prior to initial use (and then adjust after changes).
>
>> I realise that GlusterFS is not ideal for many small files, but this is
>> beyond ridiculous. It certainly doesn't help that the documentation
>> doesn't even properly explain how to activate different translators, or
>> where exactly to edit them by hand in the config files.
>>
>> If anyone has any suggestions, I'd be happy to hear them.
>
> See above. As noted, most people (and companies) do anywhere from a
> bad to terrible job of storage system design. No one should be using
> a large RAID5 or RAID6 for small random file reads. It's simply the
> wrong design. I am guessing it's unlikely that you have a RAID10, but
> even with that, you are going to be rate limited by the number of
> drives you have and their roughly 100 IOPS each.
>
> This particular problem isn't likely Gluster's fault. It is likely
> your storage design. I'd suggest doing a quick test using fio to
> ascertain how many random read IOPS you can get out of your file
> system. If you want to handle 500 apache requests per second, how
> many IOPS does this imply (how many files does each request need to
> read to be fulfilled)? Chances are that the load exceeds the IOPS
> capacity of your storage several times over.
>
> Your best bet is either a caching system, or putting the small
> randomly accessed image files on SSD or Flash, and using that. Try
> that before you abandon Gluster.
>
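[Joe's fio suggestion above could be sketched as a job file like the one below. This is a rough sketch only: the directory, block size, runtime, and queue depth are all assumptions to adapt to the actual export path and workload.]

```ini
; random-read.fio -- rough random-read IOPS probe
; (directory, sizes and depths below are assumptions; adapt to your setup)
[global]
; point this at the filesystem under test
directory=/mnt/gluster-test
rw=randread
; small blocks approximate image-sized read traffic
bs=4k
; bypass the page cache so the disks themselves are measured
direct=1
ioengine=libaio
runtime=60
time_based=1

[randread-job]
size=4g
iodepth=16
```

[Running `fio random-read.fio` reports an aggregate IOPS figure. If, say, each of the 500 Apache requests per second touches 3 files, the storage needs roughly 1500 random read IOPS, which a handful of ~100-IOPS spindles cannot deliver.]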
Our storage consists mostly of video files, ranging anywhere from 20-100
megabytes; only a fraction of it is images. However, the images get read
the most.
I agree with you on the RAID point, but in this scenario it was
important to maximize the amount of safely redundant storage.
Our previous systems are in fact identical hardware, just a little
slower. The only difference is that they don't use GlusterFS: instead,
an rsync script is called whenever a user uploads a file, and it syncs
the corresponding files to the storage mirror.
The overall IO under load on our old system is rarely above 8 megabytes
per second, and it handles the load without breaking a sweat.
But I've come up with a workaround for our problem: simply use the
gluster export directory as read-only for the images.
Uploads are few and far between, and those can be handled separately;
90% of the time, files just need to be read. It's only during uploads
that files need to be written and replicated.
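[That read-only split could be expressed as two client mounts, e.g. via /etc/fstab. This is a sketch only: the server name, volume name, and mount points are assumptions.]

```
# /etc/fstab -- hypothetical entries (server, volume and mount points
# are assumptions):
# read-only mount that Apache serves the images from
server1:/images  /var/www/images  glusterfs  ro,_netdev        0 0
# separate writable mount used only by the upload handler
server1:/images  /mnt/images-rw   glusterfs  defaults,_netdev  0 0
```

[Apache's document root would point at the read-only mount, so the web server can only read; uploads go through the second, writable mount.]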