[Gluster-devel] Handling huge number of file read requests
Amrik Singh
asingh at ideeinc.com
Fri May 4 13:50:41 UTC 2007
Hi Guys,
We are hoping that glusterfs can help us with a particular problem we
are facing on our cluster. We have a visual search application that
runs on a cluster with around 300 processors. These compute nodes run
searches against images hosted on an NFS server. Under certain
circumstances all of the compute nodes request query images at
extremely high rates (20-40 images per second). When 300 nodes are
sending 20-40 requests per second for these images, the NFS server just
can't cope: we see a lot of retransmissions and very high wait times on
the server as well as on the nodes. The images are around 2 MB each.
We are not in a position to quickly change how the current application
works, so we are looking for a file system that can handle this kind of
load. We tried glusterfs with the default settings but did not see any
improvement. Is there a way to tune glusterfs to handle this kind of
situation?
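For what it's worth, the direction we were wondering about is stacking
the client-side performance translators (read-ahead, io-cache,
io-threads) on top of protocol/client, roughly as in the sketch below.
The translator and option names and the values here are only our
reading of the wiki and are untested placeholders, so please correct us
if this is the wrong approach:

# client.vol -- rough sketch only, names and values not verified
volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.0.10      # placeholder address for the storage server
  option remote-subvolume brick
end-volume

volume readahead
  type performance/read-ahead
  option page-size 256kB               # larger pages for the ~2 MB image reads
  option page-count 8
  subvolumes remote
end-volume

volume iocache
  type performance/io-cache
  option page-size 256kB
  option cache-size 256MB              # cache hot query images on each compute node
  subvolumes readahead
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 4
  subvolumes iocache
end-volume

If that is the right direction, pointers on sensible page-size and
cache-size values for ~2 MB files, and on whether the server side needs
io-threads as well, would be much appreciated.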
I can provide more details about our setup as needed.
thanks
--
Amrik Singh
Idée Inc.
http://www.ideeinc.com