[Gluster-users] NFS replacement
Stephan von Krawczynski
skraw at ithnet.com
Mon Aug 31 15:17:45 UTC 2009
On Mon, 31 Aug 2009 19:48:46 +0530
Shehjar Tikoo <shehjart at gluster.com> wrote:
> Stephan von Krawczynski wrote:
> > Hello all,
> > after playing around for some weeks we decided to run some real-world tests
> > with glusterfs. We took an nfs client and mounted the very same data
> > with glusterfs. The client does some logfile processing every 5 minutes and
> > needs around 3.5 minutes of runtime in an nfs setup.
> > We found out that it makes no sense to try this setup with gluster replicate
> > as long as we do not have the same performance in a single server setup with
> > glusterfs. So now we have one server mounted (halfway replicate) and would
> > like to tune performance.
> > Does anyone have experience with a simple replacement like that? We found
> > that almost all performance options have exactly zero effect. The only
> > thing that seems to make at least some difference is read-ahead on the
> > server. We end up with around 4.5 to 5.5 minutes of runtime for the
> > scripts, which is on the edge, as we need something well below 5 minutes
> > (just as nfs delivered).
> > Our goal is to maximise performance in this setup and then try a real
> > replication setup with two servers.
> > The load itself consists of around 100 scripts starting at the same time
> > and processing their data.
> > Any ideas?
> What nfs server are you using? The in-kernel one?
> You could try the unfs3booster server, which is the original unfs3
> with our modifications for bug fixes and slight performance
> improvements. It should give better performance in certain cases
> since it avoids the FUSE bottleneck on the server.
> For more info, do take a look at this page:
> When using unfs3booster, please use GlusterFS release 2.0.6 since
> that has the required changes to make booster work with NFS.
I read the docs, but I don't understand the advantage. Why should we use nfs
as a kind of transport layer to an underlying glusterfs server when we can
just as easily export the service (i.e. glusterfs) itself? Remember, we no
longer want nfs on the client, but a replicate setup with two servers (we do
not use replication right now, but it remains our primary goal).
It seems obvious to me that nfs-over-gluster must be slower than pure
kernel-nfs. On the other hand, glusterfs per se may even have some advantages
on the network side, provided the performance tuning (and of course the
options themselves) is well designed.
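For reference, a server-side read-ahead setup of the kind discussed above
might look roughly like this in a 2.0.x volfile. This is only a sketch: the
export path and the page-count value are illustrative placeholders, and
option names differ between glusterfs releases, so check the docs for your
version:

```
volume posix
  type storage/posix
  option directory /data/export        # illustrative export path
end-volume

volume readahead
  type performance/read-ahead
  option page-count 4                  # illustrative value; tune per workload
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.readahead.allow *   # restrict this in production
  subvolumes readahead
end-volume
```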
The first thing we noticed is that the load dropped dramatically on both
server and client when not using kernel-nfs. Client load dropped from around
20 to around 4; server load from around 10 to around 5.
Since all boxes are pretty much dedicated to their respective jobs, a lot of
caching is going on anyway. So I would not expect nfs to have an advantage
merely because it is kernel-driven. And the current numbers (a loss of around
30% in performance) show that nfs performance is not completely out of reach.
What advantages would you expect from using unfs3booster at all?
Another thing we really did not understand is the _negative_ effect of adding
iothreads on client or server. Our nfs setup needs around 90 nfs kernel
threads to run smoothly, yet any number of iothreads greater than 8
measurably reduces glusterfs performance.
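For anyone wanting to reproduce the iothreads observation: the translator is
stacked in the volfile like this (a sketch; the volume names are illustrative
and the thread-count value is the threshold we saw):

```
volume iot
  type performance/io-threads
  option thread-count 8    # anything above 8 made things measurably slower here
  subvolumes posix         # 'posix' stands for whatever subvolume you stack on
end-volume
```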