[Gluster-users] Throughput over InfiniBand

Brian Candler <B.Candler at pobox.com>
Mon Sep 10 07:48:03 UTC 2012


On Sun, Sep 09, 2012 at 09:28:47PM +0100, Andrei Mikhailovsky wrote:
>    While trying to figure out the cause of the bottleneck I've realised
>    that the bottleneck is coming from the client side, as running
>    concurrent tests from two clients would give me about 650 MB/s per
>    client.

Yes - so in workloads where you have many concurrent clients, this isn't a
problem.  It's only a problem if you have a single client doing a lot of
sequential operations.

My guess would be that it's something to do with latency: i.e. the client
sends a request and waits for the response before sending the next one.  A
random-read workload is the worst case, and this is a "laws of physics"
thing. Consider:

- Client issues request to read file A
- Request gets transferred over network
- Fileserver issues seek/read
- Response gets transferred back over network
- Client issues request to read file B
- ... etc

If the client is written to issue only one request at a time then there's no
way to optimise this - the server cannot guess in advance what the next read
will be.
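
To put rough numbers on it, here is a back-of-the-envelope sketch in
Python (the round-trip time and request size are illustrative assumptions,
not measurements from your setup):

    # A serial client is capped by round-trip time, not bandwidth.
    # Both numbers below are assumed figures for illustration only.
    rtt = 200e-6                  # 200 us network + server round trip
    request_size = 128 * 1024     # 128 KiB read per request

    max_throughput = request_size / rtt           # bytes per second
    print("%.0f MB/s" % (max_throughput / 1e6))   # ~655 MB/s here

Note how close that lands to the ~650 MB/s you measured per client: the
only ways past that cap are more requests in flight or more clients,
which is exactly what your two-client test demonstrated.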

Have you tried doing exactly the same test but over NFS? I didn't see that
in your posting (you only mentioned NFS in the context of KVM).

When you are doing lots of writes you should be able to pipeline the data. 
The difference is that with a local filesystem the latency is effectively
zero: writing data is just stuffing dirty blocks into the VFS page cache. 
With a remote filesystem, when you open a file you have to wait for an OK
response before you can start writing.  Again, you should compare this
against NFS before writing off Gluster.
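
Here is a quick simulation of that difference, again in Python; the 200 us
cost per operation is an assumed figure, and remote_write() is a stand-in
for the real client, not an actual Gluster or NFS call:

    # Same number of writes, issued one-at-a-time versus with several
    # in flight.  The sleep stands in for network + server latency.
    import time
    from concurrent.futures import ThreadPoolExecutor

    RTT = 200e-6                    # assumed round trip per operation
    blocks = [b"x" * 131072] * 200  # 200 writes of 128 KiB each

    def remote_write(block):
        time.sleep(RTT)             # wait for the server's OK
        return len(block)

    t0 = time.time()
    for b in blocks:                # serial: wait for each ack
        remote_write(b)
    print("serial:    %.3fs" % (time.time() - t0))

    t0 = time.time()
    with ThreadPoolExecutor(max_workers=16) as ex:
        list(ex.map(remote_write, blocks))  # up to 16 writes in flight
    print("pipelined: %.3fs" % (time.time() - t0))

The work is identical; only the number of outstanding requests changes,
which is why write-heavy workloads can still pipeline well once the open
has completed.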

>    P.S. If you are looking to use glusterfs as the backend storage for
>    KVM virtualisation, I would warn you that it's a tricky business.
>    I've managed to make things work, but the performance is far worse
>    than any of my pessimistic expectations! An example: a mounted
>    glusterfs-rdma file system on the server running KVM would give me
>    around 700-850 MB/s throughput. I was only getting 50 MB/s max when
>    doing the test from the VM stored on that partition.

Yes, this has been observed by lots of people.  KVM block access mapped to
FUSE file access mapped to Gluster doesn't perform well.  However, some
patches have been written for KVM to use the Gluster protocol directly,
and the performance is way, way better.  KVM machines are just userland
processes, so the I/O stays entirely in userland.  I'm looking forward to
these being incorporated into mainline KVM.

Regards,

Brian.
