[Gluster-users] gluster for home directories?
Rik Theys
Rik.Theys at esat.kuleuven.be
Thu Mar 8 10:18:32 UTC 2018
Hi,
On 03/08/2018 10:52 AM, Manoj Pillai wrote:
> On Wed, Mar 7, 2018 at 8:29 PM, Rik Theys <Rik.Theys at esat.kuleuven.be
> <mailto:Rik.Theys at esat.kuleuven.be>> wrote:
>
> I'm currently testing gluster (3.12, now 3.13) on older machines[1] and
> have created a replica 3 arbiter 1 volume 2x(2+1). I seem to run in all
> sorts of (performance) problems. I must be doing something wrong but
> I've tried all sorts of benchmarks and nothing seems to make my setup
> live up to what I would expect from this hardware.
>
> * I understand that gluster only starts to work well when multiple
> clients are connecting in parallel, but I did expect the single client
> performance to be better.
>
> * Unpacking the linux-4.15.7.tar.xz file on the brick XFS filesystem
> followed by a sync takes about 1 minute. Doing the same on the gluster
> volume using the fuse client (client is one of the brick servers) takes
> over 9 minutes and neither disk nor cpu nor network are reaching their
> bottleneck. Doing the same over NFS-ganesha (client is a workstation
> connected through gbit) takes even longer (more than 30min!?).
>
> I understand that unpacking a lot of small files may be the worst
> workload for a distributed filesystem, but when I look at the file sizes
> of the files in our users' home directories, more than 90% is smaller
> than 1MB.
>
> * A file copy of a 300GB file over NFS 4 (nfs-ganesha) starts fast
> (90MB/s) and then drops to 20MB/s. When I look at the servers during the
> copy, I don't see where the bottleneck is as the cpu, disk and network
> are not maxing out (on none of the bricks). When the same client copies
> the file to our current NFS storage it is limited by the gbit network
> connection of the client.
>
>
> Both untar and cp are single-threaded, which means throughput is mostly
> dictated by latency. Latency is generally higher in a distributed FS;
> nfs-ganesha has an extra hop to the backend, and hence higher latency
> for most operations compared to glusterfs-fuse.
>
> You don't necessarily need multiple clients for good performance with
> gluster. Many multi-threaded benchmarks give good performance from a
> single client. For example, if you run multiple copy commands in
> parallel from the same client, I'd expect your aggregate transfer rate
> to improve.
>
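Concretely, I take that suggestion to mean something like the sketch below. The mount path is a placeholder (it defaults to a temp dir so the script runs anywhere); on a real test you would point DEST at the fuse mount and SRC at a large file:

```shell
# Sketch of the parallel-copy suggestion above. DEST stands in for the
# gluster fuse mount (e.g. /mnt/glustervol); it defaults to a temp dir.
DEST=${DEST:-$(mktemp -d)}
SRC=$(mktemp)
dd if=/dev/zero of="$SRC" bs=1M count=8 2>/dev/null  # stand-in for a big file
N=4
for i in $(seq 1 "$N"); do
    cp "$SRC" "$DEST/copy$i" &   # each cp is single-threaded; run N in parallel
done
wait                             # aggregate throughput should scale with N
ls "$DEST"
```

If latency rather than bandwidth is the limiter, the aggregate rate of the N copies should be noticeably better than N times a single copy is not, i.e. the parallel run should finish in well under N times the single-copy time.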
> Been a long while since I looked at nfs-ganesha. But in terms of upper
> bounds for throughput tests: data needs to flow over the
> client->nfs-server link, and then, depending on which servers the file
> is located on, either 1x (if the nfs-ganesha node is also hosting one
> copy of the file, and neglecting arbiter) or 2x over the s2s link. With
> 1Gbps links, that means an upper bound between 125 MB/s and 62.5 MB/s,
> in the steady state, unless I miscalculated.
Yes, you are correct, but the speeds I'm seeing are far below 62.5MB/s.
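Spelling out the bound you describe (just restating the arithmetic above, nothing more):

```shell
# Back-of-envelope version of the bound above: a 1 Gbps link moves at
# most 1000/8 = 125 MB/s; when each byte must also cross the
# server-to-server link twice (neither replica local to the ganesha
# node), that link halves the steady-state bound to 62.5 MB/s.
LINK_GBPS=1
BEST=$(awk -v g="$LINK_GBPS" 'BEGIN { printf "%.1f", g * 1000 / 8 }')
WORST=$(awk -v g="$LINK_GBPS" 'BEGIN { printf "%.1f", g * 1000 / 8 / 2 }')
echo "steady-state upper bound: $WORST .. $BEST MB/s"
```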
In the untar case, I fully understand the overhead, as there are a lot
of small files and therefore a lot of metadata overhead.
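For reference, the small-file test amounts to something like the following (the target path is a placeholder; point TARGET at a brick path or the fuse mount to compare the two, the default is a temp dir so it runs anywhere):

```shell
# Rough shape of the small-file benchmark: build a tarball of many
# small files, then time extraction plus sync on a given target.
TARGET=${TARGET:-$(mktemp -d)}    # e.g. /bricks/brick1/test or /mnt/glustervol/test
SRCDIR=$(mktemp -d)
for i in $(seq 1 200); do echo "small file $i" > "$SRCDIR/f$i"; done
TARBALL=$(mktemp)
tar -C "$SRCDIR" -cf "$TARBALL" .
# Time extraction plus sync, as in the test described above
time sh -c "tar -C '$TARGET' -xf '$TARBALL' && sync"
echo "$(ls "$TARGET" | wc -l) files extracted"
```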
For the sequential write, though, the speed should be much better, as
latency is (or should be) less of an issue there?
I've been trying to find some documentation on nfs-ganesha but
everything I find seems to be outdated :-(.
The documentation on their wiki states:
"
Version 2.0
This version is in active development and is not considered stable
enough for production use. Its documentation is still incomplete.
"
Their latest version is 2.6.0...
Also, I cannot find what changed between 2.5 and 2.6. Sure, I can look
at the git commits, but there is no maintained changelog,...
From what I've read, nfs-ganesha should be able to cache a lot of data, but
I can't find any good documentation on how to configure this.
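The one knob I've seen mentioned for the metadata/inode cache is a high-water mark on cached entries. The block and parameter names below are from memory and may well differ between 2.5 and 2.6 (the cache was apparently reworked into "mdcache" around 2.6), so treat this as a guess rather than documentation:

```
# ganesha.conf fragment -- names are assumptions, verify against your version.
# 2.6 reportedly uses an MDCACHE block; older versions used CACHEINODE.
MDCACHE {
    # High-water mark on the number of cached inode entries.
    Entries_HWMark = 100000;
}
```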
Regards,
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>