[Gluster-users] Large gluster read performance overhead?

Raghuram BK ram at fractalio.com
Tue Apr 28 10:02:48 UTC 2015


I've been trying to figure out the read performance overhead of gluster
over the underlying filesystem, and the difference is quite stark. Is
this normal? Can something be done about it? I've tried to eliminate
network overhead by doing everything locally, and to eliminate the
effect of caching by forcing reads to hit the hard drives. Here's what
I did:

1. Force the underlying filesystem (ZFS) to always read from disk (one
way this might be done is sketched after the volume info below)
2. Create the underlying storage (zfs create frzpool/normal/d1)
3. Create a gluster distributed volume with only one brick on the local
machine. (gluster volume create g1
fractalio-pri.fractalio.lan:/frzpool/normal/d1 force)
4. Start it (gluster volume start g1)
5. Check the volume info:

[root@fractalio-pri fractalio]# gluster v info g1

Volume Name: g1
Type: Distribute
Volume ID: e50f13d2-cb98-47f4-8113-3f15b4b6306a
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: fractalio-pri.fractalio.lan:/frzpool/normal/d1
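
(One way to do step 1, assuming caching is to be disabled for the whole
dataset used as the brick, is to turn off ZFS's ARC data caching on it:

zfs set primarycache=none frzpool/normal/d1    # serve data reads from disk, not the ARC
zfs set secondarycache=none frzpool/normal/d1  # likewise for L2ARC, if a cache device is attached

Setting primarycache=metadata instead would keep only metadata cached.)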

6. Mount it (mount -t glusterfs localhost:/g1 /mnt/g1)
7. Populate a test file into the volume:

[root@fractalio-pri fractalio]# dd if=/dev/zero of=/mnt/g1/ddfile1 bs=1M count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 8.4938 s, 247 MB/s
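
One more thing that could be done to keep the reads below from being
served out of client-side RAM is to drop the Linux page cache before
each dd, e.g.:

sync                                # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes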

8. Read the file from the gluster mount:

[root@fractalio-pri fractalio]# dd if=/mnt/g1/ddfile1 of=/dev/zero bs=1M
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 84.4174 s, 24.8 MB/s

9. Read the file directly from the underlying storage:

[root@fractalio-pri fractalio]# dd if=/frzpool/normal/d1/ddfile1 of=/dev/zero bs=1M
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 24.722 s, 84.8 MB/s


The throughput comes down from 84.8 MB/s to 24.8 MB/s, i.e. the read
through the gluster mount takes about 3.4x as long as reading the brick
directly, roughly a 240% overhead?!
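
One way to see where the extra time goes would be gluster's built-in
profiling, which reports per-FOP call counts and latencies for the brick
while the read is repeated:

gluster volume profile g1 start
dd if=/mnt/g1/ddfile1 of=/dev/null bs=1M   # repeat the read from step 8
gluster volume profile g1 info             # per-FOP latency breakdown for the brick
gluster volume profile g1 stop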