[Gluster-users] GlusterFS with FUSE slow vs ZFS volume
Pranith Kumar Karampuri
pkarampu at redhat.com
Thu Feb 5 11:17:18 UTC 2015
+Kiran Patil may know about this.
On 02/03/2015 12:56 AM, ML mail wrote:
> I am testing GlusterFS for the first time and have installed the latest stable GlusterFS 3.5 release on Debian 7, on brand-new SuperMicro hardware using ZFS instead of hardware RAID. My ZFS pool is a RAIDZ-2 of 6 SATA disks of 2 TB each.
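A single-brick test volume on top of a ZFS pool can be sketched as below. The pool name `tank`, dataset `brick1`, hostname `node1`, and volume name `testvol` are placeholders, not details from the original post:

```shell
# Dedicate a ZFS dataset to the brick (assumes pool "tank" already exists)
zfs create tank/brick1

# Create and start a one-brick GlusterFS volume on that dataset
# ("force" is needed because the brick sits on the root of a mount point)
gluster volume create testvol node1:/tank/brick1 force
gluster volume start testvol
gluster volume info testvol
```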
> After setting up a first single test brick on my (currently single) test node, I first wanted to see how much slower GlusterFS would be compared to writing directly to the ZFS volume. For that purpose I mounted my GlusterFS volume locally on the same server using FUSE.
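For reference, a local FUSE mount of such a volume looks like the following (`node1` and `testvol` are again placeholder names):

```shell
# Mount the Gluster volume on the same server through the FUSE client
mkdir -p /mnt/testvol
mount -t glusterfs node1:/testvol /mnt/testvol

# Verify the mount before benchmarking
df -h /mnt/testvol
```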
> For my tests I used bonnie++ with the command "bonnie++ -n16 -b", and I must say I am quite shocked to see that with this setup GlusterFS slows down the whole file system by a factor of approximately 6 to 8. For example:
> ZFS volume
> Sequential output by block (write): 936 MB/sec
> Sequential input by block (read): 1520 MB/sec
> GlusterFS on top of same ZFS volume mounted with FUSE
> Sequential output by block (write): 114 MB/sec
> Sequential input by block (read): 312 MB/sec
> Now I was wondering: is such a performance drop on a single GlusterFS node expected? If not, is it perhaps ZFS that is messing things up?
> bonnie++ took 3 minutes to run on the ZFS volume and 18 minutes on the GlusterFS mount. I have copied the bonnie++ results below in CSV format, just in case:
> Maybe there are a few performance tuning tricks that I am not aware of?
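A few commonly adjusted client-side performance options can be set per volume; which values help depends entirely on the workload, so treat these as starting points to benchmark, not recommendations (volume name `testvol` is a placeholder):

```shell
# Larger read cache on the client side
gluster volume set testvol performance.cache-size 1GB

# Allow more data to be buffered before flushing writes to the brick
gluster volume set testvol performance.write-behind-window-size 4MB

# More server-side I/O worker threads
gluster volume set testvol performance.io-thread-count 32

# Show the options currently set on the volume
gluster volume info testvol
```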
> Let me know if I should provide any more information. Thanks in advance for your comments.
> Best regards
> Gluster-users mailing list
> Gluster-users at gluster.org