[Gluster-users] NFS userspace vs kernelspace ESX
Matthew at verso.com.au
Wed Feb 24 13:01:38 UTC 2010
I've been evaluating GlusterFS as a storage solution for ESX over the
last couple of months. I have two Debian boxes interconnected over
10GbE, with Gluster set up in a replicated/mirrored (RAID 1 style)
arrangement.
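For context, the client-side volume definition is essentially the
stock two-way replicate setup; host and brick names below are
illustrative, not my actual config:

    volume remote1
      type protocol/client
      option transport-type tcp
      option remote-host server1
      option remote-subvolume brick
    end-volume

    volume remote2
      type protocol/client
      option transport-type tcp
      option remote-host server2
      option remote-subvolume brick
    end-volume

    # cluster/replicate mirrors every write to both bricks (RAID 1 style)
    volume mirror
      type cluster/replicate
      subvolumes remote1 remote2
    end-volume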
GlusterFS performance is excellent: simple 'dd' tests show I can
write at 300 MB/s+ to the mount. No complaints here.
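For the record, the test was nothing fancier than a large sequential
write, along these lines (mount point and sizes are illustrative):

    # Sequential write to the glusterfs mount; conv=fsync includes the
    # final cache flush in the timing so the figure is honest.
    dd if=/dev/zero of=/mnt/gluster/ddtest.bin bs=1M count=4096 conv=fsync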
So I proceeded to re-export the glusterfs mount via NFS, for service
to my ESX farm. Initially I used the Debian distro's
nfs-kernel-server package to achieve this. NFS is reliable; sadly,
however, maximum throughput to any gigabit client is less than
20 MB/s.
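For what it's worth, the exports entry was roughly the following
(network and option choices are mine; the one non-obvious part is
that the kernel server will not export a FUSE mount without an
explicit fsid):

    # /etc/exports - re-export of the glusterfs FUSE mount
    # fsid= is mandatory here: the kernel NFS server cannot derive a
    # stable filesystem id for a FUSE mount on its own.
    /mnt/gluster 192.168.1.0/24(rw,sync,no_subtree_check,fsid=10)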
If I use NFS to export a directory outside of the glusterfs mount
(i.e. no re-exporting going on), I can begin to saturate the link
with 90 MB/s+ transfers. So I switched approach and moved to the
unfs3booster userspace daemon, as outlined on gluster.org. Once it
was compiled and installed, I could saturate the link even on the
re-export of the glusterfs mount.
Fantastic! Problem solved, I thought.
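Getting unfs3booster running amounted to stopping the kernel server
and starting the userspace daemon against the same exports file. The
flags below are the stock unfs3 ones, so adjust for your build:

    # Free port 2049 and the MOUNT service first.
    /etc/init.d/nfs-kernel-server stop
    # -e selects the exports file; -d stays in the foreground, which
    # is handy while testing.
    unfsd -e /etc/exports -d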
However, a weird bug in ESX4 becomes evident. I can mount the NFS
export on the ESX host without a problem. It reports size and free
space correctly. But I can't browse any directory that contains a
large file. How large? I'm not sure exactly. I can browse
directories that contain multiple small files, and 1 GB files are
fine, but I cannot list the contents of directories that contain
large (16 GB) images/VMDKs. The datastore browser hits a response
timeout when I try. There are no specific errors in the client or
server syslogs that allude to why this happens.
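For completeness, the datastore was added on the ESX side in the
usual way; something like the following, with the host and datastore
names being placeholders:

    # Run on the ESX host: attach the NFS export as a datastore.
    esxcfg-nas -a -o gluster1.example.com -s /mnt/gluster glusterstore
    # Verify it mounted and shows the right capacity.
    esxcfg-nas -l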
I guess I am looking for anyone who has experience with this type of
NFS exporting to ESX. Has anyone had success with the userspace or
the kernel-space daemons? Do you see the performance issue or the
directory-listing problem I describe above?
Incidentally, today I eliminated FUSE from the equation by switching
to the booster client, and the results are the same as above, if not
worse.
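By "booster client" I mean running the NFS daemon under Gluster's
LD_PRELOAD interposer library, so its I/O goes through
libglusterfsclient rather than the FUSE mount. Roughly as below; the
library and fstab paths vary by install prefix, so treat them as
placeholders:

    # Point booster at its fstab, then preload the library around the
    # userspace NFS daemon so its file I/O bypasses FUSE entirely.
    export GLUSTERFS_BOOSTER_FSTAB=/etc/glusterfs/booster.fstab
    LD_PRELOAD=/usr/lib/glusterfs/glusterfs-booster.so unfsd -e /etc/exports -d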
Any suggestions? I am at a loss here!