[Gluster-users] glusterfs as vmware datastore in production

Jonathan Archer jf_archer at yahoo.com
Wed Jun 6 09:53:23 UTC 2018

What are you using for your presentation if not NFS? Are you using VMWare as the hypervisor?
Are you using a cluster vip across your nodes or using a single entry point via one node?

    On Wednesday, 6 June 2018, 10:04:42 BST, Dave Sherohman <dave at sherohman.org> wrote:  
 On Tue, Jun 05, 2018 at 06:38:16PM -0700, Benjamin Kingston wrote:
> You're better off exporting LUNs via iSCSI.

Speak for yourself.  I'm running the VMs on multiple physical systems
and migrating between them.  We were using LVM on top of iSCSI LUNs
before setting up gluster and it was a constant PITA having to propagate
filesystem metadata between the host systems, with the occasional
filesystem corruption when one host expected an LV to be a certain size
(or whatever) and a different host expected something else.

Turning the disk images into files on a remote filesystem removed all of
those issues.

CLVM probably would have also resolved those problems, but gluster
looked easier to set up, and it worked.  I had one minor problem with
FUSE (which was resolved by switching to libgfapi) and one less-minor
problem because I misunderstood how gluster handles quorum (which was
resolved by switching from replica 2 to replica 2 + arbiter).  Other
than that, gluster has worked perfectly for me in my use case since
day one.
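For reference, a replica 2 + arbiter volume (which sidesteps the split-brain risk of plain replica 2 by giving the cluster a quorum tiebreaker) is created roughly like this; hostnames and brick paths below are placeholders, not Dave's actual setup:

```shell
# Replica volume with 2 data bricks and 1 arbiter brick. The arbiter
# holds only file metadata, so it provides quorum without a third full
# copy of the data. Hostnames and paths are hypothetical.
gluster volume create vmstore replica 3 arbiter 1 \
    node1:/bricks/vmstore/brick \
    node2:/bricks/vmstore/brick \
    node3:/bricks/vmstore/arbiter

# Commonly recommended for VM image workloads: apply the "virt" group
# profile (enables sharding and VM-friendly cache settings).
gluster volume set vmstore group virt

gluster volume start vmstore
```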

> I spent a long time trying to get NFS to work via NFS-Ganesha as a
> datastore and the performance is not there, especially since HA NFS
> isn't an official feature of NFS-Ganesha.

Perhaps your issue was in the NFS layer, which I'm not using.  Even
back when I was using FUSE mounts instead of libgfapi, I was mounting
the volumes with the native glusterfs client, not NFS.
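To make the distinction concrete: a native FUSE mount talks the gluster protocol to all bricks directly, rather than going through an NFS gateway.  A sketch, with hypothetical host and volume names:

```shell
# Native glusterfs (FUSE) mount -- not an NFS export of the volume.
# backupvolfile-server gives the client a fallback node for fetching
# the volume layout if node1 is down. Names here are hypothetical.
mount -t glusterfs -o backupvolfile-server=node2 \
    node1:/vmstore /mnt/vmstore
```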

> Also keep in mind your write speed is cut in half/thirds/etc... with
> gluster as a VM datastore if you use replication since all writes are
> multiplied.

Yep, that's the price you pay for HA.

Also, although the writes are multiplied, they're also (at least
partially) concurrent, so performance isn't as bad as "divide by the
number of replicas".

Dave Sherohman
Gluster-users mailing list
Gluster-users at gluster.org