Dave,

What are you using to present the storage, if not NFS? Are you using VMware as the hypervisor?

Are you using a cluster VIP across your nodes, or a single entry point via one node?

Cheers
Jon
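P.S. For what it's worth, with a native GlusterFS mount (rather than NFS) a
single entry point usually only matters when the volfile is fetched at mount
time. A rough sketch of how that dependency is often avoided without a VIP;
the hostnames and volume name below are placeholders, not your setup:

    # node1 is only contacted to fetch the volfile; node2/node3 are fallbacks.
    mount -t glusterfs \
        -o backup-volfile-servers=node2:node3 \
        node1:/vmstore /mnt/vmstore

Once mounted, the client talks to every brick directly, so losing node1 later
does not take the mount down (as long as quorum holds).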
On Wednesday, 6 June 2018, 10:04:42 BST, Dave Sherohman <dave@sherohman.org> wrote:
On Tue, Jun 05, 2018 at 06:38:16PM -0700, Benjamin Kingston wrote:
> You're better off exporting LUNs via iSCSI.

Speak for yourself. I'm running the VMs on multiple physical systems
and migrating between them. We were using LVM on top of iSCSI LUNs
before setting up gluster and it was a constant PITA having to propagate
filesystem metadata between the host systems, with the occasional
filesystem corruption when one host expected an lv to be a certain size
(or whatever) and a different host expected something else.

Turning the disk images into files on a remote filesystem removed all of
those issues.

clvm probably would have also resolved those problems, but gluster
looked easier to set up, and it worked. I had one minor problem with
FUSE (which was resolved by switching to libgfapi) and one less-minor
problem because I misunderstood how gluster handles quorum (which was
resolved by switching from replica 2 to replica 2+A). Other than that,
gluster has worked perfectly for me in my use case since day one.

> I spent a long time trying to get NFS to work via NFS-Ganesha as a
> datastore and the performance is not there, especially since HA NFS
> isn't an official feature of NFS-Ganesha.

Perhaps your issue was in the NFS layer, which I'm not using. Even when
I was using FUSE mounts instead of libgfapi, I was mounting them as GFS,
not NFS.

> Also keep in mind your write speed is cut in half/thirds/etc... with
> gluster as a VM datastore if you use replication since all writes are
> multiplied.

Yep, that's the price you pay for HA.

Also, although the writes are multiplied, they're also (at least
partially) concurrent, so performance isn't as bad as "divide by the
number of replicas".

-- 
Dave Sherohman
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
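If I'm reading it right, the "replica 2+A" you mention is gluster's arbiter
layout: two data bricks plus a metadata-only arbiter brick, which keeps quorum
without a third full copy of the data. A rough sketch with placeholder host,
brick, and volume names (not your actual setup):

    # Two data bricks plus one arbiter; the arbiter holds only metadata,
    # so it breaks ties without storing a third copy of the images.
    gluster volume create vmstore replica 3 arbiter 1 \
        host1:/bricks/vmstore host2:/bricks/vmstore arbiter1:/bricks/vmstore
    gluster volume start vmstore

    # The libgfapi access you switched to bypasses the FUSE mount; with a
    # QEMU built with gluster support it is the gluster:// URI form
    # (image name is a placeholder):
    qemu-img info gluster://host1/vmstore/guest1.qcow2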