[Gluster-users] Trying XenServer again with Gluster

Russell Purinton russell.purinton at gmail.com
Tue Mar 22 13:25:39 UTC 2016


Thanks Andre,

Unfortunately, Citrix XenServer's qemu does not have libgfapi support,
though I have posted a feature request with them to possibly add it in
the future.  I'm not sure whether they will.

It's unfortunate that this can't be done with 2 servers, though it makes
sense.  Do you think it would work with 4 servers in the pool while still
using replica 2, or is replica 3 the minimum?  We've got a large amount of
data, and replica 2 would cost us about $878/mo whereas replica 3 would
cost us about $1317/mo for the same amount of storage...

On Tue, Mar 22, 2016 at 8:48 AM, André Bauer <abauer at magix.net> wrote:

> Hi Russel,
>
> I'm a KVM user, but IMHO Xen also supports accessing VM images through
> libgfapi, so you don't need to mount via NFS or the FUSE client.
>
> Infos:
>
> http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt
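>
> For reference, with a libgfapi-enabled qemu the disk image is opened via
> a gluster:// URL instead of a file on an NFS or FUSE mount. A minimal
> sketch (host, volume and image names below are just placeholders):
>
>   qemu-system-x86_64 -m 2048 \
>     -drive file=gluster://gluster1.example.com/myvol/vm1.img,format=raw,if=virtio,cache=none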
>
> The second point is that you need at least 3 replicas to get a working
> HA setup, because server quorum does not work with only 2 replicas.
>
> Infos:
> https://www.gluster.org/pipermail/gluster-users/2015-November/024189.html
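>
> As a rough sketch, a 3-way replicated volume with quorum enabled could
> be created along these lines (host names, brick paths and the volume
> name are placeholders):
>
>   gluster volume create vmstore replica 3 \
>     gluster1:/bricks/vmstore gluster2:/bricks/vmstore gluster3:/bricks/vmstore
>   gluster volume set vmstore cluster.server-quorum-type server
>   gluster volume set vmstore cluster.quorum-type auto
>   gluster volume start vmstore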
>
> Regards
> André
>
>
> On 20.03.2016 at 19:41, Russell Purinton wrote:
> > Hi all,
> >
> > Once again I’m trying to get XenServer working reliably with GlusterFS
> > storage for the VHDs. I’m mainly interested in the ability to have a
> > pair of storage servers, where if one goes down, the VMs can keep
> > running uninterrupted on the other server. So we’ll be using the
> > replicate translator to make sure all the data resides on both servers.
> >
> > Initially, I tried using the Gluster NFS server. XenServer supports
> > NFS out of the box, so this seemed like a good way to go without having
> > to hack XenServer much. However, I ran into some major performance
> > issues with this approach.
> >
> > I’m using a server with 12 SAS drives on a single RAID card, with dual
> > 10GbE NICs. Without Gluster, using the normal kernel NFS server, I can
> > read and write to this server at over 400 MB/sec, and VMs run well.
> > However, when I switch to the Gluster NFS server, my write performance
> > drops to 20 MB/sec while read performance remains high. I found out
> > this is due to XenServer’s use of O_DIRECT for VHD access. It helped a
> > lot when the server had DDR cache on the RAID card, but for servers
> > without that, the performance was unusable.
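> >
> > In case anyone wants to reproduce this, the gap shows up with a plain
> > dd against the mounted storage, since oflag=direct mimics the O_DIRECT
> > writes XenServer issues for VHDs (the mount point is just an example
> > path, not my actual setup):
> >
> >   # buffered writes - fast in my tests
> >   dd if=/dev/zero of=/mnt/gluster-test/ddfile bs=1M count=4096 conv=fdatasync
> >   # O_DIRECT writes - this is the slow path
> >   dd if=/dev/zero of=/mnt/gluster-test/ddfile bs=1M count=4096 oflag=direct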
> >
> > So I installed the Gluster client in XenServer itself and mounted the
> > volume in dom0, then created an SR of type “file”. Success, sort of! I
> > can do just about everything on that SR, VMs run nicely, and performance
> > is acceptable at 270 MB/sec, but I have a problem when I transfer an
> > existing VM to it: the transfer gets only so far along, then data stops
> > moving. XenServer still says it’s copying, but no data is being sent. I
> > have to force-restart the XenServer host to clear the issue (and the VM
> > isn’t moved). Other file access to the FUSE mount still works, and other
> > VMs are unaffected.
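> >
> > Roughly speaking, the setup in dom0 looked like the following (server,
> > volume and mount point names are placeholders, not my exact values):
> >
> >   mount -t glusterfs gluster1:/vmstore /mnt/gluster-vmstore
> >   xe sr-create name-label=gluster-sr type=file shared=true \
> >     device-config:location=/mnt/gluster-vmstore content-type=user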
> >
> > I think the problem may involve file locks or perhaps one of the
> > performance translators. I’ve tried disabling as many performance
> > translators as I can, but no luck.
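> >
> > For reference, the performance translators can be turned off per volume
> > with options along these lines (treat this as a sketch rather than my
> > exact list; the volume name is a placeholder):
> >
> >   gluster volume set vmstore performance.quick-read off
> >   gluster volume set vmstore performance.read-ahead off
> >   gluster volume set vmstore performance.io-cache off
> >   gluster volume set vmstore performance.stat-prefetch off
> >   gluster volume set vmstore performance.write-behind off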
> >
> > I didn’t find anything interesting in the logs, and there were no crash
> > dumps. I tried to do a volume statedump to see the list of locks, but it
> > only seemed to output some CPU stats in /tmp.
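> >
> > The command I used was essentially this (volume name is a placeholder);
> > as far as I understand, the dumps are supposed to land under
> > /var/run/gluster by default rather than /tmp:
> >
> >   gluster volume statedump vmstore all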
> >
> > Is there a generally accepted list of volume options to use with Gluster
> > for volumes meant to store VHDs? Has anyone else had a similar
> > experience with VHD access locking up?
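> >
> > (The closest thing I’ve come across is the “virt” option group that
> > ships with Gluster, applied in one go as below, but I don’t know whether
> > its settings are right for XenServer; the volume name is a placeholder.)
> >
> >   gluster volume set vmstore group virt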
> >
> > Russell
> >
> >
>
>
> --
> Kind regards
> André Bauer
>
> MAGIX Software GmbH
> André Bauer
> Administrator
> August-Bebel-Straße 48
> 01219 Dresden
> GERMANY
>
> tel.: 0351 41884875
> e-mail: abauer at magix.net
> www.magix.com
>
> Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt
> Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>