[Gluster-users] Trying XenServer again with Gluster
Russell Purinton
russell.purinton at gmail.com
Wed Mar 30 01:22:25 UTC 2016
With some recent posts regarding Arbiter, I’m a little confused.
I understand that with replica 2 you can’t get a quorum to prevent split-brain, but does that only apply when there are just 2 servers?
If I have 4 servers running distribute 2 replica 2, do I still get quorum, or do I need to add an arbiter?
What if I have 4 servers running distribute 2 replica 2, plus 2 additional servers that are part of the pool but not running bricks: do I still need replica 3 arbiter 1 for HA?
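For reference, this is roughly what creating an arbiter volume looks like with the gluster CLI (a sketch; the volume name and brick paths here are made up):

```shell
# Replica 3 with one arbiter brick per replica set: the arbiter brick
# stores only file metadata, so it can break split-brain ties without
# holding a full third copy of the data.
gluster volume create vmvol replica 3 arbiter 1 \
    server1:/bricks/vmvol server2:/bricks/vmvol arbiter1:/bricks/vmvol
gluster volume start vmvol
```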
> On Mar 22, 2016, at 8:48 AM, André Bauer <abauer at magix.net> wrote:
>
> Hi Russel,
>
> I'm a KVM user, but AFAIK Xen also supports accessing VM images through
> libgfapi, so you don't need to mount via NFS or the FUSE client.
>
> Infos:
> http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt
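(For context, libgfapi access from qemu looks roughly like this; the hostname and volume name below are placeholders, and allow-insecure is only needed because qemu connects as an unprivileged client:)

```shell
# Let unprivileged clients such as qemu connect to the bricks.
gluster volume set vmvol server.allow-insecure on

# Create and boot an image over the gluster:// URI -- no FUSE or NFS
# mount involved; qemu talks to the volume via libgfapi directly.
qemu-img create -f qcow2 gluster://gluster-host/vmvol/vm1.qcow2 20G
qemu-system-x86_64 -drive file=gluster://gluster-host/vmvol/vm1.qcow2,if=virtio
```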
>
> The second point is that you need at least 3 replicas to get a
> working HA setup, because server quorum does not work with 2 replicas.
>
> Infos:
> https://www.gluster.org/pipermail/gluster-users/2015-November/024189.html
>
> Regards
> André
>
>
> Am 20.03.2016 um 19:41 schrieb Russell Purinton:
>> Hi all. Once again I’m trying to get XenServer working reliably with
>> GlusterFS storage for the VHDs. I’m mainly interested in having a
>> pair of storage servers where, if one goes down, the VMs keep running
>> uninterrupted on the other. So we’ll be using the replicate
>> translator to make sure all the data resides on both servers.
>>
>> Initially, I tried the Gluster NFS server. XenServer supports NFS
>> out of the box, so this seemed like a good way to go without having
>> to hack XenServer much. However, I ran into major performance issues
>> with it.
>>
>> I’m using a server with 12 SAS drives on a single RAID card, with
>> dual 10GbE NICs. Without Gluster, using the normal kernel NFS
>> server, I can read and write to this server at over 400MB/sec, and
>> VMs run well. When I switch to the Gluster NFS server, however,
>> write performance drops to 20MB/sec while read performance remains
>> high. I found out this is due to XenServer’s use of O_DIRECT for VHD
>> access. A DDR write cache on the RAID card helped a lot, but on
>> servers without one the performance was unusable.
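The O_DIRECT effect is easy to reproduce with dd (a rough sketch; the mount path is a placeholder, and the numbers will vary with hardware):

```shell
# Buffered write: the client page cache absorbs and batches the I/O.
dd if=/dev/zero of=/mnt/nfs-sr/test.img bs=1M count=512

# O_DIRECT write: every 1 MiB write must reach the bricks before the
# next one is issued -- the access pattern XenServer uses for VHDs.
dd if=/dev/zero of=/mnt/nfs-sr/test.img bs=1M count=512 oflag=direct
```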
>>
>> So I installed the Gluster client in XenServer itself and mounted
>> the volume in dom0. I then created an SR of type “file”. Success,
>> sort of! I can do just about everything on that SR, VMs run nicely,
>> and performance is acceptable at 270MB/sec, BUT… I have a problem
>> when I transfer an existing VM to it. The transfer gets only so far
>> along, then data stops moving. XenServer still says it’s copying,
>> but no data is being sent. I have to force-restart the Xen host to
>> clear the issue (and the VM isn’t moved). Other file access to the
>> FUSE mount still works, and other VMs are unaffected.
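For anyone trying to reproduce this, the setup was roughly the following (a sketch; server, volume, and path names are made up):

```shell
# Mount the volume in dom0 with the FUSE client...
mount -t glusterfs server1:/vmvol /mnt/gluster

# ...then register that directory as a file-backed SR.
xe sr-create name-label="gluster-sr" type=file content-type=user \
    device-config:location=/mnt/gluster
```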
>>
>> I think the problem may involve file locks, or perhaps a performance
>> translator. I’ve tried disabling as many performance translators as
>> I can, with no luck.
>>
>> I didn’t find anything interesting in the logs, and no crash dumps.
>> I tried a volume statedump to see the list of locks, but it seemed
>> to only output some CPU stats in /tmp.
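In case it helps others hitting the same wall, this is roughly how I’d expect the lock inspection to work (hedged; the volume name is a placeholder):

```shell
# Ask the brick processes to dump their state, including granted and
# blocked inode/entry locks. Dumps normally land in /var/run/gluster/
# (configurable via server.statedump-path), not /tmp.
gluster volume statedump vmvol

# Look for lock sections in the dump files.
grep -A 5 'inodelk' /var/run/gluster/*.dump.*
```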
>>
>> Is there a generally accepted list of volume options to use with Gluster
>> for volumes meant to store VHDs? Has anyone else had a similar
>> experience with VHD access locking up?
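On the volume-options question: the option set commonly recommended for VM image stores is the “virt” group profile, which, to the best of my knowledge, is roughly equivalent to the individual settings below (the volume name is a placeholder):

```shell
# Apply the whole profile in one shot (the group file ships with the
# oVirt/RHEV integration packages)...
gluster volume set vmvol group virt

# ...or set the key options individually:
gluster volume set vmvol performance.quick-read off
gluster volume set vmvol performance.read-ahead off
gluster volume set vmvol performance.io-cache off
gluster volume set vmvol performance.stat-prefetch off
gluster volume set vmvol cluster.eager-lock enable
gluster volume set vmvol network.remote-dio enable
gluster volume set vmvol cluster.quorum-type auto
gluster volume set vmvol cluster.server-quorum-type server
```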
>>
>> Russell
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
> --
> Mit freundlichen Grüßen
> André Bauer
>
> MAGIX Software GmbH
> André Bauer
> Administrator
> August-Bebel-Straße 48
> 01219 Dresden
> GERMANY
>
> tel.: 0351 41884875
> e-mail: abauer at magix.net
> www.magix.com
>
> Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt
> Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205
>