[Gluster-users] Problems with GlusterFS 3.6.2 and Proxmox 3.3 using OpenVZ containers

Roman romeo.r at gmail.com
Mon Apr 20 20:57:31 UTC 2015


Hi, mate!

I'm also using Proxmox, but not with containers (my advice would be not
to use them). Proxmox with KVM ships with libgfapi support by default, so
it is something like 10x faster. Also, the days when OpenVZ containers
were much faster than KVM are long behind us; KVM nowadays is nearly as
fast as the host itself (98% or so). So why do you need old technology
like OpenVZ? KVM also supports templating, fast snapshots with LVM,
online changes to CPU, RAM, etc. I don't see any future for OpenVZ
anymore (I was a huge OpenVZ fan some years ago, by the way). IMHO Red Hat
did not pick KVM as the default in the core without reason: some time ago
it was pure Xen only, but now, when one installs CentOS or RHEL, KVM is
offered as the default virtualization solution.

But if, for some reason, you have to use containers: what are the mount
options for your OpenVZ mount? Have you tried tuning them (for example,
enabling direct I/O mode)? There are a lot of options available to make
GlusterFS faster under Linux.
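As a starting point, something like the following (the server, volume, and
mount-point names are placeholders for your own setup, and the option
values are only examples to experiment with, not tested recommendations):

```shell
# Mount the volume over FUSE with direct I/O enabled; backupvolfile-server
# gives the client a fallback node for fetching the volfile.
# (replace gluster1/gluster2 and data1 with your own names)
mount -t glusterfs -o direct-io-mode=enable,backupvolfile-server=gluster2 \
    gluster1:/data1 /mnt/data1

# Some commonly tuned performance options; defaults vary per release,
# so benchmark each change before keeping it:
gluster volume set data1 performance.cache-size 256MB
gluster volume set data1 performance.io-thread-count 32
gluster volume set data1 performance.write-behind-window-size 4MB
```
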

2015-04-20 23:44 GMT+03:00 wodel youchi <wodel.youchi at gmail.com>:

> Hi all,
>
> We set up 3 GlusterFS nodes for one of our clients: two master
> nodes and a slave node.
> The GlusterFS storage is used with Proxmox hypervisors.
>
> The platform :
>
> Two hypervisors Proxmox v3.3 with latest updates
>
> 2 nodes GlusterFS 3.6.2 (latest updates) on Centos7.1x64 (latest updates)
> + 1 node for geo-replication.
>
> On the GlusterFS cluster we have two volumes, data1 and data2, each
> composed of one brick; both volumes are in replicated mode.
> The volumes are geo-replicated to the third node, onto slavedata1 and
> slavedata2.
>
> We're using OpenVZ containers for the services (web, mail, ftp ...etc.).
>
> The problems:
>
> 1 - We're experiencing poor performance with our services, especially mail
> and web: web pages are slow, and when we start parallel jobs, such as the
> creation or restore of containers, they take a long time and sometimes fail.
>
> 2 - We also have problems with geo-replication:
>     a - It starts successfully, but some days later we find it in a faulty
> state; we haven't found the cause of the problem yet.
>     b - It does not continue synchronization. Here is the short story:
>     we had 16 GB of data on data2, which was synced correctly to the slave
> volume slavedata2. Then we copied a 100 GB container (a Zimbra mail
> server) onto the master volume; that container was never synced. We
> recreated the slavedata2 volume, but geo-replication stopped at 16 GB.
> We stopped the container and recreated the slavedata2 volume again; this
> time it seems to be working, at least for the moment. I keep monitoring.
>
> 3 - At some point we had to delete the geo-replication session. After
> completing all the manipulations (recreating the slavedata2 volume), we
> recreated the geo session. When we started it, it went into a faulty state,
> and the gstatus command displayed: volume data2 is down (all bricks are
> down). Why? No idea. The strange part: the gluster volume info/status
> data2 commands didn't show any error messages. We stopped and restarted
> the volume, and then started the geo-replication.
>
>
> Questions :
>
> 1 - Is GlusterFS suitable for containers?
> 2 - If yes, do we have to tune it for small-file usage? If so, any
> advice?
> 3 - Is there a way to show options values of a volume?
>
>
> Thanks in advance.
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
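On your question 3: `gluster volume info` lists only the options that have
been changed from their defaults, under "Options Reconfigured". The volume
name below is just an example from your setup:

```shell
# Options changed from their defaults appear under "Options Reconfigured"
gluster volume info data2

# Newer GlusterFS releases can also dump every option with its current
# value; check whether your 3.6.2 build supports this subcommand:
gluster volume get data2 all
```
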



-- 
Best regards,
Roman.

