[Gluster-users] Problems with GlusterFS 3.6.2 / Proxmox 3.3 using OpenVZ containers

wodel youchi wodel.youchi at gmail.com
Wed Apr 22 21:15:51 UTC 2015


Hi again, and thanks for the reply.

As I said we installed the storage platform for one of our clients who
needed a solution with some data replication.

So we opted for GlusterFS.

I have never used containers and I don't know much about them; all my
virtualization installations have been with KVM. Currently I am deploying
test environments with oVirt.

The client has old servers, some of which don't even support hardware
virtualization, so for now we have no choice but to use containers. There
is a plan to purchase new servers, but we're still waiting.

What do you mean by "what are the mount options for your openvz mount?"?
The GlusterFS volumes are mounted by Proxmox, which then uses the space to
create containers in it. We did the mount via the Proxmox GUI; the options
shown by the mount command are:
fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
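
For reference, here is a minimal sketch of how such a volume could be
mounted by hand with explicit options (node1/node2 are placeholders for our
storage nodes; the exact options Proxmox passes may differ):

    # Hypothetical manual mount of the data1 volume; node1/node2 are placeholders.
    # direct-io-mode bypasses the FUSE page cache; backupvolfile-server names
    # a fallback node in case node1 is unreachable.
    mount -t glusterfs -o direct-io-mode=enable,backupvolfile-server=node2 \
        node1:/data1 /mnt/pve/data1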


Is there a command to show the current volume option values? If we change an
option with gluster volume set, how do we go back?
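
In the meantime I found that something like the following might work (a
sketch from the docs, not yet verified on our 3.6.2 setup):

    # Show the options that were changed from their defaults
    # (the "Options Reconfigured:" section of the output).
    gluster volume info data2

    # List all available options together with their default values.
    gluster volume set help

    # Revert a single option back to its default.
    gluster volume reset data2 performance.cache-size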

PS: You said that OpenVZ containers have no future; I don't know if this is
the same for the other container technologies! Red Hat is also pushing
Docker, which many consider the future of virtualization.




2015-04-20 21:57 GMT+01:00 Roman <romeo.r at gmail.com>:

> Hi, mate!
>
> I'm also using Proxmox, but not with containers (my advice would be not
> to use them). Proxmox with KVM ships with libgfapi support by default, so
> it is something like 10x faster. Also, the times when OpenVZ containers
> were much faster than KVM are far behind; KVM nowadays is just about as
> fast as 98% of the host itself. So why do you need an old technology like
> OpenVZ? KVM also supports templating, fast snapshots with LVM, online
> changes to CPU, RAM, etc. I don't see any future for OpenVZ anymore (I was
> a huge OpenVZ fan some years ago, btw). IMHO RH is not stupid enough to
> keep supporting it in the kernel by default (some time ago it was pure Xen
> only); now, when one installs CentOS or RHEL, KVM is offered by default as
> the virtualization solution.
>
> But if you have to use containers (for some reason): what are the mount
> options for your openvz mount? Have you tried tuning them (e.g. enabling
> direct I/O mode)? There are a lot of options available to make GlusterFS
> faster under Linux.
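>
> For example, a few knobs I would try first (real option names, but the
> values are just starting points, not tested on your setup):
>
>     # Bigger client-side read cache.
>     gluster volume set data1 performance.cache-size 256MB
>     # More I/O threads for parallel workloads.
>     gluster volume set data1 performance.io-thread-count 32
>     # Cache stat() results; helps small-file workloads.
>     gluster volume set data1 performance.stat-prefetch on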
>
> 2015-04-20 23:44 GMT+03:00 wodel youchi <wodel.youchi at gmail.com>:
>
>> Hi all,
>>
>> We set up 3 GlusterFS nodes for one of our clients: two master
>> nodes and a slave node.
>> The GlusterFS storage is used with Proxmox hypervisors.
>>
>> The platform :
>>
>> Two hypervisors Proxmox v3.3 with latest updates
>>
>> 2 GlusterFS 3.6.2 nodes (latest updates) on CentOS 7.1 x64 (latest
>> updates), plus 1 node for geo-replication.
>>
>> On the GlusterFS cluster we have two volumes, data1 and data2, each
>> composed of one brick; the two volumes are in replicated mode.
>> The volumes are geo-replicated to the third node as slavedata1 and
>> slavedata2.
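>>
>> (For reference, the geo-replication sessions were created roughly like
>> this; "slavenode" is a placeholder for our slave's actual hostname:)
>>
>>     # Create and start the geo-replication session for data2.
>>     gluster volume geo-replication data2 slavenode::slavedata2 create push-pem
>>     gluster volume geo-replication data2 slavenode::slavedata2 start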
>>
>> We're using OpenVZ containers for the services (web, mail, ftp, etc.).
>>
>> The problems:
>>
>> 1 - We're experiencing poor performance with our services, especially
>> mail and web: web pages are slow, and if we start parallel jobs (creating
>> or restoring containers) they take a lot of time and sometimes fail.
>>
>> 2 - We also have problems with geo-replication:
>>     a - It starts successfully, but some days later we find it in a
>> faulty state; we haven't found the cause of the problem yet.
>>     b - It does not continue synchronization. Here is the short story:
>>     We had 16 GB of data on data2, which were synced correctly to the
>> slave volume slavedata2. Then we copied a 100 GB container onto the master
>> volume (the container is a Zimbra mail server); the container was not
>> synced. We recreated the slavedata2 volume, but the geo-replication
>> stopped at 16 GB.
>> We stopped the container and recreated the slavedata2 volume again; this
>> time it seems to be working!!!??? At least for the moment; I keep monitoring.
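>>
>> (I am watching it with something like the following; again "slavenode" is
>> a placeholder:)
>>
>>     # Check the health of the geo-replication session.
>>     gluster volume geo-replication data2 slavenode::slavedata2 status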
>>
>> 3 - At some point we had to delete the geo-replication session. After
>> completing all the manipulations (recreating the slavedata2 volume), we
>> recreated the geo session. When we started it, it went to a faulty state,
>> and the gstatus command displayed: volume data2 is down (all bricks are
>> down). Why??? No idea. The strange part: the glu v info/status data2
>> commands didn't show any error messages. We stopped and restarted the
>> volume, and then we started the geo-replication.
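>>
>> (The recovery sequence was essentially:)
>>
>>     # Bounce the volume, then restart geo-replication.
>>     gluster volume stop data2
>>     gluster volume start data2
>>     gluster volume geo-replication data2 slavenode::slavedata2 start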
>>
>>
>> Questions :
>>
>> 1 - Is GlusterFS suitable for containers?
>> 2 - If yes, do we have to tune it for small-file usage? And if so, any
>> advice?
>> 3 - Is there a way to show options values of a volume?
>>
>>
>> Thanks in advance.
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
> Best regards,
> Roman.
>