[Gluster-users] GlusterFS as virtual machine storage
pavel.szalbot at gmail.com
Wed Aug 23 19:13:28 UTC 2017
Hi, I believe it is not that simple. Even a replica 2 + arbiter volume
with the default network.ping-timeout will cause the underlying VM to
remount its filesystem read-only (a device error occurs) unless you
tune the mount options in the VM's fstab.
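As a hedged illustration of the tuning being described (the volume name
"myvol", the device name, and the timeout value below are placeholders,
not taken from the thread), the two relevant knobs look roughly like this:

```shell
# On the Gluster side: lower the ping timeout so an unreachable brick
# is declared dead sooner (the default is 42 seconds; the value here
# is illustrative, not a recommendation).
gluster volume set myvol network.ping-timeout 10

# Inside the guest's /etc/fstab: on I/O errors, keep running instead
# of remounting read-only (many distros default ext4 to
# errors=remount-ro). Device and mount point are placeholders:
# /dev/vda1  /  ext4  defaults,errors=continue  0  1
```

Whether `errors=continue` is appropriate depends on the workload; it trades
a read-only remount for the risk of continuing on a damaged filesystem.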
On Wed, Aug 23, 2017 at 6:59 PM, <lemonnierk at ulrar.net> wrote:
> What he is saying is that, on a two-node volume, upgrading a node will
> cause the volume to go down. That's nothing weird; you really should use
> three nodes.
> On Wed, Aug 23, 2017 at 06:51:55PM +0200, Gionatan Danti wrote:
>> On 23-08-2017 at 18:14, Pavel Szalbot wrote:
>> > Hi, after many VM crashes during Gluster upgrades, lost network
>> > connectivity on one node, etc., I would advise running replica 2 with
>> > an arbiter.
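For context, a minimal sketch of the setup being recommended (hostnames,
brick paths, and the volume name are placeholders): Gluster creates a
replica 2 + arbiter volume with the `replica 3 arbiter 1` syntax, where
the third brick stores only metadata, giving quorum without a full third
copy of the data.

```shell
# Create a replica 2 + arbiter volume: two data bricks plus one
# arbiter brick that holds metadata only (no file data).
# Hostnames and brick paths are placeholders.
gluster volume create myvol replica 3 arbiter 1 \
    server1:/bricks/brick1 \
    server2:/bricks/brick1 \
    server3:/bricks/arbiter1
gluster volume start myvol
```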
>> Hi Pavel, this is bad news :(
>> So, in your case at least, Gluster was not stable? Something as simple
>> as an update would make it crash?
>> > I once even managed to break this setup (with arbiter) due to network
>> > partitioning - one data node never healed and I had to restore from
>> > backups (it was easier and kind of non-production). Be extremely
>> > careful and plan for failure.
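To make the failure mode above concrete, a hedged sketch of how such a
stuck heal would typically be inspected and resolved with Gluster's
standard heal commands (the volume name and file path are placeholders):

```shell
# List entries pending heal, and those actually in split-brain,
# on volume "myvol" (placeholder name).
gluster volume heal myvol info
gluster volume heal myvol info split-brain

# One standard resolution policy: pick the bigger copy as the source
# for a given file (other policies include latest-mtime and
# source-brick). The path is a placeholder.
gluster volume heal myvol split-brain bigger-file <path-to-file>
```

None of this helps, of course, when the heal simply never converges, as
described above; then restoring from backup can indeed be the faster path.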
>> I would use VM locking via sanlock or virtlock, so a split brain should
>> not cause simultaneous changes on both replicas. I am more concerned
>> about volume heal time: what will happen if the standby node
>> crashes/reboots? Will *all* data be re-synced from the master, or only
>> the changed bits? As stated above, I would like to avoid using
>> sharding...
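To make the heal question concrete: Gluster's replication tracks dirty
files via changelog extended attributes, so only files modified during
the outage are queued for heal; without sharding, however, a large VM
image is a single file, so the whole image is the unit being healed.
A hedged sketch of the relevant commands and knobs (volume name is a
placeholder):

```shell
# Show how many entries still need healing after a node comes back.
gluster volume heal myvol statistics heal-count

# Heal changed regions of a file by checksumming blocks rather than
# copying the whole file (relevant for big VM images without sharding).
gluster volume set myvol cluster.data-self-heal-algorithm diff

# Sharding, which the poster prefers to avoid, would instead split
# images into shards so only dirty shards are healed:
# gluster volume set myvol features.shard on
```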
>> Danti Gionatan
>> Supporto Tecnico
>> Assyoma S.r.l. - www.assyoma.it
>> email: g.danti at assyoma.it - info at assyoma.it
>> GPG public key ID: FF5F32A8
>> Gluster-users mailing list
>> Gluster-users at gluster.org