[Gluster-users] GlusterFS as virtual machine storage

WK wkmail at bneit.com
Thu Aug 24 00:13:46 UTC 2017


That really isn't an arbiter issue, or for that matter a Gluster issue. We 
have seen the same thing with vanilla NAS servers that had some issue or another.

An arbiter simply makes the problem less likely than with plain replica 2, 
but in turn an arbiter is less 'safe' than full replica 3.
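
For reference, an arbiter volume is created with the standard replica 
syntax; a minimal sketch, where the host names and brick paths are just 
placeholders:

# gluster volume create myvol replica 3 arbiter 1 \
      host1:/bricks/brick1 host2:/bricks/brick1 host3:/bricks/arbiter1

The last brick listed becomes the arbiter and holds only file metadata, 
which is why it protects against split-brain without the full storage 
cost of replica 3.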

However, with regard to Gluster and read-only (RO) behaviour:

The default SCSI disk timeout on most OS versions is 30 seconds, while 
Gluster's network.ping-timeout defaults to 42 seconds, so yes, you can 
trigger an RO event.

# cat /sys/block/sda/device/timeout
30
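
The Gluster side can be checked and changed per volume ('myvol' below is 
a placeholder); just be careful about lowering it, since the default is 
deliberately high because reconnecting clients after a disconnect is 
expensive:

# gluster volume get myvol network.ping-timeout
# gluster volume set myvol network.ping-timeout 20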

Though it is easy enough to raise, as Pavel mentioned:

# echo 90 > /sys/block/sda/device/timeout
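
Note that the echo does not survive a reboot. One way to make it 
persistent is a udev rule; an untested sketch, with a made-up file name:

# cat /etc/udev/rules.d/99-disk-timeout.rules
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/bin/sh -c 'echo 90 > /sys/block/%k/device/timeout'"

An rc.local entry or a small systemd unit works just as well.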

As a purely observational note, we have noticed that EXT3/4 filesystems 
on VMs will go read-only much more readily than XFS systems (even with the 
default timeout and regardless of storage type). We have always 
wondered about that, though part of that observation is biased because 
we tend to use XFS on newer VMs, which means newer, better kernels.
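
The ext side of that is tunable, for what it's worth, and is presumably 
what Pavel means by fstab options below. The errors= behaviour can be 
inspected and changed (the device name here is a placeholder, and note 
that errors=continue trades safety for uptime):

# tune2fs -l /dev/vda1 | grep -i behavior
# tune2fs -e continue /dev/vda1

or equivalently errors=continue / errors=remount-ro / errors=panic as a 
mount option in the VM's fstab.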

Likewise, virtio "disks" don't even expose a timeout value that I am aware 
of, and I don't recall them being especially sensitive to disk issues on 
Gluster, NFS, or DAS.
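
That is easy to check on a virtio-blk guest (vda being the usual device 
name); the attribute simply is not there:

# cat /sys/block/vda/device/timeout
cat: /sys/block/vda/device/timeout: No such file or directory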

All our newer VMs use virtio instead of SATA/IDE emulation AND XFS, so we 
rarely see an RO situation, and when we do, it was a good thing the VMs 
went RO to protect themselves while the storage system was misbehaving.
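
For the libvirt users, the relevant disk stanza looks roughly like this. 
This is a sketch only, with made-up host/volume/image names, and 
error_policy='stop' is an optional extra that pauses the guest on I/O 
errors instead of passing them through:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' error_policy='stop'/>
  <source protocol='gluster' name='gvol/vm1.img'>
    <host name='gluster1.example.com' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>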

On 8/23/2017 12:26 PM, lemonnierk at ulrar.net wrote:
> Really? I can't see why. But I've never used an arbiter, so you probably
> know more about this than I do.
>
> In any case, with replica 3, we've never had a problem.
>
> On Wed, Aug 23, 2017 at 09:13:28PM +0200, Pavel Szalbot wrote:
>> Hi, I believe it is not that simple. Even a replica 2 + arbiter volume
>> with the default network.ping-timeout will cause the underlying VM to
>> remount its filesystem as read-only (a device error will occur) unless
>> you tune the mount options in the VM's fstab.
>> -ps
>>
>>
>> On Wed, Aug 23, 2017 at 6:59 PM,  <lemonnierk at ulrar.net> wrote:
>>> What he is saying is that, on a two-node volume, upgrading a node will
>>> cause the volume to go down. That's nothing weird; you really should use
>>> 3 nodes.
>>>
>>> On Wed, Aug 23, 2017 at 06:51:55PM +0200, Gionatan Danti wrote:
>>>> On 23-08-2017 18:14, Pavel Szalbot wrote:
>>>>> Hi, after many VM crashes during Gluster upgrades, loss of network
>>>>> connectivity on one node, etc., I would advise running replica 2 with
>>>>> an arbiter.
>>>> Hi Pavel, this is bad news :(
>>>> So, in your case at least, Gluster was not stable? Something as simple
>>>> as an update would make it crash?
>>>>
>>>>> I once even managed to break this setup (with arbiter) due to network
>>>>> partitioning - one data node never healed and I had to restore from
>>>>> backups (it was easier and kind of non-production). Be extremely
>>>>> careful and plan for failure.
>>>> I would use VM locking via sanlock or virtlockd, so a split brain should
>>>> not cause simultaneous changes on both replicas. I am more concerned
>>>> about volume heal time: what will happen if the standby node
>>>> crashes/reboots? Will *all* data be re-synced from the master, or will
>>>> only the changed bits be re-synced? As stated above, I would like to
>>>> avoid using sharding...
>>>>
>>>> Thanks.
>>>>
>>>>
>>>> --
>>>> Danti Gionatan
>>>> Technical Support
>>>> Assyoma S.r.l. - www.assyoma.it
>>>> email: g.danti at assyoma.it - info at assyoma.it
>>>> GPG public key ID: FF5F32A8
