[Gluster-users] VM disks corruption on 3.7.11

Nicolas Ecarnot nicolas at ecarnot.net
Tue May 24 11:12:13 UTC 2016


On 24/05/2016 12:54, Lindsay Mathieson wrote:
> On 24/05/2016 8:24 PM, Kevin Lemonnier wrote:
>> So the VMs were configured with cache set to none; I just tried with
>> cache=directsync and it seems to fix the issue. I still need to run
>> more tests, but I did a couple already with that option and saw no I/O
>> errors.
>>
>> I never had to do this before, so is this a known issue? I found the clue
>> in some old mail from this mailing list; did I miss some doc saying you
>> should be using directsync with GlusterFS?
>
> Interesting, I remember seeing some issues with cache=none on the
> Proxmox mailing list. I use writeback or the default, which might be why
> I haven't encountered these issues. I suspect you would find writethrough
> works as well.
>
>
> From the Proxmox wiki:
>
>
> "/This mode causes qemu-kvm to interact with the disk image file or
> block device with O_DIRECT semantics, so the host page cache is bypassed //
> //     and I/O happens directly between the qemu-kvm userspace buffers
> and the          storage device. Because the actual storage device may
> report //
> //     a write as completed when placed in its write queue only, the
> guest's virtual storage adapter is informed that there is a writeback
> cache, //
> //     so the guest would be expected to send down flush commands as
> needed to manage data integrity.//
> //     Equivalent to direct access to your hosts' disk, performance wise./"
>
>
> I'll restore a test vm and try cache=none myself.

Hi,

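For reference, on a plain libvirt/qemu host the setting being discussed is
the per-disk cache= option. A minimal sketch of what Kevin describes, with
purely illustrative paths and sizes (qemu started by hand, image sitting on
a FUSE-mounted gluster volume):

  # 'directsync' is one of the standard qemu cache modes
  # (none, writeback, writethrough, directsync, unsafe)
  qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/mnt/glusterfs/vm01.qcow2,format=qcow2,if=virtio,cache=directsync

With libvirt, the same thing ends up as cache='directsync' on the disk's
<driver> element in the domain XML.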
Is there any risk that this could also apply to oVirt VMs stored on GlusterFS?
I see no place where I could specify this cache setting in an oVirt + gluster setup.
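What should at least be possible on any hypervisor node is checking which
cache mode VDSM actually handed to qemu for a running VM (read-only virsh;
the domain name "MyVM" is just an example):

  # Show the disk <driver> line libvirt generated; oVirt typically
  # sets cache='none' here
  virsh -r dumpxml MyVM | grep -i 'driver name'

  # Or look at the cache= options on the live qemu command lines
  ps -ef | grep -o 'cache=[^,[:space:]]*' | sort | uniq -c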

-- 
Nicolas ECARNOT

