[Gluster-users] VM disks corruption on 3.7.11

Lindsay Mathieson lindsay.mathieson at gmail.com
Tue May 24 10:54:24 UTC 2016


On 24/05/2016 8:24 PM, Kevin Lemonnier wrote:
> So the VMs were configured with cache set to none. I just tried with
> cache=directsync and it seems to be fixing the issue. I still need to run
> more tests, but I did a couple already with that option and saw no I/O errors.
>
> I never had to do this before; is it a known issue? I found the clue in some
> old mail from this mailing list. Did I miss some doc saying you should be
> using directsync with glusterfs?

Interesting, I remember seeing some issues with cache=none on the 
Proxmox mailing list. I use writeback or the default, which might be why I 
haven't encountered these issues. I suspect you would find writethrough 
works as well.
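
For what it's worth, here's roughly how I set the cache mode on a disk; the 
VM ID, storage name and image names below are just placeholders for 
illustration:

    # Proxmox: set the cache mode on an existing virtio disk
    qm set 100 -virtio0 gluster-store:vm-100-disk-1,cache=writeback

    # Plain QEMU equivalent, using the gluster:// protocol driver
    qemu-system-x86_64 ... \
        -drive file=gluster://server/volume/images/vm.qcow2,format=qcow2,cache=writeback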


From the Proxmox wiki:


"/This mode causes qemu-kvm to interact with the disk image file or 
block device with O_DIRECT semantics, so the host page cache is bypassed //
//     and I/O happens directly between the qemu-kvm userspace buffers 
and the          storage device. Because the actual storage device may 
report //
//     a write as completed when placed in its write queue only, the 
guest's virtual storage adapter is informed that there is a writeback 
cache, //
//     so the guest would be expected to send down flush commands as 
needed to manage data integrity.//
//     Equivalent to direct access to your hosts' disk, performance wise./"
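
My rough understanding of how the common cache modes map to host-side 
behaviour (paraphrasing the QEMU docs, nothing gluster-specific):

    cache=none          O_DIRECT, writeback     page cache bypassed; guest must send flushes
    cache=directsync    O_DIRECT, writethrough  bypassed; each write completes only on stable storage
    cache=writethrough  page cache, writethrough  each write completes only on stable storage
    cache=writeback     page cache, writeback   cached; guest must send flushes

If that's accurate, it would explain why directsync behaves differently from 
none here: both bypass the page cache, but directsync additionally waits for 
each write to reach stable storage before completing it.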


I'll restore a test VM and try cache=none myself.

-- 
Lindsay Mathieson
