[Gluster-users] KVM guest I/O errors with xfs backed gluster volumes

Anand Avati avati at gluster.org
Tue Oct 29 07:51:12 UTC 2013


It looks like what is happening is that qemu performs an ioctl() on the
backend to query the logical_block_size (for direct I/O alignment). That
works on XFS, but fails on FUSE, so qemu ends up performing I/O with the
default 512-byte alignment rather than 4K.
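
For reference, here is a minimal sketch of that kind of alignment probe,
modeled on qemu's XFS path (the XFS_IOC_DIOINFO ioctl, assuming the
xfsprogs development headers are installed). The error handling and the
512 fallback are illustrative, not qemu's exact code:

/* Sketch: ask the filesystem for its direct-I/O constraints.
 * Works on XFS; fails on a FUSE mount, leaving the caller with
 * the 512-byte default even on 4K-sector bricks. */
#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <xfs/xfs.h>            /* XFS_IOC_DIOINFO, struct dioattr */

static unsigned probe_dio_alignment(int fd)
{
    struct dioattr da;

    if (ioctl(fd, XFS_IOC_DIOINFO, &da) == 0)
        return da.d_miniosz;    /* minimum direct-I/O size, e.g. 4096 */

    return 512;                 /* ioctl unsupported (e.g. FUSE) */
}

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;

    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    printf("direct I/O alignment: %u\n", probe_dio_alignment(fd));
    close(fd);
    return 0;
}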

This looks like something we can enhance in the gluster driver in qemu.
Note that glusterfs does not have an ioctl() FOP, but we could probably
wire up a virtual xattr for this purpose.
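
One way to wire that up: qemu reads a virtual xattr through the normal
getxattr() path, much like the existing glusterfs.pathinfo key. A
minimal sketch follows; the key name "glusterfs.logical-block-size" is
purely hypothetical, nothing exposes it today:

/* Sketch of the virtual-xattr idea.  The key name below is
 * HYPOTHETICAL, used only to illustrate the mechanism. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/xattr.h>

static unsigned query_logical_block_size(const char *path)
{
    char buf[32];
    ssize_t n = getxattr(path, "glusterfs.logical-block-size",
                         buf, sizeof(buf) - 1);

    if (n <= 0)
        return 512;             /* not supported: keep today's default */

    buf[n] = '\0';
    return (unsigned)strtoul(buf, NULL, 10);
}

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    printf("logical block size: %u\n",
           query_logical_block_size(argv[1]));
    return 0;
}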

Copying Bharata to check if he has other solutions in mind.

Avati



On Tue, Oct 29, 2013 at 12:13 AM, Anand Avati <avati at gluster.org> wrote:

> What happens when you try to use KVM on an image directly on XFS, without
> involving gluster?
>
> Avati
>
>
> On Sun, Oct 27, 2013 at 5:53 PM, Jacob Yundt <jyundt at gmail.com> wrote:
>
>> I think I finally made some progress on this bug!
>>
>> I noticed that all disks in my gluster server(s) have 4K sectors.
>> Using an older disk with 512-byte sectors, I did _not_ get any errors
>> on my gluster client / KVM server.  I then switched back to my newer
>> 4K drives and manually set the XFS sector size (sectsz) to 512.  With
>> the sector size forced to 512, everything worked as expected.
>>
>> I think I might be hitting some sort of qemu/libvirt bug.  However,
>> all of the bugs I found that sound similar[1][2] have already been
>> fixed in RHEL6.
>>
>> Anyone else using XFS-backed bricks on 4K-sector drives to host KVM
>> images in RHEL6?
>>
>> -Jacob
>>
>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=608548
>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=748902
>>
>
>
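
For anyone who wants to check their bricks for the 4K-sector condition
Jacob describes above, here is a small sketch that reports a disk's
logical and physical sector sizes via the BLKSSZGET/BLKPBSZGET ioctls
(/dev/sdX below is a placeholder for the brick's backing disk):

/* Sketch: report a disk's logical and physical sector sizes, to spot
 * 4K-sector ("advanced format") drives. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>           /* BLKSSZGET, BLKPBSZGET */

int main(int argc, char **argv)
{
    int logical = 0;
    unsigned int physical = 0;
    int fd = open(argc > 1 ? argv[1] : "/dev/sdX", O_RDONLY);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    ioctl(fd, BLKSSZGET, &logical);     /* logical sector size  */
    ioctl(fd, BLKPBSZGET, &physical);   /* physical sector size */
    printf("logical: %d  physical: %u\n", logical, physical);
    close(fd);
    return 0;
}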