[Gluster-users] 3.7.13 & proxmox/qemu
David Gossage
dgossage at carouselchecks.com
Thu Jul 21 14:19:50 UTC 2016
Have there been any release notes or bug reports indicating that the removal
of aio support was intentional? In the case of Proxmox the workaround seems
easy enough to apply.
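For what it's worth, the Proxmox workaround just amounts to forcing a cache
mode on the affected disks, something along these lines (the VM ID, bus and
volume name below are only placeholders):

  # force a writeback cache mode so qemu stops opening the image with O_DIRECT
  qm set 100 --scsi0 glusterstore:vm-100-disk-1,cache=writeback

i.e. the equivalent of adding ",cache=writeback" to the disk line in
/etc/pve/qemu-server/100.conf.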
However, in the case of oVirt I can change the cache method per VM with a
custom property key, but the dd process that tests storage backends has the
direct flag hard-coded in the Python scripts, from what I have found so far.
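For context, the probe those scripts end up running is effectively a dd of
this shape (the path and block size here are placeholders, not the real oVirt
mount point or values):

  # one small read with O_DIRECT; this is what trips up on the affected volume
  dd if=/path/to/storage-domain/metadata of=/dev/null bs=4096 count=1 iflag=direct

the idea being that if the volume refuses O_DIRECT, that dd fails and the
storage domain gets flagged even though ordinary buffered I/O still works.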
I could potentially swap to nfs-ganesha, but again, in oVirt exporting and
re-importing a storage domain under a different protocol is not something
you want to be doing if you can avoid it. I'd probably end up creating a
second gluster volume and migrating disk by disk.
Just trying to figure out what the roadmap for this is and what resolution I
should ultimately be heading toward.
David Gossage
Carousel Checks Inc. | System Administrator
Office 708.613.2284
On Sat, Jul 9, 2016 at 7:49 PM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:
> Did a quick test this morning - 3.7.13 is now working with libgfapi - yay!
>
>
> However, I do have to enable write-back or write-through caching in qemu
> before the VMs will start; I believe this is to do with aio support. Not a
> problem for me.
>
> I see there are settings for storage.linux-aio and storage.bd-aio - not
> sure whether they are relevant or which ones to play with.
>
> thanks,
>
> --
> Lindsay Mathieson
>