[Gluster-users] NIC died migration timetable moved up

David Gossage dgossage at carouselchecks.com
Sun Jul 10 01:01:20 UTC 2016


On Sat, Jul 9, 2016 at 7:45 PM, Lindsay Mathieson <lindsay.mathieson at gmail.com> wrote:

> On 10/07/2016 5:17 AM, David Gossage wrote:
>
>> Came in this morning to update to 3.7.12 and noticed that 3.7.13 had been
>> released, so I shut down the VMs and gluster volumes and updated. The
>> update process itself went smoothly, but on starting up the oVirt engine
>> the main gluster storage volume didn't activate. I manually activated it
>> and it came up, but oVirt wouldn't report how much space was used. The
>> oVirt nodes did mount the volume and let me start VMs; however, after a
>> few minutes oVirt would claim the volume was inactive again, even though
>> the nodes themselves still had it mounted and the VMs were still running.
>> Found these errors flooding the gluster logs on the nodes.
>>
>
> Hi David, I did a quick test this morning with Proxmox and 3.7.13 and was
> able to get it working with the fuse mount *and* libgfapi.
>
>
> One caveat - you *have* to enable qemu caching, either write-back or
> write-through. 3.7.12 & 13 seem to now disable aio support, and qemu
> requires aio when caching is turned off.
>
>
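If I'm reading that right, something like this is what you mean? Just a
sketch of where the cache setting goes; the VM ID, storage name and image
path below are placeholders, not my actual setup:

  # Proxmox: switch an existing virtio disk to write-back caching
  qm set 100 --virtio0 gluster-store:vm-100-disk-1,cache=writeback

  # plain qemu with libgfapi: pass cache=writeback on the drive
  qemu-system-x86_64 ... -drive file=gluster://node1/datastore/vm.img,format=raw,cache=writeback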
I'll see if I can free up a test setup to play around with it some more. It
seems stable at 3.7.11 for now, so I'll probably spend my time getting the
disks sharded and getting the 3rd node back in the cluster.
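For the sharding piece, this is roughly what I have in mind (the volume name
is just an example, and as I understand it sharding only applies to files
written after it's turned on, so existing images would need to be copied back
onto the volume to pick it up):

  gluster volume set datastore features.shard on
  gluster volume set datastore features.shard-block-size 64MB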


> There are settings for aio in gluster that I haven't played with yet.
>
> --
> Lindsay Mathieson
>
>
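I haven't tried them either, but I believe the aio-related knobs are the
o-direct ones. Something along these lines, untested on my end and with the
volume name again a placeholder:

  gluster volume set datastore network.remote-dio enable
  gluster volume set datastore performance.strict-o-direct off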