[Gluster-devel] [Gluster-users] 3.7.13 & proxmox/qemu
David Gossage
dgossage at carouselchecks.com
Thu Jul 21 18:00:45 UTC 2016
On Thu, Jul 21, 2016 at 12:48 PM, David Gossage <dgossage at carouselchecks.com> wrote:
> On Thu, Jul 21, 2016 at 9:58 AM, David Gossage <dgossage at carouselchecks.com> wrote:
>
>> On Thu, Jul 21, 2016 at 9:52 AM, Niels de Vos <ndevos at redhat.com> wrote:
>>
>>> On Sun, Jul 10, 2016 at 10:49:52AM +1000, Lindsay Mathieson wrote:
>>> > Did a quick test this morning - 3.7.13 is now working with libgfapi - yay!
>>> >
>>> >
>>> > However I do have to enable write-back or write-through caching in qemu
>>> > before the VMs will start; I believe this is to do with AIO support. Not a
>>> > problem for me.
>>> >
>>> > I see there are settings for storage.linux-aio and storage.bd-aio - not sure
>>> > as to whether they are relevant or which ones to play with.
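For reference, the cache mode Lindsay mentions is set on the qemu drive. With
libgfapi it might look roughly like this (hostname, volume name and image path
are just placeholders; cache=none is the mode that opens the image with
O_DIRECT):

    qemu-system-x86_64 ... \
      -drive file=gluster://host1/VOLNAME/images/vm1.qcow2,format=qcow2,if=virtio,cache=writeback
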
>>>
>>> Both storage.*-aio options are used by the brick processes. Depending on
>>> what type of brick you have (linux = filesystem, bd = LVM Volume Group)
>>> you could enable one or the other.
>>>
>>> We do have a strong suggestion to set these "gluster volume group .."
>>> options:
>>>
>>> https://github.com/gluster/glusterfs/blob/master/extras/group-virt.example
>>>
>>> From those options, network.remote-dio seems most related to your aio
>>> theory. It was introduced with http://review.gluster.org/4460 that
>>> contains some more details.
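
For anyone wanting to try those, they are ordinary volume options. A rough
sketch, with VOLNAME as a placeholder and assuming the group file from that
link is installed as /var/lib/glusterd/groups/virt:

    # brick-side AIO - pick the one matching the brick type
    gluster volume set VOLNAME storage.linux-aio on   # filesystem bricks
    gluster volume set VOLNAME storage.bd-aio on      # LVM/BD bricks

    # apply the recommended virt settings in one go
    gluster volume set VOLNAME group virt

    # the option Niels points at specifically
    gluster volume set VOLNAME network.remote-dio enable
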
>>>
>>
>
> Wonder if this may be related at all:
>
> * #1347553: O_DIRECT support for sharding
> https://bugzilla.redhat.com/show_bug.cgi?id=1347553
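
If it is that bug, an O_DIRECT write through a fuse mount of the sharded volume
might show it. Just a sketch - VOLNAME, mount point and file name are placeholders:

    mount -t glusterfs -o direct-io-mode=enable ccgl1.gl.local:/VOLNAME /mnt/test
    dd if=/dev/zero of=/mnt/test/odirect-probe bs=1M count=100 oflag=direct
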
>
> Is it possible to downgrade from 3.8 back to 3.7.x?
>
> Building a test box right now anyway, but wondering.
>
May be anecdotal with the small sample size, but the few people who have had
the issue all seem to have ZFS-backed gluster volumes.
Now that I think back to the day I updated, the gluster volume on XFS that I
use for my hosted engine never had issues.
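
As far as I know ZFS on Linux does not support O_DIRECT (opens with O_DIRECT
fail with EINVAL) while XFS does, which would fit that pattern. A quick check
straight on the brick filesystem - file name made up:

    dd if=/dev/zero of=/gluster1/BRICK1/1/odirect-test bs=1M count=10 oflag=direct
    # expected to fail with "Invalid argument" on a ZFS dataset and succeed on XFS
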
>
>
>
>>
>> Thanks. With the exception of stat-prefetch, I have those enabled.
>> I could try turning that back off, though at the time of the update to 3.7.13
>> it was off. I didn't turn it back on until later the next week, after
>> downgrading back to 3.7.11.
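
Turning it back off would just be (VOLNAME standing in for the volume name):

    gluster volume set VOLNAME performance.stat-prefetch off
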
>>
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: ccgl1.gl.local:/gluster1/BRICK1/1
>> Brick2: ccgl2.gl.local:/gluster1/BRICK1/1
>> Brick3: ccgl4.gl.local:/gluster1/BRICK1/1
>> Options Reconfigured:
>> diagnostics.brick-log-level: WARNING
>> features.shard-block-size: 64MB
>> features.shard: on
>> performance.readdir-ahead: on
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: on
>> cluster.eager-lock: enable
>> network.remote-dio: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> server.allow-insecure: on
>> cluster.self-heal-window-size: 1024
>> cluster.background-self-heal-count: 16
>> performance.strict-write-ordering: off
>> nfs.disable: on
>> nfs.addr-namelookup: off
>> nfs.enable-ino32: off
>>
>>
>>> HTH,
>>> Niels
>>>
>>>
>>
>>
>