[Gluster-devel] Suggestions for improving the block/gluster driver in QEMU

Prasanna Kalever pkalever at redhat.com
Thu Jul 28 10:50:59 UTC 2016


On Thu, Jul 28, 2016 at 4:13 PM, Niels de Vos <ndevos at redhat.com> wrote:
> On Thu, Jul 28, 2016 at 03:51:11PM +0530, Prasanna Kalever wrote:
>> On Thu, Jul 28, 2016 at 3:32 PM, Niels de Vos <ndevos at redhat.com> wrote:
>> > There are some features in QEMU that we could implement with the
>> > existing libgfapi functions. Kevin asked me about this a while back, and
>> > I have finally (sorry for the delay Kevin!) taken the time to look into
>> > it.
>> >
>> > There are some optional operations that can be set in the BlockDriver
>> > structure. The ones that are missing but that we could provide, or that
>> > have useless implementations, are these:
>> >
>> >   .bdrv_get_info/.bdrv_refresh_limits:
>> >     This seems to set values in a BlockDriverInfo and BlockLimits
>> >     structure that is used by QEMU's block layer. By setting the right
>> >     values, we can use glfs_discard() and glfs_zerofill() to reduce the
>> >     writing of 0-bytes that QEMU falls back on at the moment.
>>
>> Hey Niels and Kevin,
>>
>> In one of our discussions, Jeff showed interest in knowing about
>> discard support in Gluster upstream.
>> I think his intention was the same here.
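
To make the idea concrete, here is a rough, untested sketch of the kind of
change this would mean for block/gluster.c. The function name is just a
placeholder following the existing qemu_gluster_* naming, and the BlockLimits
field names are the ones I see in the current block layer headers, so they may
need adjusting; the point is only that once the limits are advertised, the
generic block layer can route write-zeroes/discard requests to
glfs_zerofill()/glfs_discard() instead of falling back to writing zero buffers:

    /* Sketch only -- assumes the .bdrv_refresh_limits prototype from
     * block_int.h; the limit values are illustrative, not measured. */
    static void qemu_gluster_refresh_limits(BlockDriverState *bs, Error **errp)
    {
        /* Advertise that the driver can take large discard and
         * write-zeroes requests directly. */
        bs->bl.max_pdiscard      = 32 * 1024 * 1024;
        bs->bl.max_pwrite_zeroes = 32 * 1024 * 1024;
    }

    /* and in the BlockDriver definition: */
    /*     .bdrv_refresh_limits = qemu_gluster_refresh_limits, */
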
>>
>> >
>> >   .bdrv_has_zero_init / qemu_gluster_has_zero_init:
>> >     Currently always returns 0. But if a file gets created on a Gluster
>> >     volume, it should never have old contents in it. Rewriting it with
>> >     0-bytes looks unneeded to me.
>>
>> I agree
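
Just to sketch what that change might look like (assuming the callback keeps
its existing prototype; today it hard-codes 0):

    /* Sketch: per the reasoning above, a file newly created on a Gluster
     * volume reads back as zeroes, so report that to the block layer and
     * let it skip the explicit zero-writing pass. */
    static int qemu_gluster_has_zero_init(BlockDriverState *bs)
    {
        return 1;
    }
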
>>
>> >
>> > With these improvements, certain operations through the gluster:// URL
>> > usage in QEMU (and now also the new JSON QAPI) are expected to be a little
>> > faster. Anyone starting to work on this would want to trace the actual
>> > operations (on a single-brick volume) with ltrace/wireshark on the
>> > system where QEMU runs.
>> >
>> > Who is interested to take this on?
>>
>> Of course I am very much interested in doing this work :)
>>
>> But please expect at least a week or two before I can start on this,
>> as my plate is currently full with block store tasks.
>>
>> Hopefully this is meant for 2.8 (as 2.7 is in hard freeze), so I think
>> the delay should be acceptable.
>
> Thanks! There are no strict timelines for any of the community work. It
> all depends on what your manager(s) want to see in future productized
> versions. At the moment, and for all I know, this is just an improvement
> that we should do at some point.

Yeah, right!

Since we are okay with the timelines, I shall get this done in time
for qemu-2.8.

Thanks for bringing this to our attention :)

--
Prasanna

>
> Niels

