[Gluster-devel] [Gluster-users] uWSGI plugin and some question

Anand Avati anand.avati at gmail.com
Wed Jul 31 05:01:21 UTC 2013


On Tue, Jul 30, 2013 at 7:47 AM, Roberto De Ioris <roberto at unbit.it> wrote:

>
> > On Mon, Jul 29, 2013 at 10:55 PM, Anand Avati <anand.avati at gmail.com>
> > wrote:
> >
> >
> > I am assuming the module in question is this -
> > https://github.com/unbit/uwsgi/blob/master/plugins/glusterfs/glusterfs.c.
> > I see that you are not using the async variants of any of the glfs calls
> > so far. I also believe you would like these "synchronous" calls to play
> > nicely with Coro:: by yielding in a compatible way (and getting woken up
> > when the response arrives in a compatible way) - rather than implementing
> > an explicit glfs_stat_async(). The ->request() method does not seem to
> > naturally allow the use of "explicitly asynchronous" calls within.
> >
> > Can you provide some details of the event/request management in use? If
> > possible, I would like to provide hooks for yield and wakeup primitives
> in
> > gfapi (which you can wire with Coro:: or anything else) such that these
> > seemingly synchronous calls (glfs_open, glfs_stat etc.) don't starve the
> > app thread without yielding.
> >
> > I can see those hooks having a benefit in the qemu gfapi driver too,
> > removing a bit of code there which integrates callbacks into the event
> > loop
> > using pipes.
> >
> > Avati
> >
> >
>
> This is a prototype of async way:
>
>
> https://github.com/unbit/uwsgi/blob/master/plugins/glusterfs/glusterfs.c#L43
>
> basically, once the async request is sent, the uWSGI core (it can be a
> coroutine, a greenthread or another callback) waits for a signal (via a
> pipe [could be eventfd() on Linux]) of the callback completion:
>
>
> https://github.com/unbit/uwsgi/blob/master/plugins/glusterfs/glusterfs.c#L78
>
> the problem is that this approach is racy with respect to the
> uwsgi_glusterfs_async_io structure.


It is probably OK, since you are waiting for the completion of the AIO
request before issuing the next. One question I have about your usage is:
who is draining the "\1" written to the pipe in
uwsgi_glusterfs_read_async_cb()? Since the same pipe is re-used for the
next read chunk, won't you get an immediate wake-up if you poll the pipe
without draining it?


> Can I assume that after glfs_close() all of
> the pending callbacks are cleared?


With the way you are using the _async() calls, you do have the guarantee -
because you are waiting for the completion of each AIO request right after
issuing.

The enhancement to gfapi I was proposing was to expose hooks at the yield()
and wake() points, so that external consumers can wire in their own ways of
switching out of the stack. This is still a half-baked idea, but it would
let you use only glfs_read(), glfs_stat() etc. (and NOT the explicit async
variants), with the hooks doing wait_read_hook() and write(pipefd, '\1')
respectively in a generic way, independent of the actual call.


> In such a way I could simply
> deallocate it (it is currently on the stack) at the end of the request.
>

You would only need all of that if you want multiple outstanding AIOs at
the same time. From what I see, you just need co-operative waiting until
call completion.

Also note that the ideal block size for performing IO is 128KB. 8KB is too
little for a distributed filesystem.

Avati

