[Gluster-devel] io-threads segfault

Anand Avati avati at zresearch.com
Sat Oct 6 08:56:58 UTC 2007


Kevan,
 Using io-threads on the server helps push I/O out to separate worker
threads, which keeps the server process more responsive for metadata
operations. You may not see a direct increase in read or write throughput,
but it makes a difference when the server is under heavy load from a lot of
clients.
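
For reference, a rough sketch of the usual server-side placement, reusing
the volume names and option values from your spec (only the master branch
shown); the point is that io-threads sits between the disk-backed volumes
and protocol/server:

volume master-raw
        type storage/posix
        option directory /data/graphita/glustertest/share
end-volume

volume master-single
        type features/posix-locks
        subvolumes master-raw
end-volume

volume master
        type performance/io-threads
        option thread-count 8
        option cache-size 64MB
        subvolumes master-single
end-volume

volume server
        type protocol/server
        option transport-type tcp/server
        option listen-port 6996
        subvolumes master
        option auth.ip.master.allow 172.16.1.*
end-volume

With that in place, file I/O is handed off to the worker threads while the
main thread stays free to answer metadata calls.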

avati

On 10/5/07, Kevan Benson <kbenson at a-1networks.com> wrote:
>
>
> Yep, I got them both.  I added them at the same time, so I knew to look at
> both to fix them.
>
> Interestingly enough, I never seem to see any performance benefit from
> adding io-threads.  I suspect that either
> a) I'm not utilizing them correctly or
> b) My usage and testing doesn't trigger the cases where it's useful.
>
> I've been focusing on fault tolerance and high availability up until
> now, so I'm just starting to experiment with the performance
> translators.  Does anyone have suggestions for which performance
> translators to use, and where to place them, in a config where AFR and
> Unify are done on the client side?
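
One common arrangement on the client side is to stack write-behind and
read-ahead (and optionally io-threads) on top of the topmost cluster
volume. A rough sketch, assuming your unify volume is called 'unify0'
(just a placeholder) and leaving all tunables at their defaults:

volume iothreads
        type performance/io-threads
        option thread-count 4
        subvolumes unify0
end-volume

volume writebehind
        type performance/write-behind
        subvolumes iothreads
end-volume

volume readahead
        type performance/read-ahead
        subvolumes writebehind
end-volume

The mount then uses the topmost volume ('readahead' here). Exact option
names vary between releases, so treat this only as a starting point.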
>
> Alexander Attarian wrote:
> > And also, 'master' has 'master' as its own subvolume for io-threads; you
> > might want to fix that too :)
> >
> > On 10/4/07, Kevan Benson <kbenson at a-1networks.com> wrote:
> >
> >> Yep, it does.  Thanks!  I knew it was probably a config problem on my
> >> side somewhere, but I figured you would want to know about the
> >> segfault.
> >>
> >> Anand Avati wrote:
> >>
> >>> Oops,
> >>>  'slave' has 'slave' as its own subvolume in the spec. We missed that
> >>> check in the parser! Thanks for notifying us. A proper spec should still
> >>> work for you, though.
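> >>>
> >>> Concretely, the io-threads volumes were presumably meant to point at
> >>> the posix-locks volumes below them rather than at themselves, i.e.
> >>> something like:
> >>>
> >>> volume master
> >>>         type performance/io-threads
> >>>         option thread-count 8
> >>>         option cache-size 64MB
> >>>         subvolumes master-single
> >>> end-volume
> >>>
> >>> volume slave
> >>>         type performance/io-threads
> >>>         option thread-count 8
> >>>         option cache-size 64MB
> >>>         subvolumes slave-single
> >>> end-volume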
> >>>
> >>> thanks,
> >>> avati
> >>>
> >>> On 10/5/07, Kevan Benson <kbenson at a-1networks.com> wrote:
> >>>
> >>>
> >>>> Trying to start glusterfsd with this config causes a segfault.
> >>>> Removing the io-threads sections results in a running config.  It
> >>>> looks like an infinite loop of some sort.
> >>>>
> >>>> volume ns
> >>>>         type storage/posix
> >>>>         option directory /data/graphita/glustertest/namespace
> >>>> end-volume
> >>>>
> >>>> volume master-raw
> >>>>         type storage/posix
> >>>>         option directory /data/graphita/glustertest/share
> >>>> end-volume
> >>>>
> >>>> volume master-single
> >>>>         type features/posix-locks
> >>>>         subvolumes master-raw
> >>>> end-volume
> >>>>
> >>>> volume master
> >>>>            type performance/io-threads
> >>>>            option thread-count 8
> >>>>            option cache-size 64MB
> >>>>            subvolumes master
> >>>> end-volume
> >>>>
> >>>> volume slave-raw
> >>>>         type storage/posix
> >>>>         option directory /data/graphita/glustertest/afr
> >>>> end-volume
> >>>>
> >>>> volume slave-single
> >>>>         type features/posix-locks
> >>>>         subvolumes slave-raw
> >>>> end-volume
> >>>>
> >>>> volume slave
> >>>>            type performance/io-threads
> >>>>            option thread-count 8
> >>>>            option cache-size 64MB
> >>>>            subvolumes slave
> >>>> end-volume
> >>>>
> >>>> volume server
> >>>>         type protocol/server
> >>>>         option transport-type tcp/server
> >>>>         option listen-port 6996
> >>>>         subvolumes ns master slave
> >>>>         option auth.ip.ns.allow 172.16.1.*
> >>>>         option auth.ip.master.allow 172.16.1.*
> >>>>         option auth.ip.slave.allow 172.16.1.*
> >>>> end-volume
> >>>>
> >>>> volume trace
> >>>>         type debug/trace
> >>>>         subvolumes server
> >>>>         option debug on
> >>>> end-volume
> >>>>
> >>>> Here's the backtrace.
> >>>> #0  0x00d2b73a in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #1  0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #2  0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #3  0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #4  0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #5  0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #6  0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #7  0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #8  0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #9  0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #10 0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #11 0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #12 0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #13 0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #14 0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #15 0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #16 0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #17 0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #18 0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #19 0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #20 0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>> #21 0x00d2b73f in xlator_init_rec (xl=0x8556008) at xlator.c:228
> >>>>
> >>>> This continues on; I haven't had the patience to find the end.
> >>>>
> >>>>
> >>>
> >>>
> >>>
> >
> >
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



-- 
It always takes longer than you expect, even when you take into account
Hofstadter's Law.

-- Hofstadter's Law


