[Gluster-devel] Qemu segfaults with recent git
M. Mohan Kumar
mohan at in.ibm.com
Sat Sep 1 07:16:58 UTC 2012
Looks like the mem_pool free list is corrupted?
Breakpoint 1, mem_get0 (mem_pool=0x555556561dd0) at mem-pool.c:354
354 {
(gdb) n
357 if (!mem_pool) {
(gdb)
354 {
(gdb)
357 if (!mem_pool) {
(gdb)
362 ptr = mem_get(mem_pool);
(gdb) s
mem_get (mem_pool=0x555556561dd0) at mem-pool.c:372
372 {
(gdb) n
378 if (!mem_pool) {
(gdb)
383 LOCK (&mem_pool->lock);
(gdb)
386 if (mem_pool->cold_count) {
(gdb)
385 mem_pool->alloc_count++;
(gdb)
386 if (mem_pool->cold_count) {
(gdb)
387 list = mem_pool->list.next;
(gdb)
391 mem_pool->cold_count--;
(gdb)
388 list_del (list);
(gdb)
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff35eb46a in list_del (old=0x7fffee177054) at list.h:51
51 old->prev->next = old->next;
(gdb) up
#1 mem_get (mem_pool=0x555556561dd0) at mem-pool.c:388
388 list_del (list);
(gdb) p *mem_pool
$3 = {list = {next = 0x7fffee177054, prev = 0x7fffee286fcc},
      hot_count = 1, cold_count = 16383, lock = 0,
      padded_sizeof_type = 68, pool = 0x7fffee177010,
      pool_end = 0x7fffee287010, real_sizeof_type = 40,
      alloc_count = 2, pool_misses = 0, max_alloc = 1,
      curr_stdalloc = 0, max_stdalloc = 0,
      name = 0x555556561e50 "glusterfs:data_pair_t",
      global_list = {next = 0x555556561d98, prev = 0x555556561ed8}}
(gdb)
On Fri, 31 Aug 2012 17:08:29 -0700, Anand Avati <anand.avati at gmail.com> wrote:
> On Fri, Aug 31, 2012 at 5:56 AM, M. Mohan Kumar <mohan at in.ibm.com> wrote:
>
> > On Fri, 31 Aug 2012 00:27:15 -0700, Anand Avati <anand.avati at gmail.com>
> > wrote:
> > > Can you please do a git bisect to identify the faulty patch?
> > >
> >
> > When I reverted this patch, qemu works
> >
> > 49ba15d599a8979d1d3df7a39204d52081d8719e fuse: make background queue
> > length configurable
> >
> >
> None of the files modified in that patch are even loaded or linked in a
> qemu configuration. Very unlikely to be the cause!
>
> Avati