[Gluster-devel] Spurious regression of tests/basic/mgmt_v3-locks.t

Xavier Hernandez xhernandez at datalab.es
Fri Oct 31 10:18:07 UTC 2014


Hi,

On 10/31/2014 09:31 AM, Xavier Hernandez wrote:
> Hi Atin,
>
> On 10/31/2014 05:47 AM, Atin Mukherjee wrote:
>> On 08/24/2014 11:41 PM, Justin Clift wrote:
>>> I'd be kind of concerned about dropping the test case instead of it
>>> being fixed.  It sort of seems like these last few spurious failures
>>> may be due to subtle bugs in GlusterFS (my impression :>), so
>>> probably better to get them fixed. :)
>>
>> Justin,
>>
>> For the last three runs, I've observed the same failure. I think it's
>> really time to debug this without any further delay. Can you please
>> share a rackspace machine so that I can debug this issue?
>>
>> Xavi,
>>
>> Some of the ec regressions are also showing spurious failures [1],
>> although all of them pass locally.
>>
>> [1] http://build.gluster.org/job/rackspace-regression-2GB-triggered/2320/consoleFull
>>
>
> I'll see what's happening.

I think I've found the bug. The bug is not related to ec, but to the 
memory pool framework (at least this is what everything seems to indicate).

This specific failure happened during the dump of pending frames 
initiated by a USR1 signal.

In gf_proc_dump_call_frame() a copy of the frame is made inside a locked 
region:

88              ret = TRY_LOCK(&call_frame->lock);
89              if (ret)
90                      goto out;
91
92              memcpy(&my_frame, call_frame, sizeof(my_frame));
93              UNLOCK(&call_frame->lock);

call_frame->lock does not protect most of the updates to the fields of 
the call_frame_t structure, especially the wind_from, wind_to, 
unwind_from and unwind_to pointers modified by the STACK_WIND and 
STACK_UNWIND macros.
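
To make the race explicit, here is a minimal, self-contained sketch of 
the two paths involved. The structure, names and locking below are 
simplified stand-ins (a pthread mutex and a toy frame instead of 
call_frame_t and the real macros), not the actual GlusterFS code:

#include <pthread.h>
#include <string.h>

/* Toy stand-in for call_frame_t: just the lock and one of the traced fields. */
struct frame_sketch {
        pthread_mutex_t lock;
        const char     *unwind_from;
};

static struct frame_sketch frame = { PTHREAD_MUTEX_INITIALIZER, NULL };

/* Stand-in for the STACK_UNWIND side: the field is updated with a plain
 * store and no lock held. */
static void *unwind_path(void *arg)
{
        (void)arg;
        for (int i = 0; i < 1000000; i++)
                frame.unwind_from = "client3_3_lookup_cbk";
        return NULL;
}

/* Stand-in for gf_proc_dump_call_frame(): the frame is copied while holding
 * the lock, but the lock does not serialize the store above. */
static void *dump_path(void *arg)
{
        (void)arg;
        struct frame_sketch copy;

        for (int i = 0; i < 1000000; i++) {
                if (pthread_mutex_trylock(&frame.lock) == 0) {
                        memcpy(&copy, &frame, sizeof(copy));
                        pthread_mutex_unlock(&frame.lock);
                }
        }
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        pthread_create(&t1, NULL, unwind_path, NULL);
        pthread_create(&t2, NULL, dump_path, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}

In this toy the field happens to be naturally aligned, so the plain 
store is effectively atomic on x86-64. The problem described below is 
precisely that the frames handed out by the memory pool are not aligned, 
so the same access pattern can produce a torn copy.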

This wouldn't be a problem if all these updates were atomic; however, it 
seems that the memory pool framework can return unaligned pointers (at 
least on 64-bit architectures):

(gdb) print call_frame
$19 = (call_frame_t *) 0x7f4609a141c4

This means that the pointer fields inside the structure can also sit at 
unaligned addresses:

(gdb) print &call_frame->unwind_from
$20 = (const char **) 0x7f4609a14244

At the processor level, this means that a modification of the 
unwind_from field needs two memory access cycles, making the update 
non-atomic and prone to partial (torn) reads by other threads.
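
As a quick sanity check (plain standalone arithmetic, nothing 
GlusterFS-specific), the two addresses from the gdb session can be 
tested against the 8-byte natural alignment of a pointer:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* Addresses taken from the gdb session above. */
        uintptr_t frame_addr = 0x7f4609a141c4UL;   /* call_frame               */
        uintptr_t field_addr = 0x7f4609a14244UL;   /* &call_frame->unwind_from */

        /* A 64-bit pointer is naturally aligned only if its address is a
         * multiple of 8.  Both addresses end in 4, so both are off by
         * 4 bytes. */
        printf("call_frame          %% 8 = %lu\n", (unsigned long)(frame_addr % 8));
        printf("&frame->unwind_from %% 8 = %lu\n", (unsigned long)(field_addr % 8));

        return 0;
}

Both checks print 4, confirming the 4-byte misalignment.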

In fact this seems to be what happened:

(gdb) print *call_frame
$21 = {root = 0x7f460984a280, parent = 0x7f460984a8e8,
next = 0x7f4609a13454, prev = 0x7f4609a15540, local = 0x0,
this = 0xae2470, ret = 0x7f45fec75311 <ec_lookup_cbk>, ref_count = 0,
lock = 1, cookie = 0x9, complete = _gf_true, op = GF_FOP_NULL,
begin = {tv_sec = 0, tv_usec = 0}, end = {tv_sec = 0, tv_usec = 0},
wind_from = 0x7f45fecdc082 <__FUNCTION__.13893> "ec_wind_lookup",
wind_to = 0x7f45fecdbd20 "ec->xl_list[idx]->fops->lookup",
unwind_from = 0x7f45fef26c80 <__FUNCTION__.19453> "client3_3_lookup_cbk",
unwind_to = 0x7f45fecdbd3f "ec_lookup_cbk"}
(gdb) print my_frame
$22 = {root = 0x7f460984a280, parent = 0x7f460984a8e8,
next = 0x7f4609a13454, prev = 0x7f4609a15540, local = 0xb6a0b4,
this = 0xae2470, ret = 0x7f45fec75311 <ec_lookup_cbk>, ref_count = 0,
lock = 0, cookie = 0x9, complete = _gf_false, op = GF_FOP_NULL,
begin = {tv_sec = 0, tv_usec = 0}, end = {tv_sec = 0, tv_usec = 0},
wind_from = 0x7f45fecdc082 <__FUNCTION__.13893> "ec_wind_lookup",
wind_to = 0x7f45fecdbd20 "ec->xl_list[idx]->fops->lookup",
unwind_from = 0x7f4500000000 <error: Cannot access memory at address 0x7f4500000000>,
unwind_to = 0x7f45fecdbd3f "ec_lookup_cbk"}

The copy made into my_frame captured only half of the unwind_from 
pointer because it was being updated concurrently by another thread. If 
we check the current contents of call_frame, we can see that the update 
had already completed before the crash, but the copy in my_frame remains 
corrupted:

(gdb) print call_frame->unwind_from
$23 = 0x7f45fef26c80 <__FUNCTION__.19453> "client3_3_lookup_cbk"
(gdb) print my_frame.unwind_from
$24 = 0x7f4500000000 <error: Cannot access memory at address 0x7f4500000000>
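
The bad value in my_frame.unwind_from is consistent with a copy that 
picked up the upper half of the new pointer and the lower half of the 
old one (I'm assuming the field was still NULL before the update; the 
exact previous value is an assumption on my part):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* Values from the gdb session above. */
        uint64_t new_ptr = 0x7f45fef26c80UL;   /* pointer being stored by the unwinding thread */
        uint64_t old_ptr = 0x0UL;              /* assumed previous value (NULL)                */

        /* Upper 32 bits from the new value, lower 32 bits from the old one. */
        uint64_t torn = (new_ptr & 0xffffffff00000000UL) |
                        (old_ptr & 0x00000000ffffffffUL);

        printf("torn value = 0x%llx\n", (unsigned long long)torn);   /* 0x7f4500000000 */

        return 0;
}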

This can cause all sorts of problems, from random crashes to garbage data.

I'm not sure if this bug can be triggered by other use cases...

Xavi

