[Gluster-devel] GlusterFS 1.3.9tla786 unify client crashes while moving dirs

Amar S. Tumballi amar at zresearch.com
Thu Jun 26 17:05:31 UTC 2008


Hi NovA,
 Fix committed. Thanks for the backtraces.

Regards,
Amar

2008/6/26 Amar S. Tumballi <amar at zresearch.com>:

> Hi NovA,
>  Thanks for these backtraces. I will look into it right away.
>
> Regards,
> Amar
>
> 2008/6/26 NovA <av.nova at gmail.com>:
>
>> Hello everybody!
>>
>> Since 1.3.9tla784, glusterFS crashes while moving directories. For
>> example, the command "mv dir1/ dir2/" leads to a crash with the
>> following backtrace:
>> -------
>> Program terminated with signal 11, Segmentation fault.
>> #0  unify_rename_cbk (frame=0x2aaab0ce8cd0, cookie=0x2aaab0ce9e20,
>> this=0x6109b0, op_ret=0,
>>    op_errno=2, buf=0x2aaab0ce7210) at unify.c:3266
>> 3266              for (index = 0; list[index] != -1; index++)
>>
>> (gdb) bt
>> #0  unify_rename_cbk (frame=0x2aaab0ce8cd0, cookie=0x2aaab0ce9e20,
>> this=0x6109b0, op_ret=0,
>>    op_errno=2, buf=0x2aaab0ce7210) at unify.c:3266
>> #1  0x00002aaaaaab330b in client_rename_cbk (frame=0x2aaab0ce9e20,
>> args=<value optimized out>)
>>    at client-protocol.c:3578
>> #2  0x00002aaaaaab2372 in notify (this=0x610620, event=<value
>> optimized out>, data=0x649200)
>>    at client-protocol.c:4937
>> #3  0x00002ac8cb3d36d3 in sys_epoll_iteration (ctx=0x606010) at epoll.c:64
>> #4  0x00002ac8cb3d2a79 in poll_iteration (ctx=0x0) at transport.c:312
>> #5  0x0000000000402948 in main (argc=-883030544, argv=0x7fffdf905658)
>> at glusterfs.c:565
>> -------
>>
>> GlusterFS unify also crashes if it finds a duplicate of a file
>> (generated by previous glusterFS versions). In that case the client
>> log ends with these lines:
>> --------
>> 2008-06-20 10:49:55 E [unify.c:881:unify_open] bricks:
>> /nova/.mc/filepos: entry_count is 3
>> 2008-06-20 10:49:55 E [unify.c:884:unify_open] bricks:
>> /nova/.mc/filepos: found on c33
>> 2008-06-20 10:49:55 E [unify.c:884:unify_open] bricks:
>> /nova/.mc/filepos: found on c-ns
>> 2008-06-20 10:49:55 E [unify.c:884:unify_open] bricks:
>> /nova/.mc/filepos: found on c48
>> 2008-06-20 10:50:25 E [unify.c:335:unify_lookup] bricks: returning
>> ESTALE for /nova/.mc/filepos: file count is 3
>> 2008-06-20 10:50:25 E [unify.c:339:unify_lookup] bricks:
>> /nova/.mc/filepos: found on c33
>> 2008-06-20 10:50:25 E [unify.c:339:unify_lookup] bricks:
>> /nova/.mc/filepos: found on c-ns
>> 2008-06-20 10:50:25 E [unify.c:339:unify_lookup] bricks:
>> /nova/.mc/filepos: found on c48
>> 2008-06-20 10:50:25 E [fuse-bridge.c:468:fuse_entry_cbk]
>> glusterfs-fuse: 3068: (34) /nova/.mc/filepos => -1 (116)
>> 2008-06-20 10:50:25 E [unify.c:881:unify_open] bricks:
>> /nova/.mc/filepos: entry_count is 3
>> 2008-06-20 10:50:25 E [unify.c:884:unify_open] bricks:
>> /nova/.mc/filepos: found on c33
>> 2008-06-20 10:50:25 E [unify.c:884:unify_open] bricks:
>> /nova/.mc/filepos: found on c-ns
>> 2008-06-20 10:50:25 E [unify.c:884:unify_open] bricks:
>> /nova/.mc/filepos: found on c48
>>
>> [... skipped client.vol spec ...]
>>
>> frame : type(1) op(35)
>> frame : type(1) op(35)
>> frame : type(1) op(35)
>> ------
>>
>> And the backtrace obtained from the core file is:
>> --------
>> Program terminated with signal 11, Segmentation fault.
>> #0  __destroy_inode (inode=0x2b602017eb70) at inode.c:296
>> 296         for (pair = inode->ctx->members_list; pair; pair = pair->next)
>> {
>>
>> (gdb) bt
>> #0  __destroy_inode (inode=0x2b602017eb70) at inode.c:296
>> #1  0x00002aaaaacca31e in unify_rename_unlink_cbk
>> (frame=0x2b602017e9c0, cookie=0xd27d10,
>>    this=0x2b602017e9d0, op_ret=<value optimized out>, op_errno=2) at
>> unify.c:3150
>> #2  0x00002aaaaaab146c in client_unlink_cbk (frame=0xd27d10,
>> args=<value optimized out>)
>>    at client-protocol.c:3519
>> #3  0x00002aaaaaab2372 in notify (this=0x60def0, event=<value
>> optimized out>, data=0x630080)
>>    at client-protocol.c:4937
>> #4  0x00002b601f80f6d3 in sys_epoll_iteration (ctx=0x606010) at epoll.c:64
>> #5  0x00002b601f80ea79 in poll_iteration (ctx=0x2017eba000002b60) at
>> transport.c:312
>> #6  0x0000000000402948 in main (argc=530695664, argv=0x7fff8b4c9208)
>> at glusterfs.c:565
>> -------
>>
>> Hope this can be fixed soon. :)
>>
>> With best regards,
>>   Andrey
>>
>>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel at nongnu.org
>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Amar Tumballi
> Gluster/GlusterFS Hacker
> [bulde on #gluster/irc.gnu.org]
> http://www.zresearch.com - Commoditizing Super Storage!




-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!


