[Gluster-users] healing does not heal

Ravishankar N ravishankar at redhat.com
Thu Jan 9 05:27:07 UTC 2020


On 08/01/20 7:56 pm, lejeczek wrote:
> On 08/01/2020 11:28, Ravishankar N wrote:
>> On 08/01/20 3:55 pm, lejeczek wrote:
>>> On 08/01/2020 02:08, Ravishankar N wrote:
>>>> On 07/01/20 8:07 pm, lejeczek wrote:
>>>>> Which process should I be gdbing, selfheal's?
>>>>>
>>>> No, the brick process on one of the nodes where the file is missing.
>>>>
>>> Okay, would you mind showing the exact commands for the debug? I want
>>> to be sure you get exactly what you need.
>> Is it possible to give me temporary ssh access to your servers to
>> debug? If it is, then you can email me off-list with the login details
>> :-)
>>> many thanks, L.
>>>
> Oh no, I'm afraid not; there's no access from outside. It's a public
> institution here with very limited outside access, sorry.
>
> But if you can share all the commands you want me to run, I'll try to
> get you everything you need.
>
> many thanks, L.
>
Okay, here goes:
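
First find the PID of the brick process on the node where the file is
missing. A quick way (the volume name "myvol" below is just an example,
substitute your own) is the PID column of volume status, or a ps grep on
the glusterfsd processes:

# gluster volume status myvol
# ps -ef | grep glusterfsd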

# gdb -p $brick_pid
(gdb) break server4_mknod_cbk
(gdb) continue
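
With the breakpoint in place, trigger the heal again from any node so that
self-heal attempts the mknod of the missing file; something like (again,
"myvol" is only an example volume name):

# gluster volume heal myvol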

Once the breakpoint is hit, run:

(gdb) backtrace
#0  server4_mknod_cbk (frame=0x7f1ca40016a8, cookie=0x7f1ca4006268, 
this=0x7f1cb402d7f0, op_ret=0, op_errno=2, inode=0x7f1ca4006098, 
stbuf=0x7f1cc006e860, preparent=0x7f1cc006e7c0, 
postparent=0x7f1cc006e720, xdata=0x0)
     at server-rpc-fops_v2.c:538
#1  0x00007f1cc1aef27b in io_stats_mknod_cbk (frame=0x7f1ca4006268, 
cookie=0x7f1ca4006638, this=0x7f1cb402b170, op_ret=0, op_errno=2, 
inode=0x7f1ca4006098, buf=0x7f1cc006e860, preparent=0x7f1cc006e7c0, 
postparent=0x7f1cc006e720,
     xdata=0x0) at io-stats.c:2251
#2  0x00007f1cc1b89641 in marker_mknod_cbk (frame=0x7f1ca4006638, 
cookie=0x7f1ca4006748, this=0x7f1cb4022520, op_ret=0, op_errno=2, 
inode=0x7f1ca4006098, buf=0x7f1cc006e860, preparent=0x7f1cc006e7c0, 
postparent=0x7f1cc006e720,
     xdata=0x0) at marker.c:2061
#3  0x00007f1cd3d6f091 in default_mknod_cbk (frame=0x7f1ca4006748, 
cookie=0x7f1cac001b28, this=0x7f1cb401eba0, op_ret=0, op_errno=2, 
inode=0x7f1ca4006098, buf=0x7f1cc006e860, preparent=0x7f1cc006e7c0, 
postparent=0x7f1cc006e720,
     xdata=0x0) at defaults.c:1327
#4  0x00007f1cc1bd267c in up_mknod_cbk (frame=0x7f1cac001b28, 
cookie=0x7f1cac003f18, this=0x7f1cb401ceb0, op_ret=0, op_errno=2, 
inode=0x7f1ca4006098, buf=0x7f1cc006e860, preparent=0x7f1cc006e7c0, 
postparent=0x7f1cc006e720, xdata=0x0)
     at upcall.c:1021
#5  0x00007f1cc1c56285 in pl_mknod_cbk (frame=0x7f1cac003f18, 
cookie=0x7f1cac002108, this=0x7f1cb4015530, op_ret=0, op_errno=2, 
inode=0x7f1ca4006098, buf=0x7f1cc006e860, preparent=0x7f1cc006e7c0, 
postparent=0x7f1cc006e720, xdata=0x0)
     at posix.c:4345
#6  0x00007f1cc1c8b8fb in posix_acl_mknod_cbk (frame=0x7f1cac002108, 
cookie=0x7f1cac001678, this=0x7f1cb4013950, op_ret=0, op_errno=2, 
inode=0x7f1ca4006098, buf=0x7f1cc006e860, preparent=0x7f1cc006e7c0, 
postparent=0x7f1cc006e720,
     xdata=0x0) at posix-acl.c:1323
#7  0x00007f1cc1cb19ee in br_stub_mknod_cbk (frame=0x7f1cac001678, 
cookie=0x7f1cac001ff8, this=0x7f1cb4011990, op_ret=0, op_errno=2, 
inode=0x7f1ca4006098, stbuf=0x7f1cc006e860, preparent=0x7f1cc006e7c0, 
postparent=0x7f1cc006e720,
     xdata=0x0) at bit-rot-stub.c:2629
#8  0x00007f1cc1ccb05f in changelog_mknod_cbk (frame=0x7f1cac001ff8, 
cookie=0x7f1cac0023e8, this=0x7f1cb400f7d0, op_ret=0, op_errno=2, 
inode=0x7f1ca4006098, buf=0x7f1cc006e860, preparent=0x7f1cc006e7c0, 
postparent=0x7f1cc006e720,
     xdata=0x0) at changelog.c:818
#9  0x00007f1cc1d37a77 in posix_mknod (frame=0x7f1cac0023e8, 
this=0x7f1cb400a360, loc=0x7f1ca4002ab0, mode=33188, dev=0, umask=0, 
xdata=0x7f1ca40022e8) at posix-entry-ops.c:557
#10 0x00007f1cd3d85a5f in default_mknod (frame=0x7f1cac0023e8, 
this=0x7f1cb400d600, loc=0x7f1ca4002ab0, mode=33188, rdev=0, umask=0, 
xdata=0x7f1ca40022e8) at defaults.c:2710
#11 0x00007f1cc1ccbfdf in changelog_mknod (frame=0x7f1cac001ff8, 
this=0x7f1cb400f7d0, loc=0x7f1ca4002ab0, mode=33188, dev=0, umask=0, 
xdata=0x7f1ca40022e8) at changelog.c:939
#12 0x00007f1cc1cb1ef9 in br_stub_mknod (frame=0x7f1cac001678, 
this=0x7f1cb4011990, loc=0x7f1ca4002ab0, mode=33188, dev=0, umask=0, 
xdata=0x7f1ca40022e8) at bit-rot-stub.c:2642
#13 0x00007f1cc1c8bd28 in posix_acl_mknod (frame=0x7f1cac002108, 
this=0x7f1cb4013950, loc=0x7f1ca4002ab0, mode=33188, rdev=0, umask=0, 
xdata=0x7f1ca40022e8) at posix-acl.c:1342
#14 0x00007f1cc1c567c7 in pl_mknod (frame=0x7f1cac003f18, 
this=0x7f1cb4015530, loc=0x7f1ca4002ab0, mode=33188, rdev=0, umask=0, 
xdata=0x7f1ca40022e8) at posix.c:4355
#15 0x00007f1cd3d85a5f in default_mknod (frame=0x7f1cac003f18, 
this=0x7f1cb40172a0, loc=0x7f1ca4002ab0, mode=33188, rdev=0, umask=0, 
xdata=0x7f1ca40022e8) at defaults.c:2710
#16 0x00007f1cc1c0d857 in ro_mknod (frame=0x7f1cac003f18, 
this=0x7f1cb4019430, loc=0x7f1ca4002ab0, mode=33188, rdev=0, umask=0, 
xdata=0x7f1ca40022e8) at read-only-common.c:209
#17 0x00007f1cd3d85a5f in default_mknod (frame=0x7f1cac003f18, 
this=0x7f1cb401b170, loc=0x7f1ca4002ab0, mode=33188, rdev=0, umask=0, 
xdata=0x7f1ca40022e8) at defaults.c:2710
#18 0x00007f1cc1bd2ae7 in up_mknod (frame=0x7f1cac001b28, 
this=0x7f1cb401ceb0, loc=0x7f1ca4002ab0, mode=33188, rdev=0, umask=0, 
xdata=0x7f1ca40022e8) at upcall.c:1042
#19 0x00007f1cd3d7b1a8 in default_mknod_resume (frame=0x7f1ca4006748, 
this=0x7f1cb401eba0, loc=0x7f1ca4002ab0, mode=33188, rdev=0, umask=0, 
xdata=0x7f1ca40022e8) at defaults.c:1961
#20 0x00007f1cd3cc8046 in call_resume_wind (stub=0x7f1ca4002a68) at 
call-stub.c:2046
#21 0x00007f1cd3cda128 in call_resume (stub=0x7f1ca4002a68) at 
call-stub.c:2555
#22 0x00007f1cc1bb7372 in iot_worker (data=0x7f1cb405a7a0) at 
io-threads.c:232
#23 0x00007f1cd3a114e2 in start_thread () from /lib64/libpthread.so.0
#24 0x00007f1cd3660693 in clone () from /lib64/libc.so.6

Note that in the backtrace above, op_ret=0, i.e. the mknod was 
successful. In your case you should probably see op_ret=-1 and 
op_errno=13 (EACCES, permission denied) when the file default.sock 
tries to get created.
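
If you do land on a failing mknod, you can print the error and the file
being created from within gdb before continuing; a rough example (frame 9
is posix_mknod in the trace above, the frame number may differ for you):

(gdb) print op_ret
(gdb) print op_errno
(gdb) frame 9
(gdb) print loc->path
(gdb) continue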

Thanks,

Ravi