[Bugs] [Bug 1233046] New: use after free bug in dht
bugzilla at redhat.com
Thu Jun 18 06:52:47 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1233046
Bug ID: 1233046
Summary: use after free bug in dht
Product: Red Hat Gluster Storage
Version: 3.1
Component: gluster-dht
Assignee: rhs-bugs at redhat.com
Reporter: pkarampu at redhat.com
QA Contact: storage-qa-internal at redhat.com
CC: bugs at gluster.org, gluster-bugs at redhat.com,
rgowdapp at redhat.com
Depends On: 1231425, 1233042
Group: redhat
+++ This bug was initially created as a clone of Bug #1233042 +++
+++ This bug was initially created as a clone of Bug #1231425 +++
Description of problem:
While running parallel directory creation and deletion, AddressSanitizer
reported the use-after-free in dht shown below. The for loop in
dht_unlock_inodelk should break immediately after it has wound as many calls
as call_cnt; otherwise the loop condition re-reads local state that the final
callback may already have freed.
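To illustrate the pattern, here is a minimal standalone sketch, not the
GlusterFS source: fake_local, wind_unlock, and unlock_done are hypothetical
stand-ins for dht_local_t, the wound unlock fop, and dht_inodelk_done. When
the last reply arrives synchronously (as ec_inodelk does in the trace below),
the callback frees the local structure while the winding loop is still using
it.

#include <stdio.h>
#include <stdlib.h>

struct fake_local {
        int call_cnt;           /* outstanding unlock replies */
        int lock_count;         /* total locks to release */
};

static void unlock_done(struct fake_local *local)
{
        if (--local->call_cnt == 0)
                free(local);    /* last reply: 'local' is now dangling */
}

static void wind_unlock(struct fake_local *local, int i)
{
        printf("unlock %d issued\n", i);
        unlock_done(local);     /* reply arrives synchronously */
}

int main(void)
{
        struct fake_local *local = calloc(1, sizeof(*local));
        int call_cnt;
        int i;

        if (!local)
                return 1;
        local->lock_count = 2;
        local->call_cnt = 2;

        /* Buggy shape: the loop condition re-reads local->lock_count
         * after the final wind, by which time unlock_done() has freed it:
         *
         *     for (i = 0; i < local->lock_count; i++)
         *             wind_unlock(local, i);
         *
         * Fixed shape: snapshot call_cnt on the stack and break right
         * after the last wind, so 'local' is never touched again. */
        call_cnt = local->call_cnt;
        for (i = 0; i < local->lock_count; i++) {
                wind_unlock(local, i);
                if (--call_cnt == 0)
                        break;
        }
        return 0;
}

The fix reviewed at http://review.gluster.org/11209 follows the same idea:
stop winding once call_cnt winds have been issued, before the loop can
dereference freed memory.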
==30031== WARNING: ASan doesn't fully support
makecontext/swapcontext functions and may produce false positives in some
cases!
=================================================================
==30031== ERROR: AddressSanitizer: heap-use-after-free on address
0x60520059819c at pc 0x7ffb608863d4 bp 0x7ffb619acfe0 sp 0x7ffb619acfd0
READ of size 4 at 0x60520059819c thread T5
#0 0x7ffb608863d3 in dht_unlock_inodelk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:1568
#1 0x7ffb608a1dfc in dht_selfheal_dir_finish
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-selfheal.c:114
#2 0x7ffb608a550f in dht_selfheal_dir_xattr_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-selfheal.c:627
#3 0x7ffb6c64feef in default_setxattr_cbk
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:1089
#4 0x7ffb60c1ede9 in ec_xattr_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:93
#5 0x7ffb60c1f339 in ec_manager_xattr
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:165
#6 0x7ffb60bdd5ab in __ec_manager
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:1879
#7 0x7ffb60bd379d in ec_resume
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:298
#8 0x7ffb60c357a2 in ec_combine
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-combine.c:933
#9 0x7ffb60c1e183 in ec_inode_write_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:60
#10 0x7ffb60c23e6d in ec_setxattr_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:661
#11 0x7ffb60f3fea3 in client3_3_setxattr_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/protocol/client/src/client-rpc-fops.c:1034
#12 0x7ffb6c3b1c97 in rpc_clnt_handle_reply
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:766
#13 0x7ffb6c3b23bc in rpc_clnt_notify
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:894
#14 0x7ffb6c3a9dad in rpc_transport_notify
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-transport.c:543
#15 0x7ffb61ef60f3 in socket_event_poll_in
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2290
#16 0x7ffb61ef6bef in socket_event_handler
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2403
#17 0x7ffb6c745e6f in event_dispatch_epoll_handler
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:572
#18 0x7ffb6c74663b in event_dispatch_epoll_worker
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:674
#19 0x7ffb6ca0cbb7 (/lib64/libasan.so.0+0x19bb7)
#20 0x3cf4207ee4 in start_thread (/lib64/libpthread.so.0+0x3cf4207ee4)
#21 0x3cf3ef4d1c in __clone (/lib64/libc.so.6+0x3cf3ef4d1c)
0x60520059819c is located 2076 bytes inside of 2100-byte region
[0x605200597980,0x6052005981b4)
freed by thread T5 here:
#0 0x7ffb6ca090f9 (/lib64/libasan.so.0+0x160f9)
#1 0x7ffb6c6c9d78 in __gf_free
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/mem-pool.c:335
#2 0x7ffb6c6cab03 in mem_put
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/mem-pool.c:587
#3 0x7ffb6087db2a in dht_local_wipe
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:446
#4 0x7ffb6087d0ed in dht_lock_stack_destroy
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:276
#5 0x7ffb608849a2 in dht_inodelk_done
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:1421
#6 0x7ffb60884f5a in dht_unlock_inodelk_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:1498
#7 0x7ffb6c652200 in default_inodelk_cbk
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:1176
#8 0x7ffb60bf299e in ec_manager_inodelk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-locks.c:649
#9 0x7ffb60bdd5ab in __ec_manager
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:1879
#10 0x7ffb60bdd7c4 in ec_manager
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:1896
#11 0x7ffb60bf3962 in ec_inodelk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-locks.c:773
#12 0x7ffb60bc7c5e in ec_gf_inodelk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec.c:785
#13 0x7ffb60886358 in dht_unlock_inodelk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:1573
#14 0x7ffb608a1dfc in dht_selfheal_dir_finish
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-selfheal.c:114
#15 0x7ffb608a550f in dht_selfheal_dir_xattr_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-selfheal.c:627
#16 0x7ffb6c64feef in default_setxattr_cbk
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:1089
#17 0x7ffb60c1ede9 in ec_xattr_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:93
#18 0x7ffb60c1f339 in ec_manager_xattr
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:165
#19 0x7ffb60bdd5ab in __ec_manager
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:1879
#20 0x7ffb60bd379d in ec_resume
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:298
#21 0x7ffb60c357a2 in ec_combine
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-combine.c:933
#22 0x7ffb60c1e183 in ec_inode_write_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:60
#23 0x7ffb60c23e6d in ec_setxattr_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:661
#24 0x7ffb60f3fea3 in client3_3_setxattr_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/protocol/client/src/client-rpc-fops.c:1034
#25 0x7ffb6c3b1c97 in rpc_clnt_handle_reply
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:766
#26 0x7ffb6c3b23bc in rpc_clnt_notify
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:894
#27 0x7ffb6c3a9dad in rpc_transport_notify
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-transport.c:543
#28 0x7ffb61ef60f3 in socket_event_poll_in
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2290
#29 0x7ffb61ef6bef in socket_event_handler
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2403
previously allocated by thread T5 here:
#0 0x7ffb6ca09315 (/lib64/libasan.so.0+0x16315)
#1 0x7ffb6c6c8a92 in __gf_calloc
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/mem-pool.c:116
#2 0x7ffb6c6ca5fa in mem_get
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/mem-pool.c:484
#3 0x7ffb6c6ca165 in mem_get0
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/mem-pool.c:418
#4 0x7ffb6087dbba in dht_local_init
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:457
#5 0x7ffb6087d49f in dht_local_lock_init
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:367
#6 0x7ffb60885463 in dht_unlock_inodelk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:1550
#7 0x7ffb608a1dfc in dht_selfheal_dir_finish
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-selfheal.c:114
#8 0x7ffb608a550f in dht_selfheal_dir_xattr_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-selfheal.c:627
#9 0x7ffb6c64feef in default_setxattr_cbk
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/defaults.c:1089
#10 0x7ffb60c1ede9 in ec_xattr_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:93
#11 0x7ffb60c1f339 in ec_manager_xattr
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:165
#12 0x7ffb60bdd5ab in __ec_manager
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:1879
#13 0x7ffb60bd379d in ec_resume
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-common.c:298
#14 0x7ffb60c357a2 in ec_combine
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-combine.c:933
#15 0x7ffb60c1e183 in ec_inode_write_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:60
#16 0x7ffb60c23e6d in ec_setxattr_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/ec/src/ec-inode-write.c:661
#17 0x7ffb60f3fea3 in client3_3_setxattr_cbk
/home/pk1/workspace/rhs-glusterfs/xlators/protocol/client/src/client-rpc-fops.c:1034
#18 0x7ffb6c3b1c97 in rpc_clnt_handle_reply
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:766
#19 0x7ffb6c3b23bc in rpc_clnt_notify
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-clnt.c:894
#20 0x7ffb6c3a9dad in rpc_transport_notify
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-lib/src/rpc-transport.c:543
#21 0x7ffb61ef60f3 in socket_event_poll_in
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2290
#22 0x7ffb61ef6bef in socket_event_handler
/home/pk1/workspace/rhs-glusterfs/rpc/rpc-transport/socket/src/socket.c:2403
#23 0x7ffb6c745e6f in event_dispatch_epoll_handler
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:572
#24 0x7ffb6c74663b in event_dispatch_epoll_worker
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:674
#25 0x7ffb6ca0cbb7 (/lib64/libasan.so.0+0x19bb7)
Thread T5 created by T0 here:
#0 0x7ffb6c9fdd2a (/lib64/libasan.so.0+0xad2a)
#1 0x7ffb6c7468db in event_dispatch_epoll
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/event-epoll.c:728
#2 0x7ffb6c6c711f in event_dispatch
/home/pk1/workspace/rhs-glusterfs/libglusterfs/src/event.c:127
#3 0x40ecf3 in main
/home/pk1/workspace/rhs-glusterfs/glusterfsd/src/glusterfsd.c:2333
#4 0x3cf3e21d64 in __libc_start_main (/lib64/libc.so.6+0x3cf3e21d64)
SUMMARY: AddressSanitizer: heap-use-after-free
/home/pk1/workspace/rhs-glusterfs/xlators/cluster/dht/src/dht-helper.c:1568
dht_unlock_inodelk
Shadow bytes around the buggy address:
0x0c0ac00aafe0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c0ac00aaff0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c0ac00ab000: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c0ac00ab010: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c0ac00ab020: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
=>0x0c0ac00ab030: fd fd fd[fd]fd fd fd fa fa fa fa fa fa fa fa fa
0x0c0ac00ab040: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c0ac00ab050: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c0ac00ab060: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c0ac00ab070: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c0ac00ab080: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Heap right redzone: fb
Freed Heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack partial redzone: f4
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
ASan internal: fe
==30031== ABORTING
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1.
2.
3.
Actual results:
Expected results:
Additional info:
--- Additional comment from Anand Avati on 2015-06-13 08:06:22 EDT ---
REVIEW: http://review.gluster.org/11209 (cluster/dht: Prevent use after free
bug) posted (#1) for review on master by Pranith Kumar Karampuri
(pkarampu at redhat.com)
--- Additional comment from Anand Avati on 2015-06-15 02:53:04 EDT ---
REVIEW: http://review.gluster.org/11209 (cluster/dht: Prevent use after free
bug) posted (#2) for review on master by Pranith Kumar Karampuri
(pkarampu at redhat.com)
--- Additional comment from Anand Avati on 2015-06-16 00:08:34 EDT ---
REVIEW: http://review.gluster.org/11209 (cluster/dht: Prevent use after free
bug) posted (#3) for review on master by Pranith Kumar Karampuri
(pkarampu at redhat.com)
--- Additional comment from Anand Avati on 2015-06-17 08:14:16 EDT ---
COMMIT: http://review.gluster.org/11209 committed in master by Raghavendra G
(rgowdapp at redhat.com)
------
commit 1cc500f48005d8682f39f7c6355170df569c7603
Author: Pranith Kumar K <pkarampu at redhat.com>
Date: Sat Jun 13 17:33:14 2015 +0530
cluster/dht: Prevent use after free bug
Change-Id: I2d1f5bb2dd27f6cea52c059b4ff08ca0fa63b140
BUG: 1231425
Signed-off-by: Pranith Kumar K <pkarampu at redhat.com>
Reviewed-on: http://review.gluster.org/11209
Reviewed-by: Raghavendra G <rgowdapp at redhat.com>
Tested-by: Raghavendra G <rgowdapp at redhat.com>
--- Additional comment from Anand Avati on 2015-06-18 02:50:16 EDT ---
REVIEW: http://review.gluster.org/11305 (cluster/dht: Prevent use after free
bug) posted (#1) for review on release-3.7 by Pranith Kumar Karampuri
(pkarampu at redhat.com)
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1231425
[Bug 1231425] use after free bug in dht
https://bugzilla.redhat.com/show_bug.cgi?id=1233042
[Bug 1233042] use after free bug in dht
--
You are receiving this mail because:
You are on the CC list for the bug.