[Bugs] [Bug 1696136] New: gluster fuse mount crashed, when deleting 2T image file from oVirt Manager UI
bugzilla at redhat.com
Thu Apr 4 08:28:38 UTC 2019
https://bugzilla.redhat.com/show_bug.cgi?id=1696136
Bug ID: 1696136
Summary: gluster fuse mount crashed, when deleting 2T image
file from oVirt Manager UI
Product: GlusterFS
Version: mainline
Hardware: x86_64
OS: Linux
Status: NEW
Component: sharding
Keywords: Triaged
Severity: urgent
Priority: urgent
Assignee: bugs at gluster.org
Reporter: kdhananj at redhat.com
QA Contact: bugs at gluster.org
CC: amukherj at redhat.com, bkunal at redhat.com,
bugs at gluster.org, pasik at iki.fi, rhs-bugs at redhat.com,
sabose at redhat.com, sankarshan at redhat.com,
sasundar at redhat.com, storage-qa-internal at redhat.com,
ykaul at redhat.com
Depends On: 1694595
Blocks: 1694604
Target Milestone: ---
Classification: Community
+++ This bug was initially created as a clone of Bug #1694595 +++
Description of problem:
------------------------
When deleting a 2 TB image file, the gluster fuse mount process crashed.
Version-Release number of selected component (if applicable):
-------------------------------------------------------------
glusterfs-3.12.2-47
How reproducible:
-----------------
1/1
Steps to Reproduce:
-------------------
1. Create an image file of 2 TB from the oVirt Manager UI
2. Delete the same image file after it has been created successfully (see the
reproducer sketch below)
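The same delete path can likely be exercised without the UI as well; a minimal
sketch run directly against the fuse mount, assuming the data volume is mounted
at /mnt/data (mount point and file name are hypothetical, and note that oVirt
may preallocate the image rather than create it sparse):

# create a 2 TB file on the fuse mount, then delete it
truncate -s 2T /mnt/data/test-image.raw
rm -f /mnt/data/test-image.raw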
Actual results:
---------------
Fuse mount crashed
Expected results:
-----------------
Deletion should complete successfully and the fuse mount should not crash
--- Additional comment from SATHEESARAN on 2019-04-01 08:33:14 UTC ---
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 11
time of crash:
2019-04-01 07:57:53
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.12.2
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x9d)[0x7fc72c186b9d]
/lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7fc72c191114]
/lib64/libc.so.6(+0x36280)[0x7fc72a7c2280]
/usr/lib64/glusterfs/3.12.2/xlator/features/shard.so(+0x9627)[0x7fc71f8ba627]
/usr/lib64/glusterfs/3.12.2/xlator/features/shard.so(+0x9ef1)[0x7fc71f8baef1]
/usr/lib64/glusterfs/3.12.2/xlator/cluster/distribute.so(+0x3ae9c)[0x7fc71fb15e9c]
/usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0x9e8c)[0x7fc71fd88e8c]
/usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0xb79b)[0x7fc71fd8a79b]
/usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0xc226)[0x7fc71fd8b226]
/usr/lib64/glusterfs/3.12.2/xlator/protocol/client.so(+0x17cbc)[0x7fc72413fcbc]
/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7fc72bf2ca00]
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x26b)[0x7fc72bf2cd6b]
/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fc72bf28ae3]
/usr/lib64/glusterfs/3.12.2/rpc-transport/socket.so(+0x7586)[0x7fc727043586]
/usr/lib64/glusterfs/3.12.2/rpc-transport/socket.so(+0x9bca)[0x7fc727045bca]
/lib64/libglusterfs.so.0(+0x8a870)[0x7fc72c1e5870]
/lib64/libpthread.so.0(+0x7dd5)[0x7fc72afc2dd5]
/lib64/libc.so.6(clone+0x6d)[0x7fc72a889ead]
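The crashing frames sit inside shard.so at offsets +0x9627 and +0x9ef1.
Assuming the matching glusterfs-debuginfo package for 3.12.2 is installed,
those offsets can be mapped to function names and source lines with addr2line
(standard binutils):

# resolve the shard.so offsets from the backtrace to symbols
addr2line -f -e /usr/lib64/glusterfs/3.12.2/xlator/features/shard.so 0x9627 0x9ef1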
--- Additional comment from SATHEESARAN on 2019-04-01 08:37:56 UTC ---
1. RHHI-V Information
----------------------
RHV 4.3.3
RHGS 3.4.4
2. Cluster Information
-----------------------
[root@rhsqa-grafton11 ~]# gluster peer status
Number of Peers: 2
Hostname: rhsqa-grafton10.lab.eng.blr.redhat.com
Uuid: 46807597-245c-4596-9be3-f7f127aa4aa2
State: Peer in Cluster (Connected)
Other names:
10.70.45.32
Hostname: rhsqa-grafton12.lab.eng.blr.redhat.com
Uuid: 8a3bc1a5-07c1-4e1c-aa37-75ab15f29877
State: Peer in Cluster (Connected)
Other names:
10.70.45.34
3. Volume information
-----------------------
Affected volume: data
[root@rhsqa-grafton11 ~]# gluster volume info data
Volume Name: data
Type: Replicate
Volume ID: 9d5a9d10-f192-49ed-a6f0-c912224869e8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: rhsqa-grafton10.lab.eng.blr.redhat.com:/gluster_bricks/data/data
Brick2: rhsqa-grafton11.lab.eng.blr.redhat.com:/gluster_bricks/data/data
Brick3: rhsqa-grafton12.lab.eng.blr.redhat.com:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
cluster.granular-entry-heal: enable
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
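Note that features.shard is on for this volume, so deleting a 2 TB image means
cleaning up every shard of that file; at the default shard-block-size of 64 MB
that is 2 TB / 64 MB = 32768 shards. The value actually in effect can be
confirmed with:

gluster volume get data features.shard-block-size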
[root@rhsqa-grafton11 ~]# gluster volume status data
Status of volume: data
Gluster process                                                          TCP Port  RDMA Port  Online  Pid
----------------------------------------------------------------------------------------------------------
Brick rhsqa-grafton10.lab.eng.blr.redhat.com:/gluster_bricks/data/data   49154     0          Y       23403
Brick rhsqa-grafton11.lab.eng.blr.redhat.com:/gluster_bricks/data/data   49154     0          Y       23285
Brick rhsqa-grafton12.lab.eng.blr.redhat.com:/gluster_bricks/data/data   49154     0          Y       23296
Self-heal Daemon on localhost                                            N/A       N/A        Y       16195
Self-heal Daemon on rhsqa-grafton12.lab.eng.blr.redhat.com               N/A       N/A        Y       52917
Self-heal Daemon on rhsqa-grafton10.lab.eng.blr.redhat.com               N/A       N/A        Y       43829
Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
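If a core file was saved for the crashed fuse client, a full thread backtrace
would help pin down the faulting shard function; a sketch, assuming
glusterfs-debuginfo is installed (the core file path below is hypothetical):

# dump all thread backtraces from the core of the fuse client
gdb -batch -ex 'thread apply all bt full' /usr/sbin/glusterfs /var/core/core.glusterfs > fuse-crash-bt.txt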
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1694595
[Bug 1694595] gluster fuse mount crashed, when deleting 2T image file from RHV Manager UI
https://bugzilla.redhat.com/show_bug.cgi?id=1694604
[Bug 1694604] gluster fuse mount crashed, when deleting 2T image file from RHV Manager UI