From bugzilla at redhat.com Fri Mar 1 05:30:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 05:30:06 +0000 Subject: [Bugs] [Bug 1682925] Gluster volumes never heal during oVirt 4.2->4.3 upgrade In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1682925 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |urgent Version|4.3 |5 CC| |bugs at gluster.org Component|Installation & Update |replicate Assignee|sabose at redhat.com |bugs at gluster.org Product|ovirt-node |GlusterFS QA Contact|weiwang at redhat.com | Summary|Gluster volumes never heal |Gluster volumes never heal |during 4.2->4.3 upgrade |during oVirt 4.2->4.3 | |upgrade Target Milestone|ovirt-4.3.2 |--- Flags|testing_ack? | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 1 05:31:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 05:31:36 +0000 Subject: [Bugs] [Bug 1682925] Gluster volumes never heal during oVirt 4.2->4.3 upgrade In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1682925 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sabose at redhat.com Blocks| |1677319 | |(Gluster_5_Affecting_oVirt_ | |4.3) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1677319 [Bug 1677319] [Tracker] Gluster 5 issues affecting oVirt 4.3 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 1 06:30:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 06:30:24 +0000 Subject: [Bugs] [Bug 1676546] Getting client connection error in gluster logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676546 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22289 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 1 06:30:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 06:30:25 +0000 Subject: [Bugs] [Bug 1676546] Getting client connection error in gluster logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676546 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22289 (Afr: Avoid logging of \"attempting to connect\") posted (#1) for review on master by Rinku Kothiya -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
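For bug 1682925 above ("Gluster volumes never heal during oVirt 4.2->4.3 upgrade"), a first diagnostic step is usually to ask each replica what is still pending heal. The commands below are only a sketch; the volume name "engine" is a placeholder for whichever oVirt storage volume is affected.

# entries pending heal, listed per brick (volume name is a placeholder)
gluster volume heal engine info
# per-brick counts only, without listing every entry
gluster volume heal engine statistics heal-count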
From bugzilla at redhat.com Fri Mar 1 06:48:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 06:48:21 +0000 Subject: [Bugs] [Bug 1676546] Getting client connection error in gluster logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676546 Rinku changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |rkothiya at redhat.com Assignee|bugs at gluster.org |rkothiya at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 1 07:04:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 07:04:59 +0000 Subject: [Bugs] [Bug 1684385] New: [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 Bug ID: 1684385 Summary: [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing Product: GlusterFS Version: 5 Status: NEW Component: core Keywords: Triaged Assignee: bugs at gluster.org Reporter: kdhananj at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: When gluster bits were upgraded in a hyperconverged ovirt-gluster setup, one node at a time in online mode from 3.12.5 to 5.3, the following log messages were seen - [2019-02-26 16:24:25.126963] E [shard.c:556:shard_modify_size_and_block_count] (-->/usr/lib64/glusterfs/5.3/xlator/cluster/distribute.so(+0x82a45) [0x7ff71d05ea45] -->/usr/lib64/glusterfs/5.3/xlator/features/shard.so(+0x5c77) [0x7ff71cdb4c77] -->/usr/lib64/glusterfs/5.3/xlator/features/shard.so(+0x592e) [0x7ff71cdb492e] ) 0-engine-shard: Failed to get trusted.glusterfs.shard.file-size for 3ad3f0c6-a4e6-4b17-bd29-97c32ecc54d7 Version-Release number of selected component (if applicable): How reproducible: 1/1 Steps to Reproduce: 1. 2. 3. Actual results: Expected results: shard.file.size xattr should always be accessible. Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
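To gauge how many files are affected by the missing trusted.glusterfs.shard.file-size xattr reported in bug 1684385, one can scan a brick directly for base files that lack it. This is only a sketch: the brick path is an assumption taken from this report, and files created before sharding was enabled on a volume may legitimately not carry the xattr.

BRICK=/gluster_bricks/engine/engine   # assumed brick path
find "$BRICK" -type f ! -path "*/.glusterfs/*" ! -path "*/.shard/*" | while read -r f; do
    # report base files that do not expose the shard file-size xattr
    getfattr -n trusted.glusterfs.shard.file-size -e hex --absolute-names "$f" 2>/dev/null \
        | grep -q shard.file-size || echo "missing shard.file-size: $f"
done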
From bugzilla at redhat.com Fri Mar 1 07:13:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 07:13:48 +0000 Subject: [Bugs] [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 --- Comment #1 from Krutika Dhananjay --- [root at tendrl25 glusterfs]# gluster v info engine Volume Name: engine Type: Replicate Volume ID: bb26f648-2842-4182-940e-6c8ede02195f Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: tendrl27.lab.eng.blr.redhat.com:/gluster_bricks/engine/engine Brick2: tendrl26.lab.eng.blr.redhat.com:/gluster_bricks/engine/engine Brick3: tendrl25.lab.eng.blr.redhat.com:/gluster_bricks/engine/engine Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.low-prio-threads: 32 network.remote-dio: off cluster.eager-lock: enable cluster.quorum-type: auto cluster.server-quorum-type: server cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-max-threads: 8 cluster.shd-wait-qlength: 10000 features.shard: on user.cifs: off storage.owner-uid: 36 storage.owner-gid: 36 network.ping-timeout: 30 performance.strict-o-direct: on cluster.granular-entry-heal: enable -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 1 07:23:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 07:23:02 +0000 Subject: [Bugs] [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 --- Comment #2 from Krutika Dhananjay --- On further investigation, it was found that the shard xattrs were genuinely missing on all 3 replicas - [root at tendrl27 ~]# getfattr -d -m . -e hex /gluster_bricks/engine/engine/36ea5b11-19fb-4755-b664-088f6e5c4df2/dom_md/ids getfattr: Removing leading '/' from absolute path names # file: gluster_bricks/engine/engine/36ea5b11-19fb-4755-b664-088f6e5c4df2/dom_md/ids security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.afr.dirty=0x000000000000000000000000 trusted.afr.engine-client-1=0x000000000000000000000000 trusted.afr.engine-client-2=0x000000000000000000000000 trusted.gfid=0x3ad3f0c6a4e64b17bd2997c32ecc54d7 trusted.gfid2path.5f2a4f417210b896=0x64373265323737612d353761642d343136322d613065332d6339346463316231366230322f696473 [root at localhost ~]# getfattr -d -m . -e hex /gluster_bricks/engine/engine/36ea5b11-19fb-4755-b664-088f6e5c4df2/dom_md/ids getfattr: Removing leading '/' from absolute path names # file: gluster_bricks/engine/engine/36ea5b11-19fb-4755-b664-088f6e5c4df2/dom_md/ids security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.afr.dirty=0x000000000000000000000000 trusted.afr.engine-client-0=0x0000000e0000000000000000 trusted.afr.engine-client-2=0x000000000000000000000000 trusted.gfid=0x3ad3f0c6a4e64b17bd2997c32ecc54d7 trusted.gfid2path.5f2a4f417210b896=0x64373265323737612d353761642d343136322d613065332d6339346463316231366230322f696473 [root at tendrl25 ~]# getfattr -d -m . 
-e hex /gluster_bricks/engine/engine/36ea5b11-19fb-4755-b664-088f6e5c4df2/dom_md/ids getfattr: Removing leading '/' from absolute path names # file: gluster_bricks/engine/engine/36ea5b11-19fb-4755-b664-088f6e5c4df2/dom_md/ids security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.afr.dirty=0x000000000000000000000000 trusted.afr.engine-client-0=0x000000100000000000000000 trusted.afr.engine-client-1=0x000000000000000000000000 trusted.gfid=0x3ad3f0c6a4e64b17bd2997c32ecc54d7 trusted.gfid2path.5f2a4f417210b896=0x64373265323737612d353761642d343136322d613065332d6339346463316231366230322f696473 Also from the logs, it appears the file underwent metadata self-heal moments before these errors started to appear- [2019-02-26 13:35:37.253896] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 3ad3f0c6-a4e6-4b17-bd29-97c32ecc54d7 [2019-02-26 13:35:37.254734] W [MSGID: 101016] [glusterfs3.h:752:dict_to_xdr] 0-dict: key 'trusted.glusterfs.shard.file-size' is not sent on wire [Invalid argument] [2019-02-26 13:35:37.254749] W [MSGID: 101016] [glusterfs3.h:752:dict_to_xdr] 0-dict: key 'trusted.glusterfs.shard.block-size' is not sent on wire [Invalid argument] [2019-02-26 13:35:37.255777] I [MSGID: 108026] [afr-self-heal-common.c:1729:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 3ad3f0c6-a4e6-4b17-bd29-97c32ecc54d7. sources=[0] sinks=2 [2019-02-26 13:35:37.258032] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 3ad3f0c6-a4e6-4b17-bd29-97c32ecc54d7 [2019-02-26 13:35:37.258792] W [MSGID: 101016] [glusterfs3.h:752:dict_to_xdr] 0-dict: key 'trusted.glusterfs.shard.file-size' is not sent on wire [Invalid argument] [2019-02-26 13:35:37.258807] W [MSGID: 101016] [glusterfs3.h:752:dict_to_xdr] 0-dict: key 'trusted.glusterfs.shard.block-size' is not sent on wire [Invalid argument] [2019-02-26 13:35:37.259633] I [MSGID: 108026] [afr-self-heal-common.c:1729:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 3ad3f0c6-a4e6-4b17-bd29-97c32ecc54d7. sources=[0] sinks=2 Metadata heal as we know does three things - 1. bulk getxattr from source brick; 2. removexattr on sink bricks 3. bulk setxattr on the sink bricks But what's clear from these logs is the dict_to_xdr() messages at the time of metadata heal, indicating that the shard xattrs were possibly not "sent on wire" as part of step 3. Turns out due to the newly introduced dict_to_xdr() code in 5.3 which is absent in 3.12.5. 
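Given the heal flow described above (bulk getxattr from the source, removexattr plus bulk setxattr on the sinks), a quick way to confirm that a key was silently dropped is to compare the xattr key lists of the same file on the source and sink bricks. This is a sketch only; the file path comes from this report and the comparison step is left manual.

F=36ea5b11-19fb-4755-b664-088f6e5c4df2/dom_md/ids    # file from this report
# run on each node, then diff the resulting key lists
getfattr -d -m . -e hex --absolute-names /gluster_bricks/engine/engine/$F \
    | grep '=' | cut -d= -f1 | sort > /tmp/xattr-keys.$(hostname -s)
# e.g. afterwards: diff /tmp/xattr-keys.tendrl27 /tmp/xattr-keys.tendrl25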
The bricks were upgraded to 5.3 in the order tendrl25 followed by tendrl26 with tendrl27 still at 3.12.5 when this issue was hit - Tendrl25: [2019-02-26 12:47:53.595647] I [MSGID: 100030] [glusterfsd.c:2715:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 5.3 (args: /usr/sbin/glusterfsd -s tendrl25.lab.eng.blr.redhat.com --volfile-id engine.tendrl25.lab.eng.blr.redhat.com.gluster_bricks-engine-engine -p /var/run/gluster/vols/engine/tendrl25.lab.eng.blr.redhat.com-gluster_bricks-engine-engine.pid -S /var/run/gluster/aae83600c9a783dd.socket --brick-name /gluster_bricks/engine/engine -l /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log --xlator-option *-posix.glusterd-uuid=9373b871-cfce-41ba-a815-0b330f6975c8 --process-name brick --brick-port 49153 --xlator-option engine-server.listen-port=49153) Tendrl26: [2019-02-26 13:35:05.718052] I [MSGID: 100030] [glusterfsd.c:2715:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 5.3 (args: /usr/sbin/glusterfsd -s tendrl26.lab.eng.blr.redhat.com --volfile-id engine.tendrl26.lab.eng.blr.redhat.com.gluster_bricks-engine-engine -p /var/run/gluster/vols/engine/tendrl26.lab.eng.blr.redhat.com-gluster_bricks-engine-engine.pid -S /var/run/gluster/8010384b5524b493.socket --brick-name /gluster_bricks/engine/engine -l /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log --xlator-option *-posix.glusterd-uuid=18fa886f-8d1a-427c-a5e6-9a4e9502ef7c --process-name brick --brick-port 49153 --xlator-option engine-server.listen-port=49153) Tendrl27: [root at tendrl27 bricks]# rpm -qa | grep gluster glusterfs-fuse-3.12.15-1.el7.x86_64 glusterfs-libs-3.12.15-1.el7.x86_64 glusterfs-3.12.15-1.el7.x86_64 glusterfs-server-3.12.15-1.el7.x86_64 glusterfs-client-xlators-3.12.15-1.el7.x86_64 glusterfs-api-3.12.15-1.el7.x86_64 glusterfs-events-3.12.15-1.el7.x86_64 libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.4.x86_64 glusterfs-gnfs-3.12.15-1.el7.x86_64 glusterfs-geo-replication-3.12.15-1.el7.x86_64 glusterfs-cli-3.12.15-1.el7.x86_64 vdsm-gluster-4.20.46-1.el7.x86_64 python2-gluster-3.12.15-1.el7.x86_64 glusterfs-rdma-3.12.15-1.el7.x86_64 And as per the metadata heal logs, the source was brick0 (corresponding to tendrl27) and sink was brick 2 (corresponding to tendrl 25). This means step 1 of metadata heal did a getxattr on tendrl27 which was still at 3.12.5 and got the dicts with a certain format which didn't have the "value" type (because it's only introduced in 5.3). And this same dict was used for setxattr in step 3 which silently fails to add "trusted.glusterfs.shard.block-size" and "trusted.glusterfs.shard.file-size" xattrs to the setxattr request because of the dict_to_xdr() conversion failure in protocol/client but succeeds the overall operation. So afr thought the heal succeeded although the xattr that needed heal was never sent over the wire. This led to one or more files ending up with shard xattrs removed on-disk failing every other operation on it pretty much. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
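Since the trigger here is a heal running while the cluster is still mixed-version, it can help to take a quick version inventory before letting heals proceed during a rolling upgrade. A minimal sketch, to be run on every node and compared manually:

gluster --version | head -n1
rpm -q glusterfs-server
# cluster-wide operating version vs. the highest version this node supports
gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version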
From bugzilla at redhat.com Fri Mar 1 07:25:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 07:25:25 +0000 Subject: [Bugs] [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Blocks| |1677319 | |(Gluster_5_Affecting_oVirt_ | |4.3) Severity|unspecified |high Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1677319 [Bug 1677319] [Tracker] Gluster 5 issues affecting oVirt 4.3 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 1 07:29:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 07:29:29 +0000 Subject: [Bugs] [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 --- Comment #3 from Krutika Dhananjay --- So the backward compatibility was broken with the introduction of the following patch - Patch that broke this compatibility - https://review.gluster.org/c/glusterfs/+/19098 commit 303cc2b54797bc5371be742543ccb289010c92f2 Author: Amar Tumballi Date: Fri Dec 22 13:12:42 2017 +0530 protocol: make on-wire-change of protocol using new XDR definition. With this patchset, some major things are changed in XDR, mainly: * Naming: Instead of gfs3/gfs4 settle for gfx_ for xdr structures * add iattx as a separate structure, and add conversion methods * the *_rsp structure is now changed, and is also reduced in number (ie, no need for different strucutes if it is similar to other response). * use proper XDR methods for sending dict on wire. Also, with the change of xdr structure, there are changes needed outside of xlator protocol layer to handle these properly. Mainly because the abstraction was broken to support 0-copy RDMA with payload for write and read FOP. This made transport layer know about the xdr payload, hence with the change of xdr payload structure, transport layer needed to know about the change. Updates #384 Change-Id: I1448fbe9deab0a1b06cb8351f2f37488cefe461f Signed-off-by: Amar Tumballi Any operation in a heterogeneous cluster which reads xattrs on-disk and subsequently writes it (like metadata heal for instance) will cause one or more on-disk xattrs to disappear. In fact logs suggest even dht on-disk layouts vanished - [2019-02-26 13:35:30.253348] I [MSGID: 109092] [dht-layout.c:744:dht_layout_dir_mismatch] 0-engine-dht: /36ea5b11-19fb-4755-b664-088f6e5c4df2: Disk layout missing, gfid = d0735acd-14ec-4ef9-8f5f-6a3c4ae12c08 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
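The scope of the damage can be estimated from the logs themselves: both the "not sent on wire" warnings and the missing-layout message quoted above carry fixed MSGIDs. A rough per-file count, assuming the default /var/log/glusterfs locations:

# dict keys dropped on the wire (MSGID 101016)
grep -c "MSGID: 101016" /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log 2>/dev/null
# missing DHT on-disk layouts reported by clients (MSGID 109092)
grep -c "MSGID: 109092" /var/log/glusterfs/*.log 2>/dev/null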
From bugzilla at redhat.com Fri Mar 1 08:00:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 08:00:34 +0000 Subject: [Bugs] [Bug 1684404] New: Multiple shd processes are running on brick_mux environmet Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 Bug ID: 1684404 Summary: Multiple shd processes are running on brick_mux environmet Product: GlusterFS Version: mainline Hardware: x86_64 Status: NEW Component: glusterd Severity: high Priority: high Assignee: bugs at gluster.org Reporter: mchangir at redhat.com CC: bugs at gluster.org, moagrawa at redhat.com, pasik at iki.fi Depends On: 1683880 Blocks: 1672818 (glusterfs-6.0) Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1683880 +++ Description of problem: Multiple shd processes are running while created 100 volumes in brick_mux environment Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1. Create a 1x3 volume 2. Enable brick_mux 3.Run below command n1= n2= n3= for i in {1..10};do for h in {1..20};do gluster v create vol-$i-$h rep 3 $n1:/home/dist/brick$h/vol-$i-$h $n2:/home/dist/brick$h/vol-$i-$h $n3:/home/dist/brick$h/vol-$i-$h force gluster v start vol-$i-$h sleep 1 done done for k in $(gluster v list|grep -v heketi);do gluster v stop $k --mode=script;sleep 2;gluster v delete $k --mode=script;sleep 2;done Actual results: Multiple shd processes are running and consuming system resources Expected results: Only one shd process should be run Additional info: Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker https://bugzilla.redhat.com/show_bug.cgi?id=1683880 [Bug 1683880] Multiple shd processes are running on brick_mux environmet -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 1 08:00:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 08:00:34 +0000 Subject: [Bugs] [Bug 1683880] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683880 Milind Changire changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1684404 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 [Bug 1684404] Multiple shd processes are running on brick_mux environmet -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 1 08:00:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 08:00:34 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Milind Changire changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1684404 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 [Bug 1684404] Multiple shd processes are running on brick_mux environmet -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
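For bug 1684404 above, the symptom can be confirmed on each node by counting self-heal daemon processes: even with brick multiplexing and many volumes there should be exactly one glustershd process per node. A minimal check, assuming the daemon is identifiable by "glustershd" in its command line:

pgrep -af glustershd    # list candidate self-heal daemon processes with their command lines
pgrep -cf glustershd    # count them; the expected value is 1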
From bugzilla at redhat.com Fri Mar 1 08:01:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 08:01:07 +0000 Subject: [Bugs] [Bug 1684404] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 1 08:22:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 08:22:31 +0000 Subject: [Bugs] [Bug 1684404] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22290 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 1 08:22:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 08:22:32 +0000 Subject: [Bugs] [Bug 1684404] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22290 (glusterd: Multiple shd processes are spawned on brick_mux environment) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 1 08:23:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 08:23:03 +0000 Subject: [Bugs] [Bug 1683880] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683880 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Mohit Agrawal --- Upstream patch is posted to resolve the same https://review.gluster.org/#/c/glusterfs/+/22290/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Mar 1 11:34:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 11:34:42 +0000 Subject: [Bugs] [Bug 1684496] New: compiler errors building qemu against glusterfs-6.0-0.1.rc0.fc30 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684496 Bug ID: 1684496 Summary: compiler errors building qemu against glusterfs-6.0-0.1.rc0.fc30 Product: GlusterFS Version: 6 Status: NEW Component: libgfapi Assignee: bugs at gluster.org Reporter: kkeithle at redhat.com QA Contact: bugs at gluster.org CC: anoopcs at redhat.com, bugs at gluster.org, crobinso at redhat.com, extras-qa at fedoraproject.org, humble.devassy at gmail.com, jonathansteffan at gmail.com, kkeithle at redhat.com, matthias at saou.eu, ndevos at redhat.com, ramkrsna at gmail.com Depends On: 1684298 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1684298 +++ qemu fails to rebuild in rawhide, seems like glusterfs related issues, using 6.0-0.1.rc0.fc31 https://koji.fedoraproject.org/koji/taskinfo?taskID=33109555 https://kojipkgs.fedoraproject.org//work/tasks/9555/33109555/build.log A few example errors below. Any idea what's going on here? BUILDSTDERR: /builddir/build/BUILD/qemu-3.1.0/block/gluster.c: In function 'qemu_gluster_co_pwrite_zeroes': BUILDSTDERR: /builddir/build/BUILD/qemu-3.1.0/block/gluster.c:994:52: warning: passing argument 4 of 'glfs_zerofill_async' from incompatible pointer type [-Wincompatible-pointer-types] BUILDSTDERR: 994 | ret = glfs_zerofill_async(s->fd, offset, size, gluster_finish_aiocb, &acb); BUILDSTDERR: | ^~~~~~~~~~~~~~~~~~~~ BUILDSTDERR: | | BUILDSTDERR: | void (*)(struct glfs_fd *, ssize_t, void *) {aka void (*)(struct glfs_fd *, long int, void *)} BUILDSTDERR: In file included from /builddir/build/BUILD/qemu-3.1.0/block/gluster.c:12: BUILDSTDERR: /usr/include/glusterfs/api/glfs.h:993:73: note: expected 'glfs_io_cbk' {aka 'void (*)(struct glfs_fd *, long int, struct glfs_stat *, struct glfs_stat *, void *)'} but argument is of type 'void (*)(struct glfs_fd *, ssize_t, void *)' {aka 'void (*)(struct glfs_fd *, long int, void *)'} BUILDSTDERR: 993 | glfs_zerofill_async(glfs_fd_t *fd, off_t length, off_t len, glfs_io_cbk fn, BUILDSTDERR: | ~~~~~~~~~~~~^~ BUILDSTDERR: /builddir/build/BUILD/qemu-3.1.0/block/gluster.c: In function 'qemu_gluster_do_truncate': BUILDSTDERR: /builddir/build/BUILD/qemu-3.1.0/block/gluster.c:1035:13: error: too few arguments to function 'glfs_ftruncate' BUILDSTDERR: 1035 | if (glfs_ftruncate(fd, offset)) { Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1684298 [Bug 1684298] compiler errors building qemu against glusterfs-6.0-0.1.rc0.fc30 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 1 11:44:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 11:44:40 +0000 Subject: [Bugs] [Bug 1684496] compiler errors building qemu against glusterfs-6.0-0.1.rc0.fc30 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684496 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NOTABUG Last Closed| |2019-03-01 11:44:40 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
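The qemu build failures in bug 1684496 come from block/gluster.c being compiled against the glusterfs-6 headers, where the glfs_io_cbk callback and glfs_ftruncate() prototypes changed. Before patching qemu it can help to confirm what the installed header actually declares; a rough check, with the header path taken from the error output and the pkg-config module name assumed to be glusterfs-api:

pkg-config --modversion glusterfs-api
grep -n -A3 "glfs_ftruncate" /usr/include/glusterfs/api/glfs.h
grep -n -A4 "glfs_io_cbk" /usr/include/glusterfs/api/glfs.h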
From bugzilla at redhat.com Fri Mar 1 11:48:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 11:48:22 +0000 Subject: [Bugs] [Bug 1684496] compiler errors building qemu against glusterfs-6.0-0.1.rc0.fc30 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684496 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1684500 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1684500 [Bug 1684500] compiler errors building qemu against glusterfs-6.0-0.1.rc0.fc30 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 1 14:51:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 14:51:45 +0000 Subject: [Bugs] [Bug 1684569] New: Upgrade from 4.1 and 5 is broken Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684569 Bug ID: 1684569 Summary: Upgrade from 4.1 and 5 is broken Product: GlusterFS Version: 5 OS: Linux Status: NEW Component: core Assignee: bugs at gluster.org Reporter: kompastver at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Created attachment 1539843 --> https://bugzilla.redhat.com/attachment.cgi?id=1539843&action=edit logs on srv1 after upgrading and starting glusterfs Description of problem: Online upgrade for replicated volumes cause upgraded node to be in "Peer Rejected" state. Version-Release number of selected component (if applicable): >From 4.1.7 to 5.4 How reproducible: Always Steps to Reproduce: 1. setup replicated volume on v4.1 2. kill all gluster* processes on the first node 3. upgrade rpms up to v5.4 4. start gluster* processes on the first node Actual results: Peer is being rejected. And cluster operates without upgraded node. `gluster peer status` shows: State: Peer Rejected (Connected) Expected results: Online upgrade should works without downtime Additional info: Please see logs file in the attachment. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 1 14:52:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 14:52:34 +0000 Subject: [Bugs] [Bug 1684569] Upgrade from 4.1 and 5 is broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684569 --- Comment #1 from Znamensky Pavel --- Created attachment 1539845 --> https://bugzilla.redhat.com/attachment.cgi?id=1539845&action=edit logs on srv2 after upgrading srv1 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 1 18:20:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 18:20:03 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22291 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
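For "Peer Rejected" states like the one in bug 1684569, glusterd's own log usually names the reason; a frequent one after upgrades is a volume-info checksum mismatch between peers. The commands below are a diagnostic sketch only, not a fix, and assume the default log and state directories:

gluster peer status
grep -i cksum /var/log/glusterfs/glusterd.log | tail -n 20
# compare the per-volume metadata checksums across nodes
cat /var/lib/glusterd/vols/*/cksum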
From bugzilla at redhat.com Fri Mar 1 18:20:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 01 Mar 2019 18:20:04 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #572 from Worker Ant --- REVIEW: https://review.gluster.org/22291 (fd.c : remove redundant checks, CALLOC -> MALLOC) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Mar 2 11:54:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 02 Mar 2019 11:54:52 +0000 Subject: [Bugs] [Bug 1683900] Failed to dispatch handler In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683900 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-02 11:54:52 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22282 (socket: socket event handlers now return void) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Mar 2 11:54:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 02 Mar 2019 11:54:53 +0000 Subject: [Bugs] [Bug 1667103] GlusterFS 5.4 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667103 Bug 1667103 depends on bug 1683900, which changed state. Bug 1683900 Summary: Failed to dispatch handler https://bugzilla.redhat.com/show_bug.cgi?id=1683900 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Mar 2 11:55:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 02 Mar 2019 11:55:14 +0000 Subject: [Bugs] [Bug 1683716] glusterfind: revert shebangs to #!/usr/bin/python3 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683716 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-02 11:55:14 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22275 (glusterfind: revert shebangs to #!/usr/bin/python3) merged (#4) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sat Mar 2 11:58:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 02 Mar 2019 11:58:05 +0000 Subject: [Bugs] [Bug 1684777] New: gNFS crashed when processing "gluster v profile [vol] info nfs" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684777 Bug ID: 1684777 Summary: gNFS crashed when processing "gluster v profile [vol] info nfs" Product: GlusterFS Version: 6 Status: NEW Component: nfs Keywords: Triaged Assignee: bugs at gluster.org Reporter: srangana at redhat.com CC: bugs at gluster.org, jefferymymy at 163.com Depends On: 1677559 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1677559 +++ Description of problem: when processing "gluster v profile [vol] info nfs" after gnfs start, gnfs will crash. dump trace info: /lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xc2)[0x7fcf5cb6a872] /lib64/libglusterfs.so.0(gf_print_trace+0x324)[0x7fcf5cb743a4] /lib64/libc.so.6(+0x35670)[0x7fcf5b1d5670] /usr/sbin/glusterfs(glusterfs_handle_nfs_profile+0x114)[0x7fcf5d066474] /lib64/libglusterfs.so.0(synctask_wrap+0x12)[0x7fcf5cba1502] /lib64/libc.so.6(+0x47110)[0x7fcf5b1e7110] Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1.create dht volume naming dht_vol,and start the vol 2.start volume profile 3.kill gnfs process 4.process cli "service glusterd restart;gluster volume profile dht_vol info nfs" Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1677559 [Bug 1677559] gNFS crashed when processing "gluster v profile [vol] info nfs" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Mar 2 11:58:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 02 Mar 2019 11:58:05 +0000 Subject: [Bugs] [Bug 1677559] gNFS crashed when processing "gluster v profile [vol] info nfs" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1677559 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1684777 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1684777 [Bug 1684777] gNFS crashed when processing "gluster v profile [vol] info nfs" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Mar 2 11:59:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 02 Mar 2019 11:59:30 +0000 Subject: [Bugs] [Bug 1677559] gNFS crashed when processing "gluster v profile [vol] info nfs" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1677559 --- Comment #4 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22235 (glusterfsd: Do not process PROFILE_NFS_INFO if graph is not ready) posted (#2) for review on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
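Until the fix referenced above lands, the gNFS crash in bug 1684777 can be sidestepped operationally by checking that the NFS server for the volume is actually online before asking for its profile data. A small guard, with the volume name from the reproducer used as a placeholder and a rough grep on the Online column:

VOL=dht_vol   # volume name from the reproducer; substitute as needed
if gluster volume status "$VOL" nfs | grep -q "NFS Server.*Y"; then
    gluster volume profile "$VOL" info nfs
else
    echo "gNFS for $VOL is not running; skipping profile query"
fi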
From bugzilla at redhat.com Sat Mar 2 11:59:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 02 Mar 2019 11:59:31 +0000 Subject: [Bugs] [Bug 1677559] gNFS crashed when processing "gluster v profile [vol] info nfs" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1677559 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22235 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Mar 2 11:59:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 02 Mar 2019 11:59:32 +0000 Subject: [Bugs] [Bug 1684777] gNFS crashed when processing "gluster v profile [vol] info nfs" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684777 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22235 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Mar 2 11:59:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 02 Mar 2019 11:59:33 +0000 Subject: [Bugs] [Bug 1684777] gNFS crashed when processing "gluster v profile [vol] info nfs" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684777 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22235 (glusterfsd: Do not process PROFILE_NFS_INFO if graph is not ready) posted (#2) for review on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 3 19:13:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 03 Mar 2019 19:13:40 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22292 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 3 19:13:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 03 Mar 2019 19:13:40 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #573 from Worker Ant --- REVIEW: https://review.gluster.org/22292 (inode.c/h: CALLOC->MALLOC) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sun Mar 3 19:35:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 03 Mar 2019 19:35:04 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22293 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 3 19:35:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 03 Mar 2019 19:35:05 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #574 from Worker Ant --- REVIEW: https://review.gluster.org/22293 (stack.h: alignment of structures and met_get0 -> mem_get) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 02:29:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 02:29:46 +0000 Subject: [Bugs] [Bug 1683816] Memory leak when peer detach fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683816 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-04 02:29:46 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22280 (mgmt/glusterd: Fix a memory leak when peer detach fails) merged (#1) on master by Vijay Bellur -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 02:30:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 02:30:27 +0000 Subject: [Bugs] [Bug 1684569] Upgrade from 4.1 and 5 is broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684569 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Component|core |glusterd -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 02:30:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 02:30:47 +0000 Subject: [Bugs] [Bug 1684569] Upgrade from 4.1 and 5 is broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684569 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |srakonde at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Mar 4 03:13:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 03:13:48 +0000 Subject: [Bugs] [Bug 1684969] New: New GFID file recreated in a replica set after a GFID mismatch resolution Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684969 Bug ID: 1684969 Summary: New GFID file recreated in a replica set after a GFID mismatch resolution Product: GlusterFS Version: 6 Status: NEW Component: distribute Severity: high Priority: high Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org Depends On: 1661258, 1670259 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1670259 +++ If the gfid is not set for a directory on one brick, a lookup on that directory will cause a different GFID to be set on it, leading to a GFID mismatch. --- Additional comment from Nithya Balachandran on 2019-01-29 04:56:49 UTC --- Steps to reproduce the problem: 1. Create a 3 brick distribute volume 2. Fuse mount the volume and create some directories. cd /mnt/fuse1 mkdir gfid-mismatch mkdir gfid-mismatch/dir-1 3. delete the gfid and .glusterfs handle from the hashed brick [root at rhgs313-6 brick1]# setfattr -x trusted.gfid vol1-1/gfid-mismatch/dir-1 [root at rhgs313-6 brick1]# unlink vol1-1/.glusterfs/8e/6c/8e6c686c-93e9-4bd7-ac5e-98bbf852a62b [root at rhgs313-6 brick1]# [root at rhgs313-6 brick1]# [root at rhgs313-6 brick1]# [root at rhgs313-6 brick1]# getx vol1-*/gfid-mismatch/dir-1 # file: vol1-1/gfid-mismatch/dir-1 security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000 trusted.glusterfs.dht=0x00000000000000000000000055555554 trusted.glusterfs.dht.mds=0x00000000 trusted.glusterfs.mdata=0x010000000000000000000000005c4fcd7500000000017863f3000000005c4fcd7500000000017863f3000000005c4fcd7500000000017863f3 # file: vol1-2/gfid-mismatch/dir-1 security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000 trusted.gfid=0x8e6c686c93e94bd7ac5e98bbf852a62b trusted.glusterfs.dht=0x000000000000000055555555aaaaaaa9 trusted.glusterfs.mdata=0x010000000000000000000000005c4fcd7500000000017863f3000000005c4fcd7500000000017863f3000000005c4fcd7500000000017863f3 # file: vol1-3/gfid-mismatch/dir-1 security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000 trusted.gfid=0x8e6c686c93e94bd7ac5e98bbf852a62b trusted.glusterfs.dht=0x0000000000000000aaaaaaaaffffffff trusted.glusterfs.mdata=0x010000000000000000000000005c4fcd7500000000017863f3000000005c4fcd7500000000017863f3000000005c4fcd7500000000017863f3 4. Unmount and remount the fuse mount and list the contents of gfid-mismatch [root at rhgs313-6 ~]# umount -f /mnt/fuse1; mount -t glusterfs -s 192.168.122.6:/vol1 /mnt/fuse1 [root at rhgs313-6 ~]# cd /mnt/fuse1 [root at rhgs313-6 fuse1]# cd gfid-mismatch/ [root at rhgs313-6 gfid-mismatch]# l total 20K drwxr-xr-x. 2 root root 4.0K Jan 29 09:20 dir-1 5. Check the gfid for dir-1 on the backend bricks. [root at rhgs313-6 brick1]# getx vol1-*/gfid-mismatch/dir-1 # file: vol1-1/gfid-mismatch/dir-1 security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000 trusted.gfid=0x0c3a4860f93f416cb5261c3b2b06f52d < ---- GFID is DIFFERENT!! 
trusted.glusterfs.dht=0x00000000000000000000000055555554 trusted.glusterfs.dht.mds=0x00000000 trusted.glusterfs.mdata=0x010000000000000000000000005c4fcd7500000000017863f3000000005c4fcd7500000000017863f3000000005c4fcd7500000000017863f3 # file: vol1-2/gfid-mismatch/dir-1 security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000 trusted.gfid=0x8e6c686c93e94bd7ac5e98bbf852a62b trusted.glusterfs.dht=0x000000000000000055555555aaaaaaa9 trusted.glusterfs.mdata=0x010000000000000000000000005c4fcd7500000000017863f3000000005c4fcd7500000000017863f3000000005c4fcd7500000000017863f3 # file: vol1-3/gfid-mismatch/dir-1 security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a756e6c6162656c65645f743a733000 trusted.gfid=0x8e6c686c93e94bd7ac5e98bbf852a62b trusted.glusterfs.dht=0x0000000000000000aaaaaaaaffffffff trusted.glusterfs.mdata=0x010000000000000000000000005c4fcd7500000000017863f3000000005c4fcd7500000000017863f3000000005c4fcd7500000000017863f3 The GFID on brick vol1-1 is set to trusted.gfid=0x0c3a4860f93f416cb5261c3b2b06f52d The original GFID was: trusted.gfid=0x8e6c686c93e94bd7ac5e98bbf852a62b --- Additional comment from Nithya Balachandran on 2019-01-29 04:58:11 UTC --- Upstream patch: https://review.gluster.org/#/c/glusterfs/+/22112/ --- Additional comment from Worker Ant on 2019-02-02 03:10:15 UTC --- REVIEW: https://review.gluster.org/22112 (cluster/dht: Do not use gfid-req in fresh lookup) merged (#7) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1670259 [Bug 1670259] New GFID file recreated in a replica set after a GFID mismatch resolution -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 03:13:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 03:13:48 +0000 Subject: [Bugs] [Bug 1670259] New GFID file recreated in a replica set after a GFID mismatch resolution In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670259 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1684969 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1684969 [Bug 1684969] New GFID file recreated in a replica set after a GFID mismatch resolution -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 03:15:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 03:15:11 +0000 Subject: [Bugs] [Bug 1684969] New GFID file recreated in a replica set after a GFID mismatch resolution In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684969 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
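The core check in the reproducer above is comparing trusted.gfid for the same directory on every brick (the "getx" used there appears to be a local alias for getfattr -d -m . -e hex). A direct way to spot a mismatch, with the brick paths from this report as placeholders relative to the backend directory:

for b in vol1-1 vol1-2 vol1-3; do
    getfattr -n trusted.gfid -e hex $b/gfid-mismatch/dir-1 2>/dev/null
done | grep trusted.gfid | sort | uniq -c
# more than one distinct trusted.gfid value here means the replicas disagree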
From bugzilla at redhat.com Mon Mar 4 03:18:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 03:18:27 +0000 Subject: [Bugs] [Bug 1684969] New GFID file recreated in a replica set after a GFID mismatch resolution In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684969 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-04 03:18:27 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 03:38:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 03:38:38 +0000 Subject: [Bugs] [Bug 1684777] gNFS crashed when processing "gluster v profile [vol] info nfs" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684777 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-04 03:38:38 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22235 (glusterfsd: Do not process PROFILE_NFS_INFO if graph is not ready) merged (#3) on release-6 by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 04:21:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 04:21:41 +0000 Subject: [Bugs] [Bug 1672656] glustereventsd: crash, ABRT report for package glusterfs has reached 100 occurrences In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672656 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Priority|unspecified |medium Status|NEW |ASSIGNED CC| |atumball at redhat.com Severity|unspecified |high -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 04:57:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 04:57:01 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 --- Comment #52 from abhays --- (In reply to Raghavendra Bhat from comment #50) > Hi, > > Thanks for the logs. From the logs saw that the following things are > happening. > > 1) The scrubbing is started > > 2) Scrubber always decides whether a file is corrupted or not by comparing > the stored on-disk signature (gets by getxattr) with its own calculated > signature of the file. > > 3) Here, while getting the on-disk signature, getxattr is failing with > ENOMEM (i.e. Cannot allocate memory) because of the endianness. > > 4) Further testcases in the test fail because, they expect the bad-file > extended attribute to be present which scrubber could not set because of the > above error (i.e. had it been able to successfully get the signature of the > file via getxattr, it would have been able to compare the signature with its > own calculated signature and set the bad-file extended attribute to indicate > the file is corrupted). > > > Looking at the code to come up with a fix to address this. Thanks for the reply @Raghavendra. We are also looking into the same. 
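For reference, the on-disk state the scrubber works with can be inspected directly on a brick. The xattr names below (trusted.bit-rot.signature and trusted.bit-rot.bad-file) are the ones the bitrot translator is expected to use, and the volume name and file path are placeholders:

gluster volume bitrot VOLNAME scrub status                              # scrubber's own summary
getfattr -e hex -n trusted.bit-rot.signature /bricks/brick0/somefile    # stored signature, if any
getfattr -e hex -n trusted.bit-rot.bad-file  /bricks/brick0/somefile    # present only when flagged corrupt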
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 05:23:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 05:23:28 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 --- Comment #53 from abhays --- (In reply to Nithya Balachandran from comment #51) > > > > > > I don't think so. I would recommend that you debug the tests on your systems > > and post patches which will work on both. > > Please note what I am referring to is for you to look at the .t files and > modify file names or remove hardcoding as required. Yes @Nithya, We understood that you want us to continue debugging the tests and provide patches if fix is found. While doing the same, we were able to fix the ./tests/bugs/nfs/bug-847622.t with the following patch:- diff --git a/tests/bugs/nfs/bug-847622.t b/tests/bugs/nfs/bug-847622.t index 3b836745a..f21884972 100755 --- a/tests/bugs/nfs/bug-847622.t +++ b/tests/bugs/nfs/bug-847622.t @@ -28,7 +32,7 @@ cd $N0 # simple getfacl setfacl commands TEST touch testfile -TEST setfacl -m u:14:r testfile +TEST setfacl -m u:14:r $B0/brick0/testfile TEST getfacl testfile Please check, if the above patch can be merged. However, the test cases are still failing and only pass if x86 hash values are provided(Refer to comment#8):- ./tests/bugs/glusterfs/bug-902610.t ./tests/bugs/posix/bug-1619720.t We have tried modifying filenames in these test cases, but nothing worked. We think that the test cases might pass if the behavior of the source code for hash value calculation is changed to support s390x architecture as well. Could you please look into the same? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 05:35:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 05:35:44 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(abhaysingh1722 at ya | |hoo.in) --- Comment #54 from Nithya Balachandran --- > > However, the test cases are still failing and only pass if x86 hash values > are provided(Refer to comment#8):- > ./tests/bugs/glusterfs/bug-902610.t > ./tests/bugs/posix/bug-1619720.t Please provide more information on what changes you tried. > > We have tried modifying filenames in these test cases, but nothing worked. > We think that the test cases might pass if the behavior of the source code > for hash value calculation is changed to support s390x architecture as well. > Could you please look into the same? This seem extremely unlikely at this point. Like I said, the code will work fine as long as the setup is not mixed-endian. As the test cases are the only things that fail and that is because they use hard coded values, such a huge change in the source code is not the first step. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Mar 4 05:56:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 05:56:36 +0000 Subject: [Bugs] [Bug 1676429] distribute: Perf regression in mkdir path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676429 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |spalai at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 06:21:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 06:21:00 +0000 Subject: [Bugs] [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1672818 (glusterfs-6.0) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 06:21:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 06:21:00 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1684385 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 06:21:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 06:21:34 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1672318 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672318 [Bug 1672318] "failed to fetch volume file" when trying to activate host in DC with glusterfs 3.12 domains -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Mar 4 07:19:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 07:19:31 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 abhays changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(abhaysingh1722 at ya | |hoo.in) | --- Comment #55 from abhays --- (In reply to Nithya Balachandran from comment #54) > > > > However, the test cases are still failing and only pass if x86 hash values > > are provided(Refer to comment#8):- > > ./tests/bugs/glusterfs/bug-902610.t > > ./tests/bugs/posix/bug-1619720.t > > Please provide more information on what changes you tried. For tests/bugs/glusterfs/bug-902610.t:- In the test case, after the kill_brick function is run, the mkdir $M0/dir1 doesn't work and hence the get_layout function test fails. So,as a workaround we tried not killing the brick and then checked the functionality of the test case, after which the dir1 did get created in all the 4 bricks, however, the test failed with the following output:- ========================= TEST 9 (line 59): ls -l /mnt/glusterfs/0 ok 9, LINENUM:59 RESULT 9: 0 getfattr: Removing leading '/' from absolute path names /d/backends/patchy3/dir1 /d/backends/patchy0/dir1 /d/backends/patchy2/dir1 /d/backends/patchy1/dir1 layout1 from 00000000 to 00000000 layout2 from 00000000 to 55555554 target for layout2 = 55555555 ========================= TEST 10 (line 72): 0 echo 1 not ok 10 Got "1" instead of "0", LINENUM:72 RESULT 10: 1 Failed 1/10 subtests ========================= But, the below patch works for the test case(only on Big Endian):- diff --git a/tests/bugs/glusterfs/bug-902610.t b/tests/bugs/glusterfs/bug-902610.t index b45e92b8a..8a8eaf7a3 100755 --- a/tests/bugs/glusterfs/bug-902610.t +++ b/tests/bugs/glusterfs/bug-902610.t @@ -2,6 +2,7 @@ . $(dirname $0)/../../include.rc . $(dirname $0)/../../volume.rc +. $(dirname $0)/../../dht.rc cleanup; @@ -11,11 +12,11 @@ function get_layout() layout1=`getfattr -n trusted.glusterfs.dht -e hex $1 2>&1|grep dht |cut -d = -f2` layout1_s=$(echo $layout1 | cut -c 19-26) layout1_e=$(echo $layout1 | cut -c 27-34) - #echo "layout1 from $layout1_s to $layout1_e" > /dev/tty + echo "layout1 from $layout1_s to $layout1_e" > /dev/tty layout2=`getfattr -n trusted.glusterfs.dht -e hex $2 2>&1|grep dht |cut -d = -f2` layout2_s=$(echo $layout2 | cut -c 19-26) layout2_e=$(echo $layout2 | cut -c 27-34) - #echo "layout2 from $layout2_s to $layout2_e" > /dev/tty + echo "layout2 from $layout2_s to $layout2_e" > /dev/tty if [ x"$layout2_s" = x"00000000" ]; then # Reverse so we only have the real logic in one place. @@ -29,7 +30,7 @@ function get_layout() # Figure out where the join point is. target=$( $PYTHON -c "print '%08x' % (0x$layout1_e + 1)") - #echo "target for layout2 = $target" > /dev/tty + echo "target for layout2 = $target" > /dev/tty # The second layout should cover everything that the first doesn't. 
if [ x"$layout2_s" = x"$target" -a x"$layout2_e" = x"ffffffff" ]; then @@ -41,26 +42,30 @@ function get_layout() BRICK_COUNT=4 -TEST glusterd +TEST glusterd --log-level DEBUG TEST pidof glusterd TEST $CLI volume create $V0 $H0:$B0/${V0}0 $H0:$B0/${V0}1 $H0:$B0/${V0}2 $H0:$B0/${V0}3 ## set subvols-per-dir option TEST $CLI volume set $V0 subvols-per-directory 3 TEST $CLI volume start $V0 +TEST $CLI volume set $V0 client-log-level DEBUG +TEST $CLI volume set $V0 brick-log-level DEBUG + ## Mount FUSE TEST glusterfs -s $H0 --volfile-id $V0 $M0 --entry-timeout=0 --attribute-timeout=0; TEST ls -l $M0 +#brick_loc=$(get_backend_paths $M0) ## kill 2 bricks to bring down available subvol < spread count -kill_brick $V0 $H0 $B0/${V0}2 -kill_brick $V0 $H0 $B0/${V0}3 +kill_brick $V0 $H0 $B0/${V0}0 +kill_brick $V0 $H0 $B0/${V0}1 mkdir $M0/dir1 2>/dev/null -get_layout $B0/${V0}0/dir1 $B0/${V0}1/dir1 +get_layout $B0/${V0}2/dir1 $B0/${V0}3/dir1 EXPECT "0" echo $? cleanup; >From above patch, the below output is seen:- ========================= TEST 9 (line 59): ls -l /mnt/glusterfs/0 ok 9, LINENUM:59 RESULT 9: 0 Socket=/var/run/gluster/e90af2b6fbd74dbe.socket Brick=/d/backends/patchy0 connected disconnected OK Socket=/var/run/gluster/d7212ecddcb22a08.socket Brick=/d/backends/patchy1 connected disconnected OK layout1 from 00000000 to 7ffffffe layout2 from 7fffffff to ffffffff target for layout2 = 7fffffff ========================= TEST 10 (line 72): 0 echo 0 ok 10, LINENUM:72 RESULT 10: 0 ok All tests successful. Files=1, Tests=10, 13 wallclock secs ( 0.03 usr 0.01 sys + 2.05 cusr 0.34 csys = 2.43 CPU) Result: PASS ========================= Therefore, can these changes be added in the test case with a condition for s390x separately? Also, We have a few queries on the tests behaviour. When a directory or a file gets created, according to me, it should be placed in the brick depending on its hash range and value of the file/directory. However, in the above test, as you can see, if we don't kill the bricks{2,3}, the directory gets created in all the bricks{0,1,2,3}.So, does it not consider hash values and range at this point or is it something to do with mounting FUSE? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 07:51:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 07:51:58 +0000 Subject: [Bugs] [Bug 1685023] New: FD processes for larger files are not closed soon after FOP finished Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685023 Bug ID: 1685023 Summary: FD processes for larger files are not closed soon after FOP finished Product: GlusterFS Version: mainline Status: NEW Component: bitrot Assignee: bugs at gluster.org Reporter: david.spisla at iternity.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Docs Contact: bugs at gluster.org Description of problem: I did some observations concerning the bitrot daemon. It seems to be that the bitrot signer is signing files depending on file size. I copied files with different sizes into a volume and I was wonderung because the files get their signature not the same time (I keep the expiry time default with 120). 
Here are some examples: 300 KB file ~2-3 m 70 MB file ~ 40 m 115 MB file ~ 1,5 h 800 MB file ~ 4,5 h There was already a bug from 2016: https://bugzilla.redhat.com/show_bug.cgi?id=1378466 I also figured out this discussion: https://lists.gluster.org/pipermail/gluster-users/2016-September/028354.html Kotresh mentioned there that the problem is because for some files, fd processes are still up in the brick process list. The bitrot signer can only sign a file once the fd is closed. And according to my observations, the bigger a file is, the longer the fd stays up. I could verify this with a 500MiB file and some smaller files. After a specific time only the fd for the 500MiB file was still up and the file had no signature; for the smaller files there were no fds and they already had a signature.
Version-Release number of selected component (if applicable): Gluster v5.3
Actual behaviour: The fd processes are still up for a specific time (maybe depending on file size?) after the FOP finished and bitd can't sign the file
Expected behaviour: The fd processes should be closed soon after the FOP finishes so that bitd can sign the file
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug.
From bugzilla at redhat.com Mon Mar 4 08:15:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 08:15:27 +0000 Subject: [Bugs] [Bug 1685027] New: Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685027 Bug ID: 1685027 Summary: Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range Product: GlusterFS Version: mainline Status: NEW Component: eventsapi Keywords: EasyFix, ZStream Assignee: bugs at gluster.org Reporter: avishwan at redhat.com CC: dahorak at redhat.com, rhs-bugs at redhat.com, sanandpa at redhat.com, sankarshan at redhat.com, sheggodu at redhat.com Depends On: 1600459 Target Milestone: --- Classification: Community
+++ This bug was initially created as a clone of Bug #1600459 +++
Description of problem: During testing of RHGS WA, I've found the following traceback raised from the /usr/sbin/gluster-eventsapi script:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
File "/usr/sbin/gluster-eventsapi", line 666, in runcli()
File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 224, in runcli cls.run(args)
File "/usr/sbin/gluster-eventsapi", line 329, in run sync_to_peers(args)
File "/usr/sbin/gluster-eventsapi", line 177, in sync_to_peers "{1}".format(e[0], e[2]),
IndexError: tuple index out of range
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The prospective real issue is hidden behind this traceback.
Version-Release number of selected component (if applicable): glusterfs-events-3.12.2-13.el7rhgs.x86_64
How reproducible: 100% if you are able to cause GlusterCmdException to be raised
Steps to Reproduce: I'm not sure how to reproduce it from scratch, as my knowledge related to gluster-eventsapi is very limited, but the problem is quite well visible from the source code: Open the /usr/sbin/gluster-eventsapi script and look for the function sync_to_peers around line 171:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
171 def sync_to_peers(args):
172     if os.path.exists(WEBHOOKS_FILE):
173         try:
174             sync_file_to_peers(WEBHOOKS_FILE_TO_SYNC)
175         except GlusterCmdException as e:
176             handle_output_error("Failed to sync Webhooks file: [Error: {0}]"
177                                 "{1}".format(e[0], e[2]),
178                                 errcode=ERROR_WEBHOOK_SYNC_FAILED,
179                                 json_output=args.json)
180
181     if os.path.exists(CUSTOM_CONFIG_FILE):
182         try:
183             sync_file_to_peers(CUSTOM_CONFIG_FILE_TO_SYNC)
184         except GlusterCmdException as e:
185             handle_output_error("Failed to sync Config file: [Error: {0}]"
186                                 "{1}".format(e[0], e[2]),
187                                 errcode=ERROR_CONFIG_SYNC_FAILED,
188                                 json_output=args.json)
189
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The important lines are 177 and 186: "{1}".format(e[0], e[2]), The problem is that the GlusterCmdException is raised this way[1]: raise GlusterCmdException((rc, out, err)) So all three parameters rc, out and err are supplied as one parameter (of type tuple).
Actual results: Any problem leading to a raised GlusterCmdException is hidden behind the above-mentioned exception.
Expected results: There shouldn't be any such traceback.
Additional info: [1] file /usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py
--- Additional comment from Daniel Horák on 2018-07-13 08:23:29 UTC ---
A possible reproduction scenario might be to remove (rename) the /var/lib/glusterd/events/ directory on one Gluster Storage Node and try to add a webhook from another storage node:
On Gluster node 5: # mv /var/lib/glusterd/events/ /var/lib/glusterd/events_BACKUP
On Gluster node 1: # gluster-eventsapi webhook-add http://0.0.0.0:8697/test
Traceback (most recent call last):
File "/usr/sbin/gluster-eventsapi", line 666, in runcli()
File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 224, in runcli cls.run(args)
File "/usr/sbin/gluster-eventsapi", line 329, in run sync_to_peers(args)
File "/usr/sbin/gluster-eventsapi", line 177, in sync_to_peers "{1}".format(e[0], e[2]),
IndexError: tuple index out of range
Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1600459 [Bug 1600459] Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range
-- You are receiving this mail because: You are the assignee for the bug.
From bugzilla at redhat.com Mon Mar 4 08:15:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 08:15:41 +0000 Subject: [Bugs] [Bug 1685027] Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685027 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |avishwan at redhat.com -- You are receiving this mail because: You are the assignee for the bug.
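The IndexError above can be reproduced with a few lines of plain Python, independent of gluster: because GlusterCmdException is raised with a single tuple argument, e.args is a 1-tuple, so e[0] yields the whole (rc, out, err) tuple and e[2] is out of range. The unpacking below is only a sketch of one possible fix, not the actual upstream patch; the placeholder values are invented for illustration:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Minimal sketch of the failure mode (matches the Python 2 script quoted above).
class GlusterCmdException(Exception):
    pass

rc, out, err = 1, "", "some underlying error text"   # placeholder values

try:
    raise GlusterCmdException((rc, out, err))
except GlusterCmdException as e:
    print(len(e.args))   # 1: the whole tuple is a single argument,
                         # so e[2] raises "tuple index out of range"
    # One possible fix: unpack the wrapped tuple before building the message.
    cmd_rc, cmd_out, cmd_err = e.args[0]
    print("Failed to sync Webhooks file: [Error: {0}]{1}".format(cmd_rc, cmd_err))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~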
From bugzilla at redhat.com Mon Mar 4 09:11:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 09:11:44 +0000 Subject: [Bugs] [Bug 1685051] New: New Project create request Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685051 Bug ID: 1685051 Summary: New Project create request Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: avishwan at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: Please create a new project under Github Gluster organization Name: devblog Description: Gluster Developer Blog posts Admins: @aravindavk Aravinda VK @amarts Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 09:13:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 09:13:44 +0000 Subject: [Bugs] [Bug 1596787] glusterfs rpc-clnt.c: error returned while attempting to connect to host: (null), port 0 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1596787 --- Comment #8 from Worker Ant --- REVIEW: https://review.gluster.org/21895 (quotad: fix passing GF_DATA_TYPE_STR_OLD dict data to v4 protocol) merged (#10) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 09:26:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 09:26:26 +0000 Subject: [Bugs] [Bug 1685051] New Project create request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685051 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |dkhandel at redhat.com Resolution|--- |NOTABUG Last Closed| |2019-03-04 09:26:26 --- Comment #1 from Deepshikha khandelwal --- Done. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 09:28:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 09:28:55 +0000 Subject: [Bugs] [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684029 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |srakonde at redhat.com Flags|needinfo?(amukherj at redhat.c | |om) | --- Comment #2 from Sanju --- Root cause: Commit 5a152a changed the mechanism of computing the checksum. Because of this change, in heterogeneous cluster, glusterd in upgraded node follows new mechanism for computing the cksum and non-upgraded nodes follow old mechanism for computing the cksum. So the cksum in upgraded node doesn't match with non-upgraded nodes which results in peer rejection issue. Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
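A quick way to observe the mismatch Sanju describes is to compare the per-volume checksum that glusterd stores on each peer. A sketch, assuming the default glusterd working directory and a hypothetical volume name "vol1":

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Run on every node; the value should be identical across the cluster.
cat /var/lib/glusterd/vols/vol1/cksum
# A node whose checksum differs ends up in the "Peer Rejected" state:
gluster peer status
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~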
From bugzilla at redhat.com Mon Mar 4 09:36:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 09:36:31 +0000 Subject: [Bugs] [Bug 1685051] New Project create request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685051 --- Comment #2 from Aravinda VK --- Thanks -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 10:09:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 10:09:27 +0000 Subject: [Bugs] [Bug 1685051] New Project create request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685051 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com Resolution|NOTABUG |CURRENTRELEASE --- Comment #3 from M. Scherer --- Wait, what is the plan for that ? And why isn't gluster-infra in the loop sooner or with more details ? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 11:42:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 11:42:45 +0000 Subject: [Bugs] [Bug 1685120] New: upgrade from 3.12, 4.1 and 5 to 6 broken Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 Bug ID: 1685120 Summary: upgrade from 3.12, 4.1 and 5 to 6 broken Product: GlusterFS Version: mainline Status: NEW Whiteboard: gluster-test-day Component: core Severity: urgent Priority: high Assignee: bugs at gluster.org Reporter: srakonde at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, hgowtham at redhat.com, pasik at iki.fi, srakonde at redhat.com Depends On: 1684029 Blocks: 1672818 (glusterfs-6.0) Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1684029 +++ Description of problem: While trying to upgrade from older versions like 3.12, 4.1 and 5 to gluster 6 RC, the upgrade ends in peer rejected on one node after other. Version-Release number of selected component (if applicable): How reproducible: 100% Steps to Reproduce: 1. create a replica 3 on older versions (3, 4, or 5) 2. kill the gluster process on one node and install gluster 6 3. start glusterd Actual results: the new version gets peer rejected. and the brick processes or not started by glusterd. Expected results: peer reject should not happen. Cluster should be healthy. Additional info: Status shows the bricks on that particular node alone with N/A as status. Other nodes aren't visible. Looks like a volfile mismatch. The new volfile has "option transport.socket.ssl-enabled off" added while the old volfile misses it. The order of quick-read and open-behind are changed in the old and new versions. These changes cause the volfile mismatch and mess the cluster. --- Additional comment from Sanju on 2019-02-28 17:25:57 IST --- The peers are running inro rejected state because there is a mismatch in the volfiles. Differences are: 1. Newer volfiles are having "option transport.socket.ssl-enabled off" where older volfiles are not having this option. 2. order of quick-read and open-behind are changed commit 4e0fab4 introduced this issue. previously we didn't had any default value for the option transport.socket.ssl-enabled. So this option was not captured in the volfile. with the above commit, we are adding a default value. 
So this is getting captured in volfile. commit 4e0fab4 has a fix for https://bugzilla.redhat.com/show_bug.cgi?id=1651059. I feel this commit has less significance, we can revert this change. If we do so, we are out of 1st problem. not sure, why the order of quick-read and open-behind are changed. Atin, do let me know your thoughts on proposal of reverting the commit 4e0fab4. Thanks, Sanju --- Additional comment from Sanju on 2019-03-04 14:58:55 IST --- Root cause: Commit 5a152a changed the mechanism of computing the checksum. Because of this change, in heterogeneous cluster, glusterd in upgraded node follows new mechanism for computing the cksum and non-upgraded nodes follow old mechanism for computing the cksum. So the cksum in upgraded node doesn't match with non-upgraded nodes which results in peer rejection issue. Thanks, Sanju Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker https://bugzilla.redhat.com/show_bug.cgi?id=1684029 [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 11:42:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 11:42:45 +0000 Subject: [Bugs] [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684029 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1685120 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 4 11:42:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 11:42:45 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1685120 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 14:30:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 14:30:06 +0000 Subject: [Bugs] [Bug 1683574] gluster-server package currently requires the older userspace-rcu against expectation In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683574 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |kkeithle at redhat.com --- Comment #1 from Kaleb KEITHLEY --- The glusterfs-server-6.0-0.1.rc0.el7.x86_64.rpm on https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-6/ gives me this: ]$ rpm -qpR glusterfs-server-6.0-0.1.rc0.el7.x86_64.rpm| grep rcu liburcu-bp.so.6()(64bit) liburcu-cds.so.6()(64bit) (And the packages haven't been tagged for release yet so there isn't a glusterfs-server-6.0-0.1.rc0.el7.x86_64.rpm on http://mirror.centos.org/centos-7/7/storage/x86_64/gluster-6/ yet.) 
And the glusterfs.spec file has BuildRequires: userspace-rcu-devel >= 0.7 so any of a wide range of userspace-rcu packages/libs could potentially satisfy the BuildRequires. Where did you get your RPMs from? The Storage SIG builds its own userspace-rcu package because what's in RHEL/CentOS is too old. If you're building your own RPMs on your CentOS 7 box and haven't installed the userspace-rcu* RPMs from the Storage SIG then that's probably why your RPMs have the incorrect dependency on liburcu-bp.so.1 instead of liburcu-bp.so.6. The BuildRequires in the .spec should probably be updated to >= 0.10 to prevent this from happening in the future. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 14:44:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 14:44:14 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1683574, which changed state. Bug 1683574 Summary: gluster-server package currently requires the older userspace-rcu against expectation https://bugzilla.redhat.com/show_bug.cgi?id=1683574 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NOTABUG -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 14:44:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 14:44:14 +0000 Subject: [Bugs] [Bug 1683574] gluster-server package currently requires the older userspace-rcu against expectation In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683574 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NOTABUG Last Closed| |2019-03-04 14:44:14 --- Comment #2 from Kaleb KEITHLEY --- (In reply to Kaleb KEITHLEY from comment #1) > > The BuildRequires in the .spec should probably be updated to >= 0.10 to > prevent this from happening in the future. s/should/could/ Since any version of userspace-rcu(-devel) >= 0.7, i.e. liburcu-bp.so.0.1 or later, is acceptable, it's not incorrect, to build with the older version. But if you want to build yours with the latest version, you need to install it from the Storage SIG. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 15:01:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 15:01:00 +0000 Subject: [Bugs] [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22297 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
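For completeness on the userspace-rcu point from bug 1683574 above, a hedged sketch of checking the runtime dependency of a locally built package and pulling the newer userspace-rcu from the CentOS Storage SIG before rebuilding; the repository and package names are assumed from the usual CentOS Extras/SIG layout:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Which liburcu does the freshly built server package expect?
rpm -qpR glusterfs-server-*.rpm | grep rcu
# Install the Storage SIG repository and its newer userspace-rcu first:
yum install -y centos-release-gluster
yum install -y userspace-rcu userspace-rcu-devel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~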
From bugzilla at redhat.com Mon Mar 4 15:19:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 15:19:19 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22298 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 15:19:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 15:19:20 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #575 from Worker Ant --- REVIEW: https://review.gluster.org/22298 (packaging: s390x has RDMA support) posted (#1) for review on master by Kaleb KEITHLEY -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 15:23:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 15:23:14 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED CC| |pgurusid at redhat.com Assignee|atumball at redhat.com |pgurusid at redhat.com Flags| |needinfo?(jsecchiero at enter. | |eu) --- Comment #5 from Poornima G --- Disabling readdir-ahead fixed the issue? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 4 15:32:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 15:32:17 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Hubert changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |revirii at googlemail.com --- Comment #6 from Hubert --- We seem to have the same problem with a fresh install of glusterfs 5.3 on a debian stretch. We migrated from an existing setup (version 4.1.6, distribute-replicate) to a new setup (version 5.3, replicate), and traffic on clients went up significantly, maybe causing massive iowait on the clients during high-traffic times. Here are some munin graphs: network traffic on high iowait client: https://abload.de/img/client-eth1-traffic76j4i.jpg network traffic on old servers: https://abload.de/img/oldservers-eth1nejzt.jpg network traffic on new servers: https://abload.de/img/newservers-eth17ojkf.jpg performance.readdir-ahead is on by default. I could deactivate it tomorrow morning (07:00 CEST), and provide tcpdump data if necessary. Regards, Hubert -- You are receiving this mail because: You are on the CC list for the bug. 
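For reference, the toggle being discussed above is an ordinary volume option; a sketch with a placeholder volume name:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Check the current value (readdir-ahead is on by default in these releases):
gluster volume get <volname> performance.readdir-ahead
# Disable it on the affected volume:
gluster volume set <volname> performance.readdir-ahead off
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~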
From bugzilla at redhat.com Mon Mar 4 16:20:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 16:20:18 +0000 Subject: [Bugs] [Bug 1685051] New Project create request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685051 Amye Scavarda changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |MODIFIED CC| |amye at redhat.com, | |avishwan at redhat.com Resolution|CURRENTRELEASE |--- Flags| |needinfo?(avishwan at redhat.c | |om) Keywords| |Reopened --- Comment #4 from Amye Scavarda --- I have concerns. What is this for? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 4 19:30:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 19:30:50 +0000 Subject: [Bugs] [Bug 1578405] EIO errors when updating and deleting entries concurrently In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1578405 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 20025 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 4 19:31:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 04 Mar 2019 19:31:52 +0000 Subject: [Bugs] [Bug 1546649] DHT: Readdir of directory which contain directory entries is slow In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1546649 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 19559 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 5 01:30:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 01:30:25 +0000 Subject: [Bugs] [Bug 1685337] New: Updating Fedora 28 fail with "Package glusterfs-5.4-1.fc28.x86_64.rpm is not signed" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685337 Bug ID: 1685337 Summary: Updating Fedora 28 fail with "Package glusterfs-5.4-1.fc28.x86_64.rpm is not signed" Product: GlusterFS Version: 5 Status: NEW Component: packaging Severity: high Assignee: bugs at gluster.org Reporter: nsoffer at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: dnf update fail because glusterfs package are not signed. # dnf update ... Package glusterfs-5.4-1.fc28.x86_64.rpm is not signed Package glusterfs-libs-5.4-1.fc28.x86_64.rpm is not signed Package glusterfs-server-5.4-1.fc28.x86_64.rpm is not signed Package glusterfs-fuse-5.4-1.fc28.x86_64.rpm is not signed Package glusterfs-devel-5.4-1.fc28.x86_64.rpm is not signed Package glusterfs-api-devel-5.4-1.fc28.x86_64.rpm is not signed Package glusterfs-api-5.4-1.fc28.x86_64.rpm is not signed Package glusterfs-cli-5.4-1.fc28.x86_64.rpm is not signed Package glusterfs-client-xlators-5.4-1.fc28.x86_64.rpm is not signed Package glusterfs-extra-xlators-5.4-1.fc28.x86_64.rpm is not signed Package python3-gluster-5.4-1.fc28.x86_64.rpm is not signed The downloaded packages were saved in cache until the next successful transaction. You can remove cached packages by executing 'dnf clean packages'. 
Error: GPG check FAILED "dnf clean all" and removing /var/cache/{yum,dnf,PackageKit} does not help. These are the glusterfs repos are added by ovirt-release-masster from: http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm [glusterfs-fedora] name=GlusterFS is a clustered file-system capable of scaling to several petabytes. baseurl=http://download.gluster.org/pub/gluster/glusterfs/5/LATEST/Fedora/fedora-$releasever/$basearch/ enabled=1 gpgcheck=1 gpgkey=http://download.gluster.org/pub/gluster/glusterfs/5/rsa.pub [glusterfs-noarch-fedora] name=GlusterFS is a clustered file-system capable of scaling to several petabytes. baseurl=http://download.gluster.org/pub/gluster/glusterfs/5/LATEST/Fedora/fedora-$releasever/noarch enabled=1 gpgcheck=1 gpgkey=http://download.gluster.org/pub/gluster/glusterfs/5/rsa.pub Version-Release number of selected component (if applicable): 5.4.1 How reproducible: Always -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 02:49:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 02:49:46 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #576 from Worker Ant --- REVIEW: https://review.gluster.org/22213 (fuse lock interrupt: fix flock_interrupt.t) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 02:50:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 02:50:39 +0000 Subject: [Bugs] [Bug 1654021] Gluster volume heal causes continuous info logging of "invalid argument" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654021 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-05 02:50:39 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22215 (core: fix volume heal to avoid \"invalid argument\") merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 5 03:00:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 03:00:32 +0000 Subject: [Bugs] [Bug 1685051] New Project create request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685051 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(avishwan at redhat.c | |om) | --- Comment #5 from Aravinda VK --- This project is for hosting Developer blog posts using Github pages. Developers are more familiar with Markdown format to write documentation or blog post, so this will be easy to contribute compared to using UI and write blog posts. Based on discussion with other developers, they find it difficult to set up a blog website than writing. This project aims to simplify that - Official Gluster org blog continue to exists to make announcements or release highlights or any other blog posts - This will only host developer blog posts(More technical, developer tips, feature explanation etc) -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 03:16:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 03:16:14 +0000 Subject: [Bugs] [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22300 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 03:16:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 03:16:15 +0000 Subject: [Bugs] [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22300 (dict: handle STR_OLD data type in xdr conversions) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 04:31:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 04:31:03 +0000 Subject: [Bugs] [Bug 1685051] New Project create request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685051 Amye Scavarda changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |ASSIGNED Flags| |needinfo?(avishwan at redhat.c | |om) --- Comment #6 from Amye Scavarda --- This is exactly what should be on gluster.org's blog! You write wherever you want, we can set WordPress to take Markdown with no issues. We should not be duplicating effort when gluster.org is a great platform to be able to create content on already. We should get a list of the people who want to write developer blogs and get them author accounts to publish directly on Gluster.org and publicize from there through social media. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Mar 5 07:09:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 07:09:36 +0000 Subject: [Bugs] [Bug 1685414] New: glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685414 Bug ID: 1685414 Summary: glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA Product: GlusterFS Version: mainline Status: NEW Component: glusterd Keywords: Regression Severity: high Priority: high Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: amukherj at redhat.com, bmekala at redhat.com, bugs at gluster.org, dahorak at redhat.com, mbukatov at redhat.com, nchilaka at redhat.com, rcyriac at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1684648 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1684648 [Bug 1684648] glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 07:10:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 07:10:13 +0000 Subject: [Bugs] [Bug 1685414] glusterd memory usage grows at 98 MB/h while running "gluster v profile" in a loop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685414 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com Summary|glusterd memory usage grows |glusterd memory usage grows |at 98 MB/h while being |at 98 MB/h while running |monitored by RHGSWA |"gluster v profile" in a | |loop -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 07:44:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 07:44:38 +0000 Subject: [Bugs] [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 Hubert changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |revirii at googlemail.com --- Comment #2 from Hubert --- fyi: happens too when upgrading from 5.3 to 5.4 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 08:03:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 08:03:08 +0000 Subject: [Bugs] [Bug 1685414] glusterd memory usage grows at 98 MB/h while running "gluster v profile" in a loop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685414 --- Comment #1 from Mohit Agrawal --- Hi, glusterd has memory leak while "gluster v profile info" run in a loop. To reproduce the same follow below steps 1) Setup 3 1x3 volumes and start the volume 2) Start profile for all the volume 3) Run below command while [ 1 ]; do pmap -x `pgrep glusterd` | grep total; gluster v profile vol1 info > /dev/null; gluster v profile vol2 info > /dev/null; gluster v profile vol3 info > /dev/null;sleep 5; done Thanks, Mohit Agrawal -- You are receiving this mail because: You are on the CC list for the bug. 
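Alongside the pmap loop above, a statedump is usually the more precise way to see which allocation pools in glusterd keep growing. A sketch, assuming the default dump location under /var/run/gluster:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Ask glusterd for a statedump (SIGUSR1), then repeat after some time:
kill -USR1 "$(pgrep -x glusterd)"
# Compare dumps taken a while apart to spot steadily growing pools:
ls -lt /var/run/gluster/glusterdump.*
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~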
From bugzilla at redhat.com Tue Mar 5 08:13:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 08:13:18 +0000 Subject: [Bugs] [Bug 1685414] glusterd memory usage grows at 98 MB/h while running "gluster v profile" in a loop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685414 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22301 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 5 08:13:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 08:13:19 +0000 Subject: [Bugs] [Bug 1685414] glusterd memory usage grows at 98 MB/h while running "gluster v profile" in a loop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685414 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22301 (glusterd: glusterd memory leak while running \"gluster v profile\" in a loop) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 5 09:23:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 09:23:23 +0000 Subject: [Bugs] [Bug 1648768] Tracker bug for all leases related issues In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648768 --- Comment #20 from Worker Ant --- REVIEW: https://review.gluster.org/22286 (afr: mark changelog_fsync as internal) merged (#4) on master by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 5 09:58:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 09:58:47 +0000 Subject: [Bugs] [Bug 1670382] parallel-readdir prevents directories and files listing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670382 --- Comment #4 from Marcin --- Hello Everyone, Please, let me know if something has been agreed about the problem? Thanks in advance Regards Marcin -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 11:54:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 11:54:28 +0000 Subject: [Bugs] [Bug 1674225] flooding of "dict is NULL" logging & crash of client process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674225 Hubert changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |revirii at googlemail.com --- Comment #1 from Hubert --- up to 600.000 log entries here, in a replicate 3 setup. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
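Until the log message itself is fixed, the flooding described above can usually be damped from the CLI by raising the client log level; this only hides the warning, it does not address the cause. A sketch with a placeholder volume name:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Log only ERROR and above in the client (mount) logs of the affected volume:
gluster volume set <volname> diagnostics.client-log-level ERROR
# Revert to the default once a fixed build is installed:
gluster volume reset <volname> diagnostics.client-log-level
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~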
From bugzilla at redhat.com Tue Mar 5 12:03:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 12:03:11 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #7 from Hubert --- i set performance.readdir-ahead to off and watched network traffic for about 2 hours now, but traffic is still as high. 5-8 times higher than it was with old 4.1.x volumes. just curious: i see hundreds of thousands of these messages: [2019-03-05 12:02:38.423299] W [dict.c:761:dict_ref] (-->/usr/lib/x86_64-linux-gnu/glusterfs/5.3/xlator/performance/quick-read.so(+0x6df4) [0x7f0db452edf4] -->/usr/lib/x86_64-linux-gnu/glusterfs/5.3/xlator/performance/io-cache.so(+0xa39d) [0x7f0db474039d] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_ref+0x58) [0x7f0dbb7e4a38] ) 5-dict: dict is NULL [Invalid argument] see https://bugzilla.redhat.com/show_bug.cgi?id=1674225 - could this be related? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 5 13:56:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 13:56:00 +0000 Subject: [Bugs] [Bug 1676430] distribute: Perf regression in mkdir path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676430 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-05 13:56:00 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22238 (io-threads: Prioritize fops with NO_ROOT_SQUASH pid) merged (#3) on master by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 14:26:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 14:26:05 +0000 Subject: [Bugs] [Bug 1685576] New: DNS delegation record for rhhi-dev.gluster.org Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685576 Bug ID: 1685576 Summary: DNS delegation record for rhhi-dev.gluster.org Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: sabose at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: Please create DNS delegation record for rhhi-dev.gluster.org ns-1487.awsdns-57.org. ns-626.awsdns-14.net. ns-78.awsdns-09.com. ns-1636.awsdns-12.co.uk. Version-Release number of selected component (if applicable): NA -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 14:45:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 14:45:42 +0000 Subject: [Bugs] [Bug 1670382] parallel-readdir prevents directories and files listing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670382 --- Comment #5 from Nithya Balachandran --- Hi Marcin, Sorry, I did not get a chance to look into this. I will try to get someone else to take a look. Regards, Nithya -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
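Once the delegation requested in bug 1685576 is in place, it can be verified from any host; the NS set returned should match the four awsdns servers listed in that request:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
dig +short NS rhhi-dev.gluster.org
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~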
From bugzilla at redhat.com Tue Mar 5 14:52:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 14:52:57 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 --- Comment #56 from Nithya Balachandran --- (In reply to abhays from comment #53) > (In reply to Nithya Balachandran from comment #51) > > > > > > > > > I don't think so. I would recommend that you debug the tests on your systems > > > and post patches which will work on both. > > > > Please note what I am referring to is for you to look at the .t files and > > modify file names or remove hardcoding as required. > > Yes @Nithya, We understood that you want us to continue debugging the tests > and provide patches if fix is found. > While doing the same, we were able to fix the ./tests/bugs/nfs/bug-847622.t > with the following patch:- > > diff --git a/tests/bugs/nfs/bug-847622.t b/tests/bugs/nfs/bug-847622.t > index 3b836745a..f21884972 100755 > --- a/tests/bugs/nfs/bug-847622.t > +++ b/tests/bugs/nfs/bug-847622.t > @@ -28,7 +32,7 @@ cd $N0 > > # simple getfacl setfacl commands > TEST touch testfile > -TEST setfacl -m u:14:r testfile > +TEST setfacl -m u:14:r $B0/brick0/testfile > TEST getfacl testfile > > Please check, if the above patch can be merged. > > This fix is incorrect. The patch changes the test to modify the brick directly while the test is to check that these operations succeed on the mount. You need to see why it fails and then we can figure out the fix. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 14:56:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 14:56:46 +0000 Subject: [Bugs] [Bug 1685414] glusterd memory usage grows at 98 MB/h while running "gluster v profile" in a loop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685414 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-05 14:56:46 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22301 (glusterd: glusterd memory leak while running \"gluster v profile\" in a loop) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 5 15:05:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 15:05:32 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 --- Comment #57 from Nithya Balachandran --- (In reply to abhays from comment #55) > (In reply to Nithya Balachandran from comment #54) > > > > > > However, the test cases are still failing and only pass if x86 hash values > > > are provided(Refer to comment#8):- > > > ./tests/bugs/glusterfs/bug-902610.t > > > ./tests/bugs/posix/bug-1619720.t > > > > Please provide more information on what changes you tried. > > For tests/bugs/glusterfs/bug-902610.t:- > In the test case, after the kill_brick function is run, the mkdir $M0/dir1 > doesn't work and hence the get_layout function test fails. 
So,as a > workaround we tried not killing the brick and then checked the functionality > of the test case, after which the dir1 did get created in all the 4 bricks, > however, the test failed with the following output:- The mkdir function will fail if the hashed brick of the directory being created is down. In your case, the change in hashed values means the brick that was killed is the hashed subvol for the directory. Killing a different brick should cause it to succeed. In any case this is not a feature that we support anymore so I can just remove the test case. > Therefore, can these changes be added in the test case with a condition for > s390x separately? I do not think we should separate it out like this. The better way would be to just find 2 bricks that work for both big and little endian. I will try out your changes on a big endian system and see if this combination will work there as well. > > Also, We have a few queries on the tests behaviour. > When a directory or a file gets created, according to me, it should be > placed in the brick depending on its hash range and value of the > file/directory. > However, in the above test, as you can see, if we don't kill the > bricks{2,3}, the directory gets created in all the bricks{0,1,2,3}.So, does > it not consider hash values and range at this point or is it something to do > with mounting FUSE? The way dht creates files and directories is slightly different. For files, it calculates the hash and creates it in the subvolume in whose directory layout range it falls. For directories, it first tries to create it on the hashed subvol. If for some reason that fails, it will not be created on the other bricks. In this test, for s390x, one of the bricks killed was the hashed subvol so mkdir fails. The solution here is to make sure the bricks being killed are not the hashed subvol in either big or little endian systems. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 15:06:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 15:06:42 +0000 Subject: [Bugs] [Bug 1685576] DNS delegation record for rhhi-dev.gluster.org In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685576 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #1 from M. Scherer --- For the context, that's for a test instance of openshift hosted on AWS. The delegation got created, please tell me if ther eis any issue -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 15:28:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 15:28:38 +0000 Subject: [Bugs] [Bug 1685023] FD processes for larger files are not closed soon after FOP finished In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685023 david.spisla at iternity.com changed: What |Removed |Added ---------------------------------------------------------------------------- Component|bitrot |core -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. 
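To see comment #57 in practice, the hashed subvol for a new entry can be worked out from the parent layout that each brick stores; this reuses the same getfattr technique as the test, with the test framework's default brick paths used purely as an illustration:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Print the parent (here: volume root) layout stored on every brick. DHT sends
# the mkdir to the brick whose range contains the hash of the new entry's name,
# so that brick must be up for the mkdir to succeed.
for b in /d/backends/patchy0 /d/backends/patchy1 /d/backends/patchy2 /d/backends/patchy3; do
    echo "== $b =="
    getfattr -n trusted.glusterfs.dht -e hex "$b" 2>/dev/null | grep dht
done
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~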
From bugzilla at redhat.com Tue Mar 5 15:46:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 15:46:23 +0000 Subject: [Bugs] [Bug 1684569] Upgrade from 4.1 and 5 is broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684569 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Sanju --- upstream patch: https://review.gluster.org/#/c/glusterfs/+/22297/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 5 16:00:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 16:00:02 +0000 Subject: [Bugs] [Bug 1670382] parallel-readdir prevents directories and files listing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670382 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |pgurusid at redhat.com Flags| |needinfo?(locbus at gmail.com) --- Comment #6 from Poornima G --- So the files don't appear even on the bricks is it? That's very strange. Did you check on both the servers and all the 6 bricks. We will try to check if the issue is seen even on fuse mount or is it only specific to samba access.Can you try the following steps and let us know the output: Create two temporary fuse mounts and create a file and directory from one mount point. List it on the same mount, Do you see the file and directory? List it on the other mount too, are the files visible? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 16:01:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 16:01:46 +0000 Subject: [Bugs] [Bug 1670382] parallel-readdir prevents directories and files listing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670382 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED QA Contact| |pgurusid at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 17:49:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 17:49:41 +0000 Subject: [Bugs] [Bug 1648768] Tracker bug for all leases related issues In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648768 --- Comment #21 from Worker Ant --- REVIEW: https://review.gluster.org/22287 (leases: Do not process internal fops) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 5 18:16:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 18:16:55 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22302 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
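Poornima's suggested check in bug 1670382 comment #6 translates to roughly the following; server name, volume name and mount paths are placeholders:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Two temporary FUSE mounts of the same volume:
mkdir -p /mnt/test1 /mnt/test2
mount -t glusterfs server1:/volname /mnt/test1
mount -t glusterfs server1:/volname /mnt/test2
# Create a file and a directory from the first mount:
mkdir /mnt/test1/testdir
touch /mnt/test1/testfile
# List from both mounts; the entries should be visible in each:
ls -l /mnt/test1
ls -l /mnt/test2
# Clean up:
umount /mnt/test1 /mnt/test2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~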
From bugzilla at redhat.com Tue Mar 5 18:16:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 18:16:55 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #577 from Worker Ant --- REVIEW: https://review.gluster.org/22302 (core: avoid dynamic TLS allocation when possible) posted (#2) for review on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 18:56:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 18:56:54 +0000 Subject: [Bugs] [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 Artem Russakovskii changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |archon810 at gmail.com --- Comment #3 from Artem Russakovskii --- Noticed the same when upgrading from 5.3 to 5.4, as mentioned. I'm confused though. Is actual replication affected? The 5.4 server and the 3x 5.3 servers still show heal info as all 4 connected, and the files seem to be replicating correctly as well. So what's actually affected - just the status command? Is it fixable by tweaking transport.socket.ssl-enabled? Does upgrading all servers to 5.4 resolve it, or should we revert to 5.3? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 5 19:09:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 05 Mar 2019 19:09:34 +0000 Subject: [Bugs] [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 --- Comment #4 from Artem Russakovskii --- Ended up downgrading to 5.3 just in case. Peer status and volume status are OK now. zypper install --oldpackage glusterfs-5.3-lp150.100.1 Loading repository data... Reading installed packages... Resolving package dependencies... Problem: glusterfs-5.3-lp150.100.1.x86_64 requires libgfapi0 = 5.3, but this requirement cannot be provided not installable providers: libgfapi0-5.3-lp150.100.1.x86_64[glusterfs] Solution 1: Following actions will be done: downgrade of libgfapi0-5.4-lp150.100.1.x86_64 to libgfapi0-5.3-lp150.100.1.x86_64 downgrade of libgfchangelog0-5.4-lp150.100.1.x86_64 to libgfchangelog0-5.3-lp150.100.1.x86_64 downgrade of libgfrpc0-5.4-lp150.100.1.x86_64 to libgfrpc0-5.3-lp150.100.1.x86_64 downgrade of libgfxdr0-5.4-lp150.100.1.x86_64 to libgfxdr0-5.3-lp150.100.1.x86_64 downgrade of libglusterfs0-5.4-lp150.100.1.x86_64 to libglusterfs0-5.3-lp150.100.1.x86_64 Solution 2: do not install glusterfs-5.3-lp150.100.1.x86_64 Solution 3: break glusterfs-5.3-lp150.100.1.x86_64 by ignoring some of its dependencies Choose from above solutions by number or cancel [1/2/3/c] (c): 1 Resolving dependencies... Resolving package dependencies... The following 6 packages are going to be downgraded: glusterfs libgfapi0 libgfchangelog0 libgfrpc0 libgfxdr0 libglusterfs0 6 packages to downgrade. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Wed Mar 6 02:14:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 02:14:12 +0000 Subject: [Bugs] [Bug 1685771] New: glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685771 Bug ID: 1685771 Summary: glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA Product: GlusterFS Version: 6 Status: NEW Component: glusterd Keywords: Regression Severity: high Priority: high Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: amukherj at redhat.com, bmekala at redhat.com, bugs at gluster.org, dahorak at redhat.com, mbukatov at redhat.com, nchilaka at redhat.com, rcyriac at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Depends On: 1684648 Blocks: 1685414 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1684648 [Bug 1684648] glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA https://bugzilla.redhat.com/show_bug.cgi?id=1685414 [Bug 1685414] glusterd memory usage grows at 98 MB/h while running "gluster v profile" in a loop -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 02:14:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 02:14:12 +0000 Subject: [Bugs] [Bug 1685414] glusterd memory usage grows at 98 MB/h while running "gluster v profile" in a loop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685414 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1685771 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1685771 [Bug 1685771] glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 02:14:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 02:14:38 +0000 Subject: [Bugs] [Bug 1685771] glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685771 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 02:30:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 02:30:14 +0000 Subject: [Bugs] [Bug 1685771] glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685771 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22303 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Mar 6 02:30:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 02:30:15 +0000 Subject: [Bugs] [Bug 1685771] glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685771 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22303 (glusterd: glusterd memory leak while running \"gluster v profile\" in a loop) posted (#2) for review on release-6 by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 03:16:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 03:16:05 +0000 Subject: [Bugs] [Bug 1676430] distribute: Perf regression in mkdir path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676430 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22304 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 03:16:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 03:16:06 +0000 Subject: [Bugs] [Bug 1676430] distribute: Perf regression in mkdir path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676430 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22304 (io-threads: Prioritize fops with NO_ROOT_SQUASH pid) posted (#1) for review on release-6 by Susant Palai -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 03:18:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 03:18:29 +0000 Subject: [Bugs] [Bug 1676430] distribute: Perf regression in mkdir path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676430 --- Comment #6 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22304 (io-threads: Prioritize fops with NO_ROOT_SQUASH pid) posted (#2) for review on release-6 by Susant Palai -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 03:18:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 03:18:30 +0000 Subject: [Bugs] [Bug 1676430] distribute: Perf regression in mkdir path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676430 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22304 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 6 03:18:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 03:18:31 +0000 Subject: [Bugs] [Bug 1676429] distribute: Perf regression in mkdir path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676429 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22304 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 03:18:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 03:18:32 +0000 Subject: [Bugs] [Bug 1676429] distribute: Perf regression in mkdir path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676429 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22304 (io-threads: Prioritize fops with NO_ROOT_SQUASH pid) posted (#2) for review on release-6 by Susant Palai -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 03:24:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 03:24:43 +0000 Subject: [Bugs] [Bug 1676430] distribute: Perf regression in mkdir path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676430 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-03-05 13:56:00 |2019-03-06 03:24:43 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 05:49:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 05:49:03 +0000 Subject: [Bugs] [Bug 1685576] DNS delegation record for rhhi-dev.gluster.org In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685576 Rohan CJ changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |rojoseph at redhat.com Flags| |needinfo?(mscherer at redhat.c | |om) --- Comment #2 from Rohan CJ --- The delegation doesn't seem to be working. I don't know if DNS propagation is a concern here, but I did also try directly querying ns1.redhat.com. Here is the link to the kind of delegation we want for openshift: https://github.com/openshift/installer/blob/master/docs/user/aws/route53.md#step-4b-subdomain---perform-dns-delegation $ dig rhhi-dev.gluster.org ; <<>> DiG 9.11.4-P2-RedHat-9.11.4-13.P2.fc28 <<>> rhhi-dev.gluster.org ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18531 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;rhhi-dev.gluster.org. IN A ;; AUTHORITY SECTION: gluster.org. 278 IN SOA ns1.redhat.com. noc.redhat.com. 2019030501 3600 1800 604800 86400 ;; Query time: 81 msec ;; SERVER: 10.68.5.26#53(10.68.5.26) ;; WHEN: Wed Mar 06 11:12:41 IST 2019 ;; MSG SIZE rcvd: 103 $ dig @ns-1487.awsdns-57.org. rhhi-dev.gluster.org ; <<>> DiG 9.11.4-P2-RedHat-9.11.4-13.P2.fc28 <<>> @ns-1487.awsdns-57.org. 
rhhi-dev.gluster.org ; (2 servers found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58544 ;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1 ;; WARNING: recursion requested but not available ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;rhhi-dev.gluster.org. IN A ;; AUTHORITY SECTION: rhhi-dev.gluster.org. 900 IN SOA ns-1487.awsdns-57.org. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400 ;; Query time: 39 msec ;; SERVER: 205.251.197.207#53(205.251.197.207) ;; WHEN: Wed Mar 06 11:13:27 IST 2019 ;; MSG SIZE rcvd: 131 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 07:35:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 07:35:31 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sheggodu at redhat.com Depends On| |1685771, 1511779 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1511779 [Bug 1511779] Garbage collect inactive inodes in fuse-bridge https://bugzilla.redhat.com/show_bug.cgi?id=1685771 [Bug 1685771] glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 07:35:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 07:35:31 +0000 Subject: [Bugs] [Bug 1685771] glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685771 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1672818 (glusterfs-6.0) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 07:50:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 07:50:10 +0000 Subject: [Bugs] [Bug 1685813] New: Not able to run centos-regression getting exception error Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685813 Bug ID: 1685813 Summary: Not able to run centos-regression getting exception error Product: GlusterFS Version: 6 Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: Not able to run centos-regression; the build is getting an exception error. Version-Release number of selected component (if applicable): How reproducible: https://build.gluster.org/job/centos7-regression/5017/consoleFull Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Wed Mar 6 08:00:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 08:00:12 +0000 Subject: [Bugs] [Bug 1670382] parallel-readdir prevents directories and files listing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670382 Marcin changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(locbus at gmail.com) | --- Comment #7 from Marcin --- In terms of the visibility of files directly on the bricks of the server, I described this a bit imprecisely. The files aren't visible at the point where the entire gluster resource is mounted at the server OS level - /glusterfs really was mounted by the native client, and in this way the "fuse mount" has been checked as well (there is an entry in the fstab file, i.e. /glusterfs, and the mount type is fuse.glusterfs). Of course, they are visible at the level of the individual bricks. I apologize for the inaccuracy. In my spare time, I'll try to do some more thorough tests to show you the result. I appreciate your commitment to this. Regards Marcin -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 09:29:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 09:29:38 +0000 Subject: [Bugs] [Bug 1685576] DNS delegation record for rhhi-dev.gluster.org In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685576 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(mscherer at redhat.c | |om) | --- Comment #3 from M. Scherer --- Seems there is an issue with the DNS server, as it works, but only on the internal server on the RH LAN. I am slightly puzzled by that. I will have to escalate that to IT. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 09:45:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 09:45:53 +0000 Subject: [Bugs] [Bug 1685813] Not able to run centos-regression getting exception error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685813 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #1 from M. Scherer --- Yeah, there is a patch that seems to break builders one by one. Dkhandel told me this morning that we lost a lot of AWS builders (8 out of 10), and upon investigation, they all ran regression tests for that change before becoming offline: https://review.gluster.org/#/c/glusterfs/+/22290/ As said on Gerrit, I strongly suspect that the logic change results in the test spawning an infinite loop, since the builders we recovered didn't show any trace of errors in the logs, which is the kind of symptom you get with an infinite loop (while still answering ping, since ICMP is handled in the kernel). So I would suggest investigating ./tests/00-geo-rep/00-georep-verify-setup.t, as I see that as the last test run before losing contact with the builders. In fact, since the 2nd iteration of the patch worked, I guess the issue is in the 3rd iteration. In any case, I think that's not an infra issue. -- You are receiving this mail because: You are on the CC list for the bug.
You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 09:54:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 09:54:26 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Jacob changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(jsecchiero at enter. | |eu) | --- Comment #8 from Jacob --- Disabling readdir-ahead doesn't change the throughput. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 10:07:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 10:07:59 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #9 from Alberto Bengoa --- Neither for me. BTW, shouldn't read-ahead/readdir-ahead generate traffic in the opposite direction (server -> client)? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 10:53:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 10:53:31 +0000 Subject: [Bugs] [Bug 1685813] Not able to run centos-regression getting exception error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685813 --- Comment #2 from M. Scherer --- I did reboot the broken builders and they are back. I also looked at the patch, but didn't find anything, so I suspect there is some logic that escapes me. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 10:58:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 10:58:53 +0000 Subject: [Bugs] [Bug 1685813] Not able to run centos-regression getting exception error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685813 --- Comment #3 from Mohit Agrawal --- Thanks, Michael. There is some issue in my patch; I will upload a new one. You can close the bugzilla. Thanks, Mohit Agrawal -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 11:20:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 11:20:38 +0000 Subject: [Bugs] [Bug 1685944] New: WORM-XLator: Maybe integer overflow when computing new atime Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685944 Bug ID: 1685944 Summary: WORM-XLator: Maybe integer overflow when computing new atime Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: david.spisla at iternity.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: There may be an integer overflow in the WORM xlator.
The structs: typedef struct { uint8_t worm : 1; uint8_t retain : 1; uint8_t legal_hold : 1; uint8_t ret_mode : 1; int64_t ret_period; int64_t auto_commit_period; } worm_reten_state_t; typedef struct { gf_boolean_t readonly_or_worm_enabled; gf_boolean_t worm_file; gf_boolean_t worm_files_deletable; int64_t reten_period; int64_t com_period; int reten_mode; time_t start_time; } read_only_priv_t; from read-only.h use uint64_t values to store the retention and autocommit periods. This seems to be dangerous, since in worm-helper.c the function worm_set_state computes, at line 97: stbuf->ia_atime = time(NULL) + retention_state->ret_period; stbuf->ia_atime is an int64_t because of the definition of struct iattr. So if a very high retention period is stored, there may be an integer overflow. What could the solution be? Using int64_t instead of uint64_t may reduce the probability of the overflow occurring. Version-Release number of selected component (if applicable): Gluster v5.4 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 11:28:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 11:28:23 +0000 Subject: [Bugs] [Bug 1685944] WORM-XLator: Maybe integer overflow when computing new atime In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685944 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22309 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 11:40:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 11:40:49 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #10 from Nithya Balachandran --- (In reply to Jacob from comment #4) > i'm not able to upload in the bugzilla portal due to the size of the pcap. > You can download from here: > > https://mega.nz/#!FNY3CS6A!70RpciIzDgNWGwbvEwH-_b88t9e1QVOXyLoN09CG418 @Poornima, the following are the calls and instances from the above: 104 proc-1 (stat) 8259 proc-11 (open) 46 proc-14 (statfs) 8239 proc-15 (flush) 8 proc-18 (getxattr) 68 proc-2 (readlink) 5576 proc-27 (lookup) 8388 proc-41 (forget) Not sure if it helps. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 12:19:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 12:19:13 +0000 Subject: [Bugs] [Bug 1663519] Memory leak when smb.conf has "store dos attributes = yes" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663519 --- Comment #3 from ryan at magenta.tv --- Hello, Could you advise if there is any update on this? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 12:24:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 12:24:10 +0000 Subject: [Bugs] [Bug 1313567] flooding of "dict is NULL" logging In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1313567 --- Comment #23 from ryan at magenta.tv --- Hello, Is there any progress with this?
We've had multiple systems consume the entire root volume due to the log file filling the volume. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 14:19:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 14:19:52 +0000 Subject: [Bugs] [Bug 1674225] flooding of "dict is NULL" logging & crash of client process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674225 --- Comment #2 from Hubert --- just wanted to mention that these log entries appear in a fresh 5.3 install, so no upgrade from a previous glusterfs version here. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 14:25:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 14:25:09 +0000 Subject: [Bugs] [Bug 1686009] New: gluster fuse crashed with segmentation fault possibly due to dentry not found Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686009 Bug ID: 1686009 Summary: gluster fuse crashed with segmentation fault possibly due to dentry not found Product: GlusterFS Version: mainline Status: NEW Component: core Keywords: ZStream Severity: urgent Priority: high Assignee: atumball at redhat.com Reporter: atumball at redhat.com CC: bmekala at redhat.com, bugs at gluster.org, jahernan at redhat.com, nbalacha at redhat.com, nchilaka at redhat.com, pkarampu at redhat.com, rcyriac at redhat.com, rgowdapp at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, srangana at redhat.com, vbellur at redhat.com Depends On: 1685078 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1685078 +++ Description of problem: ===================== was performing some non-functional tests for testing rpc fixes and found that one of the fuse clients crashed as below [2019-03-01 13:37:02.398653] I [dict.c:471:dict_get] (-->/usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0x6228d) [0x7f5147de128d] -->/usr/lib64/glusterfs/3.12.2/xlator/cluster/distribute.so(+0x202f7) [0x7f5147afc2f7] -->/lib64/libglusterfs.so.0(dict_get+0x10c) [0x7f515a5b2d3c] ) 13-dict: !this || key=trusted.glusterfs.dht.mds [Invalid argument] [2019-03-01 13:37:02.711187] W [inode.c:197:__is_dentry_hashed] (-->/lib64/libglusterfs.so.0(__inode_path+0x68) [0x7f515a5cd1c8] -->/lib64/libglusterfs.so.0(+0x3add4) [0x7f515a5cadd4] -->/lib64/libglusterfs.so.0(+0x3ad7e) [0x7f515a5cad7e] ) 0-fuse: dentry not found pending frames: frame : type(1) op(UNLINK) frame : type(1) op(UNLINK) frame : type(1) op(READDIRP) frame : type(1) op(OPEN) frame : type(1) op(STAT) frame : type(0) op(0) frame : type(0) op(0) patchset: git://git.gluster.org/glusterfs.git signal received: 11 time of crash: 2019-03-01 13:37:02 configuration details: argp 1 backtrace 1 dlfcn 1 libpthread 1 llistxattr 1 setfsid 1 spinlock 1 epoll.h 1 xattr.h 1 st_atim.tv_nsec 1 package-string: glusterfs 3.12.2 /lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x9d)[0x7f515a5bbb9d] /lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7f515a5c6114] /lib64/libc.so.6(+0x36280)[0x7f5158bf8280] /lib64/libglusterfs.so.0(+0x3adc2)[0x7f515a5cadc2] /lib64/libglusterfs.so.0(__inode_path+0x68)[0x7f515a5cd1c8] /lib64/libglusterfs.so.0(inode_path+0x31)[0x7f515a5cd551] /lib64/libglusterfs.so.0(loc_touchup+0x7a)[0x7f515a5b9dba] /usr/lib64/glusterfs/3.12.2/xlator/mount/fuse.so(+0x6f8b)[0x7f515196df8b] 
/usr/lib64/glusterfs/3.12.2/xlator/mount/fuse.so(+0x7665)[0x7f515196e665] /usr/lib64/glusterfs/3.12.2/xlator/mount/fuse.so(+0x7bbd)[0x7f515196ebbd] /usr/lib64/glusterfs/3.12.2/xlator/mount/fuse.so(+0x7c8e)[0x7f515196ec8e] /usr/lib64/glusterfs/3.12.2/xlator/mount/fuse.so(+0x7cd0)[0x7f515196ecd0] /usr/lib64/glusterfs/3.12.2/xlator/mount/fuse.so(+0x1f702)[0x7f5151986702] /lib64/libpthread.so.0(+0x7dd5)[0x7f51593f7dd5] /lib64/libc.so.6(clone+0x6d)[0x7f5158cbfead] --------- warning: core file may not match specified executable file. Reading symbols from /usr/sbin/glusterfsd...Reading symbols from /usr/lib/debug/usr/sbin/glusterfsd.debug...done. done. Missing separate debuginfo for Try: yum --enablerepo='*debug*' install /usr/lib/debug/.build-id/73/3d1c681cfbd8bbeb11e8b7f80876a9aed6bb74 [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Core was generated by `/usr/sbin/glusterfs --volfile-server=my-machine.redhat.com --volf'. Program terminated with signal 11, Segmentation fault. #0 __dentry_search_arbit (inode=inode at entry=0x7f50ec000e98) at inode.c:1450 1450 list_for_each_entry (trav, &inode->dentry_list, inode_list) { Missing separate debuginfos, use: debuginfo-install glibc-2.17-260.el7_6.3.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-37.el7_6.x86_64 libcom_err-1.42.9-13.el7.x86_64 libgcc-4.8.5-36.el7.x86_64 libselinux-2.5-14.1.el7.x86_64 libuuid-2.23.2-59.el7.x86_64 openssl-libs-1.0.2k-16.el7.x86_64 pcre-8.32-17.el7.x86_64 zlib-1.2.7-18.el7.x86_64 (gdb) bt #0 __dentry_search_arbit (inode=inode at entry=0x7f50ec000e98) at inode.c:1450 #1 0x00007f515a5cd1c8 in __inode_path (inode=inode at entry=0x7f50dc01dfc8, name=name at entry=0x0, bufp=bufp at entry=0x7f5145ed6d30) at inode.c:1551 #2 0x00007f515a5cd551 in inode_path (inode=0x7f50dc01dfc8, name=name at entry=0x0, bufp=bufp at entry=0x7f5145ed6d30) at inode.c:1642 #3 0x00007f515a5b9dba in loc_touchup (loc=0x7f5069ee43c0, name=) at xlator.c:880 #4 0x00007f515196df8b in fuse_resolve_loc_touchup (state=0x7f5069ee43a0) at fuse-resolve.c:33 #5 fuse_resolve_continue (state=0x7f5069ee43a0) at fuse-resolve.c:704 #6 0x00007f515196e665 in fuse_resolve_inode (state=0x7f5069ee43a0) at fuse-resolve.c:364 #7 0x00007f515196ebbd in fuse_resolve (state=0x7f5069ee43a0) at fuse-resolve.c:651 #8 0x00007f515196ec8e in fuse_resolve_all (state=) at fuse-resolve.c:679 #9 0x00007f515196ecd0 in fuse_resolve_and_resume (state=0x7f5069ee43a0, fn=0x7f5151972c10 ) at fuse-resolve.c:718 #10 0x00007f5151986702 in fuse_thread_proc (data=0x563689a35f10) at fuse-bridge.c:5781 #11 0x00007f51593f7dd5 in start_thread () from /lib64/libpthread.so.0 #12 0x00007f5158cbfead in clone () from /lib64/libc.so.6 (gdb) t a a bt Thread 11 (Thread 0x7f515aaa6780 (LWP 4229)): #0 0x00007f51593f8f47 in pthread_join () from /lib64/libpthread.so.0 #1 0x00007f515a61af78 in event_dispatch_epoll (event_pool=0x563689a2e250) at event-epoll.c:846 #2 0x0000563687bed538 in main (argc=4, argv=) at glusterfsd.c:2692 Thread 10 (Thread 0x7f51456d6700 (LWP 4257)): #0 0x00007f51593fb965 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f515197039b in timed_response_loop (data=0x563689a35f10) at fuse-bridge.c:4660 #2 0x00007f51593f7dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f5158cbfead in clone () from /lib64/libc.so.6 Thread 9 (Thread 0x7f514cc97700 (LWP 4251)): #0 0x00007f5158cc0483 in epoll_wait () from /lib64/libc.so.6 #1 0x00007f515a61a6e8 in event_dispatch_epoll_worker (data=0x563689a8bd60) at 
event-epoll.c:749 #2 0x00007f51593f7dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f5158cbfead in clone () from /lib64/libc.so.6 Thread 8 (Thread 0x7f514d498700 (LWP 4250)): #0 0x00007f5158cc0483 in epoll_wait () from /lib64/libc.so.6 #1 0x00007f515a61a6e8 in event_dispatch_epoll_worker (data=0x563689a8ba90) at event-epoll.c:749 #2 0x00007f51593f7dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f5158cbfead in clone () from /lib64/libc.so.6 Thread 7 (Thread 0x7f5150163700 (LWP 4234)): #0 0x00007f51593fbd12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f515a5f7dd8 in syncenv_task (proc=proc at entry=0x563689a49b20) at syncop.c:603 #2 0x00007f515a5f8ca0 in syncenv_processor (thdata=0x563689a49b20) at syncop.c:695 #3 0x00007f51593f7dd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f5158cbfead in clone () from /lib64/libc.so.6 Thread 6 (Thread 0x7f5151165700 (LWP 4231)): #0 0x00007f51593ff361 in sigwait () from /lib64/libpthread.so.0 #1 0x0000563687bf0c7b in glusterfs_sigwaiter (arg=) at glusterfsd.c:2242 #2 0x00007f51593f7dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f5158cbfead in clone () from /lib64/libc.so.6 Thread 5 (Thread 0x7f51322ff700 (LWP 22247)): #0 0x00007f51593fbd12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f515a5f7dd8 in syncenv_task (proc=proc at entry=0x563689a4a2a0) at syncop.c:603 #2 0x00007f515a5f8ca0 in syncenv_processor (thdata=0x563689a4a2a0) at syncop.c:695 #3 0x00007f51593f7dd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f5158cbfead in clone () from /lib64/libc.so.6 Thread 4 (Thread 0x7f5150964700 (LWP 4232)): #0 0x00007f5158c86e2d in nanosleep () from /lib64/libc.so.6 #1 0x00007f5158c86cc4 in sleep () from /lib64/libc.so.6 #2 0x00007f515a5e4b9d in pool_sweeper (arg=) at mem-pool.c:470 #3 0x00007f51593f7dd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f5158cbfead in clone () from /lib64/libc.so.6 Thread 3 (Thread 0x7f5144ed5700 (LWP 4258)): #0 0x00007f51593fb965 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f51519708d3 in notify_kernel_loop (data=) at fuse-bridge.c:4584 #2 0x00007f51593f7dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f5158cbfead in clone () from /lib64/libc.so.6 Thread 2 (Thread 0x7f5151966700 (LWP 4230)): #0 0x00007f51593fee3d in nanosleep () from /lib64/libpthread.so.0 #1 0x00007f515a5c9f86 in gf_timer_proc (data=0x563689a49300) at timer.c:174 #2 0x00007f51593f7dd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f5158cbfead in clone () from /lib64/libc.so.6 ---Type to continue, or q to quit--- Thread 1 (Thread 0x7f5145ed7700 (LWP 4256)): #0 __dentry_search_arbit (inode=inode at entry=0x7f50ec000e98) at inode.c:1450 #1 0x00007f515a5cd1c8 in __inode_path (inode=inode at entry=0x7f50dc01dfc8, name=name at entry=0x0, bufp=bufp at entry=0x7f5145ed6d30) at inode.c:1551 #2 0x00007f515a5cd551 in inode_path (inode=0x7f50dc01dfc8, name=name at entry=0x0, bufp=bufp at entry=0x7f5145ed6d30) at inode.c:1642 #3 0x00007f515a5b9dba in loc_touchup (loc=0x7f5069ee43c0, name=) at xlator.c:880 #4 0x00007f515196df8b in fuse_resolve_loc_touchup (state=0x7f5069ee43a0) at fuse-resolve.c:33 #5 fuse_resolve_continue (state=0x7f5069ee43a0) at fuse-resolve.c:704 #6 0x00007f515196e665 in fuse_resolve_inode (state=0x7f5069ee43a0) at fuse-resolve.c:364 #7 0x00007f515196ebbd in fuse_resolve (state=0x7f5069ee43a0) at fuse-resolve.c:651 #8 0x00007f515196ec8e in fuse_resolve_all (state=) 
at fuse-resolve.c:679 #9 0x00007f515196ecd0 in fuse_resolve_and_resume (state=0x7f5069ee43a0, fn=0x7f5151972c10 ) at fuse-resolve.c:718 #10 0x00007f5151986702 in fuse_thread_proc (data=0x563689a35f10) at fuse-bridge.c:5781 #11 0x00007f51593f7dd5 in start_thread () from /lib64/libpthread.so.0 #12 0x00007f5158cbfead in clone () from /lib64/libc.so.6 (gdb) q ################# on another client too hit same crash ########## sing host libthread_db library "/lib64/libthread_db.so.1". Core was generated by `/usr/sbin/glusterfs --acl --volfile-server=my-machine.redhat.com'. Program terminated with signal 11, Segmentation fault. #0 __dentry_search_arbit (inode=inode at entry=0x7f0a5002eab8) at inode.c:1450 1450 list_for_each_entry (trav, &inode->dentry_list, inode_list) { Missing separate debuginfos, use: debuginfo-install glusterfs-fuse-3.12.2-43.el7.x86_64 (gdb) bt #0 __dentry_search_arbit (inode=inode at entry=0x7f0a5002eab8) at inode.c:1450 #1 0x00007f0ac23a01c8 in __inode_path (inode=inode at entry=0x7f0a50009e68, name=name at entry=0x7f0a028f1b50 "fresh_top.log", bufp=bufp at entry=0x7f0985ba36e8) at inode.c:1551 #2 0x00007f0ac23a0551 in inode_path (inode=0x7f0a50009e68, name=0x7f0a028f1b50 "fresh_top.log", bufp=bufp at entry=0x7f0985ba36e8) at inode.c:1642 #3 0x00007f0ab9740489 in fuse_resolve_entry (state=0x7f0985ba3570) at fuse-resolve.c:99 #4 0x00007f0ab974162d in fuse_resolve_parent (state=state at entry=0x7f0985ba3570) at fuse-resolve.c:312 #5 0x00007f0ab9741998 in fuse_resolve (state=0x7f0985ba3570) at fuse-resolve.c:647 #6 0x00007f0ab9741c8e in fuse_resolve_all (state=) at fuse-resolve.c:679 #7 0x00007f0ab9741cd0 in fuse_resolve_and_resume (state=0x7f0985ba3570, fn=0x7f0ab9744e40 ) at fuse-resolve.c:718 ---Type to continue, or q to quit--- #8 0x00007f0ab9759702 in fuse_thread_proc (data=0x559b1ebcf0c0) at fuse-bridge.c:5781 #9 0x00007f0ac11cadd5 in start_thread () from /lib64/libpthread.so.0 #10 0x00007f0ac0a92ead in clone () from /lib64/libc.so.6 (gdb) t a a bt Thread 11 (Thread 0x7f0ac2879780 (LWP 4246)): #0 0x00007f0ac11cbf47 in pthread_join () from /lib64/libpthread.so.0 #1 0x00007f0ac23edf78 in event_dispatch_epoll (event_pool=0x559b1ebc7250) at event-epoll.c:846 #2 0x0000559b1de33538 in main (argc=5, argv=) at glusterfsd.c:2692 Thread 10 (Thread 0x7f0aad2ae700 (LWP 4260)): #0 0x00007f0ac11ce965 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f0ab974339b in timed_response_loop (data=0x559b1ebcf0c0) at fuse-bridge.c:4660 #2 0x00007f0ac11cadd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f0ac0a92ead in clone () from /lib64/libc.so.6 Thread 9 (Thread 0x7f0ab7f36700 (LWP 4251)): ---Type to continue, or q to quit--- #0 0x00007f0ac11ced12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f0ac23cadd8 in syncenv_task (proc=proc at entry=0x559b1ebe2e40) at syncop.c:603 #2 0x00007f0ac23cbca0 in syncenv_processor (thdata=0x559b1ebe2e40) at syncop.c:695 #3 0x00007f0ac11cadd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f0ac0a92ead in clone () from /lib64/libc.so.6 Thread 8 (Thread 0x7f0ab8737700 (LWP 4250)): #0 0x00007f0ac0a59e2d in nanosleep () from /lib64/libc.so.6 #1 0x00007f0ac0a59cc4 in sleep () from /lib64/libc.so.6 #2 0x00007f0ac23b7b9d in pool_sweeper (arg=) at mem-pool.c:470 #3 0x00007f0ac11cadd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f0ac0a92ead in clone () from /lib64/libc.so.6 ---Type to continue, or q to quit--- Thread 7 (Thread 0x7f0ab9739700 (LWP 4248)): #0 0x00007f0ac11d1e3d in 
nanosleep () from /lib64/libpthread.so.0 #1 0x00007f0ac239cf86 in gf_timer_proc (data=0x559b1ebe2620) at timer.c:174 #2 0x00007f0ac11cadd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f0ac0a92ead in clone () from /lib64/libc.so.6 Thread 6 (Thread 0x7f0ab4a6a700 (LWP 4256)): #0 0x00007f0ac11d14ed in __lll_lock_wait () from /lib64/libpthread.so.0 #1 0x00007f0ac11ccdcb in _L_lock_883 () from /lib64/libpthread.so.0 #2 0x00007f0ac11ccc98 in pthread_mutex_lock () from /lib64/libpthread.so.0 #3 0x00007f0ac239eee9 in inode_unref (inode=0x7f09850f0d98) at inode.c:668 #4 0x00007f0ac238ca02 in loc_wipe (loc=loc at entry=0x7f09865af178) at xlator.c:768 #5 0x00007f0ab40231ee in client_local_wipe (local=local at entry=0x7f09865af178) at client-helpers.c:127 ---Type to continue, or q to quit--- #6 0x00007f0ab4032f0d in client3_3_lookup_cbk (req=, iov=, count=, myframe=0x7f098676f668) at client-rpc-fops.c:2872 #7 0x00007f0ac2134a00 in rpc_clnt_handle_reply (clnt=clnt at entry=0x7f097ca19cb0, pollin=pollin at entry=0x7f097ccffd50) at rpc-clnt.c:778 #8 0x00007f0ac2134d6b in rpc_clnt_notify (trans=, mydata=0x7f097ca19ce0, event=, data=0x7f097ccffd50) at rpc-clnt.c:971 #9 0x00007f0ac2130ae3 in rpc_transport_notify (this=this at entry=0x7f098e451610, event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7f097ccffd50) at rpc-transport.c:557 #10 0x00007f0ab6d22586 in socket_event_poll_in (this=this at entry=0x7f098e451610, notify_handled=) at socket.c:2322 #11 0x00007f0ab6d24bca in socket_event_handler (fd=33, idx=26, gen=4, data=0x7f098e451610, poll_in=, poll_out=, poll_err=0, event_thread_died=0 '\000') at socket.c:2482 #12 0x00007f0ac23ed870 in event_dispatch_epoll_handler (event=0x7f0ab4a69e70, event_pool=0x559b1ebc7250) ---Type to continue, or q to quit--- at event-epoll.c:643 #13 event_dispatch_epoll_worker (data=0x559b1ec250a0) at event-epoll.c:759 #14 0x00007f0ac11cadd5 in start_thread () from /lib64/libpthread.so.0 #15 0x00007f0ac0a92ead in clone () from /lib64/libc.so.6 Thread 5 (Thread 0x7f0ab526b700 (LWP 4255)): #0 0x00007f0ac11d14ed in __lll_lock_wait () from /lib64/libpthread.so.0 #1 0x00007f0ac11ccdcb in _L_lock_883 () from /lib64/libpthread.so.0 #2 0x00007f0ac11ccc98 in pthread_mutex_lock () from /lib64/libpthread.so.0 #3 0x00007f0ac23a1819 in inode_is_linked (inode=inode at entry=0x7f0a4c03b8f8) at inode.c:2490 #4 0x00007f0aafdd2693 in afr_read_subvol_select_by_policy (inode=inode at entry=0x7f0a4c03b8f8, this=this at entry=0x7f097d715a70, readable=readable at entry=0x7f0ab526a420 "\001\001\001\265\n\177", args=args at entry=0x0) at afr-common.c:1685 ---Type to continue, or q to quit--- #5 0x00007f0aafdd29d6 in afr_read_subvol_get (inode=inode at entry=0x7f0a4c03b8f8, this=0x7f097d715a70, subvol_p=subvol_p at entry=0x0, readables=readables at entry=0x0, event_p=event_p at entry=0x0, type=type at entry=AFR_DATA_TRANSACTION, args=args at entry=0x0) at afr-common.c:1771 #6 0x00007f0aafde1050 in afr_get_parent_read_subvol (readable=0x7f0ab526a540 "\001\001\001|\t\177", replies=, parent=0x7f0a4c03b8f8, this=) at afr-common.c:2167 #7 afr_lookup_done (frame=frame at entry=0x7f098664dcb8, this=this at entry=0x7f097d715a70) at afr-common.c:2319 #8 0x00007f0aafde2058 in afr_lookup_metadata_heal_check (frame=frame at entry=0x7f098664dcb8, this=this at entry=0x7f097d715a70) at afr-common.c:2771 #9 0x00007f0aafde2a5b in afr_lookup_entry_heal (frame=frame at entry=0x7f098664dcb8, this=this at entry=0x7f097d715a70) at afr-common.c:2920 #10 0x00007f0aafde2e3d in afr_lookup_cbk 
(frame=frame at entry=0x7f098664dcb8, cookie=, this=0x7f097d715a70, op_ret=, op_errno=, inode=inode at entry=0x7f09850f0d98, buf=buf at entry=0x7f0ab526a900, xdata=0x7f0993075e88, postparent=postparent at entry=0x7f0ab526a970) ---Type to continue, or q to quit--- at afr-common.c:2968 #11 0x00007f0ab4032efd in client3_3_lookup_cbk (req=, iov=, count=, myframe=0x7f0985df1b58) at client-rpc-fops.c:2872 #12 0x00007f0ac2134a00 in rpc_clnt_handle_reply (clnt=clnt at entry=0x7f097c49ee60, pollin=pollin at entry=0x7f09937ef6c0) at rpc-clnt.c:778 #13 0x00007f0ac2134d6b in rpc_clnt_notify (trans=, mydata=0x7f097c49ee90, event=, data=0x7f09937ef6c0) at rpc-clnt.c:971 #14 0x00007f0ac2130ae3 in rpc_transport_notify (this=this at entry=0x7f097e3ea6a0, event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7f09937ef6c0) at rpc-transport.c:557 #15 0x00007f0ab6d22586 in socket_event_poll_in (this=this at entry=0x7f097e3ea6a0, notify_handled=) at socket.c:2322 #16 0x00007f0ab6d24bca in socket_event_handler (fd=29, idx=20, gen=7, data=0x7f097e3ea6a0, poll_in=, poll_out=, poll_err=0, event_thread_died=0 '\000') at socket.c:2482 ---Type to continue, or q to quit--- #17 0x00007f0ac23ed870 in event_dispatch_epoll_handler (event=0x7f0ab526ae70, event_pool=0x559b1ebc7250) at event-epoll.c:643 #18 event_dispatch_epoll_worker (data=0x559b1ec24dd0) at event-epoll.c:759 #19 0x00007f0ac11cadd5 in start_thread () from /lib64/libpthread.so.0 #20 0x00007f0ac0a92ead in clone () from /lib64/libc.so.6 Thread 4 (Thread 0x7f0ab7735700 (LWP 4252)): #0 0x00007f0ac11ced12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f0ac23cadd8 in syncenv_task (proc=proc at entry=0x559b1ebe3200) at syncop.c:603 #2 0x00007f0ac23cbca0 in syncenv_processor (thdata=0x559b1ebe3200) at syncop.c:695 #3 0x00007f0ac11cadd5 in start_thread () from /lib64/libpthread.so.0 #4 0x00007f0ac0a92ead in clone () from /lib64/libc.so.6 ---Type to continue, or q to quit--- Thread 3 (Thread 0x7f0aacaad700 (LWP 4261)): #0 0x00007f0ac11ce965 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 #1 0x00007f0ab97438d3 in notify_kernel_loop (data=) at fuse-bridge.c:4584 #2 0x00007f0ac11cadd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f0ac0a92ead in clone () from /lib64/libc.so.6 Thread 2 (Thread 0x7f0ab8f38700 (LWP 4249)): #0 0x00007f0ac11d2361 in sigwait () from /lib64/libpthread.so.0 #1 0x0000559b1de36c7b in glusterfs_sigwaiter (arg=) at glusterfsd.c:2242 #2 0x00007f0ac11cadd5 in start_thread () from /lib64/libpthread.so.0 #3 0x00007f0ac0a92ead in clone () from /lib64/libc.so.6 Thread 1 (Thread 0x7f0aadaaf700 (LWP 4259)): ---Type to continue, or q to quit--- #0 __dentry_search_arbit (inode=inode at entry=0x7f0a5002eab8) at inode.c:1450 #1 0x00007f0ac23a01c8 in __inode_path (inode=inode at entry=0x7f0a50009e68, name=name at entry=0x7f0a028f1b50 "fresh_top.log", bufp=bufp at entry=0x7f0985ba36e8) at inode.c:1551 #2 0x00007f0ac23a0551 in inode_path (inode=0x7f0a50009e68, name=0x7f0a028f1b50 "fresh_top.log", bufp=bufp at entry=0x7f0985ba36e8) at inode.c:1642 #3 0x00007f0ab9740489 in fuse_resolve_entry (state=0x7f0985ba3570) at fuse-resolve.c:99 #4 0x00007f0ab974162d in fuse_resolve_parent (state=state at entry=0x7f0985ba3570) at fuse-resolve.c:312 #5 0x00007f0ab9741998 in fuse_resolve (state=0x7f0985ba3570) at fuse-resolve.c:647 #6 0x00007f0ab9741c8e in fuse_resolve_all (state=) at fuse-resolve.c:679 #7 0x00007f0ab9741cd0 in fuse_resolve_and_resume (state=0x7f0985ba3570, fn=0x7f0ab9744e40 ) at 
fuse-resolve.c:718 ---Type to continue, or q to quit--- #8 0x00007f0ab9759702 in fuse_thread_proc (data=0x559b1ebcf0c0) at fuse-bridge.c:5781 #9 0x00007f0ac11cadd5 in start_thread () from /lib64/libpthread.so.0 #10 0x00007f0ac0a92ead in clone () from /lib64/libc.so.6 (gdb) 1. 3x3 volume 2. IOs triggered from 8 different clients both as root and non root user for about a week, with quotas/uss enabled and set after 2 days 3. after about a week, did a add-brick with 3 replica sets to make it 6x3 and triggered rebalance and left it over weekend --- Additional comment from Amar Tumballi on 2019-03-04 11:49:51 UTC --- Checking the core. My initial suspicion was on lru-limit, but doesn't look so. --------------- (gdb) p name $22 = 0x7f0a028f1b50 "fresh_top.log" (gdb) p *inode $23 = {table = 0x7f09971d46f0, gfid = "\353\034\020'?G\266\271\034\240\031\205?L", lock = {spinlock = 0, mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' , __align = 0}}, nlookup = 0, fd_count = 0, active_fd_count = 0, ref = 3, ia_type = IA_IFDIR, fd_list = {next = 0x7f0a50009ec0, prev = 0x7f0a50009ec0}, dentry_list = {next = 0x7f0a4c0386f8, prev = 0x7f0a4c0386f8}, hash = {next = 0x7f097f4971e0, prev = 0x7f097f4971e0}, list = {next = 0x7f0985631490, prev = 0x7f098536bcf0}, _ctx = 0x7f0a50011380, invalidate_sent = _gf_false} (gdb) p *inode->table $24 = {lock = {__data = {__lock = 2, __count = 0, __owner = 4259, __nusers = 1, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = "\002\000\000\000\000\000\000\000\243\020\000\000\001", '\000' , __align = 2}, hashsize = 14057, name = 0x7f098fc5b040 "meta-autoload/inode", root = 0x7f097ca5a168, xl = 0x7f097e043300, lru_limit = 131072, inode_hash = 0x7f097f414d20, name_hash = 0x7f097f514d70, active = {next = 0x7f098536bcf0, prev = 0x7f097ca5a1f0}, active_size = 26, lru = {next = 0x7f0985f9ccd0, prev = 0x7f09803d4020}, lru_size = 70, purge = {next = 0x7f09971d4780, prev = 0x7f09971d4780}, purge_size = 0, inode_pool = 0x7f09971d4830, dentry_pool = 0x7f09971d48f0, fd_mem_pool = 0x7f098df5eb80, ctxcount = 33, invalidator_fn = 0x7f0ab97425d0 , invalidator_xl = 0x559b1ebcf0c0, invalidate = { next = 0x7f09971d47c8, prev = 0x7f09971d47c8}, invalidate_size = 0} (gdb) p *inode->table->root $29 = {table = 0x7f09971d46f0, gfid = '\000' , "\001", lock = {spinlock = 0, mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' , __align = 0}}, nlookup = 0, fd_count = 0, active_fd_count = 0, ref = 1, ia_type = IA_IFDIR, fd_list = {next = 0x7f097ca5a1c0, prev = 0x7f097ca5a1c0}, dentry_list = { next = 0x7f097ca5a1d0, prev = 0x7f097ca5a1d0}, hash = {next = 0x7f097f414d30, prev = 0x7f097f414d30}, list = {next = 0x7f09971d4750, prev = 0x7f0a50012840}, _ctx = 0x7f097c72e1a0, invalidate_sent = _gf_false} (gdb) p *((dentry_t *)inode->dentry_list) $26 = {inode_list = {next = 0x7f0a50009ed0, prev = 0x7f0a50009ed0}, hash = {next = 0x7f097f536fc0, prev = 0x7f097f536fc0}, inode = 0x7f0a50009e68, name = 0x7f0a4c00d340 "top.dir", parent = 0x7f0a5002eab8} (gdb) p *((dentry_t *)inode->dentry_list)->parent $27 = {table = 0x7f0a5004f6a8, gfid = "\000\000\000\000\000\000\000\000\310\352\002P\n\177\000", lock = {spinlock = 1342368456, mutex = {__data = { __lock = 1342368456, __count = 32522, __owner = 0, __nusers = 0, __kind = 515698880, __spins = 21915, 
__elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = "\310\352\002P\n\177\000\000\000\000\000\000\000\000\000\000\300\360\274\036\233U", '\000' , __align = 139682268768968}}, nlookup = 0, fd_count = 0, active_fd_count = 0, ref = 4294967295, ia_type = IA_INVAL, fd_list = {next = 0x0, prev = 0x0}, dentry_list = {next = 0x0, prev = 0x100000000}, hash = {next = 0x0, prev = 0x0}, list = {next = 0x0, prev = 0x0}, _ctx = 0x0, invalidate_sent = _gf_false} --------------- Notice that lru_size is no where close to lru-limit. Also the inode by itself is fine. The issue is, while everything is under lock, a parent dentry looks to be gone, or rather freed. A possible case of extra unref() ?? Looking at the info, that this is 'top.dir' mostly it should have been linked to root inode. But dentry is freed. Will look more into this, and keep this updated. Also, looks like something which would have existed forever by the first look. --- Additional comment from Amar Tumballi on 2019-03-04 12:47:00 UTC --- > [2019-03-01 13:37:02.711187] W [inode.c:197:__is_dentry_hashed] (-->/lib64/libglusterfs.so.0(__inode_path+0x68) [0x7f515a5cd1c8] -->/lib64/libglusterfs.so.0(+0x3add4) [0x7f515a5cadd4] -->/lib64/libglusterfs.so.0(+0x3ad7e) [0x7f515a5cad7e] ) 0-fuse: dentry not found > pending frames: > frame : type(1) op(UNLINK) > frame : type(1) op(UNLINK) > frame : type(1) op(READDIRP) ---- (gdb) bt #0 __dentry_search_arbit (inode=inode at entry=0x7f0a5002eab8) at inode.c:1450 (gdb) l 1445 dentry_t *trav = NULL; 1446 1447 if (!inode) 1448 return NULL; 1449 1450 list_for_each_entry (trav, &inode->dentry_list, inode_list) { 1451 if (__is_dentry_hashed (trav)) { 1452 dentry = trav; 1453 break; ---- Looks like we need to see if trav is null, and break the loop. Mainly here, __is_dentry_hashed() has given 0 output, and we still continue to traverse the list. I guess, that should have stopped. Still checking. --- Additional comment from Amar Tumballi on 2019-03-04 14:23:18 UTC --- As per comment #4, > Also, looks like something which would have existed forever by the first look. I suspect this to be a bug in code since a very long time. If in these lists, if dentry is NULL, by the execution, next iteration will definitely crash, which happened here. Need to traceback why this happened when every possible change operation in inode happens with table lock. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1685078 [Bug 1685078] systemic: gluster fuse crashed with segmentation fault possibly due to dentry not found -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 14:37:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 14:37:45 +0000 Subject: [Bugs] [Bug 1686009] gluster fuse crashed with segmentation fault possibly due to dentry not found In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686009 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22311 -- You are receiving this mail because: You are on the CC list for the bug. 
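For reference, here is a minimal, self-contained C sketch of the defensive check discussed in the comments above (stop walking the inode's dentry_list when a link comes back NULL instead of dereferencing it on the next iteration). The kernel-style list_for_each_entry macro from the quoted inode.c listing is open-coded here so the guard is visible; this is only an illustration of the idea, not the actual inode.c code nor the patch posted on Gerrit 22311.

#include <stddef.h>
#include <stdio.h>

struct list_head {
    struct list_head *next;
    struct list_head *prev;
};

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct dentry {
    struct list_head inode_list;
    const char *name;
    int hashed; /* 0 once the dentry has been unhashed */
};

struct inode {
    struct list_head dentry_list;
};

/* Return any dentry of the inode that is still hashed, or NULL. */
static struct dentry *dentry_search_arbit(struct inode *inode)
{
    struct list_head *pos;
    struct dentry *trav;

    if (!inode)
        return NULL;

    for (pos = inode->dentry_list.next; pos != &inode->dentry_list;
         pos = pos->next) {
        /* Defensive guard: a corrupted list can hand us a NULL link;
         * bail out instead of computing container_of on it. */
        if (!pos)
            return NULL;
        trav = container_of(pos, struct dentry, inode_list);
        if (trav->hashed)
            return trav;
    }
    return NULL;
}

int main(void)
{
    struct inode ino;
    struct dentry d1 = {.name = "fresh_top.log", .hashed = 0};
    struct dentry d2 = {.name = "top.dir", .hashed = 1};

    /* Hand-built circular list: head -> d1 -> d2 -> head */
    ino.dentry_list.next = &d1.inode_list;
    ino.dentry_list.prev = &d2.inode_list;
    d1.inode_list.prev = &ino.dentry_list;
    d1.inode_list.next = &d2.inode_list;
    d2.inode_list.prev = &d1.inode_list;
    d2.inode_list.next = &ino.dentry_list;

    struct dentry *found = dentry_search_arbit(&ino);
    printf("hashed dentry: %s\n", found ? found->name : "(none)");
    return 0;
}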
From bugzilla at redhat.com Wed Mar 6 14:37:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 14:37:46 +0000 Subject: [Bugs] [Bug 1686009] gluster fuse crashed with segmentation fault possibly due to dentry not found In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686009 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22311 (inode: check for instance of dentry null) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 15:13:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:13:43 +0000 Subject: [Bugs] [Bug 1686034] New: Request access to docker hub gluster organisation. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686034 Bug ID: 1686034 Summary: Request access to docker hub gluster organisation. Product: GlusterFS Version: experimental Status: NEW Component: project-infrastructure Severity: low Assignee: bugs at gluster.org Reporter: sseshasa at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: I request access to the docker hub gluster organisation in order to push and manage docker images. My docker hub user ID is: sseshasa I am not sure what to choose for "Product" and "Component" fields. Please suggest/correct accordingly if they are wrong. Version-Release number of selected component (if applicable): NA How reproducible: NA Steps to Reproduce: 1. 2. 3. Actual results: NA Expected results: NA Additional info: NA -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 15:18:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:18:23 +0000 Subject: [Bugs] [Bug 1686034] Request access to docker hub gluster organisation. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686034 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #1 from M. Scherer --- So, what is the exact plan ? Shouldn't the docker image be built and pushed automatically ? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 15:25:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:25:46 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1676468 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1676468 [Bug 1676468] glusterfs-fuse client not benefiting from page cache on read after write -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 6 15:25:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:25:56 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1511779 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1511779 [Bug 1511779] Garbage collect inactive inodes in fuse-bridge -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 15:30:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:30:41 +0000 Subject: [Bugs] [Bug 1683880] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683880 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |NEW CC| |amukherj at redhat.com --- Comment #2 from Atin Mukherjee --- (In reply to Mohit Agrawal from comment #1) > Upstream patch is posted to resolve the same > https://review.gluster.org/#/c/glusterfs/+/22290/ this is an upstream bug only :-) Once the mainline patch is merged and we backport it to release-6 branch, the bug status will be corrected. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 15:33:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:33:14 +0000 Subject: [Bugs] [Bug 1684404] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Blocks|1672818 (glusterfs-6.0) | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 15:33:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:33:14 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1684404 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 [Bug 1684404] Multiple shd processes are running on brick_mux environmet -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 6 15:33:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:33:44 +0000 Subject: [Bugs] [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1672818 (glusterfs-6.0) | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 15:33:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:33:44 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1685120 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 15:34:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:34:02 +0000 Subject: [Bugs] [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1684029 Depends On|1684029 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1684029 [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 15:34:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:34:02 +0000 Subject: [Bugs] [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684029 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1685120 | Depends On| |1685120 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 15:34:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:34:38 +0000 Subject: [Bugs] [Bug 1686034] Request access to docker hub gluster organisation. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686034 --- Comment #2 from Sridhar Seshasayee --- I have built a docker image locally and pushed it to my repository on docker hub with user ID: sseshasa. However, I need to push the same image under the gluster organisation on docker hub (https://hub.docker.com/u/gluster) under gluster/gluster*. I don't know how to achieve this and imagine that I need some access privilege to push images there. Please let me know how I can go about this. -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 15:35:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:35:16 +0000 Subject: [Bugs] [Bug 1679892] assertion failure log in glusterd.log file when a volume start is triggered In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679892 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Version|4.1 |6 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 15:37:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:37:05 +0000 Subject: [Bugs] [Bug 1664934] glusterfs-fuse client not benefiting from page cache on read after write In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1664934 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Blocks|1672818 (glusterfs-6.0) | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 6 15:37:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:37:05 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1664934 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1664934 [Bug 1664934] glusterfs-fuse client not benefiting from page cache on read after write -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 15:59:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 15:59:06 +0000 Subject: [Bugs] [Bug 1686034] Request access to docker hub gluster organisation. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686034 --- Comment #3 from M. Scherer --- Nope, we do not allow direct push (or shouldn't). If you want a new image there, you have to explain what it is, why it should be there, etc, etc. And automate the push, for example, using a jenkins job. See for example this job: https://build.gluster.org/job/glusterd2-containers/ http://git.gluster.org/cgit/build-jobs.git/tree/build-gluster-org/jobs/glusterd2-containers.yml that's managed by gerrit, like the glusterfs source code. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 6 18:40:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 18:40:39 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22312 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 6 18:40:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 06 Mar 2019 18:40:40 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #578 from Worker Ant --- REVIEW: https://review.gluster.org/22312 (packaging: remove unnecessary ldconfig in scriptlets) posted (#1) for review on master by Kaleb KEITHLEY -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 05:01:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 05:01:44 +0000 Subject: [Bugs] [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-07 05:01:44 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22297 (core: make compute_cksum function op_version compatible) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 05:01:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 05:01:44 +0000 Subject: [Bugs] [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684029 Bug 1684029 depends on bug 1685120, which changed state. Bug 1685120 Summary: upgrade from 3.12, 4.1 and 5 to 6 broken https://bugzilla.redhat.com/show_bug.cgi?id=1685120 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 05:24:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 05:24:49 +0000 Subject: [Bugs] [Bug 1686034] Request access to docker hub gluster organisation. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686034 --- Comment #4 from Sridhar Seshasayee --- Okay, thanks for the info and pointers. I will work with one of the developers and get this done. This issue may be closed. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 05:34:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 05:34:44 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #579 from Worker Ant --- REVIEW: https://review.gluster.org/22298 (packaging: s390x has RDMA support) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 7 06:22:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 06:22:52 +0000 Subject: [Bugs] [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 --- Comment #6 from Artem Russakovskii --- Is the next release going to be an imminent hotfix, i.e. something like today/tomorrow, or are we talking weeks? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 06:26:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 06:26:48 +0000 Subject: [Bugs] [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684029 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22313 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 06:26:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 06:26:48 +0000 Subject: [Bugs] [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684029 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22313 (core: make compute_cksum function op_version compatible) posted (#1) for review on release-6 by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 06:29:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 06:29:58 +0000 Subject: [Bugs] [Bug 1684569] Upgrade from 4.1 and 5 is broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684569 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22314 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 06:29:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 06:29:59 +0000 Subject: [Bugs] [Bug 1684569] Upgrade from 4.1 and 5 is broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684569 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22314 (core: make compute_cksum function op_version compatible) posted (#1) for review on release-5 by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 06:54:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 06:54:30 +0000 Subject: [Bugs] [Bug 1685576] DNS delegation record for rhhi-dev.gluster.org In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685576 --- Comment #4 from Rohan CJ --- It's working now! -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 7 06:55:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 06:55:49 +0000 Subject: [Bugs] [Bug 1644322] flooding log with "glusterfs-fuse: read from /dev/fuse returned -1 (Operation not permitted)" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644322 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com, | |sacharya at redhat.com, | |sunkumar at redhat.com Assignee|bugs at gluster.org |csaba at redhat.com Severity|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 06:57:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 06:57:12 +0000 Subject: [Bugs] [Bug 1643716] "OSError: [Errno 40] Too many levels of symbolic links" when syncing deletion of directory hierarchy In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1643716 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com, | |sunkumar at redhat.com Assignee|bugs at gluster.org |sacharya at redhat.com Severity|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 08:07:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 08:07:13 +0000 Subject: [Bugs] [Bug 1656348] Commit c9bde3021202f1d5c5a2d19ac05a510fc1f788ac causes ls slowdown In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1656348 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.0 Resolution|--- |NEXTRELEASE Last Closed| |2019-03-07 08:07:13 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 08:08:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 08:08:16 +0000 Subject: [Bugs] [Bug 1641226] #define GF_SQL_COMPACT_DEF GF_SQL_COMPACT_INCR makes lots of the code unneeded In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1641226 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-6.0 Resolution|--- |NEXTRELEASE Last Closed| |2019-03-07 08:08:16 --- Comment #4 from Amar Tumballi --- Also note that, this part of the code itself is completely removed now. -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Mar 7 08:09:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 08:09:48 +0000 Subject: [Bugs] [Bug 1263231] [RFE]: Gluster should provide "share mode"/"share reservation" support In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1263231 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |NEW CC| |atumball at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 08:11:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 08:11:40 +0000 Subject: [Bugs] [Bug 1581554] cloudsync: make plugins configurable In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1581554 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Component|unclassified |Cloudsync Fixed In Version| |glusterfs-6.0 Resolution|--- |NEXTRELEASE Assignee|bugs at gluster.org |spalai at redhat.com Last Closed| |2019-03-07 08:11:40 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 08:13:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 08:13:49 +0000 Subject: [Bugs] [Bug 1670382] parallel-readdir prevents directories and files listing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670382 --- Comment #8 from Marcin --- Some of the directories created on the host resource (fuse) are not visible from the same host, while the files themselves are usually visible. On the second host that mounts this resource (fuse) directories and files created on the first host are visible. Sometimes, for a moment, directories can appear and disappear, but only on the host where they were created. Files and directories created from the host level (fuse) to which the samba client is not directly connected are usually visible on the samba resource. However, the directories that were created on the host (fuse) to which Samba connects directly (via ctdb) are partially not visible. Files created on both hosts (fuse) are generally visible on the samba resource. Most of the new files and directories created directly on the samba resource seem to be hidden from the same client and server (samba) and may be partially invisible on the host they are connected to via samba - (on fusemount). On the second host (fuse) to which the samba client is not connected, files and directories created on the samba resource are generally visible. The tests have been performed on the latest version of gluster / client (5.4). Disabling the parallel-readdir functionality immediately solves the above problems, even without an additional restart of hosts or the gluster service. As I mentioned at the very beginning in (v.4.1.7) and our current production (v.3.10.3) the problem does not occur. I hope that I haven't mixed up anything :) Regards Marcin -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 7 08:26:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 08:26:25 +0000 Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to activate host in DC with glusterfs 3.12 domains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672318
Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Version|4.30.8 |5 Component|General |glusterd CC| |bugs at gluster.org Assignee|srakonde at redhat.com |bugs at gluster.org QA Contact|lsvaty at redhat.com | Target Milestone|ovirt-4.3.2 |--- Product|vdsm |GlusterFS Flags|needinfo?(sabose at redhat.com | |) ovirt-4.3+ |
--- Comment #19 from Sahina Bose --- Changing the component, as this bug is already tracked via Bug 1677319 in oVirt.
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Thu Mar 7 08:26:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 08:26:54 +0000 Subject: [Bugs] [Bug 1659824] Unable to mount gluster fs on glusterfs client: Transport endpoint is not connected In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659824
Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com
--- Comment #1 from Amar Tumballi --- Sorry about getting back to you late on this. This is a case of gluster's volgen having IP/hostname details in 'protocol/client' given during volume creation. You can check the volfile the client received (it is available in the client log file) and see what the 'remote-host' option is pointing at. If it is a server name, you can override its DNS resolution through /etc/hosts to use the public IP, and that should be enough. If you are seeing an IP address in the 'remote-host' option, you may have to copy the volfile and start the glusterfs process with `glusterfs -f$local-volfile $mountpoint`. This should work.
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Thu Mar 7 08:27:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 08:27:33 +0000 Subject: [Bugs] [Bug 1659824] Unable to mount gluster fs on glusterfs client: Transport endpoint is not connected In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659824
Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |low
--- Comment #2 from Amar Tumballi --- I would say this is not a 'Bug', but missing documentation.
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Thu Mar 7 08:34:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 08:34:21 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #11 from Hubert --- i made a tcpdump as well: tcpdump -i eth1 -s 0 -w /tmp/dirls.pcap tcp and not port 2222 tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes 259699 packets captured 259800 packets received by filter 29 packets dropped by kernel The file is 1.1G big; gzipped and uploaded it: https://ufile.io/5h6i2 Hope this helps. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 09:00:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 09:00:12 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #12 from Hubert --- Maybe i should add that the relevant IP addresses of the gluster servers are: 192.168.0.50, 192.168.0.51, 192.168.0.52 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 09:43:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 09:43:11 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 --- Comment #58 from abhays --- (In reply to Nithya Balachandran from comment #56) > (In reply to abhays from comment #53) > > (In reply to Nithya Balachandran from comment #51) > > > > > > > > > > > > I don't think so. I would recommend that you debug the tests on your systems > > > > and post patches which will work on both. > > > > > > Please note what I am referring to is for you to look at the .t files and > > > modify file names or remove hardcoding as required. > > > > Yes @Nithya, We understood that you want us to continue debugging the tests > > and provide patches if fix is found. > > While doing the same, we were able to fix the ./tests/bugs/nfs/bug-847622.t > > with the following patch:- > > > > diff --git a/tests/bugs/nfs/bug-847622.t b/tests/bugs/nfs/bug-847622.t > > index 3b836745a..f21884972 100755 > > --- a/tests/bugs/nfs/bug-847622.t > > +++ b/tests/bugs/nfs/bug-847622.t > > @@ -28,7 +32,7 @@ cd $N0 > > > > # simple getfacl setfacl commands > > TEST touch testfile > > -TEST setfacl -m u:14:r testfile > > +TEST setfacl -m u:14:r $B0/brick0/testfile > > TEST getfacl testfile > > > > Please check, if the above patch can be merged. > > > > > > This fix is incorrect. The patch changes the test to modify the brick > directly while the test is to check that these operations succeed on the > mount. You need to see why it fails and then we can figure out the fix. Okay, thanks for the clarification. 
Below are some of the observations I made for this test case:- When brick is not changed and kept the way it is in the test case, then the below happens on s390x: getfacl /d/backends/brick0/testfile getfacl: Removing leading '/' from absolute path names # file: d/backends/brick0/testfile # owner: root # group: root user::rw- group::r-- other::r-- Whereas, on x86, getfacl /d/backends/brick0/testfile getfacl: Removing leading '/' from absolute path names # file: d/backends/brick0/testfile # owner: root # group: root user::rw- user:14:r-- group::r-- mask::r-- other::r-- Since the setfacl command fails,the above behavior is seen. When I checked the logs, On s390x, this is shown:- D [MSGID: 0] [client-rpc-fops_v2.c:887:client4_0_getxattr_cbk] 0-patchy-client-0: remote operation failed: No data available. Path: /testfile (fa921dc9-41a3-4fad-9fab-2c0933e54e38). Key: system.posix_acl_access On x86, this is shown:- D [MSGID: 0] [nfs3-helpers.c:1660:nfs3_log_fh_entry_call] 0-nfs-nfsv3: XID: a2d2141c, LOOKUP: args: FH: exportid d7f43849-b25a-49d2-8084-aefb8d7797f2, gfid 00000000-0000-0000-0000-000000000001, mountid 8d32c8d1-0000-0000-0000-000000000000, name: libacl.so.1 Therefore, I tried remounting acl in the test case and even tried adding acl in /etc/fstab in the following ways:- In the test case-------> mount -o remount,acl / In /etc/fstab----------> /dev /boot/zipl ext2 defaults,acl 0 2 However, the test case still fails. So, can you please provide us with some details as to what happens when the commands; EXPECT_WITHIN $NFS_EXPORT_TIMEOUT "1" is_nfs_export_available; TEST mount_nfs $H0:/$V0 $N0 nolock are run in the test case. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 09:44:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 09:44:07 +0000 Subject: [Bugs] [Bug 1672480] Bugs Test Module tests failing on s390x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672480 --- Comment #59 from abhays --- (In reply to Nithya Balachandran from comment #57) > (In reply to abhays from comment #55) > > (In reply to Nithya Balachandran from comment #54) > > > > > > > > However, the test cases are still failing and only pass if x86 hash values > > > > are provided(Refer to comment#8):- > > > > ./tests/bugs/glusterfs/bug-902610.t > > > > ./tests/bugs/posix/bug-1619720.t > > > > > > Please provide more information on what changes you tried. > > > > For tests/bugs/glusterfs/bug-902610.t:- > > In the test case, after the kill_brick function is run, the mkdir $M0/dir1 > > doesn't work and hence the get_layout function test fails. So,as a > > workaround we tried not killing the brick and then checked the functionality > > of the test case, after which the dir1 did get created in all the 4 bricks, > > however, the test failed with the following output:- > > The mkdir function will fail if the hashed brick of the directory being > created is down. In your case, the change in hashed values means the brick > that was killed is the hashed subvol for the directory. Killing a different > brick should cause it to succeed. > > In any case this is not a feature that we support anymore so I can just > remove the test case. > > > Therefore, can these changes be added in the test case with a condition for > > s390x separately? > > I do not think we should separate it out like this. 
The better way would be > to just find 2 bricks that work for both big and little endian. > I will try out your changes on a big endian system and see if this > combination will work there as well. > > > > > Also, We have a few queries on the tests behaviour. > > When a directory or a file gets created, according to me, it should be > > placed in the brick depending on its hash range and value of the > > file/directory. > > However, in the above test, as you can see, if we don't kill the > > bricks{2,3}, the directory gets created in all the bricks{0,1,2,3}.So, does > > it not consider hash values and range at this point or is it something to do > > with mounting FUSE? > > The way dht creates files and directories is slightly different. > > For files, it calculates the hash and creates it in the subvolume in whose > directory layout range it falls. > For directories, it first tries to create it on the hashed subvol. If for > some reason that fails, it will not be created on the other bricks. In this > test, for s390x, one of the bricks killed was the hashed subvol so mkdir > fails. > The solution here is to make sure the bricks being killed are not the hashed > subvol in either big or little endian systems. Thanks for this explanation. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 10:13:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 10:13:24 +0000 Subject: [Bugs] [Bug 1686353] New: flooding of "dict is NULL" logging Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686353 Bug ID: 1686353 Summary: flooding of "dict is NULL" logging Product: GlusterFS Version: 4.1 Hardware: x86_64 OS: Linux Status: NEW Component: libgfapi Severity: medium Assignee: bugs at gluster.org Reporter: ryan at magenta.tv QA Contact: bugs at gluster.org CC: bugs at gluster.org Target Milestone: --- External Bug ID: Red Hat Bugzilla 1313567 Classification: Community Description of problem: Same issue as in bug 1313567, but is happening with VFS clients. Version-Release number of selected component (if applicable): 4.1.7 How reproducible: Every gluster system upgraded to 4.1.7 Steps to Reproduce: 1.Upgrade 3.12 system to 4.1.7 Actual results: Log file is filled with error and eventually consumes all available storage space Expected results: Log file is not filled with error Additional info: -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 10:19:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 10:19:27 +0000 Subject: [Bugs] [Bug 1558507] Gluster allows renaming of folders, which contain WORMed/Retain or WORMed files In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1558507 --- Comment #1 from Amar Tumballi --- Hi David, I know you sent couple of patches previously for WORM, and we did fix issues like delete etc. Is this still an issue in latest master? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
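The dht behaviour Nithya Balachandran describes in the bug 1672480 thread above (a file goes to the subvolume whose layout range covers the hash of its name, and a directory is first created on its hashed subvolume, so mkdir fails if that brick is down) can be illustrated with a small self-contained sketch. The hash function, layout ranges and names below are toy stand-ins chosen for the example, not gluster's actual dht code.

    /* Toy stand-in for dht-style placement: each subvolume owns a hash range. */
    #include <stdint.h>
    #include <stdio.h>

    struct subvol {
        uint32_t start, stop;  /* layout range assigned to this brick */
        int up;                /* 0 if the brick has been killed */
    };

    /* Toy name hash (FNV-1a); the real dht hash is different. */
    static uint32_t toy_hash(const char *name) {
        uint32_t h = 2166136261u;
        for (; *name; name++)
            h = (h ^ (uint8_t)*name) * 16777619u;
        return h;
    }

    /* Pick the subvolume whose range covers the hash of the name. */
    static int hashed_subvol(const struct subvol *s, int n, const char *name) {
        uint32_t h = toy_hash(name);
        for (int i = 0; i < n; i++)
            if (h >= s[i].start && h <= s[i].stop)
                return i;
        return -1;
    }

    int main(void) {
        struct subvol vols[4] = {
            {0x00000000u, 0x3fffffffu, 1},
            {0x40000000u, 0x7fffffffu, 1},
            {0x80000000u, 0xbfffffffu, 0},  /* the killed brick */
            {0xc0000000u, 0xffffffffu, 1},
        };
        const char *dir = "dir1";
        int idx = hashed_subvol(vols, 4, dir);

        if (idx >= 0 && vols[idx].up)
            printf("mkdir %s is attempted on subvol %d first\n", dir, idx);
        else
            printf("mkdir %s fails: its hashed subvol is down\n", dir);
        return 0;
    }

This is why, in bug-902610.t, killing a brick that happens to be the hashed subvolume for dir1 on one architecture makes the mkdir fail there, while the same test passes where the hash lands on a surviving brick.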
From bugzilla at redhat.com Thu Mar 7 10:30:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 10:30:49 +0000 Subject: [Bugs] [Bug 1686353] flooding of "dict is NULL" logging In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686353 --- Comment #1 from ryan at magenta.tv --- Errors in log: [2019-03-07 10:24:23.718001] W [dict.c:671:dict_ref] (-->/usr/lib64/glusterfs/4.1.7/xlator/performance/quick-read.so(+0x5db4) [0x7f8efe647db4] -->/usr/lib64/glusterfs/4.1.7/xlator/performance/io-cache.so(+0xae2e) [0x7f8efe858e2e] -->/lib64/libglusterfs.so.0(dict_ref+0x5d) [0x7f8f095adf1d] ) 0-dict: dict is NULL [Invalid argument] [2019-03-07 10:24:23.719257] W [dict.c:671:dict_ref] (-->/usr/lib64/glusterfs/4.1.7/xlator/performance/quick-read.so(+0x5db4) [0x7f8efe647db4] -->/usr/lib64/glusterfs/4.1.7/xlator/performance/io-cache.so(+0xae2e) [0x7f8efe858e2e] -->/lib64/libglusterfs.so.0(dict_ref+0x5d) [0x7f8f095adf1d] ) 0-dict: dict is NULL [Invalid argument] [2019-03-07 10:24:23.720557] W [dict.c:671:dict_ref] (-->/usr/lib64/glusterfs/4.1.7/xlator/performance/quick-read.so(+0x5db4) [0x7f8efe647db4] -->/usr/lib64/glusterfs/4.1.7/xlator/performance/io-cache.so(+0xae2e) [0x7f8efe858e2e] -->/lib64/libglusterfs.so.0(dict_ref+0x5d) [0x7f8f095adf1d] ) 0-dict: dict is NULL [Invalid argument] [2019-03-07 10:24:23.722291] W [dict.c:671:dict_ref] (-->/usr/lib64/glusterfs/4.1.7/xlator/performance/quick-read.so(+0x5db4) [0x7f8efe647db4] -->/usr/lib64/glusterfs/4.1.7/xlator/performance/io-cache.so(+0xae2e) [0x7f8efe858e2e] -->/lib64/libglusterfs.so.0(dict_ref+0x5d) [0x7f8f095adf1d] ) 0-dict: dict is NULL [Invalid argument] [2019-03-07 10:24:24.457847] W [dict.c:671:dict_ref] (-->/usr/lib64/glusterfs/4.1.7/xlator/performance/quick-read.so(+0x5db4) [0x7f8efe647db4] -->/usr/lib64/glusterfs/4.1.7/xlator/performance/io-cache.so(+0xae2e) [0x7f8efe858e2e] -->/lib64/libglusterfs.so.0(dict_ref+0x5d) [0x7f8f095adf1d] ) 0-dict: dict is NULL [Invalid argument] [2019-03-07 10:24:24.457888] W [dict.c:671:dict_ref] (-->/usr/lib64/glusterfs/4.1.7/xlator/performance/quick-read.so(+0x5db4) [0x7f8efe647db4] -->/usr/lib64/glusterfs/4.1.7/xlator/performance/io-cache.so(+0xae2e) [0x7f8efe858e2e] -->/lib64/libglusterfs.so.0(dict_ref+0x5d) [0x7f8f095adf1d] ) 0-dict: dict is NULL [Invalid argument] [2019-03-07 10:24:24.458852] W [dict.c:671:dict_ref] (-->/usr/lib64/glusterfs/4.1.7/xlator/performance/quick-read.so(+0x5db4) [0x7f8efe647db4] -->/usr/lib64/glusterfs/4.1.7/xlator/performance/io-cache.so(+0xae2e) [0x7f8efe858e2e] -->/lib64/libglusterfs.so.0(dict_ref+0x5d) [0x7f8f095adf1d] ) 0-dict: dict is NULL [Invalid argument] [2019-03-07 10:24:24.458907] W [dict.c:671:dict_ref] (-->/usr/lib64/glusterfs/4.1.7/xlator/performance/quick-read.so(+0x5db4) [0x7f8efe647db4] -->/usr/lib64/glusterfs/4.1.7/xlator/performance/io-cache.so(+0xae2e) [0x7f8efe858e2e] -->/lib64/libglusterfs.so.0(dict_ref+0x5d) [0x7f8f095adf1d] ) 0-dict: dict is NULL [Invalid argument] [2019-03-07 10:24:24.459093] W [dict.c:671:dict_ref] (-->/usr/lib64/glusterfs/4.1.7/xlator/performance/quick-read.so(+0x5db4) [0x7f8efe647db4] -->/usr/lib64/glusterfs/4.1.7/xlator/performance/io-cache.so(+0xae2e) [0x7f8efe858e2e] -->/lib64/libglusterfs.so.0(dict_ref+0x5d) [0x7f8f095adf1d] ) 0-dict: dict is NULL [Invalid argument] -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 7 10:38:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 10:38:51 +0000 Subject: [Bugs] [Bug 1686364] New: [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686364 Bug ID: 1686364 Summary: [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing Product: GlusterFS Version: 6 Status: NEW Component: core Keywords: Triaged Severity: high Priority: high Assignee: bugs at gluster.org Reporter: atumball at redhat.com CC: bugs at gluster.org, kdhananj at redhat.com, sabose at redhat.com Depends On: 1684385 Blocks: 1672818 (glusterfs-6.0) Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1684385 +++ Description of problem: When gluster bits were upgraded in a hyperconverged ovirt-gluster setup, one node at a time in online mode from 3.12.5 to 5.3, the following log messages were seen - [2019-02-26 16:24:25.126963] E [shard.c:556:shard_modify_size_and_block_count] (-->/usr/lib64/glusterfs/5.3/xlator/cluster/distribute.so(+0x82a45) [0x7ff71d05ea45] -->/usr/lib64/glusterfs/5.3/xlator/features/shard.so(+0x5c77) [0x7ff71cdb4c77] -->/usr/lib64/glusterfs/5.3/xlator/features/shard.so(+0x592e) [0x7ff71cdb492e] ) 0-engine-shard: Failed to get trusted.glusterfs.shard.file-size for 3ad3f0c6-a4e6-4b17-bd29-97c32ecc54d7 Version-Release number of selected component (if applicable): How reproducible: 1/1 Steps to Reproduce: 1. 2. 3. Actual results: Expected results: shard.file.size xattr should always be accessible. Additional info: --- Additional comment from Krutika Dhananjay on 2019-03-01 07:13:48 UTC --- [root at tendrl25 glusterfs]# gluster v info engine Volume Name: engine Type: Replicate Volume ID: bb26f648-2842-4182-940e-6c8ede02195f Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: tendrl27.lab.eng.blr.redhat.com:/gluster_bricks/engine/engine Brick2: tendrl26.lab.eng.blr.redhat.com:/gluster_bricks/engine/engine Brick3: tendrl25.lab.eng.blr.redhat.com:/gluster_bricks/engine/engine Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet performance.quick-read: off performance.read-ahead: off performance.io-cache: off performance.low-prio-threads: 32 network.remote-dio: off cluster.eager-lock: enable cluster.quorum-type: auto cluster.server-quorum-type: server cluster.data-self-heal-algorithm: full cluster.locking-scheme: granular cluster.shd-max-threads: 8 cluster.shd-wait-qlength: 10000 features.shard: on user.cifs: off storage.owner-uid: 36 storage.owner-gid: 36 network.ping-timeout: 30 performance.strict-o-direct: on cluster.granular-entry-heal: enable --- Additional comment from Krutika Dhananjay on 2019-03-01 07:23:02 UTC --- On further investigation, it was found that the shard xattrs were genuinely missing on all 3 replicas - [root at tendrl27 ~]# getfattr -d -m . 
-e hex /gluster_bricks/engine/engine/36ea5b11-19fb-4755-b664-088f6e5c4df2/dom_md/ids getfattr: Removing leading '/' from absolute path names # file: gluster_bricks/engine/engine/36ea5b11-19fb-4755-b664-088f6e5c4df2/dom_md/ids security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.afr.dirty=0x000000000000000000000000 trusted.afr.engine-client-1=0x000000000000000000000000 trusted.afr.engine-client-2=0x000000000000000000000000 trusted.gfid=0x3ad3f0c6a4e64b17bd2997c32ecc54d7 trusted.gfid2path.5f2a4f417210b896=0x64373265323737612d353761642d343136322d613065332d6339346463316231366230322f696473 [root at localhost ~]# getfattr -d -m . -e hex /gluster_bricks/engine/engine/36ea5b11-19fb-4755-b664-088f6e5c4df2/dom_md/ids getfattr: Removing leading '/' from absolute path names # file: gluster_bricks/engine/engine/36ea5b11-19fb-4755-b664-088f6e5c4df2/dom_md/ids security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.afr.dirty=0x000000000000000000000000 trusted.afr.engine-client-0=0x0000000e0000000000000000 trusted.afr.engine-client-2=0x000000000000000000000000 trusted.gfid=0x3ad3f0c6a4e64b17bd2997c32ecc54d7 trusted.gfid2path.5f2a4f417210b896=0x64373265323737612d353761642d343136322d613065332d6339346463316231366230322f696473 [root at tendrl25 ~]# getfattr -d -m . -e hex /gluster_bricks/engine/engine/36ea5b11-19fb-4755-b664-088f6e5c4df2/dom_md/ids getfattr: Removing leading '/' from absolute path names # file: gluster_bricks/engine/engine/36ea5b11-19fb-4755-b664-088f6e5c4df2/dom_md/ids security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.afr.dirty=0x000000000000000000000000 trusted.afr.engine-client-0=0x000000100000000000000000 trusted.afr.engine-client-1=0x000000000000000000000000 trusted.gfid=0x3ad3f0c6a4e64b17bd2997c32ecc54d7 trusted.gfid2path.5f2a4f417210b896=0x64373265323737612d353761642d343136322d613065332d6339346463316231366230322f696473 Also from the logs, it appears the file underwent metadata self-heal moments before these errors started to appear- [2019-02-26 13:35:37.253896] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 3ad3f0c6-a4e6-4b17-bd29-97c32ecc54d7 [2019-02-26 13:35:37.254734] W [MSGID: 101016] [glusterfs3.h:752:dict_to_xdr] 0-dict: key 'trusted.glusterfs.shard.file-size' is not sent on wire [Invalid argument] [2019-02-26 13:35:37.254749] W [MSGID: 101016] [glusterfs3.h:752:dict_to_xdr] 0-dict: key 'trusted.glusterfs.shard.block-size' is not sent on wire [Invalid argument] [2019-02-26 13:35:37.255777] I [MSGID: 108026] [afr-self-heal-common.c:1729:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 3ad3f0c6-a4e6-4b17-bd29-97c32ecc54d7. 
sources=[0] sinks=2 [2019-02-26 13:35:37.258032] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on 3ad3f0c6-a4e6-4b17-bd29-97c32ecc54d7 [2019-02-26 13:35:37.258792] W [MSGID: 101016] [glusterfs3.h:752:dict_to_xdr] 0-dict: key 'trusted.glusterfs.shard.file-size' is not sent on wire [Invalid argument] [2019-02-26 13:35:37.258807] W [MSGID: 101016] [glusterfs3.h:752:dict_to_xdr] 0-dict: key 'trusted.glusterfs.shard.block-size' is not sent on wire [Invalid argument] [2019-02-26 13:35:37.259633] I [MSGID: 108026] [afr-self-heal-common.c:1729:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on 3ad3f0c6-a4e6-4b17-bd29-97c32ecc54d7. sources=[0] sinks=2 Metadata heal as we know does three things - 1. bulk getxattr from source brick; 2. removexattr on sink bricks 3. bulk setxattr on the sink bricks But what's clear from these logs is the dict_to_xdr() messages at the time of metadata heal, indicating that the shard xattrs were possibly not "sent on wire" as part of step 3. Turns out due to the newly introduced dict_to_xdr() code in 5.3 which is absent in 3.12.5. The bricks were upgraded to 5.3 in the order tendrl25 followed by tendrl26 with tendrl27 still at 3.12.5 when this issue was hit - Tendrl25: [2019-02-26 12:47:53.595647] I [MSGID: 100030] [glusterfsd.c:2715:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 5.3 (args: /usr/sbin/glusterfsd -s tendrl25.lab.eng.blr.redhat.com --volfile-id engine.tendrl25.lab.eng.blr.redhat.com.gluster_bricks-engine-engine -p /var/run/gluster/vols/engine/tendrl25.lab.eng.blr.redhat.com-gluster_bricks-engine-engine.pid -S /var/run/gluster/aae83600c9a783dd.socket --brick-name /gluster_bricks/engine/engine -l /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log --xlator-option *-posix.glusterd-uuid=9373b871-cfce-41ba-a815-0b330f6975c8 --process-name brick --brick-port 49153 --xlator-option engine-server.listen-port=49153) Tendrl26: [2019-02-26 13:35:05.718052] I [MSGID: 100030] [glusterfsd.c:2715:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 5.3 (args: /usr/sbin/glusterfsd -s tendrl26.lab.eng.blr.redhat.com --volfile-id engine.tendrl26.lab.eng.blr.redhat.com.gluster_bricks-engine-engine -p /var/run/gluster/vols/engine/tendrl26.lab.eng.blr.redhat.com-gluster_bricks-engine-engine.pid -S /var/run/gluster/8010384b5524b493.socket --brick-name /gluster_bricks/engine/engine -l /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log --xlator-option *-posix.glusterd-uuid=18fa886f-8d1a-427c-a5e6-9a4e9502ef7c --process-name brick --brick-port 49153 --xlator-option engine-server.listen-port=49153) Tendrl27: [root at tendrl27 bricks]# rpm -qa | grep gluster glusterfs-fuse-3.12.15-1.el7.x86_64 glusterfs-libs-3.12.15-1.el7.x86_64 glusterfs-3.12.15-1.el7.x86_64 glusterfs-server-3.12.15-1.el7.x86_64 glusterfs-client-xlators-3.12.15-1.el7.x86_64 glusterfs-api-3.12.15-1.el7.x86_64 glusterfs-events-3.12.15-1.el7.x86_64 libvirt-daemon-driver-storage-gluster-4.5.0-10.el7_6.4.x86_64 glusterfs-gnfs-3.12.15-1.el7.x86_64 glusterfs-geo-replication-3.12.15-1.el7.x86_64 glusterfs-cli-3.12.15-1.el7.x86_64 vdsm-gluster-4.20.46-1.el7.x86_64 python2-gluster-3.12.15-1.el7.x86_64 glusterfs-rdma-3.12.15-1.el7.x86_64 And as per the metadata heal logs, the source was brick0 (corresponding to tendrl27) and sink was brick 2 (corresponding to tendrl 25). 
This means step 1 of metadata heal did a getxattr on tendrl27 which was still at 3.12.5 and got the dicts with a certain format which didn't have the "value" type (because it's only introduced in 5.3). And this same dict was used for setxattr in step 3 which silently fails to add "trusted.glusterfs.shard.block-size" and "trusted.glusterfs.shard.file-size" xattrs to the setxattr request because of the dict_to_xdr() conversion failure in protocol/client but succeeds the overall operation. So afr thought the heal succeeded although the xattr that needed heal was never sent over the wire. This led to one or more files ending up with shard xattrs removed on-disk failing every other operation on it pretty much. --- Additional comment from Krutika Dhananjay on 2019-03-01 07:29:29 UTC --- So the backward compatibility was broken with the introduction of the following patch - Patch that broke this compatibility - https://review.gluster.org/c/glusterfs/+/19098 commit 303cc2b54797bc5371be742543ccb289010c92f2 Author: Amar Tumballi Date: Fri Dec 22 13:12:42 2017 +0530 protocol: make on-wire-change of protocol using new XDR definition. With this patchset, some major things are changed in XDR, mainly: * Naming: Instead of gfs3/gfs4 settle for gfx_ for xdr structures * add iattx as a separate structure, and add conversion methods * the *_rsp structure is now changed, and is also reduced in number (ie, no need for different strucutes if it is similar to other response). * use proper XDR methods for sending dict on wire. Also, with the change of xdr structure, there are changes needed outside of xlator protocol layer to handle these properly. Mainly because the abstraction was broken to support 0-copy RDMA with payload for write and read FOP. This made transport layer know about the xdr payload, hence with the change of xdr payload structure, transport layer needed to know about the change. Updates #384 Change-Id: I1448fbe9deab0a1b06cb8351f2f37488cefe461f Signed-off-by: Amar Tumballi Any operation in a heterogeneous cluster which reads xattrs on-disk and subsequently writes it (like metadata heal for instance) will cause one or more on-disk xattrs to disappear. In fact logs suggest even dht on-disk layouts vanished - [2019-02-26 13:35:30.253348] I [MSGID: 109092] [dht-layout.c:744:dht_layout_dir_mismatch] 0-engine-dht: /36ea5b11-19fb-4755-b664-088f6e5c4df2: Disk layout missing, gfid = d0735acd-14ec-4ef9-8f5f-6a3c4ae12c08 --- Additional comment from Worker Ant on 2019-03-05 03:16:15 UTC --- REVIEW: https://review.gluster.org/22300 (dict: handle STR_OLD data type in xdr conversions) posted (#1) for review on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker https://bugzilla.redhat.com/show_bug.cgi?id=1684385 [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
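The root cause described in bug 1686364 above comes down to a serialization step that silently drops dict entries whose data type it does not recognise: the 3.12 brick returns the shard xattrs in the old, untyped string form, and the 5.3 client's on-wire conversion skips them while the overall setxattr still succeeds. The sketch below is a self-contained model of that failure mode and of the general shape of the fix referenced in the review above (treat the legacy string type as a plain string instead of dropping the entry); the enum, struct and function names are illustrative assumptions, not gluster's actual dict or XDR code.

    /* Self-contained model of the "key not sent on wire" failure mode.
     * Names and types are illustrative, not gluster's dict/XDR code. */
    #include <stdio.h>

    typedef enum {
        VAL_UNKNOWN = 0,
        VAL_STR,       /* value carrying the new on-wire type */
        VAL_STR_OLD,   /* untyped value as returned by a 3.12 brick */
    } val_type_t;

    struct kv {
        const char *key;
        const char *value;
        val_type_t type;
    };

    /* Convert one dict entry for the wire; returns 0 if sent, -1 if dropped. */
    static int put_on_wire(const struct kv *e)
    {
        switch (e->type) {
        case VAL_STR:
            printf("sent %s=%s\n", e->key, e->value);
            return 0;
        case VAL_STR_OLD:
            /* The fix: map the legacy type onto a plain string instead of
             * letting it fall through to the default branch below. */
            printf("sent %s=%s (legacy type)\n", e->key, e->value);
            return 0;
        default:
            /* Without that case, entries such as the shard size xattrs end
             * up here, get skipped, and the setxattr still "succeeds". */
            fprintf(stderr, "key '%s' is not sent on wire\n", e->key);
            return -1;
        }
    }

    int main(void)
    {
        struct kv heal_payload[] = {
            { "trusted.glusterfs.shard.block-size", "<opaque>", VAL_STR_OLD },
            { "trusted.glusterfs.shard.file-size",  "<opaque>", VAL_STR_OLD },
        };
        for (size_t i = 0; i < sizeof(heal_payload) / sizeof(heal_payload[0]); i++)
            put_on_wire(&heal_payload[i]);
        return 0;
    }

Run against the two shard xattrs, the unfixed default branch reproduces the "is not sent on wire" warnings seen in the heal logs above, which is how the metadata heal could report success while the xattrs never reached the sink brick.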
From bugzilla at redhat.com Thu Mar 7 10:38:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 10:38:51 +0000 Subject: [Bugs] [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1686364 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686364 [Bug 1686364] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 10:38:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 10:38:51 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1686364 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686364 [Bug 1686364] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 10:40:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 10:40:15 +0000 Subject: [Bugs] [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22316 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 10:40:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 10:40:16 +0000 Subject: [Bugs] [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22316 (dict: handle STR_OLD data type in xdr conversions) posted (#1) for review on release-5 by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 10:41:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 10:41:42 +0000 Subject: [Bugs] [Bug 1686364] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686364 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22317 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 7 10:41:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 10:41:44 +0000 Subject: [Bugs] [Bug 1686364] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686364 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22317 (dict: handle STR_OLD data type in xdr conversions) posted (#1) for review on release-6 by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 10:53:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 10:53:42 +0000 Subject: [Bugs] [Bug 1686009] gluster fuse crashed with segmentation fault possibly due to dentry not found In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686009 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22318 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 10:53:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 10:53:43 +0000 Subject: [Bugs] [Bug 1686009] gluster fuse crashed with segmentation fault possibly due to dentry not found In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686009 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22318 (inode: handle 'dentry_list' in destroy) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 11:03:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:03:32 +0000 Subject: [Bugs] [Bug 1686371] New: Cleanup nigel access and document it Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686371 Bug ID: 1686371 Summary: Cleanup nigel access and document it Product: GlusterFS Version: 4.1 Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: mscherer at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: Nigel babu left the admin team as well as Red Hat. We should clean and remove access and document that. SO far, here is what we have to do: Access to remove: - remove from github (group Github-organization-Admins) - remove ssh keys in ansible => done - remove alias from root on private repo => done - remove alias from group_vars/nagios/admins.yml => done - remove entry from jenkins (on https://build.gluster.org/configureSecurity/) => done - remove from gerrit permission => TODO - remove from gluster repo => edit ./MAINTAINERS - remove from ec2 => TODO While on it, there is a few passwords and stuff to rotate: - rotate the ansible ssh keys => done, but we need to write down the process (ideally, a ansible playbook) - change nagios password => TODO - rotate the jenkins ssh keys => TODO, write a process Maybe more need to be done -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 7 11:12:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:12:27 +0000 Subject: [Bugs] [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684029 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22319 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 11:12:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:12:28 +0000 Subject: [Bugs] [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684029 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22319 (core: make compute_cksum function op_version compatible) posted (#1) for review on release-6 by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 11:20:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:20:17 +0000 Subject: [Bugs] [Bug 1686371] Cleanup nigel access and document it In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686371 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22320 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 11:20:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:20:18 +0000 Subject: [Bugs] [Bug 1686371] Cleanup nigel access and document it In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686371 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22320 (Remove Nigel, as he left the company) posted (#1) for review on master by Michael Scherer -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 11:20:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:20:40 +0000 Subject: [Bugs] [Bug 1686396] New: ls and rm run on contents of same directory from a single mount point results in ENOENT errors Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686396 Bug ID: 1686396 Summary: ls and rm run on contents of same directory from a single mount point results in ENOENT errors Product: GlusterFS Version: 4.1 Status: NEW Component: core Assignee: bugs at gluster.org Reporter: rgowdapp at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: This bug was reported by Nithya. 
Create a pure replicate volume and enable the following options: Volume Name: xvol Type: Replicate Volume ID: 095d6083-ea82-4ec9-a3a9-498fbd5f8dbe Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 192.168.122.7:/bricks/brick1/xvol-1 Brick2: 192.168.122.7:/bricks/brick1/xvol-2 Brick3: 192.168.122.7:/bricks/brick1/xvol-3 Options Reconfigured: server.event-threads: 4 client.event-threads: 4 performance.parallel-readdir: on performance.readdir-ahead: on transport.address-family: inet nfs.disable: on performance.client-io-threads: off Fuse mount using: mount -t glusterfs -o lru-limit=500 -s 192.168.122.7:/xvol /mnt/g1 mkdir /mnt/g1/dirdd >From terminal 1: cd /mnt/g1/dirdd while (true); do ls -lR dirdd; done >From terminal 2: while true; do dd if=/dev/urandom of=/mnt/g1/dirdd/1G.file bs=1M count=1; rm -f /mnt/g1/dirdd/1G.file; done With performance.parallel-readdir on, ls runs into ESTALE errors. With performance.parallel-readdir off, no errors are seen. Note that both ls and rm are running on same mount point. Version-Release number of selected component (if applicable): How reproducible: consistently Steps to Reproduce: 1. 2. 3. Actual results: ls runs into ESTALE errors Expected results: ls shouldn't run into ESTALE errors Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 11:21:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:21:00 +0000 Subject: [Bugs] [Bug 1686396] ls and rm run on contents of same directory from a single mount point results in ENOENT errors In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686396 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Version|4.1 |mainline -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 11:24:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:24:15 +0000 Subject: [Bugs] [Bug 1674412] listing a file while writing to it causes deadlock In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674412 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22321 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 11:24:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:24:16 +0000 Subject: [Bugs] [Bug 1674412] listing a file while writing to it causes deadlock In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674412 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22321 (performance/readdir-ahead: fix deadlock) posted (#1) for review on master by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 7 11:31:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:31:37 +0000 Subject: [Bugs] [Bug 1686398] New: Thin-arbiter minor fixes Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686398 Bug ID: 1686398 Summary: Thin-arbiter minor fixes Product: GlusterFS Version: mainline Status: NEW Component: replicate Assignee: bugs at gluster.org Reporter: ravishankar at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Address post-merge review comments for commit 69532c141be160b3fea03c1579ae4ac13018dcdf -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 11:31:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:31:57 +0000 Subject: [Bugs] [Bug 1686398] Thin-arbiter minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686398 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Triaged Status|NEW |ASSIGNED Assignee|bugs at gluster.org |ravishankar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 11:34:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:34:23 +0000 Subject: [Bugs] [Bug 1686399] New: listing a file while writing to it causes deadlock Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686399 Bug ID: 1686399 Summary: listing a file while writing to it causes deadlock Product: GlusterFS Version: 6 Status: NEW Component: core Assignee: bugs at gluster.org Reporter: rgowdapp at redhat.com CC: bugs at gluster.org Depends On: 1674412 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1674412 +++ Description of problem: Following test case was given by Nithya. Create a pure replicate volume and enable the following options: Volume Name: xvol Type: Replicate Volume ID: 095d6083-ea82-4ec9-a3a9-498fbd5f8dbe Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 192.168.122.7:/bricks/brick1/xvol-1 Brick2: 192.168.122.7:/bricks/brick1/xvol-2 Brick3: 192.168.122.7:/bricks/brick1/xvol-3 Options Reconfigured: server.event-threads: 4 client.event-threads: 4 performance.parallel-readdir: on performance.readdir-ahead: on transport.address-family: inet nfs.disable: on performance.client-io-threads: off Fuse mount using: mount -t glusterfs -o lru-limit=500 -s 192.168.122.7:/xvol /mnt/g1 mkdir /mnt/g1/dirdd >From terminal 1: cd /mnt/g1/dirdd while (true); do ls -lR dirdd; done >From terminal 2: while true; do dd if=/dev/urandom of=/mnt/g1/dirdd/1G.file bs=1M count=1; rm -f /mnt/g1/dirdd/1G.file; done On running this test, both dd and ls hang after some time. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. 
Actual results: Expected results: Additional info: --- Additional comment from Raghavendra G on 2019-02-11 10:01:41 UTC --- (gdb) thr 8 [Switching to thread 8 (Thread 0x7f28072d1700 (LWP 26397))] #0 0x00007f2813a404cd in __lll_lock_wait () from /lib64/libpthread.so.0 (gdb) bt #0 0x00007f2813a404cd in __lll_lock_wait () from /lib64/libpthread.so.0 #1 0x00007f2813a3bdcb in _L_lock_812 () from /lib64/libpthread.so.0 #2 0x00007f2813a3bc98 in pthread_mutex_lock () from /lib64/libpthread.so.0 #3 0x00007f2805e3122f in rda_inode_ctx_get_iatt (inode=0x7f27ec0010b8, this=0x7f2800012560, attr=0x7f28072d0700) at readdir-ahead.c:286 #4 0x00007f2805e3134d in __rda_fill_readdirp (ctx=0x7f27f800f290, request_size=, entries=0x7f28072d0890, this=0x7f2800012560) at readdir-ahead.c:326 #5 __rda_serve_readdirp (this=this at entry=0x7f2800012560, ctx=ctx at entry=0x7f27f800f290, size=size at entry=4096, entries=entries at entry=0x7f28072d0890, op_errno=op_errno at entry=0x7f28072d085c) at readdir-ahead.c:353 #6 0x00007f2805e32732 in rda_fill_fd_cbk (frame=0x7f27f801c1e8, cookie=, this=0x7f2800012560, op_ret=3, op_errno=2, entries=, xdata=0x0) at readdir-ahead.c:581 #7 0x00007f2806097447 in client4_0_readdirp_cbk (req=, iov=, count=, myframe=0x7f27f800f498) at client-rpc-fops_v2.c:2339 #8 0x00007f28149a29d1 in rpc_clnt_handle_reply (clnt=clnt at entry=0x7f2800051120, pollin=pollin at entry=0x7f280006a180) at rpc-clnt.c:755 #9 0x00007f28149a2d37 in rpc_clnt_notify (trans=0x7f28000513e0, mydata=0x7f2800051150, event=, data=0x7f280006a180) at rpc-clnt.c:922 #10 0x00007f281499f5e3 in rpc_transport_notify (this=this at entry=0x7f28000513e0, event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7f280006a180) at rpc-transport.c:542 #11 0x00007f2808d88f77 in socket_event_poll_in (notify_handled=true, this=0x7f28000513e0) at socket.c:2522 #12 socket_event_handler (fd=, idx=, gen=, data=0x7f28000513e0, poll_in=, poll_out=, poll_err=0, event_thread_died=0 '\000') at socket.c:2924 #13 0x00007f2814c5a926 in event_dispatch_epoll_handler (event=0x7f28072d0e80, event_pool=0x90d560) at event-epoll.c:648 #14 event_dispatch_epoll_worker (data=0x96f1e0) at event-epoll.c:762 #15 0x00007f2813a39dd5 in start_thread () from /lib64/libpthread.so.0 #16 0x00007f2813302b3d in clone () from /lib64/libc.so.6 [Switching to thread 7 (Thread 0x7f2806ad0700 (LWP 26398))] #0 0x00007f2813a404cd in __lll_lock_wait () from /lib64/libpthread.so.0 (gdb) bt #0 0x00007f2813a404cd in __lll_lock_wait () from /lib64/libpthread.so.0 #1 0x00007f2813a3bdcb in _L_lock_812 () from /lib64/libpthread.so.0 #2 0x00007f2813a3bc98 in pthread_mutex_lock () from /lib64/libpthread.so.0 #3 0x00007f2805e2cd85 in rda_mark_inode_dirty (this=this at entry=0x7f2800012560, inode=0x7f27ec009da8) at readdir-ahead.c:234 #4 0x00007f2805e2f3cc in rda_writev_cbk (frame=0x7f27f800ef48, cookie=, this=0x7f2800012560, op_ret=131072, op_errno=0, prebuf=0x7f2806acf870, postbuf=0x7f2806acf910, xdata=0x0) at readdir-ahead.c:769 #5 0x00007f2806094064 in client4_0_writev_cbk (req=, iov=, count=, myframe=0x7f27f801a7f8) at client-rpc-fops_v2.c:685 #6 0x00007f28149a29d1 in rpc_clnt_handle_reply (clnt=clnt at entry=0x7f2800051120, pollin=pollin at entry=0x7f27f8008320) at rpc-clnt.c:755 #7 0x00007f28149a2d37 in rpc_clnt_notify (trans=0x7f28000513e0, mydata=0x7f2800051150, event=, data=0x7f27f8008320) at rpc-clnt.c:922 #8 0x00007f281499f5e3 in rpc_transport_notify (this=this at entry=0x7f28000513e0, event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at 
entry=0x7f27f8008320) at rpc-transport.c:542 #9 0x00007f2808d88f77 in socket_event_poll_in (notify_handled=true, this=0x7f28000513e0) at socket.c:2522 #10 socket_event_handler (fd=, idx=, gen=, data=0x7f28000513e0, poll_in=, poll_out=, poll_err=0, event_thread_died=0 '\000') at socket.c:2924 #11 0x00007f2814c5a926 in event_dispatch_epoll_handler (event=0x7f2806acfe80, event_pool=0x90d560) at event-epoll.c:648 #12 event_dispatch_epoll_worker (data=0x96f4b0) at event-epoll.c:762 #13 0x00007f2813a39dd5 in start_thread () from /lib64/libpthread.so.0 #14 0x00007f2813302b3d in clone () from /lib64/libc.so.6 In writev and readdirp codepath inode and fd-ctx locks are acquired in opposite order causing a deadlock. --- Additional comment from Worker Ant on 2019-03-07 11:24:16 UTC --- REVIEW: https://review.gluster.org/22321 (performance/readdir-ahead: fix deadlock) posted (#1) for review on master by Raghavendra G Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1674412 [Bug 1674412] listing a file while writing to it causes deadlock -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 11:34:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:34:23 +0000 Subject: [Bugs] [Bug 1674412] listing a file while writing to it causes deadlock In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674412 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1686399 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686399 [Bug 1686399] listing a file while writing to it causes deadlock -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 11:39:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:39:35 +0000 Subject: [Bugs] [Bug 1686398] Thin-arbiter minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686398 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22323 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 11:39:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 11:39:36 +0000 Subject: [Bugs] [Bug 1686398] Thin-arbiter minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686398 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22323 (afr: thin-arbiter read txn minor fixes) posted (#1) for review on master by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. 
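A note on the hang in bug 1674412 / 1686399 above: per the RCA, the writev and readdirp callbacks in readdir-ahead acquire the inode and fd-ctx locks in opposite orders, so each thread ends up waiting on the mutex the other already holds. Below is a minimal, self-contained C sketch of that ABBA pattern. It is illustrative only (plain pthread mutexes, invented names), not GlusterFS source, and it hangs by design.

/* ABBA deadlock sketch (illustrative only, not GlusterFS source).
 * Thread A mimics the readdirp callback: lock 1 first, then lock 2.
 * Thread B mimics the writev callback:  lock 2 first, then lock 1.
 * Each ends up waiting for the mutex the other already holds, so the
 * program hangs by design, like the ls/dd hang in the bug report. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER; /* invented name */
static pthread_mutex_t fdctx_lock = PTHREAD_MUTEX_INITIALIZER; /* invented name */

static void *readdirp_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&inode_lock);
    sleep(1);                        /* widen the race window */
    pthread_mutex_lock(&fdctx_lock); /* blocks once the other thread holds it */
    puts("readdirp path got both locks");
    pthread_mutex_unlock(&fdctx_lock);
    pthread_mutex_unlock(&inode_lock);
    return NULL;
}

static void *writev_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&fdctx_lock);
    sleep(1);
    pthread_mutex_lock(&inode_lock); /* opposite order: deadlock */
    puts("writev path got both locks");
    pthread_mutex_unlock(&inode_lock);
    pthread_mutex_unlock(&fdctx_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, readdirp_path, NULL);
    pthread_create(&b, NULL, writev_path, NULL);
    pthread_join(a, NULL);           /* never returns */
    pthread_join(b, NULL);
    return 0;
}

The usual cure is to make every code path take the two locks in one fixed order, or to release one lock before taking the other; the approach actually taken for readdir-ahead is the one posted in https://review.gluster.org/22321 referenced above.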
From bugzilla at redhat.com Thu Mar 7 12:04:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 12:04:45 +0000 Subject: [Bugs] [Bug 1685944] WORM-XLator: Maybe integer overflow when computing new atime In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685944 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-07 12:04:45 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22309 (WORM-Xlator: Maybe integer overflow when computing new atime) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 12:28:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 12:28:56 +0000 Subject: [Bugs] [Bug 1686371] Cleanup nigel access and document it In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686371 --- Comment #2 from M. Scherer --- Alos, removed from jenkins-admins on github. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 12:59:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 12:59:42 +0000 Subject: [Bugs] [Bug 1676400] rm -rf fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676400 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1686272 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686272 [Bug 1686272] fuse mount logs inundated with [dict.c:471:dict_get] (-->/usr/lib64/glusterfs/3.12.2/xlator/cluster/replicate.so(+0x6228d) [0x7f9029d8628d] -->/usr/lib64/glusterfs/3.12.2/xlator/cluster/distribute.so(+0x202f7) [0x7f9029aa12f7] -->/lib64/libglusterfs.so.0( -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Mar 7 13:57:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 13:57:06 +0000 Subject: [Bugs] [Bug 1686461] New: Quotad.log filled with 0-dict is not sent on wire [Invalid argument] messages Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686461 Bug ID: 1686461 Summary: Quotad.log filled with 0-dict is not sent on wire [Invalid argument] messages Product: GlusterFS Version: 4.1 Status: NEW Component: quota Assignee: bugs at gluster.org Reporter: ryan at magenta.tv CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: [2019-03-07 13:51:35.683031] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'max_count' is not sent on wire [Invalid argument] [2019-03-07 13:51:35.683056] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'quota-list-count' is not sent on wire [Invalid argument] [2019-03-07 13:51:35.683057] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'gfid' is not sent on wire [Invalid argument] The message "W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'default-soft-limit' is not sent on wire [Invalid argument]" repeated 2 times between [2019-03-07 13:51:35.682913] and [2019-03-07 13:51:35.683120] [2019-03-07 13:51:35.683121] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'volume-uuid' is not sent on wire [Invalid argument] [2019-03-07 13:51:35.683152] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'max_count' is not sent on wire [Invalid argument] [2019-03-07 13:51:35.683153] W [MSGID: 101016] [glusterfs3.h:743:dict_to_xdr] 0-dict: key 'quota-list-count' is not sent on wire [Invalid argument] Version-Release number of selected component (if applicable): 4.1.7 How reproducible: Everytime Steps to Reproduce: 1.Turn on quotas for volume 2. Tail quotad.log log Actual results: Log is filled with constantly repeating messages as above. System drive fills and Glusterd daemon fails Expected results: Logs are not filled with these messages Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 14:42:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 14:42:40 +0000 Subject: [Bugs] [Bug 1558507] Gluster allows renaming of folders, which contain WORMed/Retain or WORMed files In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1558507 --- Comment #2 from david.spisla at iternity.com --- Hello Amar, yes, it is still an issue. If a folder (or a subfolder of this folder) contains a WORMed file, it shouldn't be allowed to rename the folder -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
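On the flood reported in bug 1686461 above: the warnings come from the serialization step named in the log lines (dict_to_xdr in glusterfs3.h), where any key whose value type the encoder does not handle is dropped from the request and a warning is printed per key, per RPC. The toy C program below only reproduces that shape of behaviour; it is an illustration with invented types and names, not the actual dict/XDR code. The same idea is behind the "dict: handle STR_OLD data type in xdr conversions" patches tracked in bugs 1684385 / 1686364 above: a value type the converter does not recognise simply never reaches the other side.

/* Toy serializer, not the actual dict_to_xdr()/glusterfs3.h code.
 * Keys whose value type the encoder does not handle are dropped from
 * the request and a warning is logged on every attempt, which is the
 * shape of the quotad.log flood quoted in the bug report. */
#include <stdio.h>

enum value_type { TYPE_INT, TYPE_STR, TYPE_STR_OLD, TYPE_UNSET };

struct kv {
    const char      *key;
    enum value_type  type;
};

static int serialize_for_wire(const struct kv *items, int n)
{
    int sent = 0;

    for (int i = 0; i < n; i++) {
        switch (items[i].type) {
        case TYPE_INT:
        case TYPE_STR:
            sent++;          /* the value would be encoded here */
            break;
        default:
            /* legacy or unset type: key never reaches the wire */
            fprintf(stderr,
                    "W: key '%s' is not sent on wire [Invalid argument]\n",
                    items[i].key);
            break;
        }
    }
    return sent;
}

int main(void)
{
    const struct kv request[] = {
        { "gfid",             TYPE_STR     },
        { "quota-list-count", TYPE_UNSET   },  /* warned, dropped */
        { "volume-uuid",      TYPE_STR_OLD },  /* warned, dropped */
    };

    serialize_for_wire(request, 3);
    return 0;
}

Running it prints one "is not sent on wire" line per unhandled key per call; at the rate quotad issues these RPCs that is enough to fill the system drive, which is the failure mode described in the report.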
From bugzilla at redhat.com Thu Mar 7 15:11:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 15:11:29 +0000 Subject: [Bugs] [Bug 1674412] listing a file while writing to it causes deadlock In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674412 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-07 15:11:29 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22321 (performance/readdir-ahead: fix deadlock) merged (#2) on master by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 15:11:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 15:11:29 +0000 Subject: [Bugs] [Bug 1686399] listing a file while writing to it causes deadlock In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686399 Bug 1686399 depends on bug 1674412, which changed state. Bug 1674412 Summary: listing a file while writing to it causes deadlock https://bugzilla.redhat.com/show_bug.cgi?id=1674412 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 15:14:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 15:14:05 +0000 Subject: [Bugs] [Bug 1686399] listing a file while writing to it causes deadlock In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686399 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22322 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 15:14:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 15:14:07 +0000 Subject: [Bugs] [Bug 1686399] listing a file while writing to it causes deadlock In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686399 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22322 (performance/readdir-ahead: fix deadlock) posted (#2) for review on release-6 by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 16:08:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 16:08:03 +0000 Subject: [Bugs] [Bug 1644322] flooding log with "glusterfs-fuse: read from /dev/fuse returned -1 (Operation not permitted)" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644322 Csaba Henk changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |lohmaier+rhbz at gmail.com Flags| |needinfo?(lohmaier+rhbz at gma | |il.com) --- Comment #1 from Csaba Henk --- Please confirm one thing. 
So does it happen that the glusterfs client producing the "read from /dev/fuse returned -1 (Operation not permitted)" flood recovers and gets back to normal operational state? I wonder if it's a transient overloaded state in the kernel or a non-recoverable faulty state. (As far as I understand you, it should be the former, just please let me know if my understanding is correct.) And if yes, then is there anything else that can be said about the circumstances? How often does it manage to recover, how long does the faulty state hold, is there anything that you can observe about the system state when it hits in, while it holds, when it ceases? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 17:40:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 17:40:54 +0000 Subject: [Bugs] [Bug 1686568] New: [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686568 Bug ID: 1686568 Summary: [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter Product: GlusterFS Version: mainline Status: NEW Component: geo-replication Keywords: ZStream Severity: high Assignee: bugs at gluster.org Reporter: ksubrahm at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, ksubrahm at redhat.com, nchilaka at redhat.com, pkarampu at redhat.com, rallan at redhat.com, ravishankar at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, sunkumar at redhat.com Blocks: 1683893 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1683893 [Bug 1683893] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 7 17:42:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 17:42:22 +0000 Subject: [Bugs] [Bug 1686568] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686568 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |ksubrahm at redhat.com --- Comment #1 from Karthik U S --- Description of problem: ======================= While converting 2x2 to 2x(2+1) (arbiter), there was a checksum mismatch: [root at dhcp43-143 ~]# ./arequal-checksum -p /mnt/master/ Entry counts Regular files : 10000 Directories : 2011 Symbolic links : 11900 Other : 0 Total : 23911 Metadata checksums Regular files : 5ce564791c Directories : 288ecb21ce24 Symbolic links : 3e9 Other : 3e9 Checksums Regular files : 8e69e8576625d36f9ee1866c92bfb6a3 Directories : 4a596e7e1e792061 Symbolic links : 756e690d61497f6a Other : 0 Total : 2fbf69488baa3ac7 [root at dhcp43-143 ~]# ./arequal-checksum -p /mnt/slave/ Entry counts Regular files : 10000 Directories : 2011 Symbolic links : 11900 Other : 0 Total : 23911 Metadata checksums Regular files : 5ce564791c Directories : 288ecb21ce24 Symbolic links : 3e9 Other : 3e9 Checksums Regular files : 53c64bd1144f6d9855f0af3edb55e614 Directories : 4a596e7e1e792061 Symbolic links : 756e690d61497f6a Other : 0 Total : 3901e39cb02ad487 Everything matches except under "CHECKSUMS", Regular files and the total are a mismatch. Version-Release number of selected component (if applicable): ============================================================== glusterfs-3.12.2-45.el7rhgs.x86_64 How reproducible: ================= 2/2 Steps to Reproduce: ==================== 1. Create and start a geo-rep session with master and slave being 2x2 2. Mount the vols and start pumping data 3. Disable and stop self healing (prior to add-brick) # gluster volume set VOLNAME cluster.data-self-heal off # gluster volume set VOLNAME cluster.metadata-self-heal off # gluster volume set VOLNAME cluster.entry-self-heal off # gluster volume set VOLNAME self-heal-daemon off 4. Add brick to the master and slave to convert them to 2x(2+1) arbiter vols 5. Start rebalance on master and slave 6. Re-enable self healing : # gluster volume set VOLNAME cluster.data-self-heal on # gluster volume set VOLNAME cluster.metadata-self-heal on # gluster volume set VOLNAME cluster.entry-self-heal on # gluster volume set VOLNAME self-heal-daemon on 7. Wait for rebalance to complete 8. Check the checksum between master and slave Actual results: =============== Checksum does not fully match Expected results: ================ Checksum should match -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 7 17:50:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 17:50:06 +0000 Subject: [Bugs] [Bug 1686568] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686568 --- Comment #2 from Karthik U S --- RCA: If arbiter brick is pending data heal, then self heal will just restore the timestamps of the file and resets the pending xattrs on the source bricks. It will not send any write on the arbiter brick. 
Here in the add-brick scenario, it will create the entries and then restores the timestamps and other metadata of the files from the source brick. Hence the data changes will not be marked on the changelog, leading to missing data on the slave volume after sync. Possible Fixes: 1. Do not mark arbiter brick as ACTIVE, as it will not have the changelogs for the data transactions happened when it was down/faulty even after the completion of heal. 2. Send 1 byte write on the arbiter brick from self heal as we do with the normal writes from the clients. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 17:53:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 17:53:47 +0000 Subject: [Bugs] [Bug 1686568] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686568 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22325 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 17:53:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 17:53:48 +0000 Subject: [Bugs] [Bug 1686568] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686568 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22325 (cluster/afr: Send 1byte write on to arbiter brick from SHD) posted (#1) for review on master by Karthik U S -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 04:03:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 04:03:51 +0000 Subject: [Bugs] [Bug 1634664] Inconsistent quorum checks during open and fd based operations In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1634664 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-08 04:03:51 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22251 (cluster/afr: Add quorum checks to open & opendir fops) merged (#6) on master by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 04:34:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 04:34:51 +0000 Subject: [Bugs] [Bug 1686034] Request access to docker hub gluster organisation. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686034 Sridhar Seshasayee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |NOTABUG Last Closed| |2019-03-08 04:34:51 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
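To make the RCA for bug 1686568 above concrete: a brick's changelog records a data change only when a write FOP actually reaches the brick, and an arbiter data-heal that merely resets pending xattrs and restores timestamps issues no write, so changelog-driven geo-replication finds nothing to replay. The toy below models just that bookkeeping; it is not AFR, SHD or changelog code, and all names are invented. Proposed fix (2), the 1-byte write from self-heal, is what review 22325 above posts.

/* Toy model of the bookkeeping in the RCA, not AFR/SHD/changelog code.
 * A DATA record only appears in a brick's journal when a write FOP
 * reaches the brick; a metadata-only heal leaves the journal empty, so
 * a changelog consumer (geo-rep) has nothing to replay. */
#include <stdio.h>

struct brick {
    const char *name;
    int         data_records;   /* changelog DATA entries */
};

static void write_fop(struct brick *b, size_t nbytes)
{
    (void)nbytes;               /* arbiter keeps no data, but the FOP
                                   still passes the journalling layer */
    b->data_records++;
}

static void metadata_only_heal(struct brick *b)
{
    (void)b;                    /* reset pending xattrs, restore
                                   timestamps: no write FOP issued,
                                   nothing journalled */
}

int main(void)
{
    struct brick arbiter = { "arbiter", 0 };

    metadata_only_heal(&arbiter);
    printf("%s after metadata-only heal: %d DATA records\n",
           arbiter.name, arbiter.data_records);   /* 0: geo-rep misses it */

    write_fop(&arbiter, 1);                       /* proposed 1-byte write */
    printf("%s after 1-byte write heal:  %d DATA records\n",
           arbiter.name, arbiter.data_records);   /* 1: geo-rep can sync */
    return 0;
}

The point of the 1-byte write is not the payload (the arbiter stores no data) but the journal entry it leaves behind for geo-replication to pick up.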
From bugzilla at redhat.com Fri Mar 8 04:40:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 04:40:48 +0000 Subject: [Bugs] [Bug 1428080] Fixes halo multi-region fail-over regression In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428080 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |ravishankar at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-03-08 04:40:48 --- Comment #1 from Ravishankar N --- Basic halo replication functionality is present in master.There are no immediate plans for forward porting any pending halo related patches from the release-3.8-fb branch into master, hence closing the bug. Please feel free to re-open if there is a need for re-assessment. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 04:41:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 04:41:08 +0000 Subject: [Bugs] [Bug 1428092] Another shot at stablizing halo prove tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428092 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |ravishankar at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-03-08 04:41:08 --- Comment #1 from Ravishankar N --- Basic halo replication functionality is present in master.There are no immediate plans for forward porting any pending halo related patches from the release-3.8-fb branch into master, hence closing the bug. Please feel free to re-open if there is a need for re-assessment. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 04:41:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 04:41:19 +0000 Subject: [Bugs] [Bug 1428091] Make halo prove tests less racy In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428091 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |ravishankar at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-03-08 04:41:19 --- Comment #1 from Ravishankar N --- Basic halo replication functionality is present in master.There are no immediate plans for forward porting any pending halo related patches from the release-3.8-fb branch into master, hence closing the bug. Please feel free to re-open if there is a need for re-assessment. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Mar 8 04:41:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 04:41:37 +0000 Subject: [Bugs] [Bug 1428090] cluster/afr: Hybrid mounts must honor _marked_ up/down states In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428090 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |ravishankar at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-03-08 04:41:37 --- Comment #1 from Ravishankar N --- Basic halo replication functionality is present in master.There are no immediate plans for forward porting any pending halo related patches from the release-3.8-fb branch into master, hence closing the bug. Please feel free to re-open if there is a need for re-assessment. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 04:41:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 04:41:51 +0000 Subject: [Bugs] [Bug 1428089] cluster/afr: Hybrid Halo mounts In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428089 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |ravishankar at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-03-08 04:41:51 --- Comment #1 from Ravishankar N --- Basic halo replication functionality is present in master.There are no immediate plans for forward porting any pending halo related patches from the release-3.8-fb branch into master, hence closing the bug. Please feel free to re-open if there is a need for re-assessment. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 04:42:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 04:42:07 +0000 Subject: [Bugs] [Bug 1428088] Fix Halo tests in v3.6.3 of GlusterFS + minor SHD bug fix In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428088 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |ravishankar at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-03-08 04:42:07 --- Comment #1 from Ravishankar N --- Basic halo replication functionality is present in master.There are no immediate plans for forward porting any pending halo related patches from the release-3.8-fb branch into master, hence closing the bug. Please feel free to re-open if there is a need for re-assessment. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Mar 8 04:42:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 04:42:24 +0000 Subject: [Bugs] [Bug 1428087] Add halo-min-samples option, better swap logic, edge case fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428087 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |ravishankar at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-03-08 04:42:24 --- Comment #1 from Ravishankar N --- Basic halo replication functionality is present in master.There are no immediate plans for forward porting any pending halo related patches from the release-3.8-fb branch into master, hence closing the bug. Please feel free to re-open if there is a need for re-assessment. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 04:42:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 04:42:43 +0000 Subject: [Bugs] [Bug 1428086] Make Halo calculate & use average latencies, not realtime In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428086 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |ravishankar at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-03-08 04:42:43 --- Comment #1 from Ravishankar N --- Basic halo replication functionality is present in master.There are no immediate plans for forward porting any pending halo related patches from the release-3.8-fb branch into master, hence closing the bug. Please feel free to re-open if there is a need for re-assessment. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 04:42:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 04:42:50 +0000 Subject: [Bugs] [Bug 1428085] Add option to toggle x-halo fail-over In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428085 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |ravishankar at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-03-08 04:42:50 --- Comment #1 from Ravishankar N --- Basic halo replication functionality is present in master.There are no immediate plans for forward porting any pending halo related patches from the release-3.8-fb branch into master, hence closing the bug. Please feel free to re-open if there is a need for re-assessment. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Mar 8 04:42:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 04:42:59 +0000 Subject: [Bugs] [Bug 1428084] Fix halo-enabled option In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1428084 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |ravishankar at redhat.com Resolution|--- |DEFERRED Last Closed| |2019-03-08 04:42:59 --- Comment #1 from Ravishankar N --- Basic halo replication functionality is present in master.There are no immediate plans for forward porting any pending halo related patches from the release-3.8-fb branch into master, hence closing the bug. Please feel free to re-open if there is a need for re-assessment. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 05:16:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 05:16:32 +0000 Subject: [Bugs] [Bug 1686711] New: [Thin-arbiter] : send correct error code in case of failure Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686711 Bug ID: 1686711 Summary: [Thin-arbiter] : send correct error code in case of failure Product: GlusterFS Version: mainline Status: NEW Component: replicate Assignee: bugs at gluster.org Reporter: aspandey at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Handle error code properly. https://review.gluster.org/#/c/glusterfs/+/21933/6/xlators/cluster/afr/src/afr-transaction.c at 1306 Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 05:25:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 05:25:53 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #580 from Worker Ant --- REVIEW: https://review.gluster.org/22312 (packaging: remove unnecessary ldconfig in scriptlets) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 05:47:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 05:47:09 +0000 Subject: [Bugs] [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22326 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Mar 8 05:47:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 05:47:10 +0000 Subject: [Bugs] [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #7 from Worker Ant --- REVIEW: https://review.gluster.org/22326 (glusterd: change the op-version) posted (#1) for review on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 05:47:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 05:47:11 +0000 Subject: [Bugs] [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684029 Bug 1684029 depends on bug 1685120, which changed state. Bug 1685120 Summary: upgrade from 3.12, 4.1 and 5 to 6 broken https://bugzilla.redhat.com/show_bug.cgi?id=1685120 What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 06:13:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 06:13:44 +0000 Subject: [Bugs] [Bug 1686711] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686711 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22327 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 06:54:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 06:54:08 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22328 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 06:54:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 06:54:09 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #581 from Worker Ant --- REVIEW: https://review.gluster.org/22328 (tests/afr: add a test case for replica 4 config) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
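The "upgrade from 3.12, 4.1 and 5 to 6 broken" patches above (bugs 1684029 / 1685120: making compute_cksum op-version compatible and then changing the op-version) appear to follow the usual op-version gating pattern: new behaviour stays disabled until every peer has been upgraded and the cluster op-version is raised, so mixed-version peers keep producing identical checksums during a rolling upgrade. A self-contained C sketch of that pattern follows; the threshold constant and both checksum functions are invented stand-ins, not glusterd code.

/* Toy op-version gating, not glusterd code.  GD_OP_VERSION_NEW_CKSUM is
 * an invented threshold and both checksums are stand-ins; the point is
 * only that the new algorithm stays off while any peer still negotiates
 * an older cluster op-version. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define GD_OP_VERSION_NEW_CKSUM 60000   /* hypothetical threshold */

static uint32_t cksum_old(const char *buf, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += (unsigned char)buf[i];
    return sum;
}

static uint32_t cksum_new(const char *buf, size_t len)
{
    uint32_t sum = 2166136261u;         /* FNV-1a style stand-in */
    for (size_t i = 0; i < len; i++)
        sum = (sum ^ (unsigned char)buf[i]) * 16777619u;
    return sum;
}

static uint32_t compute_cksum(const char *buf, size_t len,
                              uint32_t cluster_op_version)
{
    if (cluster_op_version < GD_OP_VERSION_NEW_CKSUM)
        return cksum_old(buf, len);     /* keep matching older peers */
    return cksum_new(buf, len);
}

int main(void)
{
    const char *volinfo = "volume=xvol;replica=3";

    printf("mixed cluster (op-version 31200): %u\n",
           (unsigned)compute_cksum(volinfo, strlen(volinfo), 31200));
    printf("all upgraded  (op-version 60000): %u\n",
           (unsigned)compute_cksum(volinfo, strlen(volinfo), 60000));
    return 0;
}

Peers that still negotiate an older cluster op-version keep agreeing on the old algorithm; only once every node is upgraded and the op-version is bumped does the new checksum take effect.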
From bugzilla at redhat.com Fri Mar 8 09:18:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 09:18:07 +0000 Subject: [Bugs] [Bug 1686754] New: Requesting merge rights for Cloudsync Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686754 Bug ID: 1686754 Summary: Requesting merge rights for Cloudsync Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: atumball at redhat.com Reporter: spalai at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Requesting to provide merge rights being a maintainer for Cloudsync Xlator. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 09:21:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 09:21:36 +0000 Subject: [Bugs] [Bug 1686754] Requesting merge rights for Cloudsync In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686754 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |medium Assignee|atumball at redhat.com |dkhandel at redhat.com Severity|unspecified |medium --- Comment #1 from Amar Tumballi --- https://github.com/gluster/glusterfs/blob/master/MAINTAINERS#L275 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 09:33:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 09:33:53 +0000 Subject: [Bugs] [Bug 1644758] CVE-2018-14660 glusterfs: Repeat use of "GF_META_LOCK_KEY" xattr allows for memory exhaustion [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644758 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |NEXTRELEASE Last Closed| |2019-03-08 09:33:53 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 09:33:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 09:33:54 +0000 Subject: [Bugs] [Bug 1647962] CVE-2018-14660 glusterfs: Repeat use of "GF_META_LOCK_KEY" xattr allows for memory exhaustion [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1647962 Bug 1647962 depends on bug 1644758, which changed state. Bug 1644758 Summary: CVE-2018-14660 glusterfs: Repeat use of "GF_META_LOCK_KEY" xattr allows for memory exhaustion [fedora-all] https://bugzilla.redhat.com/show_bug.cgi?id=1644758 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 09:33:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 09:33:55 +0000 Subject: [Bugs] [Bug 1647972] CVE-2018-14660 glusterfs: Repeat use of "GF_META_LOCK_KEY" xattr allows for memory exhaustion [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1647972 Bug 1647972 depends on bug 1644758, which changed state. 
Bug 1644758 Summary: CVE-2018-14660 glusterfs: Repeat use of "GF_META_LOCK_KEY" xattr allows for memory exhaustion [fedora-all] https://bugzilla.redhat.com/show_bug.cgi?id=1644758 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 09:34:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 09:34:39 +0000 Subject: [Bugs] [Bug 1644760] CVE-2018-14654 glusterfs: "features/index" translator can create arbitrary, empty files [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644760 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |NEXTRELEASE Last Closed| |2019-03-08 09:34:39 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 09:34:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 09:34:43 +0000 Subject: [Bugs] [Bug 1646200] CVE-2018-14654 glusterfs: "features/index" translator can create arbitrary, empty files [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1646200 Bug 1646200 depends on bug 1644760, which changed state. Bug 1644760 Summary: CVE-2018-14654 glusterfs: "features/index" translator can create arbitrary, empty files [fedora-all] https://bugzilla.redhat.com/show_bug.cgi?id=1644760 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 09:34:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 09:34:44 +0000 Subject: [Bugs] [Bug 1646204] CVE-2018-14654 glusterfs: "features/index" translator can create arbitrary, empty files [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1646204 Bug 1646204 depends on bug 1644760, which changed state. Bug 1644760 Summary: CVE-2018-14654 glusterfs: "features/index" translator can create arbitrary, empty files [fedora-all] https://bugzilla.redhat.com/show_bug.cgi?id=1644760 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Mar 8 09:39:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 09:39:09 +0000 Subject: [Bugs] [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-08 09:39:09 --- Comment #6 from Worker Ant --- REVIEW: https://review.gluster.org/22300 (dict: handle STR_OLD data type in xdr conversions) merged (#5) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 09:39:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 09:39:09 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1684385, which changed state. Bug 1684385 Summary: [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing https://bugzilla.redhat.com/show_bug.cgi?id=1684385 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 09:39:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 09:39:11 +0000 Subject: [Bugs] [Bug 1686364] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686364 Bug 1686364 depends on bug 1684385, which changed state. Bug 1684385 Summary: [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing https://bugzilla.redhat.com/show_bug.cgi?id=1684385 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 11:23:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 11:23:16 +0000 Subject: [Bugs] [Bug 1686754] Requesting merge rights for Cloudsync In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686754 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #2 from M. Scherer --- Hi, can you explain a bit more what is missing ? As i am not familliar with the ACL system of gerrit, I would like to understand the kind of access you want, and for example who have it already so I can see where this would be defined, or something like this. -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Mar 8 11:40:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 11:40:42 +0000 Subject: [Bugs] [Bug 1686754] Requesting merge rights for Cloudsync In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686754 --- Comment #3 from Susant Kumar Palai --- (In reply to M. Scherer from comment #2) > Hi, can you explain a bit more what is missing? As I am not familiar with The maintainer right is missing. It gives the ability to add +2 on a patch and merge it as well. > the ACL system of gerrit, I would like to understand the kind of access you > want, and for example who has it already, so I can see where this would be You can look at Amar's profile. > defined, or something like this. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 12:29:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 12:29:09 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22329 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 12:29:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 12:29:10 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #582 from Worker Ant --- REVIEW: https://review.gluster.org/22329 (logging.c/h: aggressively remove sprintfs()) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 13:19:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 13:19:55 +0000 Subject: [Bugs] [Bug 1686754] Requesting merge rights for Cloudsync In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686754 --- Comment #4 from M. Scherer --- So Amar has a bit more access than most people, but I suspect that we want you either in the github group glusterfs-maintainers or gluster-committers, based on the project.config file that can be accessed using the meta/config branch, according to https://gerrit-review.googlesource.com/Documentation/access-control.html I will add you to the group once I verify your github id (I see https://github.com/spalai but since there is no information at all on the profile, I can't be sure). I would also like to make sure folks with access to approve have 2FA turned on, so please take a look at https://help.github.com/en/articles/about-two-factor-authentication -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Fri Mar 8 14:01:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 14:01:10 +0000 Subject: [Bugs] [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 --- Comment #7 from Worker Ant --- REVIEW: https://review.gluster.org/22316 (dict: handle STR_OLD data type in xdr conversions) merged (#1) on release-5 by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 14:08:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 14:08:27 +0000 Subject: [Bugs] [Bug 1676429] distribute: Perf regression in mkdir path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676429 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-08 14:08:27 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22304 (io-threads: Prioritize fops with NO_ROOT_SQUASH pid) merged (#2) on release-6 by Susant Palai -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 14:08:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 14:08:28 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1676429, which changed state. Bug 1676429 Summary: distribute: Perf regression in mkdir path https://bugzilla.redhat.com/show_bug.cgi?id=1676429 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 14:08:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 14:08:28 +0000 Subject: [Bugs] [Bug 1676430] distribute: Perf regression in mkdir path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676430 Bug 1676430 depends on bug 1676429, which changed state. Bug 1676429 Summary: distribute: Perf regression in mkdir path https://bugzilla.redhat.com/show_bug.cgi?id=1676429 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Mar 8 14:08:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 14:08:58 +0000 Subject: [Bugs] [Bug 1686399] listing a file while writing to it causes deadlock In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686399 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-08 14:08:58 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22322 (performance/readdir-ahead: fix deadlock) merged (#3) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 14:09:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 14:09:19 +0000 Subject: [Bugs] [Bug 1686364] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686364 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-08 14:09:19 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22317 (dict: handle STR_OLD data type in xdr conversions) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 14:09:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 14:09:19 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1686364, which changed state. Bug 1686364 Summary: [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing https://bugzilla.redhat.com/show_bug.cgi?id=1686364 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 14:31:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 14:31:46 +0000 Subject: [Bugs] [Bug 1686754] Requesting merge rights for Cloudsync In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686754 --- Comment #5 from Susant Kumar Palai --- Michael, is there something pending from my side? Susant -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 14:36:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 14:36:24 +0000 Subject: [Bugs] [Bug 1686754] Requesting merge rights for Cloudsync In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686754 --- Comment #6 from M. Scherer --- Well, about your github account: you need to confirm that it is https://github.com/spalai (which does not show much information such as name or company, and our internal directory does not list that as your github account, so before granting privileges I prefer to have a confirmation). Also, please enable 2FA.
-- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 14:46:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 14:46:11 +0000 Subject: [Bugs] [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684029 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-08 14:46:11 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22319 (core: make compute_cksum function op_version compatible) merged (#3) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 14:46:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 14:46:11 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1684029, which changed state. Bug 1684029 Summary: upgrade from 3.12, 4.1 and 5 to 6 broken https://bugzilla.redhat.com/show_bug.cgi?id=1684029 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 14:48:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 14:48:49 +0000 Subject: [Bugs] [Bug 1686754] Requesting merge rights for Cloudsync In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686754 --- Comment #7 from Susant Kumar Palai --- I doubt any Maintainers using two-factor authentication. Plus I don't see India listed for SMS based 2FA. Updated the bio as you asked. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 14:52:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 14:52:23 +0000 Subject: [Bugs] [Bug 1686754] Requesting merge rights for Cloudsync In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686754 --- Comment #8 from M. Scherer --- You can use a yubikey with u2f, or any u2f compliant device. You can use google auth, or freeotp. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 15:09:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 15:09:51 +0000 Subject: [Bugs] [Bug 1686875] New: packaging: rdma on s390x, unnecessary ldconfig scriptlets Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686875 Bug ID: 1686875 Summary: packaging: rdma on s390x, unnecessary ldconfig scriptlets Product: GlusterFS Version: 6 Status: NEW Component: packaging Assignee: bugs at gluster.org Reporter: kkeithle at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: rdma on s390x since f27, rhel7 since 2016 unnecessary ldconfig in scriptlets reported by fedora Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. 
Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 15:21:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 15:21:39 +0000 Subject: [Bugs] [Bug 1686875] packaging: rdma on s390x, unnecessary ldconfig scriptlets In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686875 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22330 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 15:21:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 15:21:40 +0000 Subject: [Bugs] [Bug 1686875] packaging: rdma on s390x, unnecessary ldconfig scriptlets In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686875 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22330 (packaging: rdma on s390x, unnecessary ldconfig scriptlets) posted (#1) for review on release-6 by Kaleb KEITHLEY -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 8 15:41:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 15:41:52 +0000 Subject: [Bugs] [Bug 1686754] Requesting merge rights for Cloudsync In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686754 --- Comment #9 from M. Scherer --- Also, I didn't ask to change the bio, I just asked to confirm that is your account. Just telling me "yes, that's my account" would have been sufficient :/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 8 17:38:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 08 Mar 2019 17:38:37 +0000 Subject: [Bugs] [Bug 1313567] flooding of "dict is NULL" logging In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1313567 --- Comment #24 from Emerson Gomes --- I confirm all issues are gone after upgrading to 5.4. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Mar 9 04:17:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 09 Mar 2019 04:17:27 +0000 Subject: [Bugs] [Bug 1667103] GlusterFS 5.4 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667103 Amgad changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1687051 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 [Bug 1687051] gluster volume heal failed when rolling back online upgrade from 4.1.4 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Sat Mar 9 08:33:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 09 Mar 2019 08:33:52 +0000 Subject: [Bugs] [Bug 1687063] New: glusterd :symbol lookup error: undefined symbol :use_spinlocks Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687063 Bug ID: 1687063 Summary: glusterd :symbol lookup error: undefined symbol :use_spinlocks Product: GlusterFS Version: 5 Hardware: aarch64 OS: Linux Status: NEW Component: locks Assignee: bugs at gluster.org Reporter: 1352411423 at qq.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: After installing glusterfs 5.10, starting the glusterd service fails with: glusterd: symbol lookup error: undefined symbol: use_spinlocks Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. install glusterfs 5.10 (installation succeeds) 2. run glusterd 3. it fails with: glusterd: symbol lookup error: undefined symbol: use_spinlocks Actual results: glusterd: symbol lookup error: undefined symbol: use_spinlocks Expected results: glusterd starts successfully Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Mar 9 10:58:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 09 Mar 2019 10:58:56 +0000 Subject: [Bugs] [Bug 1686371] Cleanup nigel access and document it In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686371 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22320 (Remove Nigel as requested by him) merged (#3) on master by Nigel Babu -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Mar 9 16:54:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 09 Mar 2019 16:54:19 +0000 Subject: [Bugs] [Bug 1626085] "glusterfs --process-name fuse" crashes and leads to "Transport endpoint is not connected" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1626085 --- Comment #14 from GCth --- Update to 5.4 fixed this issue for me. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 7 10:38:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 07 Mar 2019 10:38:51 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1686875 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686875 [Bug 1686875] packaging: rdma on s390x, unnecessary ldconfig scriptlets -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Sat Mar 9 23:46:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 09 Mar 2019 23:46:05 +0000 Subject: [Bugs] [Bug 1686875] packaging: rdma on s390x, unnecessary ldconfig scriptlets In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686875 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1672818 (glusterfs-6.0) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 11 00:31:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 00:31:31 +0000 Subject: [Bugs] [Bug 1684569] Upgrade from 4.1 and 5 is broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684569 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-11 00:31:31 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22314 (core: make compute_cksum function op_version compatible) merged (#5) on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 11 02:12:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 02:12:03 +0000 Subject: [Bugs] [Bug 1626085] "glusterfs --process-name fuse" crashes and leads to "Transport endpoint is not connected" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1626085 --- Comment #15 from Nithya Balachandran --- (In reply to GCth from comment #14) > Update to 5.4 fixed this issue for me. Excellent. Thanks for letting us know. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 11 02:22:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 02:22:50 +0000 Subject: [Bugs] [Bug 1687063] glusterd :symbol lookup error: undefined symbol :use_spinlocks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687063 --- Comment #1 from q449278118 <1352411423 at qq.com> --- I have solved it. The newly installed libglusterfs.so was being picked up from the wrong library path; we needed to delete the old libraries in /lib64 and use only the ones under /usr/local. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
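A note on the resolution described in comment #1 above: a quick way to see which libglusterfs.so the dynamic loader actually resolves is sketched below. This is a hypothetical diagnostic, not a step taken by the reporter; it assumes a Linux host where the fresh glusterfs build was installed under /usr/local and an older copy may still live in /lib64.

    # Hypothetical diagnostic (assumption, not from the report): load
    # libglusterfs the way glusterd's loader would, then report which
    # file actually got mapped, to spot a stale copy in /lib64 shadowing
    # the freshly installed one under /usr/local/lib.
    import ctypes

    ctypes.CDLL("libglusterfs.so.0")   # resolve via the normal search path
    with open("/proc/self/maps") as maps:
        mapped = {line.split()[-1] for line in maps if "libglusterfs" in line}
    print(mapped)  # a /lib64 path here would explain the missing use_spinlocks symbol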
From bugzilla at redhat.com Mon Mar 11 03:41:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 03:41:12 +0000 Subject: [Bugs] [Bug 1687248] New: Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687248 Bug ID: 1687248 Summary: Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range Product: GlusterFS Version: 6 Status: NEW Component: eventsapi Keywords: EasyFix, ZStream Assignee: bugs at gluster.org Reporter: avishwan at redhat.com CC: dahorak at redhat.com, rhs-bugs at redhat.com, sanandpa at redhat.com, sankarshan at redhat.com, sheggodu at redhat.com Depends On: 1600459, 1685027 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1685027 +++ +++ This bug was initially created as a clone of Bug #1600459 +++ Description of problem: During testing of RHGS WA, I've found following traceback raised from /usr/sbin/gluster-eventsapi script: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ File "/usr/sbin/gluster-eventsapi", line 666, in runcli() File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 224, in runcli cls.run(args) File "/usr/sbin/gluster-eventsapi", line 329, in run sync_to_peers(args) File "/usr/sbin/gluster-eventsapi", line 177, in sync_to_peers "{1}".format(e[0], e[2]), IndexError: tuple index out of range ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The prospective real issue is hidden beside this traceback. Version-Release number of selected component (if applicable): glusterfs-events-3.12.2-13.el7rhgs.x86_64 How reproducible: 100% if you will be able to cause the raise of GlusterCmdException Steps to Reproduce: I'm not sure, how to reproduce it from scratch, as my knowledge related to gluster-eventsapi is very limited, but the problem is quite well visible from the source code: Open /usr/sbin/gluster-eventsapi script and look for function sync_to_peers around line 171: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 171 def sync_to_peers(args): 172 if os.path.exists(WEBHOOKS_FILE): 173 try: 174 sync_file_to_peers(WEBHOOKS_FILE_TO_SYNC) 175 except GlusterCmdException as e: 176 handle_output_error("Failed to sync Webhooks file: [Error: {0}]" 177 "{1}".format(e[0], e[2]), 178 errcode=ERROR_WEBHOOK_SYNC_FAILED, 179 json_output=args.json) 180 181 if os.path.exists(CUSTOM_CONFIG_FILE): 182 try: 183 sync_file_to_peers(CUSTOM_CONFIG_FILE_TO_SYNC) 184 except GlusterCmdException as e: 185 handle_output_error("Failed to sync Config file: [Error: {0}]" 186 "{1}".format(e[0], e[2]), 187 errcode=ERROR_CONFIG_SYNC_FAILED, 188 json_output=args.json) 189 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Important lines are 177 and 186: "{1}".format(e[0], e[2]), The problem is, that the GlusterCmdException is raised this way[1]: raise GlusterCmdException((rc, out, err)) So all three parameters rc, out and err are supplied as one parameter (of type tuple). Actual results: Any problem leading to raise of GlusterCmdException is hidden beside above mentioned exception. Expected results: There shouldn't be any such traceback. 
Additional info: [1] file /usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py --- Additional comment from Daniel Hor?k on 2018-07-13 08:23:29 UTC --- Possible Reproduction scenario might be, to remove (rename) /var/lib/glusterd/events/ directory on one Gluster Storage Node and try to add webhook from another storage node: On Gluster node 5: # mv /var/lib/glusterd/events/ /var/lib/glusterd/events_BACKUP On Gluster node 1: # gluster-eventsapi webhook-add http://0.0.0.0:8697/test Traceback (most recent call last): File "/usr/sbin/gluster-eventsapi", line 666, in runcli() File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 224, in runcli cls.run(args) File "/usr/sbin/gluster-eventsapi", line 329, in run sync_to_peers(args) File "/usr/sbin/gluster-eventsapi", line 177, in sync_to_peers "{1}".format(e[0], e[2]), IndexError: tuple index out of range --- Additional comment from Worker Ant on 2019-03-04 08:17:52 UTC --- REVIEW: https://review.gluster.org/22294 (eventsapi: Fix error while handling GlusterCmdException) posted (#1) for review on master by Aravinda VK --- Additional comment from Worker Ant on 2019-03-06 13:22:53 UTC --- REVIEW: https://review.gluster.org/22294 (eventsapi: Fix error while handling GlusterCmdException) merged (#2) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1600459 [Bug 1600459] Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range https://bugzilla.redhat.com/show_bug.cgi?id=1685027 [Bug 1685027] Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 11 03:41:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 03:41:30 +0000 Subject: [Bugs] [Bug 1687248] Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687248 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |avishwan at redhat.com -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Mon Mar 11 03:42:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 03:42:33 +0000 Subject: [Bugs] [Bug 1687249] New: Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687249 Bug ID: 1687249 Summary: Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range Product: GlusterFS Version: 5 Status: NEW Component: eventsapi Keywords: EasyFix, ZStream Assignee: bugs at gluster.org Reporter: avishwan at redhat.com CC: dahorak at redhat.com, rhs-bugs at redhat.com, sanandpa at redhat.com, sankarshan at redhat.com, sheggodu at redhat.com Depends On: 1600459, 1685027 Blocks: 1687248 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1685027 +++ +++ This bug was initially created as a clone of Bug #1600459 +++ Description of problem: During testing of RHGS WA, I've found following traceback raised from /usr/sbin/gluster-eventsapi script: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ File "/usr/sbin/gluster-eventsapi", line 666, in runcli() File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 224, in runcli cls.run(args) File "/usr/sbin/gluster-eventsapi", line 329, in run sync_to_peers(args) File "/usr/sbin/gluster-eventsapi", line 177, in sync_to_peers "{1}".format(e[0], e[2]), IndexError: tuple index out of range ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The prospective real issue is hidden beside this traceback. Version-Release number of selected component (if applicable): glusterfs-events-3.12.2-13.el7rhgs.x86_64 How reproducible: 100% if you will be able to cause the raise of GlusterCmdException Steps to Reproduce: I'm not sure, how to reproduce it from scratch, as my knowledge related to gluster-eventsapi is very limited, but the problem is quite well visible from the source code: Open /usr/sbin/gluster-eventsapi script and look for function sync_to_peers around line 171: ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 171 def sync_to_peers(args): 172 if os.path.exists(WEBHOOKS_FILE): 173 try: 174 sync_file_to_peers(WEBHOOKS_FILE_TO_SYNC) 175 except GlusterCmdException as e: 176 handle_output_error("Failed to sync Webhooks file: [Error: {0}]" 177 "{1}".format(e[0], e[2]), 178 errcode=ERROR_WEBHOOK_SYNC_FAILED, 179 json_output=args.json) 180 181 if os.path.exists(CUSTOM_CONFIG_FILE): 182 try: 183 sync_file_to_peers(CUSTOM_CONFIG_FILE_TO_SYNC) 184 except GlusterCmdException as e: 185 handle_output_error("Failed to sync Config file: [Error: {0}]" 186 "{1}".format(e[0], e[2]), 187 errcode=ERROR_CONFIG_SYNC_FAILED, 188 json_output=args.json) 189 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Important lines are 177 and 186: "{1}".format(e[0], e[2]), The problem is, that the GlusterCmdException is raised this way[1]: raise GlusterCmdException((rc, out, err)) So all three parameters rc, out and err are supplied as one parameter (of type tuple). Actual results: Any problem leading to raise of GlusterCmdException is hidden beside above mentioned exception. Expected results: There shouldn't be any such traceback. 
Additional info: [1] file /usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py --- Additional comment from Daniel Hor?k on 2018-07-13 08:23:29 UTC --- Possible Reproduction scenario might be, to remove (rename) /var/lib/glusterd/events/ directory on one Gluster Storage Node and try to add webhook from another storage node: On Gluster node 5: # mv /var/lib/glusterd/events/ /var/lib/glusterd/events_BACKUP On Gluster node 1: # gluster-eventsapi webhook-add http://0.0.0.0:8697/test Traceback (most recent call last): File "/usr/sbin/gluster-eventsapi", line 666, in runcli() File "/usr/lib/python2.7/site-packages/gluster/cliutils/cliutils.py", line 224, in runcli cls.run(args) File "/usr/sbin/gluster-eventsapi", line 329, in run sync_to_peers(args) File "/usr/sbin/gluster-eventsapi", line 177, in sync_to_peers "{1}".format(e[0], e[2]), IndexError: tuple index out of range --- Additional comment from Worker Ant on 2019-03-04 08:17:52 UTC --- REVIEW: https://review.gluster.org/22294 (eventsapi: Fix error while handling GlusterCmdException) posted (#1) for review on master by Aravinda VK --- Additional comment from Worker Ant on 2019-03-06 13:22:53 UTC --- REVIEW: https://review.gluster.org/22294 (eventsapi: Fix error while handling GlusterCmdException) merged (#2) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1600459 [Bug 1600459] Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range https://bugzilla.redhat.com/show_bug.cgi?id=1685027 [Bug 1685027] Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range https://bugzilla.redhat.com/show_bug.cgi?id=1687248 [Bug 1687248] Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 11 03:44:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 03:44:03 +0000 Subject: [Bugs] [Bug 1687249] Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687249 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22332 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 11 03:44:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 03:44:04 +0000 Subject: [Bugs] [Bug 1687249] Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687249 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22332 (eventsapi: Fix error while handling GlusterCmdException) posted (#1) for review on release-5 by Aravinda VK -- You are receiving this mail because: You are the assignee for the bug. 
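A note on the eventsapi failure described in the two reports above: because GlusterCmdException is raised with a single (rc, out, err) tuple, e[0] evaluates to that whole tuple and e[2] does not exist, which is exactly the "tuple index out of range" in the traceback. The sketch below is illustrative only; GlusterCmdException and sync_file_to_peers are simplified stand-ins rather than the real gluster code, and this is not the patch merged at https://review.gluster.org/22294. It simply shows unpacking e.args[0] instead of indexing e[0]/e[2].

    # Minimal sketch with simplified stand-ins for the gluster helpers.
    class GlusterCmdException(Exception):
        pass

    def sync_file_to_peers(path):
        # Pretend the peer sync failed, raised the way cliutils.py does:
        # a single (rc, out, err) tuple as the only argument.
        raise GlusterCmdException((1, "", "peer sync failed"))

    try:
        sync_file_to_peers("webhooks.json")
    except GlusterCmdException as e:
        rc, out, err = e.args[0]   # unpack the tuple instead of e[0]/e[2]
        print("Failed to sync Webhooks file: [Error: {0}]{1}".format(rc, err))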
From bugzilla at redhat.com Mon Mar 11 03:44:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 03:44:48 +0000 Subject: [Bugs] [Bug 1687249] Error handling in /usr/sbin/gluster-eventsapi produces IndexError: tuple index out of range In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687249 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |avishwan at redhat.com -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 11 04:03:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 04:03:56 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when rolling back online upgrade from 4.1.4 to 3.12.15 or online upgrade from 3.12.15 to 5.x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Version|unspecified |5 Component|glusterfs |replicate CC| |bugs at gluster.org Assignee|atumball at redhat.com |ksubrahm at redhat.com QA Contact|bmekala at redhat.com | Product|Red Hat Gluster Storage |GlusterFS -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 11 04:06:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 04:06:11 -0000 Subject: [Bugs] [Bug 1674389] [thin arbiter] : rpm - add thin-arbiter package In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674389 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22125 (rpm: add thin-arbiter package) merged (#15) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 11 06:13:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 06:13:07 +0000 Subject: [Bugs] [Bug 1685120] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-03-07 05:01:44 |2019-03-11 06:13:07 --- Comment #8 from Worker Ant --- REVIEW: https://review.gluster.org/22326 (glusterd: change the op-version) merged (#2) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 11 06:13:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 06:13:07 +0000 Subject: [Bugs] [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684029 Bug 1684029 depends on bug 1685120, which changed state. Bug 1685120 Summary: upgrade from 3.12, 4.1 and 5 to 6 broken https://bugzilla.redhat.com/show_bug.cgi?id=1685120 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 11 09:24:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 09:24:29 +0000 Subject: [Bugs] [Bug 1687326] New: [RFE] Revoke access from nodes using Certificate Revoke List in SSL Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687326 Bug ID: 1687326 Summary: [RFE] Revoke access from nodes using Certificate Revoke List in SSL Product: GlusterFS Version: mainline OS: Linux Status: NEW Component: rpc Severity: low Priority: medium Assignee: bugs at gluster.org Reporter: mchangir at redhat.com CC: amukherj at redhat.com, atumball at redhat.com, bkunal at redhat.com, bugs at gluster.org, mchangir at redhat.com, nchilaka at redhat.com, rhs-bugs at redhat.com, rik.theys at esat.kuleuven.be, sankarshan at redhat.com, sarora at redhat.com, sheggodu at redhat.com, smali at redhat.com, vbellur at redhat.com, vdas at redhat.com Blocks: 1583585 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1583585 [Bug 1583585] [RFE] Revoke access from nodes using Certificate Revoke List in SSL -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 11 09:24:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 09:24:49 +0000 Subject: [Bugs] [Bug 1687326] [RFE] Revoke access from nodes using Certificate Revoke List in SSL In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687326 Milind Changire changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |mchangir at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 11 09:58:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 09:58:34 +0000 Subject: [Bugs] [Bug 1687326] [RFE] Revoke access from nodes using Certificate Revoke List in SSL In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687326 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22334 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 11 09:58:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 09:58:35 +0000 Subject: [Bugs] [Bug 1687326] [RFE] Revoke access from nodes using Certificate Revoke List in SSL In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687326 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22334 (socket/ssl: fix crl handling) posted (#1) for review on master by Milind Changire -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 11 10:48:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 10:48:28 +0000 Subject: [Bugs] [Bug 1670031] performance regression seen with smallfile workload tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670031 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com, | |nbalacha at redhat.com Flags| |needinfo?(atumball at redhat.c | |om) --- Comment #15 from Nithya Balachandran --- Has git bisect been used to narrow down the patches that have caused the regression? The inode code has not changed in a long time so this is unlikely to be the cause of the slowdown. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 11 12:00:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 12:00:55 +0000 Subject: [Bugs] [Bug 1685337] Updating Fedora 28 fail with "Package glusterfs-5.4-1.fc28.x86_64.rpm is not signed" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685337 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |kkeithle at redhat.com Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-11 12:00:55 --- Comment #1 from Kaleb KEITHLEY --- repo was recreated with signed rpms -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 11 12:51:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 12:51:32 +0000 Subject: [Bugs] [Bug 1686754] Requesting merge rights for Cloudsync In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686754 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-11 12:51:32 --- Comment #10 from M. Scherer --- So, that was unrelated to github in the end, I did the change (in gerrit UI), but I would still push folks to use 2FA as much as possible. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 11 13:33:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 13:33:17 +0000 Subject: [Bugs] [Bug 1670031] performance regression seen with smallfile workload tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670031 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(atumball at redhat.c | |om) | --- Comment #16 from Amar Tumballi --- > Has git bisect been used to narrow down the patches that have caused the regression? The inode code has not changed in a long time so this is unlikely to be the cause of the slowdown. Ack, inode code is surely not the reason. The code/features identified as reasons for some of the regressions were: * no-root-squash PID for mkdir-layout set code in DHT (for mkdir) * gfid2path xattr setting (for rename) * ctime setting (for rmdir and few other entry ops, which seemed minor). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Mar 11 13:53:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 13:53:10 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when rolling back online upgrade from 4.1.4 to 3.12.15 or online upgrade from 3.12.15 to 5.x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #3 from Amgad --- Any update? This will impact the online upgrade to 5.4 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 11 15:22:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 15:22:25 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when rolling back online upgrade from 4.1.4 to 3.12.15 or online upgrade from 3.12.15 to 5.x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com, | |srangana at redhat.com Flags| |needinfo?(srangana at redhat.c | |om) --- Comment #4 from Atin Mukherjee --- Considering that (a) this happens during a rollback, which isn't something the community has tested and supports, and (b) there are other critical fixes waiting for users in 5.4, which is overdue, we shouldn't be blocking the glusterfs-5.4 release. My proposal is to not mark this bug as a blocker to 5.4. Shyam - what do you think? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 11 15:38:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 15:38:41 +0000 Subject: [Bugs] [Bug 1686568] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686568 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-11 15:38:41 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22325 (cluster/afr: Send truncate on arbiter brick from SHD) merged (#10) on master by Karthik U S -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 11 19:30:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 19:30:28 +0000 Subject: [Bugs] [Bug 1636297] Make it easy to build / host a project which just builds glusterfs translator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1636297 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 21385 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 11 20:25:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 20:25:44 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when rolling back online upgrade from 4.1.4 to 3.12.15 or online upgrade from 3.12.15 to 5.x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #5 from Amgad --- So how do you do an online upgrade? Keep in mind that, in any deployment, an upgrade is not complete without rollback.
If online upgrade/backout is not supported, reliability drops big time, especially since the cluster is used by all applications in our case! -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 11 20:29:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 20:29:47 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when rolling back online upgrade from 4.1.4 to 3.12.15 or online upgrade from 3.12.15 to 5.x In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #6 from Amgad --- Besides, online upgrade doesn't work between 3.12 and 5.3; is it working from 3.12 to 5.4? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 11 20:49:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 11 Mar 2019 20:49:41 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Amgad changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|gluster volume heal failed |gluster volume heal failed |when rolling back online |when online upgrading from |upgrade from 4.1.4 to |3.12 to 5.x and when |3.12.15 or online upgrade |rolling back online upgrade |from 3.12.15 to 5.x |from 4.1.4 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 05:01:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:01:59 +0000 Subject: [Bugs] [Bug 1683815] Memory leak when peer detach fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683815 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Blocks|1672818 (glusterfs-6.0) | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 05:01:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:01:59 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1683815 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1683815 [Bug 1683815] Memory leak when peer detach fails -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Tue Mar 12 05:02:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:02:52 +0000 Subject: [Bugs] [Bug 1679892] assertion failure log in glusterd.log file when a volume start is triggered In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679892 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1672818 (glusterfs-6.0) | --- Comment #1 from Atin Mukherjee --- I don't see this happening any further on the latest testing of the release-6 branch. Will keep this bug open for sometime, but taking out the 6.0 blocker. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 05:02:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:02:52 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1679892 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1679892 [Bug 1679892] assertion failure log in glusterd.log file when a volume start is triggered -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 05:06:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:06:50 +0000 Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to activate host in DC with glusterfs 3.12 domains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672318 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(info at netbulae.com | |) --- Comment #20 from Atin Mukherjee --- We tried to reproduce this issue, but couldn't hit the same problem. Until and unless we have a reproducer and the complete logs especially the client, glusterd and brick log files, it'd be hard to debug. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 05:07:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:07:55 +0000 Subject: [Bugs] [Bug 1670718] md-cache should be loaded at a position in graph where it sees stats in write cbk In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670718 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(rgowdapp at redhat.c | |om) --- Comment #2 from Atin Mukherjee --- Is this a blocker to release-6? Can we please re-evaluate? -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Mar 12 05:09:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:09:02 +0000 Subject: [Bugs] [Bug 1674364] glusterfs-fuse client not benefiting from page cache on read after write In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674364 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(rgowdapp at redhat.c | |om) --- Comment #5 from Atin Mukherjee --- Is there anything pending on this bug? I still see the bug is in POST state even though the above two patches are merged? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 05:10:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:10:10 +0000 Subject: [Bugs] [Bug 1679275] gluster-NFS crash while expanding volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679275 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com, | |spalai at redhat.com Flags| |needinfo?(spalai at redhat.com | |) --- Comment #3 from Atin Mukherjee --- Is there anything pending on this bug? I still see the bug is in POST state even though the above patch is merged (as the commit had 'updates' tag). -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 05:10:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:10:49 +0000 Subject: [Bugs] [Bug 1674364] glusterfs-fuse client not benefiting from page cache on read after write In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674364 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED Flags|needinfo?(rgowdapp at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 05:11:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:11:23 +0000 Subject: [Bugs] [Bug 1680585] remove glupy from code and build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1680585 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(ndevos at redhat.com | |) --- Comment #3 from Atin Mukherjee --- Is there anything pending on this bug? I still see the bug is in POST state even though the above patch is merged (as the commit had 'updates' tag). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Mar 12 05:16:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:16:29 +0000 Subject: [Bugs] [Bug 1676356] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676356 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |MODIFIED --- Comment #1 from Raghavendra G --- https://review.gluster.org/#/c/glusterfs/+/22230/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 05:17:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:17:56 +0000 Subject: [Bugs] [Bug 1676356] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676356 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-12 05:17:56 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 05:17:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:17:56 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1676356, which changed state. Bug 1676356 Summary: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1676356 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 05:17:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:17:57 +0000 Subject: [Bugs] [Bug 1667103] GlusterFS 5.4 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667103 Bug 1667103 depends on bug 1676356, which changed state. Bug 1676356 Summary: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1676356 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 05:19:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:19:46 +0000 Subject: [Bugs] [Bug 1674364] glusterfs-fuse client not benefiting from page cache on read after write In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674364 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-12 05:19:46 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Mar 12 05:19:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:19:47 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1674364, which changed state. Bug 1674364 Summary: glusterfs-fuse client not benefiting from page cache on read after write https://bugzilla.redhat.com/show_bug.cgi?id=1674364 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 05:24:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:24:17 +0000 Subject: [Bugs] [Bug 1679892] assertion failure log in glusterd.log file when a volume start is triggered In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679892 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Blocks| |1672818 (glusterfs-6.0) --- Comment #2 from Sanju --- I still see the assertion failure message in the glusterd.log [2019-03-12 05:19:06.206695] E [mem-pool.c:351:__gf_free] (-->/usr/local/lib/glusterfs/6.0rc0/xlator/mgmt/glusterd.so(+0x48133) [0x7f264602c133] -->/usr/local/lib/glusterfs/6.0rc0/xlator/mgmt/glusterd.so(+0x47f0a) [0x7f264602bf0a] -->/usr/local/lib/libglusterfs.so.0(__gf_free+0x22d) [0x7f265263ac9d] ) 0-: Assertion failed: mem_acct->rec[header->type].size >= header->size I will update the bug with root cause as soon as possible. Thanks, Sanju Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 05:24:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 05:24:17 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1679892 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1679892 [Bug 1679892] assertion failure log in glusterd.log file when a volume start is triggered -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
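For anyone wanting to confirm whether the assertion reported in comment #2 still fires on a given build, a minimal check along these lines can be used. This is only a sketch: it assumes the default glusterd log location and uses a hypothetical volume name.

#!/bin/bash
# Trigger a volume start and look for the __gf_free assertion in glusterd.log.
VOL=testvol                              # assumption: any existing test volume
LOG=/var/log/glusterfs/glusterd.log      # assumption: default glusterd log path

gluster volume stop "$VOL" --mode=script
gluster volume start "$VOL"

# The message quoted in comment #2 above.
grep -n "Assertion failed: mem_acct->rec" "$LOG" | tail -n 5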
From bugzilla at redhat.com Tue Mar 12 06:19:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 06:19:44 +0000 Subject: [Bugs] [Bug 1687672] New: [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687672 Bug ID: 1687672 Summary: [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter Product: GlusterFS Version: 6 Status: ASSIGNED Component: geo-replication Keywords: ZStream Severity: high Assignee: ksubrahm at redhat.com Reporter: ksubrahm at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, ksubrahm at redhat.com, nchilaka at redhat.com, pkarampu at redhat.com, rallan at redhat.com, ravishankar at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, sunkumar at redhat.com Depends On: 1686568 Target Milestone: --- Classification: Community Description of problem: ======================= While converting 2x2 to 2x(2+1) (arbiter), there was a checksum mismatch: [root at dhcp43-143 ~]# ./arequal-checksum -p /mnt/master/ Entry counts Regular files : 10000 Directories : 2011 Symbolic links : 11900 Other : 0 Total : 23911 Metadata checksums Regular files : 5ce564791c Directories : 288ecb21ce24 Symbolic links : 3e9 Other : 3e9 Checksums Regular files : 8e69e8576625d36f9ee1866c92bfb6a3 Directories : 4a596e7e1e792061 Symbolic links : 756e690d61497f6a Other : 0 Total : 2fbf69488baa3ac7 [root at dhcp43-143 ~]# ./arequal-checksum -p /mnt/slave/ Entry counts Regular files : 10000 Directories : 2011 Symbolic links : 11900 Other : 0 Total : 23911 Metadata checksums Regular files : 5ce564791c Directories : 288ecb21ce24 Symbolic links : 3e9 Other : 3e9 Checksums Regular files : 53c64bd1144f6d9855f0af3edb55e614 Directories : 4a596e7e1e792061 Symbolic links : 756e690d61497f6a Other : 0 Total : 3901e39cb02ad487 Everything matches except under "CHECKSUMS", Regular files and the total are a mismatch. Version-Release number of selected component (if applicable): ============================================================== glusterfs-3.12.2-45.el7rhgs.x86_64 How reproducible: ================= 2/2 Steps to Reproduce: ==================== 1. Create and start a geo-rep session with master and slave being 2x2 2. Mount the vols and start pumping data 3. Disable and stop self healing (prior to add-brick) # gluster volume set VOLNAME cluster.data-self-heal off # gluster volume set VOLNAME cluster.metadata-self-heal off # gluster volume set VOLNAME cluster.entry-self-heal off # gluster volume set VOLNAME self-heal-daemon off 4. Add brick to the master and slave to convert them to 2x(2+1) arbiter vols 5. Start rebalance on master and slave 6. Re-enable self healing : # gluster volume set VOLNAME cluster.data-self-heal on # gluster volume set VOLNAME cluster.metadata-self-heal on # gluster volume set VOLNAME cluster.entry-self-heal on # gluster volume set VOLNAME self-heal-daemon on 7. Wait for rebalance to complete 8. Check the checksum between master and slave Actual results: =============== Checksum does not fully match Expected results: ================ Checksum should match Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686568 [Bug 1686568] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter -- You are receiving this mail because: You are on the CC list for the bug. 
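The reproduction steps above translate into roughly the following sequence for the master volume (the same sequence is assumed for the slave volume). This is a sketch only: the volume name and arbiter brick paths are placeholders, and arequal-checksum is the same tool referenced in the report.

#!/bin/bash
# Steps 3-8 above, consolidated. Names and brick paths are placeholders.
VOL=mastervol                         # assumption: geo-rep master volume
ARB1=host1:/bricks/arbiter1           # assumption: arbiter brick for subvol 1
ARB2=host2:/bricks/arbiter2           # assumption: arbiter brick for subvol 2

# 3. Disable self healing before add-brick
for opt in cluster.data-self-heal cluster.metadata-self-heal \
           cluster.entry-self-heal self-heal-daemon; do
    gluster volume set "$VOL" "$opt" off
done

# 4. Convert the 2x2 volume to 2x(2+1) by adding one arbiter brick per subvolume
gluster volume add-brick "$VOL" replica 3 arbiter 1 "$ARB1" "$ARB2"

# 5. Start rebalance
gluster volume rebalance "$VOL" start

# 6. Re-enable self healing
for opt in cluster.data-self-heal cluster.metadata-self-heal \
           cluster.entry-self-heal self-heal-daemon; do
    gluster volume set "$VOL" "$opt" on
done

# 7. Wait for rebalance to complete, then 8. compare the checksums
gluster volume rebalance "$VOL" status
./arequal-checksum -p /mnt/master/
./arequal-checksum -p /mnt/slave/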
From bugzilla at redhat.com Tue Mar 12 06:19:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 06:19:44 +0000 Subject: [Bugs] [Bug 1686568] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686568 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1687672 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1687672 [Bug 1687672] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 06:20:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 06:20:01 +0000 Subject: [Bugs] [Bug 1687672] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687672 --- Comment #1 from Karthik U S --- RCA: If arbiter brick is pending data heal, then self heal will just restore the timestamps of the file and resets the pending xattrs on the source bricks. It will not send any write on the arbiter brick. Here in the add-brick scenario, it will create the entries and then restores the timestamps and other metadata of the files from the source brick. Hence the data changes will not be marked on the changelog, leading to missing data on the slave volume after sync. Possible Fixes: 1. Do not mark arbiter brick as ACTIVE, as it will not have the changelogs for the data transactions happened when it was down/faulty even after the completion of heal. 2. Send 1 byte write on the arbiter brick from self heal as we do with the normal writes from the clients. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 06:38:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 06:38:35 +0000 Subject: [Bugs] [Bug 1687672] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687672 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |sheggodu at redhat.com Blocks| |1672818 (glusterfs-6.0) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 06:38:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 06:38:35 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1687672 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1687672 [Bug 1687672] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
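To narrow the "Regular files" checksum mismatch described above down to individual files, the two mounts can be compared file by file. A sketch, assuming the same /mnt/master and /mnt/slave mount points used in the report:

#!/bin/bash
# Print every regular file whose content differs between the two mounts.
MASTER=/mnt/master
SLAVE=/mnt/slave

cd "$MASTER" || exit 1
find . -type f | while read -r f; do
    m=$(md5sum "$MASTER/$f" | awk '{print $1}')
    s=$(md5sum "$SLAVE/$f"  | awk '{print $1}')
    [ "$m" != "$s" ] && echo "MISMATCH: $f"
done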
From bugzilla at redhat.com Tue Mar 12 07:14:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 07:14:59 +0000 Subject: [Bugs] [Bug 1687687] New: [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687687 Bug ID: 1687687 Summary: [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter Product: GlusterFS Version: 5 Status: ASSIGNED Component: geo-replication Keywords: ZStream Severity: high Assignee: ksubrahm at redhat.com Reporter: ksubrahm at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, ksubrahm at redhat.com, nchilaka at redhat.com, pkarampu at redhat.com, rallan at redhat.com, ravishankar at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, sheggodu at redhat.com, storage-qa-internal at redhat.com, sunkumar at redhat.com Depends On: 1686568 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1687672 +++ Description of problem: ======================= While converting 2x2 to 2x(2+1) (arbiter), there was a checksum mismatch: [root at dhcp43-143 ~]# ./arequal-checksum -p /mnt/master/ Entry counts Regular files : 10000 Directories : 2011 Symbolic links : 11900 Other : 0 Total : 23911 Metadata checksums Regular files : 5ce564791c Directories : 288ecb21ce24 Symbolic links : 3e9 Other : 3e9 Checksums Regular files : 8e69e8576625d36f9ee1866c92bfb6a3 Directories : 4a596e7e1e792061 Symbolic links : 756e690d61497f6a Other : 0 Total : 2fbf69488baa3ac7 [root at dhcp43-143 ~]# ./arequal-checksum -p /mnt/slave/ Entry counts Regular files : 10000 Directories : 2011 Symbolic links : 11900 Other : 0 Total : 23911 Metadata checksums Regular files : 5ce564791c Directories : 288ecb21ce24 Symbolic links : 3e9 Other : 3e9 Checksums Regular files : 53c64bd1144f6d9855f0af3edb55e614 Directories : 4a596e7e1e792061 Symbolic links : 756e690d61497f6a Other : 0 Total : 3901e39cb02ad487 Everything matches except under "CHECKSUMS", Regular files and the total are a mismatch. Version-Release number of selected component (if applicable): ============================================================== glusterfs-3.12.2-45.el7rhgs.x86_64 How reproducible: ================= 2/2 Steps to Reproduce: ==================== 1. Create and start a geo-rep session with master and slave being 2x2 2. Mount the vols and start pumping data 3. Disable and stop self healing (prior to add-brick) # gluster volume set VOLNAME cluster.data-self-heal off # gluster volume set VOLNAME cluster.metadata-self-heal off # gluster volume set VOLNAME cluster.entry-self-heal off # gluster volume set VOLNAME self-heal-daemon off 4. Add brick to the master and slave to convert them to 2x(2+1) arbiter vols 5. Start rebalance on master and slave 6. Re-enable self healing : # gluster volume set VOLNAME cluster.data-self-heal on # gluster volume set VOLNAME cluster.metadata-self-heal on # gluster volume set VOLNAME cluster.entry-self-heal on # gluster volume set VOLNAME self-heal-daemon on 7. Wait for rebalance to complete 8. 
Check the checksum between master and slave Actual results: =============== Checksum does not fully match Expected results: ================ Checksum should match --- Additional comment from Karthik U S on 2019-03-12 06:20:01 UTC --- RCA: If arbiter brick is pending data heal, then self heal will just restore the timestamps of the file and resets the pending xattrs on the source bricks. It will not send any write on the arbiter brick. Here in the add-brick scenario, it will create the entries and then restores the timestamps and other metadata of the files from the source brick. Hence the data changes will not be marked on the changelog, leading to missing data on the slave volume after sync. Possible Fixes: 1. Do not mark arbiter brick as ACTIVE, as it will not have the changelogs for the data transactions happened when it was down/faulty even after the completion of heal. 2. Send 1 byte write on the arbiter brick from self heal as we do with the normal writes from the clients. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686568 [Bug 1686568] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 07:14:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 07:14:59 +0000 Subject: [Bugs] [Bug 1686568] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686568 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1687687 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1687687 [Bug 1687687] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 07:29:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 07:29:14 +0000 Subject: [Bugs] [Bug 1679275] gluster-NFS crash while expanding volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679275 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(spalai at redhat.com | |) | --- Comment #4 from Susant Kumar Palai --- (In reply to Atin Mukherjee from comment #3) > Is there anything pending on this bug? I still see the bug is in POST state > even though the above patch is merged (as the commit had 'updates' tag). There was a crash seen dht layer in which was fixed by the above patch. But the patch was written originally for https://bugzilla.redhat.com/show_bug.cgi?id=1651439 which targetted mostly the nfs use case. Since we needed the dht fix in release-6, I guess Sunil cloned the mainline bug directly. Will change the summary to reflect dht-crash part and move the bug status to modified. Susant -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Mar 12 07:30:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 07:30:23 +0000 Subject: [Bugs] [Bug 1679275] dht: fix double extra unref of inode at heal path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679275 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|gluster-NFS crash while |dht: fix double extra unref |expanding volume |of inode at heal path -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 07:30:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 07:30:43 +0000 Subject: [Bugs] [Bug 1679275] dht: fix double extra unref of inode at heal path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679275 Susant Kumar Palai changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 07:44:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 07:44:38 +0000 Subject: [Bugs] [Bug 1672249] quorum count value not updated in nfs-server vol file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672249 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |ASSIGNED CC| |srakonde at redhat.com Resolution|NEXTRELEASE |--- Keywords| |Reopened -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 07:51:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 07:51:50 +0000 Subject: [Bugs] [Bug 1687705] New: Brick process has coredumped, when starting glusterd Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687705 Bug ID: 1687705 Summary: Brick process has coredumped, when starting glusterd Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: rpc Severity: high Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org, moagrawa at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, sasundar at redhat.com Depends On: 1687641 Blocks: 1687671 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1687641 [Bug 1687641] Brick process has coredumped, when starting glusterd https://bugzilla.redhat.com/show_bug.cgi?id=1687671 [Bug 1687671] Brick process has coredumped, when starting glusterd -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 07:52:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 07:52:30 +0000 Subject: [Bugs] [Bug 1687705] Brick process has coredumped, when starting glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687705 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Mar 12 08:52:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 08:52:59 +0000 Subject: [Bugs] [Bug 1680585] remove glupy from code and build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1680585 Niels de Vos changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED Flags|needinfo?(ndevos at redhat.com | |) | --- Comment #4 from Niels de Vos --- (In reply to Atin Mukherjee from comment #3) > Is there anything pending on this bug? I still see the bug is in POST state > even though the above patch is merged (as the commit had 'updates' tag). I think this was the only thing that needed an extra backport. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 09:50:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 09:50:14 +0000 Subject: [Bugs] [Bug 1687746] New: [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687746 Bug ID: 1687746 Summary: [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter Product: GlusterFS Version: 4.1 Status: ASSIGNED Component: geo-replication Keywords: ZStream Severity: high Assignee: ksubrahm at redhat.com Reporter: ksubrahm at redhat.com CC: avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, ksubrahm at redhat.com, nchilaka at redhat.com, pasik at iki.fi, pkarampu at redhat.com, rallan at redhat.com, ravishankar at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, sheggodu at redhat.com, storage-qa-internal at redhat.com, sunkumar at redhat.com Depends On: 1686568 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1687672 +++ Description of problem: ======================= While converting 2x2 to 2x(2+1) (arbiter), there was a checksum mismatch: [root at dhcp43-143 ~]# ./arequal-checksum -p /mnt/master/ Entry counts Regular files : 10000 Directories : 2011 Symbolic links : 11900 Other : 0 Total : 23911 Metadata checksums Regular files : 5ce564791c Directories : 288ecb21ce24 Symbolic links : 3e9 Other : 3e9 Checksums Regular files : 8e69e8576625d36f9ee1866c92bfb6a3 Directories : 4a596e7e1e792061 Symbolic links : 756e690d61497f6a Other : 0 Total : 2fbf69488baa3ac7 [root at dhcp43-143 ~]# ./arequal-checksum -p /mnt/slave/ Entry counts Regular files : 10000 Directories : 2011 Symbolic links : 11900 Other : 0 Total : 23911 Metadata checksums Regular files : 5ce564791c Directories : 288ecb21ce24 Symbolic links : 3e9 Other : 3e9 Checksums Regular files : 53c64bd1144f6d9855f0af3edb55e614 Directories : 4a596e7e1e792061 Symbolic links : 756e690d61497f6a Other : 0 Total : 3901e39cb02ad487 Everything matches except under "CHECKSUMS", Regular files and the total are a mismatch. Version-Release number of selected component (if applicable): ============================================================== glusterfs-3.12.2-45.el7rhgs.x86_64 How reproducible: ================= 2/2 Steps to Reproduce: ==================== 1. Create and start a geo-rep session with master and slave being 2x2 2. Mount the vols and start pumping data 3. 
Disable and stop self healing (prior to add-brick) # gluster volume set VOLNAME cluster.data-self-heal off # gluster volume set VOLNAME cluster.metadata-self-heal off # gluster volume set VOLNAME cluster.entry-self-heal off # gluster volume set VOLNAME self-heal-daemon off 4. Add brick to the master and slave to convert them to 2x(2+1) arbiter vols 5. Start rebalance on master and slave 6. Re-enable self healing : # gluster volume set VOLNAME cluster.data-self-heal on # gluster volume set VOLNAME cluster.metadata-self-heal on # gluster volume set VOLNAME cluster.entry-self-heal on # gluster volume set VOLNAME self-heal-daemon on 7. Wait for rebalance to complete 8. Check the checksum between master and slave Actual results: =============== Checksum does not fully match Expected results: ================ Checksum should match --- Additional comment from Karthik U S on 2019-03-12 06:20:01 UTC --- RCA: If arbiter brick is pending data heal, then self heal will just restore the timestamps of the file and resets the pending xattrs on the source bricks. It will not send any write on the arbiter brick. Here in the add-brick scenario, it will create the entries and then restores the timestamps and other metadata of the files from the source brick. Hence the data changes will not be marked on the changelog, leading to missing data on the slave volume after sync. Possible Fixes: 1. Do not mark arbiter brick as ACTIVE, as it will not have the changelogs for the data transactions happened when it was down/faulty even after the completion of heal. 2. Send 1 byte write on the arbiter brick from self heal as we do with the normal writes from the clients. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686568 [Bug 1686568] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 09:50:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 09:50:14 +0000 Subject: [Bugs] [Bug 1686568] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686568 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1687746 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1687746 [Bug 1687746] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 10:06:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 10:06:47 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Flags| |needinfo?(amgad.saleh at nokia | |.com) --- Comment #7 from Karthik U S --- Can you please provide the following information? - gluster volume info - gluster volume status - logs from all the nodes (path: /var/log/glusterfs/) -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Mar 12 10:10:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 10:10:00 +0000 Subject: [Bugs] [Bug 1680585] remove glupy from code and build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1680585 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-12 10:10:00 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 10:10:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 10:10:01 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1680585, which changed state. Bug 1680585 Summary: remove glupy from code and build https://bugzilla.redhat.com/show_bug.cgi?id=1680585 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 11:21:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 11:21:17 +0000 Subject: [Bugs] [Bug 1683880] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683880 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22344 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 11:21:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 11:21:18 +0000 Subject: [Bugs] [Bug 1683880] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683880 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) posted (#2) for review on release-6 by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 11:22:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 11:22:19 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22341 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
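As a quick check for whether a node is hitting the multiple-shd problem tracked in bug 1683880, the number of self-heal daemon processes can be counted alongside the brick-multiplex setting; with multiplexing enabled there should normally be a single glustershd per node. A sketch only; matching on the glustershd process name is an assumption and may need adjusting per setup.

#!/bin/bash
# Report the brick-multiplex setting and the number of shd processes on this node.
gluster volume get all cluster.brick-multiplex

SHD_COUNT=$(pgrep -fc glustershd)
echo "glustershd processes on this node: $SHD_COUNT"
[ "$SHD_COUNT" -gt 1 ] && echo "WARNING: multiple shd processes detected"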
From bugzilla at redhat.com Tue Mar 12 11:22:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 11:22:20 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 --- Comment #12 from Worker Ant --- REVIEW: https://review.gluster.org/22341 (rpm: add thin-arbiter package) posted (#1) for review on release-6 by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 12:39:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 12:39:23 +0000 Subject: [Bugs] [Bug 1687811] New: core dump generated while running the test ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687811 Bug ID: 1687811 Summary: core dump generated while running the test ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t Product: GlusterFS Version: mainline Status: NEW Component: distribute Assignee: bugs at gluster.org Reporter: rkavunga at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: While running upstream regression, observed a coredump generated by fuse client process. Thread 1 (Thread 0x7f34ef722700 (LWP 9310)): 16:58:12 #0 0x00007f34ee70d0b9 in dht_common_mark_mdsxattr (frame=0x7f34e80a44d8, errst=0x0, mark_during_fresh_lookup=1) at /home/jenkins/root/workspace/regression-on-demand-full-run/xlators/cluster/dht/src/dht-common.c:855 16:58:12 local = 0x7f34e80b37f8 16:58:12 this = 0x7f34e80252f0 16:58:12 hashed_subvol = 0x0 16:58:12 ret = 0 16:58:12 i = 2 16:58:12 xattrs = 0x0 16:58:12 gfid_local = "a8c9cc71-1430-4217-86e6-f32eeb69d4ce" 16:58:12 zero = {0} 16:58:12 conf = 0x7f34e8054600 16:58:12 layout = 0x0 16:58:12 copy_local = 0x0 16:58:12 xattr_frame = 0x0 16:58:12 vol_down = false 16:58:12 __FUNCTION__ = "dht_common_mark_mdsxattr" 16:58:12 #1 0x00007f34ee7120f7 in dht_revalidate_cbk (frame=0x7f34e80a44d8, cookie=0x7f34e8020b50, this=0x7f34e80252f0, op_ret=0, op_errno=0, inode=0x7f34dc02bd98, stbuf=0x7f34e80a15e8, xattr=0x7f34e80c5b78, postparent=0x7f34e80a1680) at /home/jenkins/root/workspace/regression-on-demand-full-run/xlators/cluster/dht/src/dht-common.c:1780 16:58:12 local = 0x7f34e80b37f8 16:58:12 this_call_cnt = 0 16:58:12 prev = 0x7f34e8020b50 16:58:12 layout = 0x7f34dc011250 16:58:12 conf = 0x7f34e8054600 16:58:12 ret = 0 16:58:12 is_dir = 1 16:58:12 is_linkfile = 0 16:58:12 follow_link = 0 16:58:12 copy = 0x0 16:58:12 copy_local = 0x0 16:58:12 gfid = "a8c9cc71-1430-4217-86e6-f32eeb69d4ce" 16:58:12 vol_commit_hash = 0 16:58:12 subvol = 0x0 16:58:12 check_mds = 0 16:58:12 errst = 0 16:58:12 mds_xattr_val = {0} 16:58:12 __FUNCTION__ = "dht_revalidate_cbk" 16:58:12 #2 0x00007f34eea387a1 in afr_lookup_done (frame=0x7f34e80970b8, this=0x7f34e8020b50) at /home/jenkins/root/workspace/regression-on-demand-full-run/xlators/cluster/afr/src/afr-common.c:2499 16:58:12 fn = 0x7f34ee710dde 16:58:12 _parent = 0x7f34e80a44d8 16:58:12 old_THIS = 0x7f34e8020b50 16:58:12 __local = 0x7f34e8099198 16:58:12 __this = 0x7f34e8020b50 16:58:12 __op_ret = 0 16:58:12 __op_errno = 0 16:58:12 priv = 0x7f34e805efe0 16:58:12 local = 0x7f34e8099198 16:58:12 i = 3 16:58:12 op_errno = 0 16:58:12 read_subvol = 1 16:58:12 par_read_subvol = 1 16:58:12 ret = -2 16:58:12 readable = 0x7f34ef721330 "" 16:58:12 success_replies = 0x7f34ef721310 "\001\001\001\357\064\177" 
16:58:12 event = 3 16:58:12 replies = 0x7f34e80a11f0 16:58:12 read_gfid = "\250\311\314q\024\060B\027\206\346\363.\353i\324", 16:58:12 locked_entry = false 16:58:12 can_interpret = true 16:58:12 parent = 0x7f34e8001de8 16:58:12 ia_type = IA_IFDIR 16:58:12 args = {ia_type = IA_IFDIR, gfid = "\250\311\314q\024\060B\027\206\346\363.\353i\324", } 16:58:12 gfid_heal_msg = 0x0 16:58:12 __FUNCTION__ = "afr_lookup_done" 16:58:12 #3 0x00007f34eea39c93 in afr_lookup_metadata_heal_check (frame=0x7f34e80970b8, this=0x7f34e8020b50) at /home/jenkins/root/workspace/regression-on-demand-full-run/xlators/cluster/afr/src/afr-common.c:2807 16:58:12 heal = 0x0 16:58:12 local = 0x7f34e8099198 16:58:12 ret = 0 16:58:12 #4 0x00007f34eea3a87b in afr_lookup_entry_heal (frame=0x7f34e80970b8, this=0x7f34e8020b50) at /home/jenkins/root/workspace/regression-on-demand-full-run/xlators/cluster/afr/src/afr-common.c:2955 16:58:12 local = 0x7f34e8099198 16:58:12 priv = 0x7f34e805efe0 16:58:12 heal = 0x0 16:58:12 i = 3 16:58:12 first = 0 16:58:12 name_state_mismatch = false 16:58:12 replies = 0x7f34e80a11f0 16:58:12 ret = 0 16:58:12 par_readables = 0x7f34ef721500 "" 16:58:12 success = 0x7f34ef7214e0 "\001\001\001" 16:58:12 op_errno = 0 16:58:12 gfid = "\250\311\314q\024\060B\027\206\346\363.\353i\324", 16:58:12 #5 0x00007f34eea3aab5 in afr_lookup_cbk (frame=0x7f34e80970b8, cookie=0x1, this=0x7f34e8020b50, op_ret=0, op_errno=0, inode=0x7f34dc02bd98, buf=0x7f34ef721750, xdata=0x7f34e80c5b78, postparent=0x7f34ef7216b0) at /home/jenkins/root/workspace/regression-on-demand-full-run/xlators/cluster/afr/src/afr-common.c:3002 16:58:12 local = 0x7f34e8099198 16:58:12 call_count = 0 16:58:12 child_index = 1 16:58:12 ret = 0 16:58:12 need_heal = 1 '\001' 16:58:12 #6 0x00007f34eecf5a51 in client4_0_lookup_cbk (req=0x7f34e807a038, iov=0x7f34e807a070, count=1, myframe=0x7f34e80a4238) at /home/jenkins/root/workspace/regression-on-demand-full-run/xlators/protocol/client/src/client-rpc-fops_v2.c:2641 16:58:12 fn = 0x7f34eea3a887 16:58:12 _parent = 0x7f34e80970b8 16:58:12 old_THIS = 0x7f34e800d050 16:58:12 __local = 0x7f34e80a5248 16:58:12 rsp = {op_ret = 0, op_errno = 0, xdata = {xdr_size = 436, count = 9, pairs = {pairs_len = 9, pairs_val = 0x7f34e80c65b0}}, prestat = {ia_gfid = "\250\311\314q\024\060B\027\206\346\363.\353i\324", , ia_flags = 6143, ia_ino = 9720724228569421006, ia_dev = 1792, ia_rdev = 0, ia_size = 6, ia_blocks = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_atime = 1552377223, ia_mtime = 1552377223, ia_ctime = 1552377261, ia_btime = 0, ia_atime_nsec = 225620000, ia_mtime_nsec = 225620000, ia_ctime_nsec = 95181074, ia_btime_nsec = 0, ia_nlink = 2, ia_uid = 0, ia_gid = 0, ia_blksize = 4096, mode = 16877}, poststat = {ia_gfid = '\000' , "\001", ia_flags = 6143, ia_ino = 1, ia_dev = 1792, ia_rdev = 0, ia_size = 4096, ia_blocks = 8, ia_attributes = 0, ia_attributes_mask = 0, ia_atime = 1552377217, ia_mtime = 1552377342, ia_ctime = 1552377342, ia_btime = 0, ia_atime_nsec = 889585720, ia_mtime_nsec = 159385135, ia_ctime_nsec = 159385135, ia_btime_nsec = 0, ia_nlink = 17, ia_uid = 0, ia_gid = 0, ia_blksize = 4096, mode = 16877}} 16:58:12 local = 0x7f34e80a5248 16:58:12 frame = 0x7f34e80a4238 16:58:12 ret = 0 16:58:12 stbuf = {ia_flags = 6143, ia_ino = 9720724228569421006, ia_dev = 1792, ia_rdev = 0, ia_size = 6, ia_nlink = 2, ia_uid = 0, ia_gid = 0, ia_blksize = 4096, ia_blocks = 0, ia_atime = 1552377223, ia_mtime = 1552377223, ia_ctime = 1552377261, ia_btime = 0, ia_atime_nsec = 225620000, ia_mtime_nsec = 225620000, 
ia_ctime_nsec = 95181074, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = "\250\311\314q\024\060B\027\206\346\363.\353i\324", , ia_type = IA_IFDIR, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 1 '\001', write = 1 '\001', exec = 1 '\001'}, group = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}, other = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}}} 16:58:12 postparent = {ia_flags = 6143, ia_ino = 1, ia_dev = 1792, ia_rdev = 0, ia_size = 4096, ia_nlink = 17, ia_uid = 0, ia_gid = 0, ia_blksize = 4096, ia_blocks = 8, ia_atime = 1552377217, ia_mtime = 1552377342, ia_ctime = 1552377342, ia_btime = 0, ia_atime_nsec = 889585720, ia_mtime_nsec = 159385135, ia_ctime_nsec = 159385135, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' , "\001", ia_type = IA_IFDIR, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 1 '\001', write = 1 '\001', exec = 1 '\001'}, group = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}, other = {read = 1 '\001', write = 0 '\000', exec = 1 '\001'}}} 16:58:12 op_errno = 0 16:58:12 xdata = 0x7f34e80c5b78 16:58:12 inode = 0x7f34dc02bd98 16:58:12 this = 0x7f34e800d050 16:58:12 __FUNCTION__ = "client4_0_lookup_cbk" 16:58:12 #7 0x00007f34fd4105d0 in rpc_clnt_handle_reply (clnt=0x7f34e8073ce0, pollin=0x7f34e80a4730) at /home/jenkins/root/workspace/regression-on-demand-full-run/rpc/rpc-lib/src/rpc-clnt.c:764 16:58:12 conn = 0x7f34e8073d10 16:58:12 saved_frame = 0x7f34e80a2db8 16:58:12 ret = 0 16:58:12 req = 0x7f34e807a038 16:58:12 xid = 59 16:58:12 __FUNCTION__ = "rpc_clnt_handle_reply" 16:58:12 #8 0x00007f34fd410af9 in rpc_clnt_notify (trans=0x7f34e8073ff0, mydata=0x7f34e8073d10, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f34e80a4730) at /home/jenkins/root/workspace/regression-on-demand-full-run/rpc/rpc-lib/src/rpc-clnt.c:931 16:58:12 conn = 0x7f34e8073d10 16:58:12 clnt = 0x7f34e8073ce0 16:58:12 ret = -1 16:58:12 req_info = 0x0 16:58:12 pollin = 0x7f34e80a4730 16:58:12 clnt_mydata = 0x0 16:58:12 old_THIS = 0x7f34e800d050 16:58:12 __FUNCTION__ = "rpc_clnt_notify" 16:58:12 #9 0x00007f34fd40cade in rpc_transport_notify (this=0x7f34e8073ff0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f34e80a4730) at /home/jenkins/root/workspace/regression-on-demand-full-run/rpc/rpc-lib/src/rpc-transport.c:544 16:58:12 ret = -1 16:58:12 __FUNCTION__ = "rpc_transport_notify" 16:58:12 #10 0x00007f34f17d8a06 in socket_event_poll_in_async (xl=0x7f34e800d050, async=0x7f34e80a4858) at /home/jenkins/root/workspace/regression-on-demand-full-run/rpc/rpc-transport/socket/src/socket.c:2509 16:58:12 pollin = 0x7f34e80a4730 16:58:12 this = 0x7f34e8073ff0 16:58:12 priv = 0x7f34e8074670 16:58:12 #11 0x00007f34f17d02ec in gf_async (async=0x7f34e80a4858, xl=0x7f34e800d050, cbk=0x7f34f17d89af ) at /home/jenkins/root/workspace/regression-on-demand-full-run/libglusterfs/src/glusterfs/async.h:189 16:58:12 __FUNCTION__ = "gf_async" 16:58:12 #12 0x00007f34f17d8b94 in socket_event_poll_in (this=0x7f34e8073ff0, notify_handled=true) at /home/jenkins/root/workspace/regression-on-demand-full-run/rpc/rpc-transport/socket/src/socket.c:2550 16:58:12 ret = 0 16:58:12 pollin = 0x7f34e80a4730 16:58:12 priv = 0x7f34e8074670 16:58:12 ctx = 0xbd2010 16:58:12 #13 0x00007f34f17d9b33 in socket_event_handler (fd=12, idx=3, gen=4, data=0x7f34e8073ff0, poll_in=1, poll_out=0, poll_err=0, event_thread_died=0 '\000') at 
/home/jenkins/root/workspace/regression-on-demand-full-run/rpc/rpc-transport/socket/src/socket.c:2941 16:58:12 this = 0x7f34e8073ff0 16:58:12 priv = 0x7f34e8074670 16:58:12 ret = 0 16:58:12 ctx = 0xbd2010 16:58:12 socket_closed = false 16:58:12 notify_handled = false 16:58:12 __FUNCTION__ = "socket_event_handler" 16:58:12 #14 0x00007f34fd6ec9f3 in event_dispatch_epoll_handler (event_pool=0xc08e90, event=0x7f34ef721e80) at /home/jenkins/root/workspace/regression-on-demand-full-run/libglusterfs/src/event-epoll.c:648 16:58:12 ev_data = 0x7f34ef721e84 16:58:12 slot = 0xc4ff20 16:58:12 handler = 0x7f34f17d968e 16:58:12 data = 0x7f34e8073ff0 16:58:12 idx = 3 16:58:12 gen = 4 16:58:12 ret = 0 16:58:12 fd = 12 16:58:12 handled_error_previously = false 16:58:12 __FUNCTION__ = "event_dispatch_epoll_handler" 16:58:12 #15 0x00007f34fd6ecf0c in event_dispatch_epoll_worker (data=0xc6c150) at /home/jenkins/root/workspace/regression-on-demand-full-run/libglusterfs/src/event-epoll.c:761 16:58:12 event = {events = 1, data = {ptr = 0x400000003, fd = 3, u32 = 3, u64 = 17179869187}} 16:58:12 ret = 1 16:58:12 ev_data = 0xc6c150 16:58:12 event_pool = 0xc08e90 16:58:12 myindex = 2 16:58:12 timetodie = 0 16:58:12 gen = 0 16:58:12 poller_death_notify = {next = 0x0, prev = 0x0} 16:58:12 slot = 0x0 16:58:12 tmp = 0x0 16:58:12 __FUNCTION__ = "event_dispatch_epoll_worker" 16:58:12 #16 0x00007f34fc498dd5 in start_thread () from /lib64/libpthread.so.0 Version-Release number of selected component (if applicable): How reproducible: very rare. Steps to Reproduce: 1.run test tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t 2. 3. Actual results: Expected results: Additional info: https://build.gluster.org/job/regression-on-demand-full-run/259/consoleFull -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 12:44:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 12:44:27 +0000 Subject: [Bugs] [Bug 1687811] core dump generated while running the test ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687811 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22345 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 12:44:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 12:44:28 +0000 Subject: [Bugs] [Bug 1687811] core dump generated while running the test ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687811 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22345 (dht: NULL check before setting error flag) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
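For reference, a backtrace like the one pasted above can be pulled out of the regression core with gdb roughly as follows. The binary and core paths are assumptions and depend on how the regression job archives its artifacts.

#!/bin/bash
# Extract a full backtrace for all threads from the fuse client core dump.
CORE=/path/to/core.9310            # assumption: core written by the crashed client
BIN=/usr/local/sbin/glusterfs      # assumption: binary used by the regression run

gdb -q "$BIN" "$CORE" \
    -ex "set pagination off" \
    -ex "thread apply all bt full" \
    -ex "quit" > backtrace.txt

grep -n "dht_common_mark_mdsxattr" backtrace.txt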
From bugzilla at redhat.com Tue Mar 12 14:32:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 14:32:31 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Amgad changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(srangana at redhat.c | |om) | |needinfo?(amgad.saleh at nokia | |.com) | --- Comment #8 from Amgad --- Case 1) online upgrade from 3.12.15 to 5.3 A) I have a cluster of 3 replicas: gfs-1, gfs-2, gfs-3new running 3.12.15. When online upgraded gfs-1 from 3.12.15, here are the outputs: (notice that bricks on gfs-1 are offline - both glusterd and glusterfsd are active and running) [root at gfs-1 ~]# gluster volume info Volume Name: glustervol1 Type: Replicate Volume ID: 28b16639-7c58-4f28-975b-5ea17274e87b Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data1/1 Brick2: 10.76.153.213:/mnt/data1/1 Brick3: 10.76.153.207:/mnt/data1/1 Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet Volume Name: glustervol2 Type: Replicate Volume ID: 8637eee7-20b7-4a88-b497-192b4626093d Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data2/2 Brick2: 10.76.153.213:/mnt/data2/2 Brick3: 10.76.153.207:/mnt/data2/2 Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet Volume Name: glustervol3 Type: Replicate Volume ID: f8c21e8c-0a9a-40ba-b098-931a4219de0f Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data3/3 Brick2: 10.76.153.213:/mnt/data3/3 Brick3: 10.76.153.207:/mnt/data3/3 Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet --- [root at gfs-1 ~]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 N/A N/A N N/A Brick 10.76.153.213:/mnt/data1/1 49152 0 Y 24733 Brick 10.76.153.207:/mnt/data1/1 49152 0 Y 7790 Self-heal Daemon on localhost N/A N/A Y 14928 Self-heal Daemon on 10.76.153.207 N/A N/A Y 7780 Self-heal Daemon on 10.76.153.213 N/A N/A Y 24723 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 N/A N/A N N/A Brick 10.76.153.213:/mnt/data2/2 49153 0 Y 24742 Brick 10.76.153.207:/mnt/data2/2 49153 0 Y 7800 Self-heal Daemon on localhost N/A N/A Y 14928 Self-heal Daemon on 10.76.153.207 N/A N/A Y 7780 Self-heal Daemon on 10.76.153.213 N/A N/A Y 24723 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 N/A N/A N N/A Brick 
10.76.153.213:/mnt/data3/3 49154 0 Y 24751 Brick 10.76.153.207:/mnt/data3/3 49154 0 Y 7809 Self-heal Daemon on localhost N/A N/A Y 14928 Self-heal Daemon on 10.76.153.207 N/A N/A Y 7780 Self-heal Daemon on 10.76.153.213 N/A N/A Y 24723 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-1 ~]# ====== Running "gluster volume heal" ==> unsuccessful [root at gfs-1 ~]# for i in `gluster volume list`; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been unsuccessful: Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd log file for details. Launching heal operation to perform index self heal on volume glustervol2 has been unsuccessful: Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd log file for details. Launching heal operation to perform index self heal on volume glustervol3 has been unsuccessful: Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd log file for details. [root at gfs-1 ~]# B) Reverting gfs-1 back to 3.12.15, bricks are on line and heal is successfull [root at gfs-1 log]# gluster volume info Volume Name: glustervol1 Type: Replicate Volume ID: 28b16639-7c58-4f28-975b-5ea17274e87b Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data1/1 Brick2: 10.76.153.213:/mnt/data1/1 Brick3: 10.76.153.207:/mnt/data1/1 Options Reconfigured: transport.address-family: inet nfs.disable: on performance.client-io-threads: off Volume Name: glustervol2 Type: Replicate Volume ID: 8637eee7-20b7-4a88-b497-192b4626093d Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data2/2 Brick2: 10.76.153.213:/mnt/data2/2 Brick3: 10.76.153.207:/mnt/data2/2 Options Reconfigured: transport.address-family: inet nfs.disable: on performance.client-io-threads: off Volume Name: glustervol3 Type: Replicate Volume ID: f8c21e8c-0a9a-40ba-b098-931a4219de0f Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data3/3 Brick2: 10.76.153.213:/mnt/data3/3 Brick3: 10.76.153.207:/mnt/data3/3 Options Reconfigured: transport.address-family: inet nfs.disable: on performance.client-io-threads: off [root at gfs-1 log]# [root at gfs-1 log]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49152 0 Y 16029 Brick 10.76.153.213:/mnt/data1/1 49152 0 Y 24733 Brick 10.76.153.207:/mnt/data1/1 49152 0 Y 7790 Self-heal Daemon on localhost N/A N/A Y 16019 Self-heal Daemon on 10.76.153.207 N/A N/A Y 7780 Self-heal Daemon on 10.76.153.213 N/A N/A Y 24723 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49153 0 Y 16038 Brick 10.76.153.213:/mnt/data2/2 49153 0 Y 24742 Brick 10.76.153.207:/mnt/data2/2 49153 0 Y 7800 Self-heal Daemon on localhost N/A N/A Y 16019 Self-heal Daemon on 10.76.153.207 N/A N/A Y 7780 Self-heal Daemon on 10.76.153.213 N/A N/A Y 24723 Task Status of Volume glustervol2 
------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49154 0 Y 16047 Brick 10.76.153.213:/mnt/data3/3 49154 0 Y 24751 Brick 10.76.153.207:/mnt/data3/3 49154 0 Y 7809 Self-heal Daemon on localhost N/A N/A Y 16019 Self-heal Daemon on 10.76.153.213 N/A N/A Y 24723 Self-heal Daemon on 10.76.153.207 N/A N/A Y 7780 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-1 log]# [root at gfs-1 log]# for i in `gluster volume list`; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been successful Use heal info commands to check status. Launching heal operation to perform index self heal on volume glustervol2 has been successful Use heal info commands to check status. Launching heal operation to perform index self heal on volume glustervol3 has been successful Use heal info commands to check status. [root at gfs-1 log]# Uploading /var/log/glusterfs: - when upgraded gfs-1 to 5.3: gfs-1-logs.tgz, gfs-2-logs.tgz, and gfs-3new-logs.tgz - when reverted back to 3.12.15: gfs-1-logs-3.12.15.tgz, gfs-2-logs-3.12.15.tgz, and gfs-3new-logs-3.12.15.tgz Next comment will have the 2nd case upgrade 3.12.15 -to- 4.1.4 and rollback -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 14:42:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 14:42:21 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #9 from Amgad --- Created attachment 1543212 --> https://bugzilla.redhat.com/attachment.cgi?id=1543212&action=edit gfs-1 when online upgraded from 3.12.15 to 5.3 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 14:43:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 14:43:10 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #10 from Amgad --- Created attachment 1543214 --> https://bugzilla.redhat.com/attachment.cgi?id=1543214&action=edit gfs-2 logs when gfs-1 online upgraded from 3.12.15 to 5.3 -- You are receiving this mail because: You are on the CC list for the bug. 
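When upgrading one node at a time as described in comment #8, a small per-node check after each upgrade makes it obvious whether the just-upgraded node's bricks came back online and heal can be triggered before moving on to the next node. This is only a sketch wrapping the same CLI commands used in the comments above; the awk column positions are taken from the status output pasted there.

#!/bin/bash
# Per-volume sanity check after upgrading one node: flag offline bricks and
# trigger/inspect index heal.
for vol in $(gluster volume list); do
    echo "== $vol =="
    # Print any brick line whose Online column is 'N' (as seen for the gfs-1
    # bricks in comment #8 after the 5.3 upgrade).
    gluster volume status "$vol" | awk '/^Brick/ && $(NF-1) == "N"'
    gluster volume heal "$vol" || echo "heal trigger failed for $vol"
    gluster volume heal "$vol" info | grep -i "Number of entries"
done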
From bugzilla at redhat.com Tue Mar 12 14:44:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 14:44:52 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #11 from Amgad --- Created attachment 1543215 --> https://bugzilla.redhat.com/attachment.cgi?id=1543215&action=edit gfs-3new logs when gfs-1 online upgraded from 3.12.15 to 5.3 --- Comment #12 from Amgad --- Created attachment 1543216 --> https://bugzilla.redhat.com/attachment.cgi?id=1543216&action=edit gfs-1 logs when gfs-1 reverted back to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 14:45:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 14:45:23 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #13 from Amgad --- Created attachment 1543217 --> https://bugzilla.redhat.com/attachment.cgi?id=1543217&action=edit gfs-2 logs when gfs-1 reverted back to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 14:46:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 14:46:02 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #14 from Amgad --- Created attachment 1543219 --> https://bugzilla.redhat.com/attachment.cgi?id=1543219&action=edit gfs-3new logs when gfs-1 reverted back to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 14:58:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 14:58:04 +0000 Subject: [Bugs] [Bug 1686875] packaging: rdma on s390x, unnecessary ldconfig scriptlets In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686875 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-12 14:58:04 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22330 (packaging: rdma on s390x, unnecessary ldconfig scriptlets) merged (#1) on release-6 by Kaleb KEITHLEY -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 14:58:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 14:58:04 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1686875, which changed state. 
Bug 1686875 Summary: packaging: rdma on s390x, unnecessary ldconfig scriptlets https://bugzilla.redhat.com/show_bug.cgi?id=1686875 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 15:21:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 15:21:32 +0000 Subject: [Bugs] [Bug 1687705] Brick process has coredumped, when starting glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687705 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22339 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 16:20:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 16:20:27 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #15 from Amgad --- Case 2) online upgrade from 3.12.15 to 4.1.4 and rollback: A) I have a cluster of 3 replicas: gfs-1 (10.76.153.206), gfs-2 (10.76.153.213), and gfs-3new (10.76.153.206), running 3.12.15. When online upgraded gfs-1 from 3.12.15 to 4.1.4, heal succeeded. Continuing with gfs-2, then gfs-3new, online upgrade and heal succeeded. 1) Here're the outputs after gfs-1 was online upgraded from 3.12.15 to 4.1.4: Logs uploaded are: gfs-1-logs-gfs-1-UpgFrom3.12.15-to-4.1.4.tgz, gfs-2-logs-gfs-1-UpgFrom3.12.15-to-4.1.4.tgz, and gfs-3new-logs-gfs-1-UpgFrom3.12.15-to-4.1.4.tgz - see the latest upgrade case. 
[root at gfs-1 ansible1]# gluster volume info Volume Name: glustervol1 Type: Replicate Volume ID: 28b16639-7c58-4f28-975b-5ea17274e87b Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data1/1 Brick2: 10.76.153.213:/mnt/data1/1 Brick3: 10.76.153.207:/mnt/data1/1 Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet Volume Name: glustervol2 Type: Replicate Volume ID: 8637eee7-20b7-4a88-b497-192b4626093d Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data2/2 Brick2: 10.76.153.213:/mnt/data2/2 Brick3: 10.76.153.207:/mnt/data2/2 Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet Volume Name: glustervol3 Type: Replicate Volume ID: f8c21e8c-0a9a-40ba-b098-931a4219de0f Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data3/3 Brick2: 10.76.153.213:/mnt/data3/3 Brick3: 10.76.153.207:/mnt/data3/3 Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet [root at gfs-1 ansible1]# [root at gfs-1 ansible1]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49155 0 Y 30270 Brick 10.76.153.213:/mnt/data1/1 49152 0 Y 12726 Brick 10.76.153.207:/mnt/data1/1 49152 0 Y 26671 Self-heal Daemon on localhost N/A N/A Y 30260 Self-heal Daemon on 10.76.153.213 N/A N/A Y 12716 Self-heal Daemon on 10.76.153.207 N/A N/A Y 26661 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49156 0 Y 30279 Brick 10.76.153.213:/mnt/data2/2 49153 0 Y 12735 Brick 10.76.153.207:/mnt/data2/2 49153 0 Y 26680 Self-heal Daemon on localhost N/A N/A Y 30260 Self-heal Daemon on 10.76.153.213 N/A N/A Y 12716 Self-heal Daemon on 10.76.153.207 N/A N/A Y 26661 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49157 0 Y 30288 Brick 10.76.153.213:/mnt/data3/3 49154 0 Y 12744 Brick 10.76.153.207:/mnt/data3/3 49154 0 Y 26689 Self-heal Daemon on localhost N/A N/A Y 30260 Self-heal Daemon on 10.76.153.213 N/A N/A Y 12716 Self-heal Daemon on 10.76.153.207 N/A N/A Y 26661 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-1 ansible1]# for i in `gluster volume list`; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been successful Use heal info commands to check status. Launching heal operation to perform index self heal on volume glustervol2 has been successful Use heal info commands to check status. 
Launching heal operation to perform index self heal on volume glustervol3 has been successful Use heal info commands to check status. [root at gfs-1 ansible1]# ======================= ===================== 2) Here're the outputs after all were online upgraded from 3.12.15 to 4.1.4: Logs uploaded see the logs for B) which include this case as well [root at gfs-3new ansible1]# gluster volume info Volume Name: glustervol1 Type: Replicate Volume ID: 28b16639-7c58-4f28-975b-5ea17274e87b Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data1/1 Brick2: 10.76.153.213:/mnt/data1/1 Brick3: 10.76.153.207:/mnt/data1/1 Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet Volume Name: glustervol2 Type: Replicate Volume ID: 8637eee7-20b7-4a88-b497-192b4626093d Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data2/2 Brick2: 10.76.153.213:/mnt/data2/2 Brick3: 10.76.153.207:/mnt/data2/2 Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet Volume Name: glustervol3 Type: Replicate Volume ID: f8c21e8c-0a9a-40ba-b098-931a4219de0f Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data3/3 Brick2: 10.76.153.213:/mnt/data3/3 Brick3: 10.76.153.207:/mnt/data3/3 Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet [root at gfs-3new ansible1]# [root at gfs-3new ansible1]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49155 0 Y 30270 Brick 10.76.153.213:/mnt/data1/1 49155 0 Y 13874 Brick 10.76.153.207:/mnt/data1/1 49155 0 Y 28144 Self-heal Daemon on localhost N/A N/A Y 28134 Self-heal Daemon on 10.76.153.213 N/A N/A Y 13864 Self-heal Daemon on 10.76.153.206 N/A N/A Y 30260 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49156 0 Y 30279 Brick 10.76.153.213:/mnt/data2/2 49156 0 Y 13883 Brick 10.76.153.207:/mnt/data2/2 49156 0 Y 28153 Self-heal Daemon on localhost N/A N/A Y 28134 Self-heal Daemon on 10.76.153.206 N/A N/A Y 30260 Self-heal Daemon on 10.76.153.213 N/A N/A Y 13864 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49157 0 Y 30288 Brick 10.76.153.213:/mnt/data3/3 49157 0 Y 13892 Brick 10.76.153.207:/mnt/data3/3 49157 0 Y 28162 Self-heal Daemon on localhost N/A N/A Y 28134 Self-heal Daemon on 10.76.153.206 N/A N/A Y 30260 Self-heal Daemon on 10.76.153.213 N/A N/A Y 13864 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-3new ansible1]# [root at gfs-3new ansible1]# for i in `gluster volume list`; do gluster volume heal $i; done 
Launching heal operation to perform index self heal on volume glustervol1 has been successful Use heal info commands to check status. Launching heal operation to perform index self heal on volume glustervol2 has been successful Use heal info commands to check status. Launching heal operation to perform index self heal on volume glustervol3 has been successful Use heal info commands to check status. [root at gfs-3new ansible1]# ====== ======= B) Here're the outputs after gfs-1 was online rollbacked from 4.1.4 to 3.12.15 - rollback succeeded, but "gluster volume heal" was unsuccessful: Logs uploaded are: gfs-1-logs-gfs-1-RollbackFrom4.1.4-to-3.12.15.tgz, gfs-2-logs-gfs-1-RollbackFrom4.1.4-to-3.12.15.tgz, and gfs-3new-logs-gfs-1-RollbackFrom4.1.4-to-3.12.15.tgz - includes case 2) as well right before [root at gfs-1 ansible1]# gluster volume info Volume Name: glustervol1 Type: Replicate Volume ID: 28b16639-7c58-4f28-975b-5ea17274e87b Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data1/1 Brick2: 10.76.153.213:/mnt/data1/1 Brick3: 10.76.153.207:/mnt/data1/1 Options Reconfigured: transport.address-family: inet nfs.disable: on performance.client-io-threads: off Volume Name: glustervol2 Type: Replicate Volume ID: 8637eee7-20b7-4a88-b497-192b4626093d Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data2/2 Brick2: 10.76.153.213:/mnt/data2/2 Brick3: 10.76.153.207:/mnt/data2/2 Options Reconfigured: transport.address-family: inet nfs.disable: on performance.client-io-threads: off Volume Name: glustervol3 Type: Replicate Volume ID: f8c21e8c-0a9a-40ba-b098-931a4219de0f Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: 10.76.153.206:/mnt/data3/3 Brick2: 10.76.153.213:/mnt/data3/3 Brick3: 10.76.153.207:/mnt/data3/3 Options Reconfigured: transport.address-family: inet nfs.disable: on performance.client-io-threads: off [root at gfs-1 ansible1]# [root at gfs-1 ansible1]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49152 0 Y 32078 Brick 10.76.153.213:/mnt/data1/1 49155 0 Y 13874 Brick 10.76.153.207:/mnt/data1/1 49155 0 Y 28144 Self-heal Daemon on localhost N/A N/A Y 32068 Self-heal Daemon on 10.76.153.213 N/A N/A Y 13864 Self-heal Daemon on 10.76.153.207 N/A N/A Y 28134 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49153 0 Y 32087 Brick 10.76.153.213:/mnt/data2/2 49156 0 Y 13883 Brick 10.76.153.207:/mnt/data2/2 49156 0 Y 28153 Self-heal Daemon on localhost N/A N/A Y 32068 Self-heal Daemon on 10.76.153.213 N/A N/A Y 13864 Self-heal Daemon on 10.76.153.207 N/A N/A Y 28134 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49154 0 Y 32096 Brick 10.76.153.213:/mnt/data3/3 49157 0 Y 13892 Brick 
10.76.153.207:/mnt/data3/3 49157 0 Y 28162 Self-heal Daemon on localhost N/A N/A Y 32068 Self-heal Daemon on 10.76.153.213 N/A N/A Y 13864 Self-heal Daemon on 10.76.153.207 N/A N/A Y 28134 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-1 ansible1]# for i in `gluster volume list`; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Commit failed on 10.76.153.213. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol2 has been unsuccessful: Commit failed on 10.76.153.213. Please check log file for details. Commit failed on 10.76.153.207. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol3 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Commit failed on 10.76.153.213. Please check log file for details. [root at gfs-1 ansible1]# -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 16:24:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 16:24:03 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #16 from Amgad --- Created attachment 1543260 --> https://bugzilla.redhat.com/attachment.cgi?id=1543260&action=edit gfs-1 logs when gfs-1 online upgraded from 3.12.15 to 4.1 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 16:24:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 16:24:33 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #17 from Amgad --- Created attachment 1543261 --> https://bugzilla.redhat.com/attachment.cgi?id=1543261&action=edit gfs-2 logs when gfs-1 online upgraded from 3.12.15 to 4.1 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 16:25:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 16:25:07 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #18 from Amgad --- Created attachment 1543262 --> https://bugzilla.redhat.com/attachment.cgi?id=1543262&action=edit gfs-3new logs when gfs-1 online upgraded from 3.12.15 to 4.1 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Mar 12 16:26:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 16:26:12 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #19 from Amgad --- Created attachment 1543263 --> https://bugzilla.redhat.com/attachment.cgi?id=1543263&action=edit gfs-1 logs when gfs-1 online rolledback from 4.1.4 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 16:26:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 16:26:44 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #20 from Amgad --- Created attachment 1543264 --> https://bugzilla.redhat.com/attachment.cgi?id=1543264&action=edit gfs-2 logs when gfs-1 online rolledback from 4.1.4 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 16:27:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 16:27:29 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #21 from Amgad --- Created attachment 1543268 --> https://bugzilla.redhat.com/attachment.cgi?id=1543268&action=edit gfs-3new logs when gfs-1 online rolledback from 4.1.4 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 20:51:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 20:51:34 +0000 Subject: [Bugs] [Bug 1685771] glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685771 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-12 20:51:34 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22303 (glusterd: glusterd memory leak while running \"gluster v profile\" in a loop) merged (#3) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 20:51:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 20:51:35 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1685771, which changed state. Bug 1685771 Summary: glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA https://bugzilla.redhat.com/show_bug.cgi?id=1685771 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Mar 12 20:51:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 20:51:35 +0000 Subject: [Bugs] [Bug 1685414] glusterd memory usage grows at 98 MB/h while running "gluster v profile" in a loop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685414 Bug 1685414 depends on bug 1685771, which changed state. Bug 1685771 Summary: glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA https://bugzilla.redhat.com/show_bug.cgi?id=1685771 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 20:52:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 20:52:17 +0000 Subject: [Bugs] [Bug 1687672] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687672 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22336 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 20:52:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 20:52:19 +0000 Subject: [Bugs] [Bug 1687672] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687672 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-12 20:52:19 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22336 (cluster/afr: Send truncate on arbiter brick from SHD) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 20:52:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 20:52:19 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1687672, which changed state. Bug 1687672 Summary: [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter https://bugzilla.redhat.com/show_bug.cgi?id=1687672 What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Mar 12 20:53:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 20:53:28 +0000 Subject: [Bugs] [Bug 1683880] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683880 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-12 20:53:28 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22344 (glusterfsd: Multiple shd processes are spawned on brick_mux environment) merged (#3) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 20:53:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 20:53:29 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1683880, which changed state. Bug 1683880 Summary: Multiple shd processes are running on brick_mux environmet https://bugzilla.redhat.com/show_bug.cgi?id=1683880 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 12 20:53:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 20:53:29 +0000 Subject: [Bugs] [Bug 1684404] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684404 Bug 1684404 depends on bug 1683880, which changed state. Bug 1683880 Summary: Multiple shd processes are running on brick_mux environmet https://bugzilla.redhat.com/show_bug.cgi?id=1683880 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 12 20:57:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 12 Mar 2019 20:57:28 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 --- Comment #13 from Worker Ant --- REVIEW: https://review.gluster.org/22250 (doc: Update release notes for Samba integration) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 01:48:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 01:48:58 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 --- Comment #14 from Worker Ant --- REVIEW: https://review.gluster.org/22341 (rpm: add thin-arbiter package) merged (#4) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 13 03:33:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 03:33:23 +0000 Subject: [Bugs] [Bug 1687811] core dump generated while running the test ./tests/00-geo-rep/georep-basic-dr-rsync-arbiter.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687811 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-13 03:33:23 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22345 (dht: NULL check before setting error flag) merged (#1) on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 04:03:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 04:03:59 +0000 Subject: [Bugs] [Bug 1688068] New: Proper error message needed for FUSE mount failure when /var is filled. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688068 Bug ID: 1688068 Summary: Proper error message needed for FUSE mount failure when /var is filled. Product: GlusterFS Version: 4.1 Status: NEW Component: fuse Severity: low Priority: low Assignee: atumball at redhat.com Reporter: atumball at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community This bug was initially created as a copy of Bug #1685406 Description of problem: On a gluster client, if we try to mount volume when /var is filled completely FUSE mount fails with: ################################################################################ # mount.glusterfs my-node.redhat.com:/testvol_replicated/ bug/ ERROR: failed to create logfile "/var/log/glusterfs/bug-.log" (No space left on device) ERROR: failed to open logfile /var/log/glusterfs/bug-.log Mount failed. Please check the log file for more details. ################################################################################ Instead of which a proper error message should be displayed like: ################################################################################ # mount.glusterfs my-node.redhat.com:/testvol_replicated/ bug/ ERROR: failed to create logfile "/var/log/glusterfs/bug-.log" (No space left on device) ERROR: failed to open logfile /var/log/glusterfs/bug-.log Mount failed as no space left on device please free disk space. ################################################################################ As the present error message is misleading. How reproducible: 1/1 Steps to Reproduce: 1. Create one volume of any type and start it. 2. On the client node fill /var/log using fallocate. 3. Try to mount the volume on the client. --- Discussions in above bug --- > > # mount.glusterfs dhcp35-137.lab.eng.blr.redhat.com:/testvol_replicated/ bug/ > > ERROR: failed to create logfile "/var/log/glusterfs/bug-.log" (No space left on device) > > > Doesn't the above line capture the 'issue' at hand? In that case I would not want the error message "Mount failed. Please check the log file for more details." to be displayed at all. Because it would create confusion for when read by the user, as log-file creation itself failed. -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Mar 13 04:04:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 04:04:16 +0000 Subject: [Bugs] [Bug 1688068] Proper error message needed for FUSE mount failure when /var is filled. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688068 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Version|4.1 |mainline -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 04:04:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 04:04:52 +0000 Subject: [Bugs] [Bug 1688068] Proper error message needed for FUSE mount failure when /var is filled. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688068 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1685406 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1685406 [Bug 1685406] [RFE] Proper error message needed for FUSE mount failure when /var is filled. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 04:08:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 04:08:42 +0000 Subject: [Bugs] [Bug 1688068] Proper error message needed for FUSE mount failure when /var is filled. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688068 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22346 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 04:08:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 04:08:43 +0000 Subject: [Bugs] [Bug 1688068] Proper error message needed for FUSE mount failure when /var is filled. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688068 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22346 (mount.glusterfs: change the error message) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 04:14:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 04:14:38 +0000 Subject: [Bugs] [Bug 1580315] gluster volume status inode getting timed out after 30 minutes with no output/error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1580315 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Comment #0 is|1 |0 private| | Assignee|bugs at gluster.org |atumball at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 13 04:18:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 04:18:15 +0000 Subject: [Bugs] [Bug 1580315] gluster volume status inode getting timed out after 30 minutes with no output/error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1580315 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22347 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 04:18:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 04:18:16 +0000 Subject: [Bugs] [Bug 1580315] gluster volume status inode getting timed out after 30 minutes with no output/error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1580315 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22347 (inode: don't dump the whole table to CLI) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 07:13:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 07:13:45 +0000 Subject: [Bugs] [Bug 1688106] New: Remove implementation of number of files opened in posix xlator Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688106 Bug ID: 1688106 Summary: Remove implementation of number of files opened in posix xlator Product: GlusterFS Version: mainline Status: NEW Component: posix Assignee: bugs at gluster.org Reporter: pkarampu at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: nr_files is supposed to represent the number of files opened in posix. The present logic doesn't seem to handle anon-fds, so the count would always be wrong. I don't remember anyone using this value to debug any problem, probably because we always have 'ls -l /proc/<pid>/fd', which not only lists the active fds but also prints their paths, and it covers directories and the anon-fds that actually opened the file. So it is better to remove this code than to fix the buggy logic behind nr_files. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 07:29:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 07:29:20 +0000 Subject: [Bugs] [Bug 1688115] New: Data heal not checking for locks on source & sink(s) before healing Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688115 Bug ID: 1688115 Summary: Data heal not checking for locks on source & sink(s) before healing Product: GlusterFS Version: mainline Status: ASSIGNED Component: replicate Assignee: ksubrahm at redhat.com Reporter: ksubrahm at redhat.com CC: bugs at gluster.org, pkarampu at redhat.com, ravishankar at redhat.com Target Milestone: --- Classification: Community Description of problem: During data heal, we try to take locks on all the bricks, but we do not check whether we got the lock on at least the source and one of the sink bricks before starting the heal.
In function afr_selfheal_data_block(), we only check for the lock count to be equal to or greater than the number of sinks. There can be a case where we have 2 source bricks and one sink and the locking is successful on only the source brick(s). In this case we continue with the healing on sink without having a lock, which is not correct. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 07:31:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 07:31:35 +0000 Subject: [Bugs] [Bug 1688116] New: Spurious failure in test ./tests/bugs/glusterfs/bug-844688.t Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688116 Bug ID: 1688116 Summary: Spurious failure in test ./tests/bugs/glusterfs/bug-844688.t Product: GlusterFS Version: mainline Status: NEW Component: tests Assignee: bugs at gluster.org Reporter: rkavunga at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Test case ./tests/bugs/glusterfs/bug-844688.t is failing quite frequently in master branch. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 07:36:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 07:36:26 +0000 Subject: [Bugs] [Bug 1688116] Spurious failure in test ./tests/bugs/glusterfs/bug-844688.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688116 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22348 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 07:36:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 07:36:27 +0000 Subject: [Bugs] [Bug 1688116] Spurious failure in test ./tests/bugs/glusterfs/bug-844688.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688116 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22348 (tests/bug-844688.t: test bug-844688.t is failing on master) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 07:37:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 07:37:32 +0000 Subject: [Bugs] [Bug 1688115] Data heal not checking for locks on source & sink(s) before healing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688115 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22349 -- You are receiving this mail because: You are on the CC list for the bug. 
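A note for readers following bug 1688115 above: the sketch below illustrates the locking check being described. It is a simplified schematic, not the real afr_selfheal_data_block() code; the struct and names (heal_ctx, child_count, is_source, locked_on) and the two helper functions are invented for the example. The point is the difference between merely counting locks (lock count >= number of sinks) and verifying that a source and at least one sink are actually among the locked bricks, which is what the report asks for.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-heal state for a 1x3 replica: which bricks are
 * sources for this heal and on which bricks the lock succeeded.
 * All names are illustrative, not gluster's. */
struct heal_ctx {
    int child_count;
    bool is_source[8];
    bool locked_on[8];
};

/* Count-only check (the behaviour the bug describes): proceed when the
 * number of locks is at least the number of sinks. With 2 sources and
 * 1 sink, locks on the two sources alone satisfy this even though the
 * sink itself is not locked. */
static bool can_heal_count_only(const struct heal_ctx *c)
{
    int locks = 0, sinks = 0;
    for (int i = 0; i < c->child_count; i++) {
        locks += c->locked_on[i] ? 1 : 0;
        sinks += c->is_source[i] ? 0 : 1;
    }
    return locks >= sinks;
}

/* Stricter check (what the report asks for): require a lock on at
 * least one source brick and on at least one sink brick. */
static bool can_heal_source_and_sink(const struct heal_ctx *c)
{
    bool src_locked = false, sink_locked = false;
    for (int i = 0; i < c->child_count; i++) {
        if (!c->locked_on[i])
            continue;
        if (c->is_source[i])
            src_locked = true;
        else
            sink_locked = true;
    }
    return src_locked && sink_locked;
}

int main(void)
{
    /* Scenario from the report: bricks 0 and 1 are sources, brick 2 is
     * the sink, and locking succeeded only on the two sources. */
    struct heal_ctx c = {
        .child_count = 3,
        .is_source = { true, true, false },
        .locked_on = { true, true, false },
    };

    printf("count-only check allows heal: %d\n", can_heal_count_only(&c));
    printf("source+sink check allows heal: %d\n", can_heal_source_and_sink(&c));
    return 0;
}

With two sources and one sink, locks held only on the two sources satisfy the count-only check but fail the source-and-sink check, which is exactly the failure mode the report describes.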
From bugzilla at redhat.com Wed Mar 13 07:37:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 07:37:33 +0000 Subject: [Bugs] [Bug 1688115] Data heal not checking for locks on source & sink(s) before healing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688115 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22349 (cluster/afr: Check for lock on source & sink before doing data heal) posted (#1) for review on master by Karthik U S -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 07:38:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 07:38:44 +0000 Subject: [Bugs] [Bug 1688116] Spurious failure in test ./tests/bugs/glusterfs/bug-844688.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688116 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22350 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 07:38:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 07:38:45 +0000 Subject: [Bugs] [Bug 1688116] Spurious failure in test ./tests/bugs/glusterfs/bug-844688.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688116 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22350 (test: Fix a missing a '$' symbol) posted (#1) for review on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 08:29:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 08:29:45 +0000 Subject: [Bugs] [Bug 1688106] Remove implementation of number of files opened in posix xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688106 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22333 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 08:29:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 08:29:46 +0000 Subject: [Bugs] [Bug 1688106] Remove implementation of number of files opened in posix xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688106 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22333 (storage/posix: Remove nr_files usage) posted (#2) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
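Background for the nr_files removal proposed in bug 1688106 above: the toy program below shows why an open/release counter goes wrong once anonymous fds are in play. It is only a sketch under assumptions, not the posix xlator's actual code; nr_files is the only name taken from the report, and do_open, do_anon_read and do_release are invented stand-ins for the open, anonymous-fd and release paths.

#include <stdio.h>

/* Illustrative counter: incremented only on the explicit open path and
 * decremented on release. An anonymous-fd path that opens the file on
 * demand never touches it, so the counter drifts away from the number
 * of files that are really open. */
static int nr_files;

static void do_open(const char *path)
{
    nr_files++;                         /* explicit open: counted */
    printf("open %-20s nr_files=%d\n", path, nr_files);
}

static void do_anon_read(const char *path)
{
    /* Opens the file internally for the read and may keep the fd,
     * but nr_files is never incremented on this path. */
    printf("anon read %-15s nr_files=%d (fd open, not counted)\n",
           path, nr_files);
}

static void do_release(const char *path)
{
    nr_files--;                         /* release: decremented */
    printf("release %-17s nr_files=%d\n", path, nr_files);
}

int main(void)
{
    do_open("/bricks/b1/file-a");
    do_anon_read("/bricks/b1/file-b");  /* undercounted */
    do_release("/bricks/b1/file-a");
    /* nr_files is back to 0 although file-b may still be open;
     * 'ls -l /proc/<pid>/fd' on the process would show it, which is
     * why the counter adds little debugging value. */
    return 0;
}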
From bugzilla at redhat.com Wed Mar 13 09:02:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 09:02:45 +0000 Subject: [Bugs] [Bug 1688116] Spurious failure in test ./tests/bugs/glusterfs/bug-844688.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688116 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-13 09:02:45 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22350 (test: Fix a missing a '$' symbol) merged (#1) on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 09:49:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 09:49:51 +0000 Subject: [Bugs] [Bug 1688148] New: ldconfig should be called on CentOS <= 7 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688148 Bug ID: 1688148 Summary: ldconfig should be called on CentOS <= 7 Product: GlusterFS Version: 6 Status: ASSIGNED Component: packaging Assignee: ndevos at redhat.com Reporter: ndevos at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Even if calling /sbin/ldconfig on recent versions of Fedora is not needed anymore, older distributions still require it, Re-adding the calls to /sbin/ldconfig within an if-statement is quite ugly. Fortunately the Fedora EPEL Packaging Guidelines describe how this can be done more elegantly with a few macros. https://fedoraproject.org/wiki/EPEL:Packaging#Shared_Libraries Version-Release number of selected component (if applicable): glusterfs 6.0 RC1 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 09:50:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 09:50:52 +0000 Subject: [Bugs] [Bug 1688150] New: ldconfig should be called on CentOS <= 7 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688150 Bug ID: 1688150 Summary: ldconfig should be called on CentOS <= 7 Product: GlusterFS Version: mainline Status: ASSIGNED Component: packaging Assignee: ndevos at redhat.com Reporter: ndevos at redhat.com CC: bugs at gluster.org Blocks: 1688148 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1688148 +++ Description of problem: Even if calling /sbin/ldconfig on recent versions of Fedora is not needed anymore, older distributions still require it, Re-adding the calls to /sbin/ldconfig within an if-statement is quite ugly. Fortunately the Fedora EPEL Packaging Guidelines describe how this can be done more elegantly with a few macros. https://fedoraproject.org/wiki/EPEL:Packaging#Shared_Libraries Version-Release number of selected component (if applicable): glusterfs 6.0 RC1 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1688148 [Bug 1688148] ldconfig should be called on CentOS <= 7 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Mar 13 09:50:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 09:50:52 +0000 Subject: [Bugs] [Bug 1688148] ldconfig should be called on CentOS <= 7 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688148 Niels de Vos changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1688150 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1688150 [Bug 1688150] ldconfig should be called on CentOS <= 7 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 10:17:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 10:17:14 +0000 Subject: [Bugs] [Bug 1688150] ldconfig should be called on CentOS <= 7 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688150 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22353 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 10:50:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 10:50:47 +0000 Subject: [Bugs] [Bug 1687705] Brick process has coredumped, when starting glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687705 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-13 10:50:47 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22339 (glusterfsd: Brick is getting crash at the time of startup) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 11:15:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 11:15:33 +0000 Subject: [Bugs] [Bug 1688218] New: Brick process has coredumped, when starting glusterd Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688218 Bug ID: 1688218 Summary: Brick process has coredumped, when starting glusterd Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: rpc Severity: high Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, moagrawa at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, sasundar at redhat.com, sheggodu at redhat.com Depends On: 1687641 Blocks: 1687671, 1687705 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1687641 [Bug 1687641] Brick process has coredumped, when starting glusterd https://bugzilla.redhat.com/show_bug.cgi?id=1687671 [Bug 1687671] Brick process has coredumped, when starting glusterd https://bugzilla.redhat.com/show_bug.cgi?id=1687705 [Bug 1687705] Brick process has coredumped, when starting glusterd -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 13 11:15:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 11:15:33 +0000 Subject: [Bugs] [Bug 1687705] Brick process has coredumped, when starting glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687705 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1688218 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1688218 [Bug 1688218] Brick process has coredumped, when starting glusterd -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 11:16:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 11:16:37 +0000 Subject: [Bugs] [Bug 1688218] Brick process has coredumped, when starting glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688218 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 11:19:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 11:19:22 +0000 Subject: [Bugs] [Bug 1688218] Brick process has coredumped, when starting glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688218 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22355 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 11:19:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 11:19:23 +0000 Subject: [Bugs] [Bug 1688218] Brick process has coredumped, when starting glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688218 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22355 (glusterfsd: Brick is getting crash at the time of startup) posted (#1) for review on release-6 by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 13:02:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 13:02:19 +0000 Subject: [Bugs] [Bug 1688116] Spurious failure in test ./tests/bugs/glusterfs/bug-844688.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688116 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST CC| |atumball at redhat.com Resolution|NEXTRELEASE |--- Keywords| |Reopened -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 13 13:13:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 13:13:21 +0000 Subject: [Bugs] [Bug 1688287] New: ganesha crash on glusterfs with shard volume Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688287 Bug ID: 1688287 Summary: ganesha crash on glusterfs with shard volume Product: GlusterFS Version: mainline Status: NEW Component: sharding Assignee: bugs at gluster.org Reporter: kinglongmee at gmail.com QA Contact: bugs at gluster.org CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: #0 0x00007fe49c386544 in uuid_unpack () from /lib64/libuuid.so.1 [Current thread is 1 (Thread 0x7fe48a453700 (LWP 20198))] Missing separate debuginfos, use: dnf debuginfo-install dbus-libs-1.12.12-1.fc29.x86_64 glibc-2.28-26.fc29.x86_64 gssproxy-0.8.0-6.fc29.x86_64 keyutils-libs-1.5.10-8.fc29.x86_64 krb5-libs-1.16.1-25.fc29.x86_64 libacl-2.2.53-2.fc29.x86_64 libattr-2.4.48-3.fc29.x86_64 libblkid-2.32.1-1.fc29.x86_64 libcap-2.25-12.fc29.x86_64 libcom_err-1.44.4-1.fc29.x86_64 libgcc-8.3.1-2.fc29.x86_64 libgcrypt-1.8.4-1.fc29.x86_64 libgpg-error-1.33-1.fc29.x86_64 libmount-2.32.1-1.fc29.x86_64 libnfsidmap-2.3.3-1.rc2.fc29.x86_64 libnsl2-1.2.0-3.20180605git4a062cf.fc29.x86_64 librados2-12.2.11-1.fc29.x86_64 libselinux-2.8-6.fc29.x86_64 libstdc++-8.3.1-2.fc29.x86_64 libtirpc-1.1.4-2.rc2.fc29.x86_64 libuuid-2.32.1-1.fc29.x86_64 lttng-ust-2.10.1-4.fc29.x86_64 nspr-4.20.0-1.fc29.x86_64 nss-3.42.1-1.fc29.x86_64 nss-util-3.42.1-1.fc29.x86_64 openssl-libs-1.1.1b-2.fc29.x86_64 pcre2-10.32-8.fc29.x86_64 samba-client-libs-4.9.4-1.fc29.x86_64 sssd-client-2.0.0-5.fc29.x86_64 systemd-libs-239-12.git8bca462.fc29.x86_64 xz-libs-5.2.4-3.fc29.x86_64 zlib-1.2.11-14.fc29.x86_64 (gdb) bt #0 0x00007fe49c386544 in uuid_unpack () from /lib64/libuuid.so.1 #1 0x00007fe49c3865c4 in uuid_unparse_x () from /lib64/libuuid.so.1 #2 0x00007fe490caee70 in gf_uuid_unparse ( out=0x7fe474004dd0 "00000000-0000-0000-0000-", '0' , uuid=0x8 ) at compat-uuid.h:55 #3 uuid_utoa ( uuid=uuid at entry=0x8 ) at common-utils.c:2762 #4 0x00007fe48ae76b2a in shard_truncate_last_shard ( frame=frame at entry=0x7fe469452888, this=this at entry=0x7fe47c00e070, inode=) at shard.c:2006 #5 0x00007fe48ae77baf in shard_truncate_htol_cbk (frame=0x7fe469452888, cookie=, this=0x7fe47c00e070, op_ret=, op_errno=, preparent=, postparent=0x7fe3b40b7450, xdata=0x7fe3b4665488) at shard.c:2056 #6 0x00007fe48aedb00c in dht_unlink_cbk (frame=0x7fe3b4c0d3b8, cookie=, this=, op_ret=, op_errno=, preparent=0x7fe48a450b50, postparent=0x7fe48a450bf0, xdata=0x7fe3b4665488) at dht-common.c:3644 #7 0x00007fe48afaeeac in client4_0_unlink_cbk (req=, iov=, count=, myframe=0x7fe3b487b0d8) at client-rpc-fops_v2.c:466 #8 0x00007fe49269b824 in rpc_clnt_handle_reply ( clnt=clnt at entry=0x7fe47c04bd00, pollin=pollin at entry=0x7fe3b415f420) at rpc-clnt.c:755 #9 0x00007fe49269bb7f in rpc_clnt_notify (trans=0x7fe47c04c030, mydata=0x7fe47c04bd30, event=, data=0x7fe3b415f420) at rpc-clnt.c:923 #10 0x00007fe492697f7b in rpc_transport_notify ( this=this at entry=0x7fe47c04c030, event=event at entry=RPC_TRANSPORT_MSG_RECEIVED, data=data at entry=0x7fe3b415f420) at rpc-transport.c:525 #11 0x00007fe48b8afa51 in socket_event_poll_in (notify_handled=true, this=0x7fe47c04c030) at socket.c:2506 #12 socket_event_handler (fd=fd at entry=11, idx=idx at entry=2, gen=gen at entry=1, data=data at entry=0x7fe47c04c030, poll_in=, poll_out=, poll_err=) at socket.c:2907 #13 0x00007fe490d048ff 
in event_dispatch_epoll_handler (event=0x7fe48a4510ac, event_pool=0x1cc0740) at event-epoll.c:591 #14 event_dispatch_epoll_worker (data=0x7fe47c04ba30) at event-epoll.c:667 #15 0x00007fe49c1f958e in start_thread () from /lib64/libpthread.so.0 #16 0x00007fe49bf7b6a3 in clone () from /lib64/libc.so.6 (gdb) frame 4 #4 0x00007fe48ae76b2a in shard_truncate_last_shard ( frame=frame at entry=0x7fe469452888, this=this at entry=0x7fe47c00e070, inode=) at shard.c:2006 2006 gf_msg_debug(this->name, 0, (gdb) p inode $1 = (gdb) l 2001 * needs to be truncated does not exist due to it lying in a hole 2002 * region. So the only thing left to do in that case would be an 2003 * update to file size xattr. 2004 */ 2005 if (!inode) { 2006 gf_msg_debug(this->name, 0, 2007 "Last shard to be truncated absent" 2008 " in backend: %s. Directly proceeding to update " 2009 "file size", 2010 uuid_utoa(inode->gfid)); (gdb) Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 13:17:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 13:17:26 +0000 Subject: [Bugs] [Bug 1688287] ganesha crash on glusterfs with shard volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688287 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22357 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 13:17:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 13:17:27 +0000 Subject: [Bugs] [Bug 1688287] ganesha crash on glusterfs with shard volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688287 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22357 (shard: get correct inode from local->loc) posted (#1) for review on master by Kinglong Mee -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 13:41:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 13:41:11 +0000 Subject: [Bugs] [Bug 1657645] [Glusterfs-server-5.1] Gluster storage domain creation fails on MountError In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657645 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED --- Comment #10 from Amar Tumballi --- Team, we just made few fixes to glusterfs-5.x series, and are in the process of next glusterfs release (5.4.1), can we upgrade to 5.4+ release and see if the issue persists? -- You are receiving this mail because: You are on the CC list for the bug. 
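The backtrace in bug 1688287 above already pinpoints the defect: shard_truncate_last_shard() takes the if (!inode) branch and then formats its debug message with uuid_utoa(inode->gfid), dereferencing the pointer it has just established is NULL. The fragment below is a stripped-down illustration of that pattern and of the obvious repair, not gluster code; the toy inode_t/local_t types and the gfid_str() helper are invented, and my reading of the posted review title ("shard: get correct inode from local->loc") is that the real fix formats the gfid of an inode that is known to be valid, taken from local->loc, rather than the missing shard inode.

#include <stdio.h>

/* Toy stand-ins for gluster types; all names here are invented. */
typedef struct { unsigned char gfid[16]; } inode_t;
typedef struct { inode_t *base_inode; } local_t;

/* Render a 16-byte gfid as hex; caller provides a buffer >= 33 bytes. */
static const char *gfid_str(const unsigned char *gfid, char *buf)
{
    for (int i = 0; i < 16; i++)
        sprintf(buf + 2 * i, "%02x", gfid[i]);
    return buf;
}

static void truncate_last_shard(local_t *local, inode_t *shard_inode)
{
    char buf[40];

    if (!shard_inode) {
        /* Buggy pattern (what the crash shows): logging
         *   gfid_str(shard_inode->gfid, buf)
         * here dereferences the pointer that was just found NULL.
         * Repaired pattern: log the gfid of an object known to be
         * valid, e.g. the file's base inode carried in local. */
        printf("last shard absent in backend: %s; "
               "directly updating file size\n",
               gfid_str(local->base_inode->gfid, buf));
        return;
    }

    printf("truncating last shard of %s\n",
           gfid_str(shard_inode->gfid, buf));
}

int main(void)
{
    inode_t base = { .gfid = { 0x3a, 0xd3, 0xf0, 0xc6 } };
    local_t local = { .base_inode = &base };

    truncate_last_shard(&local, NULL);   /* shard lies in a hole region */
    truncate_last_shard(&local, &base);  /* normal path */
    return 0;
}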
From bugzilla at redhat.com Wed Mar 13 14:01:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 14:01:29 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #22 from Amgad --- Any update! -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 13 21:05:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 21:05:03 +0000 Subject: [Bugs] [Bug 1688226] Brick Still Died After Restart Glusterd & Glusterfsd Services In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688226 Eng Khalid Jamal changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 14 04:45:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 14 Mar 2019 04:45:25 +0000 Subject: [Bugs] [Bug 1688226] Brick Still Died After Restart Glusterd & Glusterfsd Services In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688226 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Component|glusterd |core Version|unspecified |4.1 Assignee|amukherj at redhat.com |bugs at gluster.org Product|Red Hat Gluster Storage |GlusterFS QA Contact|bmekala at redhat.com | Flags|rhgs-3.5.0? pm_ack? |needinfo?(engkhalid21986 at gm |devel_ack? qa_ack? |ail.com) --- Comment #2 from Atin Mukherjee --- Can you please share the glusterd and brick log? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 14 04:47:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 14 Mar 2019 04:47:58 +0000 Subject: [Bugs] [Bug 1688116] Spurious failure in test ./tests/bugs/glusterfs/bug-844688.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688116 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22348 (tests/bug-844688.t: test bug-844688.t is failing on master) merged (#2) on master by mohammed rafi kc -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 14 04:48:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 14 Mar 2019 04:48:36 +0000 Subject: [Bugs] [Bug 1688106] Remove implementation of number of files opened in posix xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688106 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-14 04:48:36 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22333 (storage/posix: Remove nr_files usage) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 14 07:34:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 14 Mar 2019 07:34:23 +0000 Subject: [Bugs] [Bug 1685051] New Project create request In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685051 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high CC| |atumball at redhat.com Severity|unspecified |medium --- Comment #7 from Amar Tumballi --- Amye, > This is exactly what should be on gluster.org's blog! I guess there were a lot of questions from many gluster developers in the office about where to write blogs, and hence the request for this. > You write wherever you want, we can set WordPress to take Markdown with no issues. > We should not be duplicating effort when gluster.org is a great platform to be able to create content on already. > We should get a list of the people who want to write developer blogs and get them author accounts to publish directly on Gluster.org and publicize from there through social media. What I like about the github static pages is that developers are used to writing local md (or the hackmd way of writing) and to the process of doing a git push. This also allows some of us to proofread the posts and merge them. And considering tools/themes for this are already available, it shouldn't be hard to set up. Is this going to be a long-term solution? I don't know, but my thought was that having this option increases the chances of people posting. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 14 08:23:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 14 Mar 2019 08:23:11 +0000 Subject: [Bugs] [Bug 1688226] Brick Still Died After Restart Glusterd & Glusterfsd Services In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688226 Eng Khalid Jamal changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(engkhalid21986 at gm | |ail.com) | --- Comment #3 from Eng Khalid Jamal --- (In reply to Atin Mukherjee from comment #2) > Can you please share the glusterd and brick log?
[root at gfs2 ~]# tailf -n 200 /var/log/glusterfs/glusterd.log [2019-03-11 12:55:20.603709] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub service is stopped [2019-03-11 12:55:20.606580] E [MSGID: 106028] [glusterd-utils.c:8213:glusterd_brick_signal] 0-glusterd: Unable to open pidfile: /var/run/gluster/vols/gv0/gfs2-sd2-gv0.pid [No such file or directory] [2019-03-11 12:55:20.771445] I [glusterd-utils.c:6090:glusterd_brick_start] 0-management: starting a fresh brick process for brick /sd5/gv0 [2019-03-11 12:55:20.776267] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2019-03-11 12:55:20.776544] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2019-03-11 12:55:20.776559] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: nfs service is stopped [2019-03-11 12:55:20.776601] I [MSGID: 106599] [glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed [2019-03-11 12:55:20.778284] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: glustershd already stopped [2019-03-11 12:55:20.778330] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: glustershd service is stopped [2019-03-11 12:55:20.778444] I [MSGID: 106567] [glusterd-svc-mgmt.c:203:glusterd_svc_start] 0-management: Starting glustershd service [2019-03-11 12:55:21.783574] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2019-03-11 12:55:21.783650] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: bitd service is stopped [2019-03-11 12:55:21.783757] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2019-03-11 12:55:21.783786] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub service is stopped [2019-03-11 12:55:21.809237] I [socket.c:2632:socket_event_handler] 0-transport: EPOLLERR - disconnecting now [2019-03-11 12:55:21.809990] I [MSGID: 106005] [glusterd-handler.c:6131:__glusterd_brick_rpc_notify] 0-management: Brick gfs2:/sd5/gv0 has disconnected from glusterd. [2019-03-11 12:55:21.810091] E [MSGID: 101012] [common-utils.c:4010:gf_is_service_running] 0-: Unable to read pidfile: /var/run/gluster/vols/gv0/gfs2-sd5-gv0.pid [2019-03-11 12:55:21.856725] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /sd5/gv0 on port 49155 [2019-03-11 12:57:44.489957] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 [2019-03-11 13:15:45.522170] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 [2019-03-11 13:16:31.187881] I [MSGID: 106533] [glusterd-volume-ops.c:938:__glusterd_handle_cli_heal_volume] 0-management: Received heal vol req for volume gv0 [2019-03-11 13:16:31.191518] E [MSGID: 106152] [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Commit failed on gfs1.optimum.com. Please check log file for details. 
[2019-03-11 13:28:08.770729] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 [2019-03-11 13:31:40.618390] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 [2019-03-11 13:38:07.458844] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 [2019-03-11 13:42:34.927344] I [MSGID: 106488] [glusterd-handler.c:1549:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-03-11 13:42:34.928596] I [MSGID: 106488] [glusterd-handler.c:1549:__glusterd_handle_cli_get_volume] 0-management: Received get vol req [2019-03-12 06:53:54.495956] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-12 06:53:54.495956] and [2019-03-12 06:53:54.496385] [2019-03-12 08:33:22.035898] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory [2019-03-12 08:33:34.928645] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-12 08:33:22.035898] and [2019-03-12 08:33:22.036388] [2019-03-12 10:24:38.124839] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory [2019-03-12 10:24:44.816838] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-12 10:24:38.124839] and [2019-03-12 10:24:38.125282] [2019-03-12 19:46:34.197405] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory [2019-03-12 19:47:22.984644] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 [2019-03-12 19:47:41.638494] E [MSGID: 106061] [glusterd-utils.c:10171:glusterd_max_opversion_use_rsp_dict] 0-management: Maximum supported op-version not set in destination dictionary The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-12 19:46:34.197405] and [2019-03-12 19:46:34.197842] [2019-03-12 19:53:40.160887] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory The message "W 
[MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-12 19:53:40.160887] and [2019-03-12 19:53:40.161339] [2019-03-13 10:50:07.965388] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-13 10:50:07.965388] and [2019-03-13 10:50:07.965827] [2019-03-13 11:14:52.585627] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 [2019-03-13 20:03:10.182845] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory [2019-03-13 20:03:50.979475] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-13 20:03:10.182845] and [2019-03-13 20:03:10.183295] [2019-03-13 20:20:24.749941] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 [2019-03-13 20:33:54.334392] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req [2019-03-13 20:34:10.135421] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req [2019-03-13 20:34:17.716964] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req [2019-03-13 20:39:59.874639] I [MSGID: 106487] [glusterd-handler.c:1486:__glusterd_handle_cli_list_friends] 0-glusterd: Received cli list req [2019-03-13 20:41:06.476894] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 [2019-03-14 05:46:44.179862] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory [2019-03-14 05:47:42.658812] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-14 05:46:44.179862] and [2019-03-14 05:46:44.180315] [2019-03-14 07:39:51.361002] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.6/xlator/nfs/server.so: cannot open shared object file: No such file or directory" repeated 30 times between [2019-03-14 07:39:51.361002] and [2019-03-14 07:39:51.361437] [2019-03-14 
07:52:50.623420] I [MSGID: 106499] [glusterd-handler.c:4314:__glusterd_handle_status_volume] 0-management: Received status volume req for volume gv0 ----------------------------- [root at gfs2 ~]# tailf -n 200 /var/log/glusterfs/bricks/sd3-gv0.log 693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:35.781523] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:35.781587] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620726: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:37.814421] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:37.814490] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620777: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:37.816570] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:37.816638] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620778: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:37.819059] W [MSGID: 113020] [posix-helpers.c:996:posix_gfid_set] 0-gv0-posix: setting GFID on /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox failed [Read-only file system] [2019-03-10 20:00:37.819127] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:37.819175] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620782: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:37.822139] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:37.822205] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620786: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:39.855014] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:39.855088] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620837: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox 
(4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:39.856901] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:39.856994] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620838: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:39.859475] W [MSGID: 113020] [posix-helpers.c:996:posix_gfid_set] 0-gv0-posix: setting GFID on /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox failed [Read-only file system] [2019-03-10 20:00:39.859545] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:39.859593] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620842: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:39.862682] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:39.862748] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620846: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:41.893161] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:41.893227] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620902: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:41.895424] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:41.895488] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620903: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:41.897991] W [MSGID: 113020] [posix-helpers.c:996:posix_gfid_set] 0-gv0-posix: setting GFID on /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox failed [Read-only file system] [2019-03-10 20:00:41.898058] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:41.898107] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620907: LOOKUP 
/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:41.900809] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:41.900885] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620911: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:43.930204] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:43.930272] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620961: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:43.932317] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:43.932379] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620962: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:43.935121] W [MSGID: 113020] [posix-helpers.c:996:posix_gfid_set] 0-gv0-posix: setting GFID on /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox failed [Read-only file system] [2019-03-10 20:00:43.935187] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:43.935234] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620966: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:43.938270] E [MSGID: 113002] [posix-entry-ops.c:316:posix_lookup] 0-gv0-posix: buf->ia_gfid is null for /sd3/gv0/79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox [No data available] [2019-03-10 20:00:43.938332] E [MSGID: 115050] [server-rpc-fops.c:175:server_lookup_cbk] 0-gv0-server: 6620970: LOOKUP /79d7a46d-5f76-4f68-a96b-c2a2e0d6712f/dom_md/inbox (4e6945b0-eb31-4e69-805e-a247b6cc6942/inbox), client: engine-18693-2019/03/04-20:46:28:745160-gv0-client-5-0-4, error-xlator: gv0-posix [No data available] [2019-03-10 20:00:44.363460] W [MSGID: 113075] [posix-helpers.c:1895:posix_fs_health_check] 0-gv0-posix: open_for_write() on /sd3/gv0/.glusterfs/health_check returned [Read-only file system] [2019-03-10 20:00:44.363629] M [MSGID: 113075] [posix-helpers.c:1962:posix_health_check_thread_proc] 0-gv0-posix: health-check failed, going down [2019-03-10 20:00:44.363785] M [MSGID: 113075] [posix-helpers.c:1981:posix_health_check_thread_proc] 0-gv0-posix: still alive! 
-> SIGTERM [2019-03-10 20:01:14.364221] W [glusterfsd.c:1514:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7e25) [0x7f490bec9e25] -->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xe5) [0x5585a9df1d65] -->/usr/sbin/glusterfsd(cleanup_and_exit+0x6b) [0x5585a9df1b8b] ) 0-: received signum (15), shutting down -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 14 12:11:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 14 Mar 2019 12:11:57 +0000 Subject: [Bugs] [Bug 1686711] [Thin-arbiter] : send correct error code in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686711 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22327 (cluster/afr : TA: Return actual error code in case of failure) merged (#3) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 14 14:44:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 14 Mar 2019 14:44:20 +0000 Subject: [Bugs] [Bug 1688833] New: geo-rep session creation fails with IPV6 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688833 Bug ID: 1688833 Summary: geo-rep session creation fails with IPV6 Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: geo-replication Severity: high Priority: high Assignee: bugs at gluster.org Reporter: avishwan at redhat.com CC: amukherj at redhat.com, avishwan at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, sasundar at redhat.com, storage-qa-internal at redhat.com Depends On: 1688231 Blocks: 1688239 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1688231 +++ Description of problem: ----------------------- This issue is seen with the RHHI-V usecase. VM images are stored in the gluster volumes and geo-replicated to the secondary site, for DR use case. When IPv6 is used, the additional mount option is required --xlator-option=transport.address-family=inet6". But when geo-rep check for slave space with gverify.sh, these mount options are not considered and it fails to mount either master or slave volume Version-Release number of selected component (if applicable): -------------------------------------------------------------- RHGS 3.4.4 ( glusterfs-3.12.2-47 ) How reproducible: ----------------- Always Steps to Reproduce: ------------------- 1. 
Create geo-rep session from the master to slave Actual results: -------------- Creation of geo-rep session fails at gverify.sh Expected results: ----------------- Creation of geo-rep session should be successful Additional info: --- Additional comment from SATHEESARAN on 2019-03-13 11:49:02 UTC --- [root@ ~]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 2620:52:0:4624:5054:ff:fee9:57f8 master.lab.eng.blr.redhat.com 2620:52:0:4624:5054:ff:fe6d:d816 slave.lab.eng.blr.redhat.com [root@ ~]# gluster volume info Volume Name: master Type: Distribute Volume ID: 9cf0224f-d827-4028-8a45-37f7bfaf1c78 Status: Started Snapshot Count: 0 Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: master.lab.eng.blr.redhat.com:/gluster/brick1/master Options Reconfigured: performance.client-io-threads: on server.event-threads: 4 client.event-threads: 4 user.cifs: off features.shard: on network.remote-dio: enable performance.low-prio-threads: 32 performance.io-cache: off performance.read-ahead: off performance.quick-read: off transport.address-family: inet6 nfs.disable: on [root at localhost ~]# gluster volume geo-replication master slave.lab.eng.blr.redhat.com::slave create push-pem Unable to mount and fetch slave volume details. Please check the log: /var/log/glusterfs/geo-replication/gverify-slavemnt.log geo-replication command failed Snip from gverify-slavemnt.log [2019-03-13 11:46:28.746494] I [MSGID: 100030] [glusterfsd.c:2646:main] 0-glusterfs: Started running glusterfs version 3.12.2 (args: glusterfs --xlator-option=*dht.lookup-unhashed=off --volfile-server slave.lab.eng.blr.redhat.com --volfile-id slave -l /var/log/glusterfs/geo-replication/gverify-slavemnt.log /tmp/gverify.sh.y1TCoY) [2019-03-13 11:46:28.750595] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction [2019-03-13 11:46:28.753702] E [MSGID: 101075] [common-utils.c:482:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known) [2019-03-13 11:46:28.753725] E [name.c:267:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host slave.lab.eng.blr.redhat.com [2019-03-13 11:46:28.753953] I [glusterfsd-mgmt.c:2337:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: slave.lab.eng.blr.redhat.com [2019-03-13 11:46:28.753980] I [glusterfsd-mgmt.c:2358:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers [2019-03-13 11:46:28.753998] I [MSGID: 101190] [event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0 [2019-03-13 11:46:28.754073] I [MSGID: 101190] [event-epoll.c:676:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-03-13 11:46:28.754154] W [glusterfsd.c:1462:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(rpc_clnt_notify+0xab) [0x7fc39d379bab] -->glusterfs(+0x11fcd) [0x56427db95fcd] -->glusterfs(cleanup_and_exit+0x6b) [0x56427db8eb2b] ) 0-: received signum (1), shutting down [2019-03-13 11:46:28.754197] I [fuse-bridge.c:6611:fini] 0-fuse: Unmounting '/tmp/gverify.sh.y1TCoY'. [2019-03-13 11:46:28.760213] I [fuse-bridge.c:6616:fini] 0-fuse: Closing fuse connection to '/tmp/gverify.sh.y1TCoY'. 
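The key line in the gverify-slavemnt.log snippet above is "getaddrinfo failed (family:2)": family 2 is AF_INET, so the mount helper launched by gverify.sh is asking the resolver for an IPv4 address of a host that only has an IPv6 entry in /etc/hosts, which is why the extra transport.address-family=inet6 option (already set on the volume itself) matters. The standalone C sketch below is not gluster code and uses a placeholder host name; it just shows the difference between the two lookups.

    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Resolve 'host' with a fixed address family and report the result. */
    static void
    try_resolve(const char *host, int family)
    {
            struct addrinfo hints, *res = NULL;
            int ret;

            memset(&hints, 0, sizeof(hints));
            hints.ai_family = family;        /* AF_INET is 2 on Linux */
            hints.ai_socktype = SOCK_STREAM;

            ret = getaddrinfo(host, NULL, &hints, &res);
            if (ret != 0) {
                    printf("family:%d -> getaddrinfo failed (%s)\n",
                           family, gai_strerror(ret));
                    return;
            }
            printf("family:%d -> resolved\n", family);
            freeaddrinfo(res);
    }

    int
    main(int argc, char **argv)
    {
            /* placeholder host: point it at a name that only has an IPv6
             * (AAAA or /etc/hosts) entry to reproduce the gverify failure */
            const char *host = (argc > 1) ? argv[1] : "slave.example.com";

            try_resolve(host, AF_INET);   /* what the log shows as family:2 */
            try_resolve(host, AF_INET6);  /* what address-family=inet6 requests */
            return 0;
    }

Run against an IPv6-only name, the first call prints "Name or service not known", matching the log above, while the second succeeds.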
Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1688231 [Bug 1688231] geo-rep session creation fails with IPV6 https://bugzilla.redhat.com/show_bug.cgi?id=1688239 [Bug 1688239] geo-rep session creation fails with IPV6 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 14 14:44:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 14 Mar 2019 14:44:55 +0000 Subject: [Bugs] [Bug 1688833] geo-rep session creation fails with IPV6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688833 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |avishwan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 14 14:51:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 14 Mar 2019 14:51:55 +0000 Subject: [Bugs] [Bug 1688833] geo-rep session creation fails with IPV6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688833 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22363 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 14 14:51:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 14 Mar 2019 14:51:56 +0000 Subject: [Bugs] [Bug 1688833] geo-rep session creation fails with IPV6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688833 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22363 (WIP geo-rep: IPv6 support) posted (#1) for review on master by Aravinda VK -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 14 18:07:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 14 Mar 2019 18:07:36 +0000 Subject: [Bugs] [Bug 1688287] ganesha crash on glusterfs with shard volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688287 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22357 (shard: fix crash caused by using null inode) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 15 04:56:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 04:56:46 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Karthik U S changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|ksubrahm at redhat.com |srakonde at redhat.com --- Comment #23 from Karthik U S --- Hi, Sorry for the delay. In the first case of conversion from 3.12.15 to 5.3, the bricks on the upgraded nodes failed to come up. 
The heal command will fail if any of the bricks are not available or down. In the second case of conversion from 4.1.4 to 3.12.15 even though we have all the bricks and shd up and running I can see some errors in the glusterd logs during the commit phase of the heal command. We need to check from glusterd side why this is happening. Sanju are you aware of any such cases? Can you debug this further to see why the brick is failing to come up and why the heal commit fails? Regards, Karthik -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 15 07:03:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 07:03:41 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22365 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 15 07:03:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 07:03:42 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #583 from Worker Ant --- REVIEW: https://review.gluster.org/22365 (mount/fuse: Fix spelling mistake) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 15 10:55:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 10:55:47 +0000 Subject: [Bugs] [Bug 1689173] New: [Tracker] slow 'ls' (crawl/readdir) performance Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 Bug ID: 1689173 Summary: [Tracker] slow 'ls' (crawl/readdir) performance Product: GlusterFS Version: mainline Status: NEW Component: core Keywords: Performance, Tracking Severity: high Priority: high Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: atumball at redhat.com, bugs at gluster.org, guillaume.pavese at interact-iv.com, jahernan at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1499605, 1611779, 1628807, 1635112, 1649303, 1651048 (RHGS-Slow-ls), 1644389, 1657682 Blocks: 1616206, 1668820 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1499605 [Bug 1499605] Directory listings on fuse mount are very slow due to small number of getdents() entries https://bugzilla.redhat.com/show_bug.cgi?id=1644389 [Bug 1644389] [GSS] Directory listings on fuse mount are very slow due to small number of getdents() entries https://bugzilla.redhat.com/show_bug.cgi?id=1651048 [Bug 1651048] [Tracker] slow 'ls' (crawl/readdir) performance -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Mar 15 10:55:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 10:55:47 +0000 Subject: [Bugs] [Bug 1499605] Directory listings on fuse mount are very slow due to small number of getdents() entries In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1499605 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1689173 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 [Bug 1689173] [Tracker] slow 'ls' (crawl/readdir) performance -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 15 10:55:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 10:55:47 +0000 Subject: [Bugs] [Bug 1644389] [GSS] Directory listings on fuse mount are very slow due to small number of getdents() entries In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644389 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1689173 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 [Bug 1689173] [Tracker] slow 'ls' (crawl/readdir) performance -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 15 10:56:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 10:56:50 +0000 Subject: [Bugs] [Bug 1689173] [Tracker] slow 'ls' (crawl/readdir) performance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Comment #0 is|1 |0 private| | CC|guillaume.pavese at interact-i | |v.com, rhinduja at redhat.com, | |rhs-bugs at redhat.com, | |storage-qa-internal at redhat. | |com | Blocks|1616206, 1668820 | Depends On|1499605, 1611779, 1628807, | |1635112, 1649303, 1651048 | |(RHGS-Slow-ls), 1644389, | |1657682 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1499605 [Bug 1499605] Directory listings on fuse mount are very slow due to small number of getdents() entries https://bugzilla.redhat.com/show_bug.cgi?id=1644389 [Bug 1644389] [GSS] Directory listings on fuse mount are very slow due to small number of getdents() entries https://bugzilla.redhat.com/show_bug.cgi?id=1651048 [Bug 1651048] [Tracker] slow 'ls' (crawl/readdir) performance -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 15 10:56:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 10:56:50 +0000 Subject: [Bugs] [Bug 1499605] Directory listings on fuse mount are very slow due to small number of getdents() entries In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1499605 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1689173 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 [Bug 1689173] [Tracker] slow 'ls' (crawl/readdir) performance -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Mar 15 10:56:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 10:56:50 +0000 Subject: [Bugs] [Bug 1644389] [GSS] Directory listings on fuse mount are very slow due to small number of getdents() entries In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644389 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1689173 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 [Bug 1689173] [Tracker] slow 'ls' (crawl/readdir) performance -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 15 10:57:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 10:57:36 +0000 Subject: [Bugs] [Bug 1689173] slow 'ls' (crawl/readdir) performance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|[Tracker] slow 'ls' |slow 'ls' (crawl/readdir) |(crawl/readdir) performance |performance -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 15 10:58:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 10:58:52 +0000 Subject: [Bugs] [Bug 1689173] slow 'ls' (crawl/readdir) performance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |nbalacha at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 15 11:02:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 11:02:47 +0000 Subject: [Bugs] [Bug 1689173] slow 'ls' (crawl/readdir) performance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22366 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 15 11:02:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 11:02:48 +0000 Subject: [Bugs] [Bug 1689173] slow 'ls' (crawl/readdir) performance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689173 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22366 (cluster/dht: readdirp performance improvements) posted (#1) for review on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Mar 15 12:46:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 12:46:03 +0000 Subject: [Bugs] [Bug 1687687] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687687 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22338 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 15 12:46:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 12:46:04 +0000 Subject: [Bugs] [Bug 1687687] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687687 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-15 12:46:04 --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22338 (cluster/afr: Send truncate on arbiter brick from SHD) merged (#1) on release-5 by Karthik U S -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 15 12:54:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 12:54:45 +0000 Subject: [Bugs] [Bug 1689214] New: GlusterFS 5.5 tracker Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689214 Bug ID: 1689214 Summary: GlusterFS 5.5 tracker Product: GlusterFS Version: 5 Status: NEW Component: core Keywords: Tracking, Triaged Assignee: bugs at gluster.org Reporter: srangana at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Tracker for the release 5.5 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 15 12:57:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 12:57:30 +0000 Subject: [Bugs] [Bug 1689214] GlusterFS 5.5 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689214 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22367 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 15 12:57:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 12:57:31 +0000 Subject: [Bugs] [Bug 1689214] GlusterFS 5.5 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689214 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22367 (doc: Added release notes for 5.5) posted (#1) for review on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Mar 15 13:20:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 13:20:43 +0000 Subject: [Bugs] [Bug 1689214] GlusterFS 5.5 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689214 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-15 13:20:43 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22367 (doc: Added release notes for 5.5) merged (#1) on release-5 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 15 14:05:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 14:05:27 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #24 from Amgad --- Thanks Karthik. I can see that the first case is because of the "failed to dispatch handler" issue (Bug 1671556), which should be addressed in 5.4. The second case is definitely an issue for rolling from an older release to a newer one. Is there a "heal" incompatibility between 3.12 and later releases? Because this will impact 5.4 as well. Appreciate your support! -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 15 14:10:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 14:10:13 +0000 Subject: [Bugs] [Bug 1689250] New: Excessive AFR messages from gluster showing in RHGSWA. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 Bug ID: 1689250 Summary: Excessive AFR messages from gluster showing in RHGSWA. Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: replicate Keywords: ZStream Severity: medium Priority: medium Assignee: bugs at gluster.org Reporter: ravishankar at redhat.com CC: bugs at gluster.org, nchilaka at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, sheggodu at redhat.com, storage-qa-internal at redhat.com Depends On: 1676495 Blocks: 1666386 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1676495 +++ +++ This bug was initially created as a clone of Bug #1666386 +++ Description of problem: See https://lists.gluster.org/pipermail/gluster-devel/2019-March/055925.html Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1666386 [Bug 1666386] Excessive AFR messages from gluster showing in RHGSWA. https://bugzilla.redhat.com/show_bug.cgi?id=1676495 [Bug 1676495] Excessive AFR messages from gluster showing in RHGSWA. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 15 14:10:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 14:10:57 +0000 Subject: [Bugs] [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA.
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords|ZStream |Triaged Status|NEW |ASSIGNED CC|nchilaka at redhat.com, | |rhs-bugs at redhat.com, | |sankarshan at redhat.com, | |sheggodu at redhat.com, | |storage-qa-internal at redhat. | |com | Assignee|bugs at gluster.org |ravishankar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 15 14:16:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 14:16:13 +0000 Subject: [Bugs] [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22368 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 15 14:16:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 14:16:14 +0000 Subject: [Bugs] [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22368 (gfapi: add function to set client-pid) posted (#1) for review on master by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 15 14:17:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 14:17:19 +0000 Subject: [Bugs] [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22369 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 15 14:17:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 14:17:20 +0000 Subject: [Bugs] [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22369 (afr: add client-id to all gf_event() calls) posted (#1) for review on master by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. 
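Some context for the two reviews tracked above ("gfapi: add function to set client-pid" and "afr: add client-id to all gf_event() calls"): gfapi is the library interface that monitoring applications such as the WA integration use to reach a volume, and the patch titles suggest the goal is to let such a client identify itself so the AFR events it triggers can be told apart. The sketch below is only a minimal, generic gfapi client; the volume name, host and log path are placeholders, and the new setter is intentionally not called because its name and signature belong to the pending review.

    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int
    main(void)
    {
            /* "gv0" and "server1" are placeholders, not values from this bug */
            glfs_t *fs = glfs_new("gv0");
            if (!fs)
                    return 1;

            glfs_set_volfile_server(fs, "tcp", "server1", 24007);
            glfs_set_logging(fs, "/tmp/gfapi-client.log", 7);

            /* The "gfapi: add function to set client-pid" change would be used
             * around here, before glfs_init(), so that events generated on this
             * client's behalf carry an identifiable id; the exact function name
             * is part of the pending review and is deliberately not guessed. */

            if (glfs_init(fs) != 0) {
                    fprintf(stderr, "glfs_init failed\n");
                    glfs_fini(fs);
                    return 1;
            }

            /* ... regular glfs_* file operations would go here ... */

            glfs_fini(fs);
            return 0;
    }

Such a client is typically built with the flags reported by pkg-config for glusterfs-api.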
From bugzilla at redhat.com Fri Mar 15 14:59:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 14:59:56 +0000 Subject: [Bugs] [Bug 1688833] geo-rep session creation fails with IPV6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688833 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-15 14:59:56 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22363 (geo-rep: IPv6 support) merged (#3) on master by Aravinda VK -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 15 15:19:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 15 Mar 2019 15:19:58 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #584 from Worker Ant --- REVIEW: https://review.gluster.org/22365 (mount/fuse: Fix spelling mistake) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Mar 16 07:53:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 16 Mar 2019 07:53:04 +0000 Subject: [Bugs] [Bug 1689500] New: poller thread autoscale logic is not correct for brick_mux environment Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689500 Bug ID: 1689500 Summary: poller thread autoscale logic is not correct for brick_mux environment Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Current autoscale logic of poller thread is not correct for brick_mux environment Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1) Setup a 3x3 volume 2) Enable brick_multiplex 3) Set server.event-threads to 4 for a volume 4) Check poller threads for the brick ps -T -p `pgrep glusterfsd` | grep poll | wc -l Ideally, the count of poller thread should be equal to 12(3 bricks * 4) but it is showing total poller threads are 7 Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Mar 16 07:53:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 16 Mar 2019 07:53:24 +0000 Subject: [Bugs] [Bug 1689500] poller thread autoscale logic is not correct for brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689500 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
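A small illustration of the expectation stated in the bug 1689500 report above: with brick multiplexing all bricks share one glusterfsd process, so each attached brick should contribute its configured server.event-threads to the shared epoll pool, giving 3 x 4 = 12 pollers rather than the 7 observed. The helper below is a toy model of that arithmetic only, not the actual glusterfsd autoscaling code.

    #include <stdio.h>

    /* Toy model only: not the glusterfsd autoscaling logic. */
    static int
    expected_pollers(int attached_bricks, int event_threads)
    {
            return attached_bricks * event_threads;
    }

    int
    main(void)
    {
            int bricks = 3;         /* bricks hosted by the multiplexed glusterfsd */
            int event_threads = 4;  /* server.event-threads */

            printf("expected poller threads: %d (the report observed 7)\n",
                   expected_pollers(bricks, event_threads));
            return 0;
    }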
From bugzilla at redhat.com Sat Mar 16 11:13:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 16 Mar 2019 11:13:31 +0000 Subject: [Bugs] [Bug 1689500] poller thread autoscale logic is not correct for brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689500 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22370 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Mar 16 11:13:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 16 Mar 2019 11:13:32 +0000 Subject: [Bugs] [Bug 1689500] poller thread autoscale logic is not correct for brick_mux environment In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689500 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22370 (core: poller thread autoscale logic is not correct for brick_mux) posted (#1) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Mar 17 19:55:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 17 Mar 2019 19:55:31 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22371 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 17 19:55:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 17 Mar 2019 19:55:32 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #585 from Worker Ant --- REVIEW: https://review.gluster.org/22371 (rpc* : move to use dict_n functions where possible) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 06:16:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 06:16:54 +0000 Subject: [Bugs] [Bug 1683526] rebalance start command doesn't throw up error message if the command fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683526 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |srakonde at redhat.com Assignee|bugs at gluster.org |srakonde at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 06:37:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 06:37:56 +0000 Subject: [Bugs] [Bug 1665216] Databases crashes on Gluster 5 with the option performance.write-behind enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665216 --- Comment #8 from mhutter --- Hi, were you able to reproduce the issue? 
-- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 18 07:29:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 07:29:24 +0000 Subject: [Bugs] [Bug 1689799] New: [cluster/ec] : Fix handling of heal info cases without locks Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689799 Bug ID: 1689799 Summary: [cluster/ec] : Fix handling of heal info cases without locks Product: GlusterFS Version: mainline Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: aspandey at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: When we use heal info command it takes lot of time as some cases it takes lock on entries to find out if the entry actualy needs heal or not. There are some cases where we can avoid these locks and can conclude if the entry needs heal or not. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 08:27:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 08:27:58 +0000 Subject: [Bugs] [Bug 1689799] [cluster/ec] : Fix handling of heal info cases without locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689799 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22372 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 08:27:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 08:27:59 +0000 Subject: [Bugs] [Bug 1689799] [cluster/ec] : Fix handling of heal info cases without locks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689799 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22372 (cluster/ec: Fix handling of heal info cases without locks) posted (#1) for review on master by Ashish Pandey -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 09:21:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 09:21:17 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22373 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 09:21:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 09:21:18 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #586 from Worker Ant --- REVIEW: https://review.gluster.org/22373 (glusterd-locks: misc. changes.) 
posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 09:38:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 09:38:17 +0000 Subject: [Bugs] [Bug 1470040] packaging: Upgrade glusterfs-ganesha sometimes fails to semanage ganesha_use_fusefs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1470040 Bug 1470040 depends on bug 1470136, which changed state. Bug 1470136 Summary: [GANESHA] Upgrade nfs-ganesha from 3.2 to 3.3 is breaking due to selinux boolean ganesha_use_fusefs in off state https://bugzilla.redhat.com/show_bug.cgi?id=1470136 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |WORKSFORME -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 18 10:30:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 10:30:58 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22374 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 10:30:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 10:30:59 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 --- Comment #15 from Worker Ant --- REVIEW: https://review.gluster.org/22374 (release-notes: add status of gd2 and a highlights section) posted (#1) for review on release-6 by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 11:37:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 11:37:04 +0000 Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to activate host in DC with glusterfs 3.12 domains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672318 Netbulae changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(info at netbulae.com | |) | --- Comment #21 from Netbulae --- Upgraded to node 4.3.2-2019031310. As stated earlier, there is nothing in the brick and glusterd log. I've set "diagnostics.client-log-level" to DEBUG but still see nothing. 
glustershd.log [2019-03-18 11:10:54.005050] D [MSGID: 114031] [client-rpc-fops.c:1508:client3_3_inodelk_cbk] 0-hdd2-client-2: remote operation failed [Resource temporarily unavailable] [2019-03-18 11:10:54.005079] D [MSGID: 0] [client-rpc-fops.c:1511:client3_3_inodelk_cbk] 0-stack-trace: stack-address: 0x7f1254002560, hdd2-client-2 returned -1 error: Resource temporarily unavailable [Resource temporarily unavailable] [2019-03-18 11:10:55.545357] D [MSGID: 0] [afr-self-heald.c:218:afr_shd_index_inode] 0-hdd2-replicate-0: glusterfs.xattrop_entry_changes_gfid dir gfid for hdd2-client-0: b8f46fe6-ef56-44ac-8586-22061b0f2d5b [2019-03-18 11:10:55.546606] D [MSGID: 0] [afr-self-heald.c:597:afr_shd_index_healer] 0-hdd2-replicate-0: finished index sweep on subvol hdd2-client-0 [2019-03-18 11:11:24.234360] D [rpc-clnt-ping.c:336:rpc_clnt_start_ping] 0-hdd2-client-0: returning as transport is already disconnected OR there are no frames (0 || 0) [2019-03-18 11:11:24.234416] D [rpc-clnt-ping.c:336:rpc_clnt_start_ping] 0-hdd2-client-1: returning as transport is already disconnected OR there are no frames (0 || 0) [2019-03-18 11:11:24.234425] D [rpc-clnt-ping.c:336:rpc_clnt_start_ping] 0-hdd2-client-2: returning as transport is already disconnected OR there are no frames (0 || 0) [2019-03-18 11:12:46.240335] D [logging.c:1855:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2019-03-18 11:12:46.240388] D [logging.c:1817:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2019-03-18 11:14:09.000515] D [rpc-clnt-ping.c:99:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f12ab9f7ebb] (--> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7f12ab7c228b] (--> /lib64/libgfrpc.so.0(+0x14a31)[0x7f12ab7c2a31] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x4a1)[0x7f12ab7bf651] (--> /usr/lib64/glusterfs/3.12.15/xlator/protocol/client.so(+0x102e2)[0x7f129e0e02e2] ))))) 0-: 192.168.99.14:49154: ping timer event already removed [2019-03-18 11:14:09.000619] D [MSGID: 0] [syncop-utils.c:548:syncop_is_subvol_local] 0-ssd4-client-0: subvol ssd4-client-0 is local [2019-03-18 11:14:09.000637] D [MSGID: 0] [afr-self-heald.c:580:afr_shd_index_healer] 0-ssd4-replicate-0: starting index sweep on subvol ssd4-client-0 [2019-03-18 11:14:09.000635] D [rpc-clnt-ping.c:211:rpc_clnt_ping_cbk] 0-ssd4-client-0: Ping latency is 0ms glusterfs clien log [2019-03-18 11:03:57.442215] I [MSGID: 100030] [glusterfsd.c:2715:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 5.3 (args: /usr/sbin/glusterfs --process-name fuse --volfile-server=*.*.*.14 --volfile-server=*.*.*.15 --volfile-server=*.*.*.16 --volfile-id=/ssd9 /rhev/data-center/mnt/glusterSD/*.*.*.14:_ssd9) [2019-03-18 11:03:57.472426] I [MSGID: 101190] [event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-03-18 11:06:03.636977] W [glusterfsd.c:1500:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dd5) [0x7f392add8dd5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x556b04278e75] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x556b04278ceb] ) 0-: received signum (15), shutting down [2019-03-18 11:06:03.637039] I [fuse-bridge.c:5914:fini] 0-fuse: Unmounting '/rhev/data-center/mnt/glusterSD/*.*.*.14:_ssd9'. [2019-03-18 11:06:03.654263] I [fuse-bridge.c:5919:fini] 0-fuse: Closing fuse connection to '/rhev/data-center/mnt/glusterSD/*.*.*.14:_ssd9'. 
[2019-03-18 11:13:29.415760] I [MSGID: 100030] [glusterfsd.c:2715:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 5.3 (args: /usr/sbin/glusterfs --process-name fuse --volfile-server=*.*.*.14 --volfile-server=*.*.*.15 --volfile-server=*.*.*.16 --volfile-id=/ssd9 /rhev/data-center/mnt/glusterSD/*.*.*.14:_ssd9) [2019-03-18 11:13:29.444824] I [MSGID: 101190] [event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-03-18 11:29:01.000279] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.14 [2019-03-18 11:29:01.000330] I [glusterfsd-mgmt.c:2464:mgmt_rpc_notify] 0-glusterfsd-mgmt: connecting to next volfile server *.*.*.15 [2019-03-18 11:29:01.002495] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fb4beddbfbb] (--> /lib64/libgfrpc.so.0(+0xce11)[0x7fb4beba4e11] (--> /lib64/libgfrpc.so.0(+0xcf2e)[0x7fb4beba4f2e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7fb4beba6531] (--> /lib64/libgfrpc.so.0(+0xf0d8)[0x7fb4beba70d8] ))))) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2019-03-18 11:13:29.445101 (xid=0x2) [2019-03-18 11:29:01.002517] E [glusterfsd-mgmt.c:2136:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/ssd9) [2019-03-18 11:29:01.002550] W [glusterfsd.c:1500:cleanup_and_exit] (-->/lib64/libgfrpc.so.0(+0xce32) [0x7fb4beba4e32] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x841) [0x5586d9f3c361] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5586d9f34ceb] ) 0-: received signum (0), shutting down [2019-03-18 11:29:01.002578] I [fuse-bridge.c:5914:fini] 0-fuse: Unmounting '/rhev/data-center/mnt/glusterSD/*.*.*.14:_ssd9'. [2019-03-18 11:29:01.009036] I [fuse-bridge.c:5919:fini] 0-fuse: Closing fuse connection to '/rhev/data-center/mnt/glusterSD/*.*.*.14:_ssd9'. 
[2019-03-18 11:29:01.009655] W [glusterfsd.c:1500:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dd5) [0x7fb4bdc3ddd5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x5586d9f34e75] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x5586d9f34ceb] ) 0-: received signum (15), shutting down supervdsm.log MainProcess|jsonrpc/5::DEBUG::2019-03-18 12:11:29,020::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call volumeInfo with (u'ssd9', u'*.*.*.14') {} MainProcess|jsonrpc/5::DEBUG::2019-03-18 12:11:29,020::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/gluster --mode=script volume info --remote-host=*.*.*.14 ssd9 --xml (cwd None) MainProcess|jsonrpc/6::DEBUG::2019-03-18 12:11:31,460::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call ksmTune with ({u'run': 0, u'merge_across_nodes': 1},) {} MainProcess|jsonrpc/6::DEBUG::2019-03-18 12:11:31,461::supervdsm_server::106::SuperVdsm.ServerCallback::(wrapper) return ksmTune with None MainProcess|jsonrpc/5::DEBUG::2019-03-18 12:13:29,227::commands::219::root::(execCmd) FAILED: = ''; = 1 MainProcess|jsonrpc/5::DEBUG::2019-03-18 12:13:29,227::logutils::319::root::(_report_stats) ThreadedHandler is ok in the last 122 seconds (max pending: 1) MainProcess|jsonrpc/5::ERROR::2019-03-18 12:13:29,227::supervdsm_server::103::SuperVdsm.ServerCallback::(wrapper) Error in volumeInfo Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 101, in wrapper res = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 529, in volumeInfo xmltree = _execGlusterXml(command) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 131, in _execGlusterXml return _getTree(rc, out, err) File "/usr/lib/python2.7/site-packages/vdsm/gluster/cli.py", line 112, in _getTree raise ge.GlusterCmdExecFailedException(rc, out, err) GlusterCmdExecFailedException: Command execution failed: rc=1 out='Error : Request timed out\n' err='' MainProcess|jsonrpc/5::DEBUG::2019-03-18 12:13:29,229::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) call mount with (, u'*.*.*.14:/ssd9', u'/rhev/data-center/mnt/glusterSD/*.*.*.14:_ssd9') {'vfstype': u'glusterfs', 'mntOpts': u'backup-volfile-servers=*.*.*.15:*.*.*.16', 'cgroup': 'vdsm-glusterfs'} MainProcess|jsonrpc/5::DEBUG::2019-03-18 12:13:29,230::commands::198::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o backup-volfile-servers=*.*.*.15:*.*.*.16 *.*.*.14:/ssd9 /rhev/data-center/mnt/glusterSD/*.*.*.14:_ssd9 (cwd None) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 12:10:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 12:10:39 +0000 Subject: [Bugs] [Bug 1689905] New: gd2 smoke job aborts on timeout Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689905 Bug ID: 1689905 Summary: gd2 smoke job aborts on timeout Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Severity: high Assignee: bugs at gluster.org Reporter: ykaul at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: >From https://build.gluster.org/job/gd2-smoke/4762/console : Installing vendored packages 13:25:04 Build timed out (after 30 minutes). Marking the build as aborted. 
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 12:54:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 12:54:43 +0000 Subject: [Bugs] [Bug 1689920] New: lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689920 Bug ID: 1689920 Summary: lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator Product: GlusterFS Version: mainline Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: kinglongmee at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: When using ec, there are many messages at brick log as, E [inodelk.c:514:__inode_unlock_lock] 0-test-locks: Matching lock not found for unlock 0-9223372036854775807, lo=68e040a84b7f0000 on 0x7f208c006f78 E [MSGID: 115053] [server-rpc-fops_v2.c:280:server4_inodelk_cbk] 0-test-server: 2557439: INODELK (df4e41be-723f-4289-b7af-b4272b3e880c), client: CTX_ID:67d4a7f3-605a-4965-89a5-31309d62d1fa-GRAPH_ID:0-PID:1659-HOST:openfs-node2-PC_NAME:test-client-1-RECON_NO:-28, error-xlator: test-locks [Invalid argument] Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 12:57:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 12:57:43 +0000 Subject: [Bugs] [Bug 1689920] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689920 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22377 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 12:57:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 12:57:44 +0000 Subject: [Bugs] [Bug 1689920] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689920 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22377 (cluster-syncop: avoid duplicate unlock of inodelk/entrylk) posted (#1) for review on master by Kinglong Mee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 18 13:45:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 13:45:51 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #13 from Hubert --- fyi: on a test setup (debian stretch, after upgrade 5.3 -> 5.5) i did a little test: - copied 11GB of data - via rsync: rsync --bwlimit=10000 --inplace --- bandwith limit of max. 
10000 KB/s - rsync pulled data over interface eth0 - rsync stats: sent 1,484,200 bytes received 11,402,695,074 bytes 5,166,106.13 bytes/sec - so external traffic average was about 5 MByte/s - result was an internal traffic up to 350 MBit/s (> 40 MByte/s) on eth1 (LAN interface) - graphic of internal traffic: https://abload.de/img/if_eth1-internal-trafdlkcy.png - graphic of external traffic: https://abload.de/img/if_eth0-external-trafrejub.png -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 18 14:50:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 18 Mar 2019 14:50:40 +0000 Subject: [Bugs] [Bug 1689981] New: OSError: [Errno 1] Operation not permitted - failing with socket files? Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689981 Bug ID: 1689981 Summary: OSError: [Errno 1] Operation not permitted - failing with socket files? Product: GlusterFS Version: 4.1 Hardware: x86_64 OS: Linux Status: NEW Component: geo-replication Severity: high Assignee: bugs at gluster.org Reporter: davobbi at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: georeplciation during "History Crawl" starts failing on each of the three bricks, one after the other. I have enabled DEBUG for all the logs configurable by the geo-replication command. Running glusterfs v4.16 the behaviour is as follow: - The "History Crawl" worked fine for about one hr, it actually replicated some files and folders albeit most of them looks empty - at some point it starts becoming faulty, try to start on another brick, faulty and so on - in the logs, Python exception above mentioned is raised: [2019-03-17 18:52:49.565040] E [syncdutils(worker /var/lib/heketi/mounts/vg_b088aec908c959c75674e01fb8598c21/brick_f90f425ecb89c3eec6ef2ef4a2f0a973/brick):332:log_raise_exception] : FAIL: Traceback (most recent call last): File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main func(args) File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72, in subcmd_worker local.service_loop(remote) File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1291, in service_loop g3.crawlwrap(oneshot=True) File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 615, in crawlwrap self.crawl() File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1569, in crawl self.changelogs_batch_process(changes) File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1469, in changelogs_batch_process self.process(batch) File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1304, in process self.process_change(change, done, retry) File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1203, in process_change failures = self.slave.server.entry_ops(entries) File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in __call__ return self.ins(self.meth, *a) File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in __call__ raise res OSError: [Errno 1] Operation not permitted - The operation before the exception: [2019-03-17 18:52:49.545103] D [master(worker /var/lib/heketi/mounts/vg_b088aec908c959c75674e01fb8598c21/brick_f90f425ecb89c3eec6ef2ef4a2f0a973/brick):1186:process_change] _GMaster: entries: [{'uid': 7575, 'gfid': 'e1ad7c98-f32a-4e48-9902-cc75840de7c3', 'gid': 100, 'mode' : 49536, 'entry': '.gfid/5219e4b8-a1f3-4a4e-b9c7-c9b129abe671/.control_f7c33270dc9db9234d005406a13deb4375459715.6lvofzOuVnfAwOwY', 'op': 'MKNOD'}, 
{'gfid': 'e1ad7c98-f32a-4e48-9902-cc75840de7c3', 'entry': '.gfid/5219e4b8-a1f3-4a4e-b9c7-c9b129abe671/.control_f7c33270dc9db9 234d005406a13deb4375459715', 'stat': {'atime': 1552661403.3846507, 'gid': 100, 'mtime': 1552661403.3846507, 'uid': 7575, 'mode': 49536}, 'link': None, 'op': 'LINK'}, {'gfid': 'e1ad7c98-f32a-4e48-9902-cc75840de7c3', 'entry': '.gfid/5219e4b8-a1f3-4a4e-b9c7-c9b129abe671/.con trol_f7c33270dc9db9234d005406a13deb4375459715.6lvofzOuVnfAwOwY', 'op': 'UNLINK'}] [2019-03-17 18:52:49.548614] D [repce(worker /var/lib/heketi/mounts/vg_b088aec908c959c75674e01fb8598c21/brick_f90f425ecb89c3eec6ef2ef4a2f0a973/brick):179:push] RepceClient: call 56917:140179359156032:1552848769.55 entry_ops([{'uid': 7575, 'gfid': 'e1ad7c98-f32a-4e48-9902- cc75840de7c3', 'gid': 100, 'mode': 49536, 'entry': '.gfid/5219e4b8-a1f3-4a4e-b9c7-c9b129abe671/.control_f7c33270dc9db9234d005406a13deb4375459715.6lvofzOuVnfAwOwY', 'op': 'MKNOD'}, {'gfid': 'e1ad7c98-f32a-4e48-9902-cc75840de7c3', 'entry': '.gfid/5219e4b8-a1f3-4a4e-b9c7-c9b 129abe671/.control_f7c33270dc9db9234d005406a13deb4375459715', 'stat': {'atime': 1552661403.3846507, 'gid': 100, 'mtime': 1552661403.3846507, 'uid': 7575, 'mode': 49536}, 'link': None, 'op': 'LINK'}, {'gfid': 'e1ad7c98-f32a-4e48-9902-cc75840de7c3', 'entry': '.gfid/5219e4b8 -a1f3-4a4e-b9c7-c9b129abe671/.control_f7c33270dc9db9234d005406a13deb4375459715.6lvofzOuVnfAwOwY', 'op': 'UNLINK'}],) ... - The gfid highlighted, is pointing to these control files which are "unix sockets" as per below: rw------- 2 pippo users 0 Mar 14 16:32 .control_31c3a99664c1f956f949311e58434037e6a52d22 srw------- 2 pippo users 0 Mar 14 16:33 .control_a9b82937042529bca677b9f43eba9eb02ca7c5ee srw------- 2 pippo users 0 Mar 14 16:32 .control_f429221460d52570066d9f25521011fe7e081cf5 srw------- 2 pippo users 0 Mar 15 15:50 .control_f7c33270dc9db9234d005406a13deb4375459715 So it seems geo-replicaiton should be at least skipping such file rather than raising an exception? Steps to Reproduce: 1. replicate unix socket files Actual results: Os Error exception Expected results: Files to be skipped and replication continues Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 06:15:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 06:15:50 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high Flags| |needinfo?(jsecchiero at enter. | |eu) Severity|medium |high --- Comment #14 from Poornima G --- Apologies for the delay, there have been some changes done to quick-read feature, which deals with reading the content of a file in lookup fop, if the file is smaller than 64KB. I m suspecting that with 5.3 the increase in bandwidth may be due to more number of reads of small file(generated by quick-read). Please try the following: gluster vol set quick-read off gluster vol set read-ahead off gluster vol set io-cache off And let us know if the network bandwidth consumption decreases, meanwhile i will try to reproduce the same locally. -- You are receiving this mail because: You are on the CC list for the bug. 
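Spelled out as full CLI commands, the workaround suggested in comment #14 above looks roughly like this; "myvol" is a placeholder volume name, and each option can be re-enabled the same way with "on":

    # disable the client-side translators suspected of causing the extra reads
    gluster volume set myvol performance.quick-read off
    gluster volume set myvol performance.read-ahead off
    gluster volume set myvol performance.io-cache off

    # confirm the new values
    gluster volume get myvol performance.quick-read
    gluster volume get myvol performance.read-ahead
    gluster volume get myvol performance.io-cache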
From bugzilla at redhat.com Tue Mar 19 07:18:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 07:18:29 +0000 Subject: [Bugs] [Bug 1690254] New: Volume create fails with "Commit failed" message if volumes is created using 3 nodes with glusterd restarts on 4th node. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690254 Bug ID: 1690254 Summary: Volume create fails with "Commit failed" message if volumes is created using 3 nodes with glusterd restarts on 4th node. Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Severity: medium Assignee: bugs at gluster.org Reporter: kiyer at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem On a 4 node cluster(N1, N2, N3 and N4), create a volume using the first 3 nodes(N1, N2 and N3) and restart glusterd on N4. Volume create fails with commit failed message. Version-Release number of selected component (if applicable): Not sure whatever is there in upstream How reproducible: 4/4 Steps to Reproduce: 1. Create a cluster with 4 nodes. 2. Create volume using the first three nodes say N1, N2 and N3. 3. While the create is happening restart the fourth node N4. Actual results: Volume create fails with the error "volume create: testvol_distributed: failed: Commit failed on 172.19.2.166. Please check log file for details." Expected results: Volume should be successfully created. Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 07:41:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 07:41:17 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22378 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 07:41:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 07:41:18 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 --- Comment #16 from Worker Ant --- REVIEW: https://review.gluster.org/22378 (release-notes/6.0: Add ctime feature changes in release notes) posted (#2) for review on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 08:12:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 08:12:04 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #15 from Hubert --- I deactivated the 3 params and did the same test again. 
- same rsync params: rsync --bwlimit=10000 --inplace - rsync stats: sent 1,491,733 bytes received 11,444,330,300 bytes 6,703,263.27 bytes/sec - so ~6,7 MByte/s or ~54 MBit/s in average (peak of 60 MBit/s) over external network interface - traffic graphic of the server with rsync command: https://abload.de/img/if_eth1-internal-traf4zjow.png - so server is sending with an average of ~110 MBit/s and with peak at ~125 MBit/s over LAN interface - traffic graphic of one of the replica servers (disregard first curve: is the delete of the old data): https://abload.de/img/if_enp5s0-internal-trn5k9v.png - so one of the replicas receices data with ~55 MBit/s average and peak ~62 MBit/s - as a comparison - traffic before and after changing the 3 params (rsync server, highest curve is relevant): - https://abload.de/img/if_eth1-traffic-befortvkib.png So it looks like the traffic was reduced to about a third. Is it this what you expected? If so: traffic would be still a bit higher when i compare 4.1.6 and 5.3 - here's a graphic of one client in our live system after switching from 4.1.6 (~20 MBit/s) to 5.3. (~100 MBit/s in march): https://abload.de/img/if_eth1-comparison-gly8kyx.png So if this traffic gets reduced to 1/3: traffic would be ~33 MBit/s then. Way better, i think. And could be "normal"? Thx so far :-) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 19 08:20:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 08:20:12 +0000 Subject: [Bugs] [Bug 1688833] geo-rep session creation fails with IPV6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688833 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1688231 Depends On|1688231 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1688231 [Bug 1688231] geo-rep session creation fails with IPV6 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 19 09:03:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 09:03:59 +0000 Subject: [Bugs] [Bug 1503170] [RFE] Avoid entry re-sync during changelog reprocessing. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1503170 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(rallan at redhat.com | |) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 19 09:18:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 09:18:59 +0000 Subject: [Bugs] [Bug 1689920] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689920 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22377 (cluster-syncop: avoid duplicate unlock of inodelk/entrylk) merged (#1) on master by Kinglong Mee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
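As a rough illustration of where the messages from bug 1689920 (whose fix was merged above) appear, the sketch below creates a small disperse volume and greps the brick logs; the hostnames, brick paths and volume name are hypothetical:

    # 2+1 disperse volume across three hypothetical hosts ("force" only because
    # the example brick paths may sit on the root filesystem)
    gluster volume create ectest disperse 3 redundancy 1 \
        host{1,2,3}:/bricks/ec force
    gluster volume start ectest

    # after running some client I/O, check each brick host's logs for the error
    grep "Matching lock not found for unlock" /var/log/glusterfs/bricks/*.log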
From bugzilla at redhat.com Tue Mar 19 09:23:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 09:23:48 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(jsecchiero at enter. | |eu) | --- Comment #16 from Poornima G --- Awesome thank you for trying it out, i was able to reproduce this issue locally, one of the major culprit was the quick-read. The other two options had no effect in reducing the bandwidth consumption. So for now as a workaround, can disable quick-read: # gluster vol set quick-read off Quick-read alone reduced the bandwidth consumption by 70% for me. Debugging the rest 30% increase. Meanwhile, planning to make this bug a blocker for our next gulster-6 release. Will keep the bug updated with the progress. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 19 09:39:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 09:39:04 +0000 Subject: [Bugs] [Bug 1687326] [RFE] Revoke access from nodes using Certificate Revoke List in SSL In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687326 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-19 09:39:04 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22334 (socket/ssl: fix crl handling) merged (#15) on master by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 19 10:07:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 10:07:35 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #17 from Hubert --- i'm running another test, just alongside... simply deleting and copying data, no big effort. Just curious :-) 2 little questions: - does disabling quick-read have any performance issues for certain setups/scenarios? - bug only blocker for v6 release? update for v5 planned? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 19 10:36:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 10:36:20 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #18 from Poornima G --- (In reply to Hubert from comment #17) > i'm running another test, just alongside... simply deleting and copying > data, no big effort. Just curious :-) I think if the volume hosts small files, then any kind of operation around these files will see increased bandwidth usage. > > 2 little questions: > > - does disabling quick-read have any performance issues for certain > setups/scenarios? Small file reads(files with size <= 64kb) will see reduced performance. Eg: web server use case. > - bug only blocker for v6 release? update for v5 planned? Yes there will be updated for v5, not sure when. The updates for major releases are made once in every 3 or 4 weeks not sure. 
For critical bugs the release will be made earlier. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 19 10:46:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 10:46:41 +0000 Subject: [Bugs] [Bug 1688218] Brick process has coredumped, when starting glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688218 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-19 10:46:41 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22355 (glusterfsd: Brick is getting crash at the time of startup) merged (#1) on release-6 by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 19 10:46:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 10:46:42 +0000 Subject: [Bugs] [Bug 1687705] Brick process has coredumped, when starting glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687705 Bug 1687705 depends on bug 1688218, which changed state. Bug 1688218 Summary: Brick process has coredumped, when starting glusterd https://bugzilla.redhat.com/show_bug.cgi?id=1688218 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 19 11:54:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 11:54:58 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #19 from Alberto Bengoa --- Hello guys, Thanks for your update Poornima. I was already running quick-read off here so, on my case, I noticed the traffic growing consistently after enabling it. I've made some tests on my scenario, and I wasn't able to reproduce your 70% reduction results. To me, it's near 46% of traffic reduction (from around 103 Mbps to around 55 Mbps, graph attached here: https://pasteboard.co/I68s9qE.png ) What I'm doing is just running a find . type -d on a directory with loads of directories/files. Poornima, if you don't mind to answer a question, why are we seem this traffic on the inbound of gluster servers (outbound of clients)? On my particular case, the traffic should be basically on the opposite direction I think, and I'm very curious about that. Thank you, Alberto -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 19 13:23:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 13:23:44 +0000 Subject: [Bugs] [Bug 1437332] auth failure after upgrade to GlusterFS 3.10 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1437332 Bala Konda Reddy M changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(bmekala at redhat.co |needinfo- |m) | -- You are receiving this mail because: You are on the CC list for the bug. 
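One rough way to quantify the client-side traffic described in comment #19 of bug 1673058 above is to snapshot the NIC counters around a directory crawl of the mount; the interface name and mount point below are placeholders:

    iface=eth0                    # client NIC carrying gluster traffic
    mnt=/mnt/glustervol           # FUSE mount point

    tx_before=$(cat /sys/class/net/"$iface"/statistics/tx_bytes)
    rx_before=$(cat /sys/class/net/"$iface"/statistics/rx_bytes)

    find "$mnt" -type d > /dev/null   # directory-only crawl, as in the comment

    tx_after=$(cat /sys/class/net/"$iface"/statistics/tx_bytes)
    rx_after=$(cat /sys/class/net/"$iface"/statistics/rx_bytes)
    echo "sent:     $(( (tx_after - tx_before) / 1024 )) KiB"
    echo "received: $(( (rx_after - rx_before) / 1024 )) KiB"

Note that the comment writes the crawl as "find . type -d"; the intended command is presumably "find . -type d".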
From bugzilla at redhat.com Tue Mar 19 13:47:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 13:47:50 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #25 from Amgad --- Any update, feedback or any investigation going on? Any idea about the root cause/fix? will it be in 5.4? I did more testing and realized that "gluster volume status" doesn't provide the right status when rolled-back the 1st server, "gfs-1" to 3.12.15, after the full upgrade (the other two replicas still on 4.1.4). When rolled-back gfs-1, I got: [root at gfs-1 ansible1]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 N/A N/A N N/A Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 N/A N/A N N/A Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 N/A N/A N N/A Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks Then when I rolled-back gfs-2 I got: ==================================== [root at gfs-2 ansible1]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49152 0 Y 23400 Brick 10.76.153.213:/mnt/data1/1 49152 0 Y 14481 Self-heal Daemon on localhost N/A N/A Y 14472 Self-heal Daemon on 10.76.153.206 N/A N/A Y 23390 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49153 0 Y 23409 Brick 10.76.153.213:/mnt/data2/2 49153 0 Y 14490 Self-heal Daemon on localhost N/A N/A Y 14472 Self-heal Daemon on 10.76.153.206 N/A N/A Y 23390 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49154 0 Y 23418 Brick 10.76.153.213:/mnt/data3/3 49154 0 Y 14499 Self-heal Daemon on localhost N/A N/A Y 14472 Self-heal Daemon on 10.76.153.206 N/A N/A Y 23390 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks Then when rolled-back the third replica, I got the full status: 
============================================================== [root at gfs-3new ansible1]# gluster volume statusStatus of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49152 0 Y 23400 Brick 10.76.153.213:/mnt/data1/1 49152 0 Y 14481 Brick 10.76.153.207:/mnt/data1/1 49152 0 Y 13184 Self-heal Daemon on localhost N/A N/A Y 13174 Self-heal Daemon on 10.76.153.213 N/A N/A Y 14472 Self-heal Daemon on 10.76.153.206 N/A N/A Y 23390 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49153 0 Y 23409 Brick 10.76.153.213:/mnt/data2/2 49153 0 Y 14490 Brick 10.76.153.207:/mnt/data2/2 49153 0 Y 13193 Self-heal Daemon on localhost N/A N/A Y 13174 Self-heal Daemon on 10.76.153.206 N/A N/A Y 23390 Self-heal Daemon on 10.76.153.213 N/A N/A Y 14472 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49154 0 Y 23418 Brick 10.76.153.213:/mnt/data3/3 49154 0 Y 14499 Brick 10.76.153.207:/mnt/data3/3 49154 0 Y 13202 Self-heal Daemon on localhost N/A N/A Y 13174 Self-heal Daemon on 10.76.153.206 N/A N/A Y 23390 Self-heal Daemon on 10.76.153.213 N/A N/A Y 14472 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 19 14:05:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 14:05:15 +0000 Subject: [Bugs] [Bug 1690454] New: mount-shared-storage.sh does not implement mount options Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690454 Bug ID: 1690454 Summary: mount-shared-storage.sh does not implement mount options Product: GlusterFS Version: 5 OS: Linux Status: NEW Component: posix-acl Assignee: bugs at gluster.org Reporter: mplx+redhat at donotreply.at CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: mount-shared-storage.sh does not take care of mount options specified in /etc/fstab Version-Release number of selected component (if applicable): 5.5 How reproducible: reproducible Steps to Reproduce: 1. add acl option to glusterfs entry in fstab 2. manually mount or mount via os means; acl is enabled 3. unmount 4. mount-shared-storage.sh mounts glusterfs; acl not enabled Actual results: mount-shared-storage.sh mounts without options from fstab (i.e. mount -t glusterfs /src /target) Expected results: mount-shared-storage.sh evaluates and uses options from fstab when mounting (i.e. mount --target /target) Additional info: https://github.com/gluster/glusterfs/blob/master/extras/mount-shared-storage.sh#L24 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
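A minimal sketch of the change the reporter of bug 1690454 is asking for, not the actual upstream fix: let mount(8) resolve the fstab entry itself so options such as "acl" are honoured. Field handling is simplified, and comment lines are skipped:

    #!/bin/bash
    # walk /etc/fstab and (re)mount glusterfs entries that are not mounted yet
    while read -r src target fstype opts rest; do
        [[ "$src" == \#* || -z "$src" ]] && continue   # skip comments and blanks
        [[ "$fstype" == "glusterfs" ]] || continue
        mountpoint -q "$target" && continue
        # "mount --target" re-reads the fstab line, including its mount options
        mount --target "$target"
    done < /etc/fstab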
From bugzilla at redhat.com Tue Mar 19 14:06:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 14:06:35 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22379 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 14:06:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 14:06:36 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 --- Comment #17 from Worker Ant --- REVIEW: https://review.gluster.org/22379 (deprecated xlator upgrade doc) posted (#1) for review on release-6 by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 14:13:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 14:13:03 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-19 14:13:03 --- Comment #18 from Worker Ant --- REVIEW: https://review.gluster.org/22374 (release-notes: add status of gd2 and a highlights section) merged (#5) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 14:13:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 14:13:26 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #19 from Worker Ant --- REVIEW: https://review.gluster.org/22378 (release-notes/6.0: Add ctime feature changes in release notes) merged (#4) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 14:19:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 14:19:04 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22380 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Mar 19 14:19:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 14:19:05 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 --- Comment #20 from Worker Ant --- REVIEW: https://review.gluster.org/22380 (doc: Final version of release-6 release notes) posted (#1) for review on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 14:47:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 14:47:45 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22381 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 14:47:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 14:47:46 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #587 from Worker Ant --- REVIEW: https://review.gluster.org/22381 (Multiple files: remove HAVE_BD_XLATOR related code.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 14:48:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 14:48:45 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-03-19 14:13:03 |2019-03-19 14:48:45 --- Comment #21 from Worker Ant --- REVIEW: https://review.gluster.org/22380 (doc: Final version of release-6 release notes) merged (#1) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 14:57:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 14:57:36 +0000 Subject: [Bugs] [Bug 1671556] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1671556 --- Comment #31 from Artem Russakovskii --- I upgraded the node that was crashing to 5.5 yesterday. Today, it got another crash. This is a 1x4 replicate cluster, you can find the config mentioned in my previous reports, and Amar should have it as well. 
Here's the log: ==> mnt-_data1.log <== The message "I [MSGID: 108031] [afr-common.c:2543:afr_local_discovery_cbk] 0-_data1-replicate-0: selecting local read_child _data1-client-3" repeated 4 times between [2019-03-19 14:40:50.741147] and [2019-03-19 14:40:56.874832] pending frames: frame : type(1) op(LOOKUP) frame : type(1) op(LOOKUP) frame : type(1) op(READ) frame : type(1) op(READ) frame : type(1) op(READ) frame : type(1) op(READ) frame : type(0) op(0) patchset: git://git.gluster.org/glusterfs.git signal received: 6 time of crash: 2019-03-19 14:40:57 configuration details: argp 1 backtrace 1 dlfcn 1 libpthread 1 llistxattr 1 setfsid 1 spinlock 1 epoll.h 1 xattr.h 1 st_atim.tv_nsec 1 package-string: glusterfs 5.5 /usr/lib64/libglusterfs.so.0(+0x2764c)[0x7ff841f8364c] /usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7ff841f8dd26] /lib64/libc.so.6(+0x36160)[0x7ff84114a160] /lib64/libc.so.6(gsignal+0x110)[0x7ff84114a0e0] /lib64/libc.so.6(abort+0x151)[0x7ff84114b6c1] /lib64/libc.so.6(+0x2e6fa)[0x7ff8411426fa] /lib64/libc.so.6(+0x2e772)[0x7ff841142772] /lib64/libpthread.so.0(pthread_mutex_lock+0x228)[0x7ff8414d80b8] /usr/lib64/glusterfs/5.5/xlator/cluster/replicate.so(+0x5de3d)[0x7ff839fbae3d] /usr/lib64/glusterfs/5.5/xlator/cluster/replicate.so(+0x70d51)[0x7ff839fcdd51] /usr/lib64/glusterfs/5.5/xlator/protocol/client.so(+0x58e1f)[0x7ff83a252e1f] /usr/lib64/libgfrpc.so.0(+0xe820)[0x7ff841d4e820] /usr/lib64/libgfrpc.so.0(+0xeb6f)[0x7ff841d4eb6f] /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7ff841d4b063] /usr/lib64/glusterfs/5.5/rpc-transport/socket.so(+0xa0ce)[0x7ff83b9690ce] /usr/lib64/libglusterfs.so.0(+0x85519)[0x7ff841fe1519] /lib64/libpthread.so.0(+0x7559)[0x7ff8414d5559] /lib64/libc.so.6(clone+0x3f)[0x7ff84120c81f] --------- -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 19 14:58:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 14:58:18 +0000 Subject: [Bugs] [Bug 1674225] flooding of "dict is NULL" logging & crash of client process In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674225 --- Comment #3 from Artem Russakovskii --- I can confirm this seems to be fixed in 5.5. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 16:35:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 16:35:00 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #26 from Sanju --- Amgad, Thanks for sharing your test results. I will provide an update on this by the end of this week. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 19 17:28:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 17:28:39 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22382 -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Tue Mar 19 17:28:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 17:28:40 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1597 from Worker Ant --- REVIEW: https://review.gluster.org/22382 (fuse : fix high sev coverity issue) posted (#1) for review on master by Sunny Kumar -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 19 19:52:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 19 Mar 2019 19:52:28 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #27 from Amgad --- Thanks Sanju. Per the release notes at: https://gluster.readthedocs.io/en/latest/release-notes/5.5/ It seems like there won't be a 5.4 because of rolling upgrade issue. I assume this is what is being addressed here. Let me know if I can help to accelerate the fix. Amgad -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 20 04:47:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 04:47:37 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #588 from Worker Ant --- REVIEW: https://review.gluster.org/22373 (glusterd-locks: misc. changes.) merged (#6) on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 05:40:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 05:40:40 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #589 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22381 (Multiple files: remove HAVE_BD_XLATOR related code.) posted (#2) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 05:40:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 05:40:41 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22381 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 20 05:40:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 05:40:43 +0000 Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported) components in the build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635688 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22381 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 05:40:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 05:40:44 +0000 Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported) components in the build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635688 --- Comment #21 from Worker Ant --- REVIEW: https://review.gluster.org/22381 (Multiple files: remove HAVE_BD_XLATOR related code.) posted (#2) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 06:40:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 06:40:52 +0000 Subject: [Bugs] [Bug 1689905] gd2 smoke job aborts on timeout In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689905 Deepshikha khandelwal changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |dkhandel at redhat.com Resolution|--- |NOTABUG Last Closed| |2019-03-20 06:40:52 --- Comment #1 from Deepshikha khandelwal --- It is now fixed. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 06:58:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 06:58:03 +0000 Subject: [Bugs] [Bug 1657744] quorum count not updated in nfs-server vol file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657744 Varsha changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-20 06:58:03 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 13 04:18:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 13 Mar 2019 04:18:16 +0000 Subject: [Bugs] [Bug 1580315] gluster volume status inode getting timed out after 30 minutes with no output/error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1580315 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-20 07:16:15 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22347 (inode: don't dump the whole table to CLI) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. 
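Related to the bug 1580315 closure above (volume status inode timing out), a hedged sketch of the statedump route, which avoids streaming the whole inode table through the CLI; the volume name is a placeholder and the dump directory assumes the default statedump path:

    gluster volume statedump myvol inode
    # dumps are written on the brick nodes, typically under:
    ls -ltr /var/run/gluster/*.dump.*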
From bugzilla at redhat.com Wed Mar 20 07:47:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 07:47:09 +0000 Subject: [Bugs] [Bug 1690753] New: Volume stop when quorum not met is successful Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690753 Bug ID: 1690753 Summary: Volume stop when quorum not met is successful Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Severity: medium Assignee: bugs at gluster.org Reporter: kiyer at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: On a 2 node cluster(N1 &N2), create one volume of type distributed. Now set cluster.server-quorum-ratio to 90% and set cluster.server-quorum-type to server. Start the volume and stop glusterd on one of the node. Now if you try to stop the volume the volumes stops successfully but ideally it shouldn't stop. How reproducible: 5/5 Steps to Reproduce: 1. Create a cluster with 2 nodes. 2. Create a volume of type distributed. 3. Set cluster.server-quorum-ratio to 90. 4. Set server-quorum-type to server. 5. Start the volume. 6. Stop glusterd on one node. 7. Stop the volume.(Should fail!) Actual results: volume stop: testvol_distributed: success Expected results: volume stop: testvol_distributed: failed: Quorum not met. Volume operation not allowed. Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 08:24:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 08:24:10 +0000 Subject: [Bugs] [Bug 1690769] New: GlusterFS 5.5 crashes in 1x4 replicate setup. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690769 Bug ID: 1690769 Summary: GlusterFS 5.5 crashes in 1x4 replicate setup. 
Product: GlusterFS Version: 5 Status: NEW Component: core Severity: high Priority: medium Assignee: bugs at gluster.org Reporter: atumball at redhat.com CC: bugs at gluster.org, jahernan at redhat.com, nbalacha at redhat.com, pkarampu at redhat.com Target Milestone: --- Classification: Community Description of problem: Looks like an issue with AFR in 1x4 setup for me looking at the backtraces: (gdb) bt #0 0x00007f95a054f0e0 in raise () from /lib64/libc.so.6 #1 0x00007f95a05506c1 in abort () from /lib64/libc.so.6 #2 0x00007f95a05476fa in __assert_fail_base () from /lib64/libc.so.6 #3 0x00007f95a0547772 in __assert_fail () from /lib64/libc.so.6 #4 0x00007f95a08dd0b8 in pthread_mutex_lock () from /lib64/libpthread.so.0 #5 0x00007f95994f0c9d in afr_frame_return () from /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so #6 0x00007f9599503ba1 in afr_lookup_cbk () from /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so #7 0x00007f9599788f3f in client4_0_lookup_cbk () from /usr/lib64/glusterfs/5.3/xlator/protocol/client.so #8 0x00007f95a1153820 in rpc_clnt_handle_reply () from /usr/lib64/libgfrpc.so.0 #9 0x00007f95a1153b6f in rpc_clnt_notify () from /usr/lib64/libgfrpc.so.0 #10 0x00007f95a1150063 in rpc_transport_notify () from /usr/lib64/libgfrpc.so.0 #11 0x00007f959aea00b2 in socket_event_handler () from /usr/lib64/glusterfs/5.3/rpc-transport/socket.so #12 0x00007f95a13e64c3 in event_dispatch_epoll_worker () from /usr/lib64/libglusterfs.so.0 #13 0x00007f95a08da559 in start_thread () from /lib64/libpthread.so.0 #14 0x00007f95a061181f in clone () from /lib64/libc.so.6 (gdb) thr 14 Thread 14 (Thread 0x7f9592ec7700 (LWP 6572)): #0 0x00007f95a08e3c4d in __lll_lock_wait () from /lib64/libpthread.so.0 No symbol table info available. #1 0x00007f95a08e68b7 in __lll_lock_elision () from /lib64/libpthread.so.0 No symbol table info available. #2 0x00007f95994f0c9d in afr_frame_return () from /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so No symbol table info available. #3 0x00007f9599503ba1 in afr_lookup_cbk () from /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so No symbol table info available. #4 0x00007f9599788f3f in client4_0_lookup_cbk () from /usr/lib64/glusterfs/5.3/xlator/protocol/client.so No symbol table info available. #5 0x00007f95a1153820 in rpc_clnt_handle_reply () from /usr/lib64/libgfrpc.so.0 No symbol table info available. #6 0x00007f95a1153b6f in rpc_clnt_notify () from /usr/lib64/libgfrpc.so.0 No symbol table info available. #7 0x00007f95a1150063 in rpc_transport_notify () from /usr/lib64/libgfrpc.so.0 No symbol table info available. #8 0x00007f959aea00b2 in socket_event_handler () from /usr/lib64/glusterfs/5.3/rpc-transport/socket.so No symbol table info available. #9 0x00007f95a13e64c3 in event_dispatch_epoll_worker () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #10 0x00007f95a08da559 in start_thread () from /lib64/libpthread.so.0 No symbol table info available. #11 0x00007f95a061181f in clone () from /lib64/libc.so.6 No symbol table info available. Version-Release number of selected component (if applicable): 5.5 (and also 5.3, not seen in 3.x) How reproducible: 100% Additional info: Please refer to https://lists.gluster.org/pipermail/gluster-users/2019-March/036048.html & https://lists.gluster.org/pipermail/gluster-users/2019-February/035871.html -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
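The backtrace above has no symbols; a rough sketch of how they can usually be recovered on a CentOS/RHEL client, assuming matching debuginfo packages exist for the installed build and using a placeholder core path:

    yum install -y yum-utils gdb
    debuginfo-install -y glusterfs glusterfs-libs glusterfs-fuse
    gdb /usr/sbin/glusterfs /path/to/core \
        -ex 'thread apply all bt full' -ex 'quit' > bt-full.txt

The lack of symbols comes up again in the comments that follow, so output of this form is what the discussion is after.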
From bugzilla at redhat.com Wed Mar 20 09:18:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 09:18:57 +0000 Subject: [Bugs] [Bug 1690769] GlusterFS 5.5 crashes in 1x4 replicate setup. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690769 Pranith Kumar K changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(atumball at redhat.c | |om) --- Comment #1 from Pranith Kumar K --- (In reply to Amar Tumballi from comment #0) > Description of problem: > > Looks like an issue with AFR in 1x4 setup for me looking at the backtraces: > > (gdb) bt > #0 0x00007f95a054f0e0 in raise () from /lib64/libc.so.6 > #1 0x00007f95a05506c1 in abort () from /lib64/libc.so.6 > #2 0x00007f95a05476fa in __assert_fail_base () from /lib64/libc.so.6 > #3 0x00007f95a0547772 in __assert_fail () from /lib64/libc.so.6 > #4 0x00007f95a08dd0b8 in pthread_mutex_lock () from /lib64/libpthread.so.0 > #5 0x00007f95994f0c9d in afr_frame_return () from > /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so > #6 0x00007f9599503ba1 in afr_lookup_cbk () from > /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so > #7 0x00007f9599788f3f in client4_0_lookup_cbk () from > /usr/lib64/glusterfs/5.3/xlator/protocol/client.so > #8 0x00007f95a1153820 in rpc_clnt_handle_reply () from > /usr/lib64/libgfrpc.so.0 > #9 0x00007f95a1153b6f in rpc_clnt_notify () from /usr/lib64/libgfrpc.so.0 > #10 0x00007f95a1150063 in rpc_transport_notify () from > /usr/lib64/libgfrpc.so.0 > #11 0x00007f959aea00b2 in socket_event_handler () from > /usr/lib64/glusterfs/5.3/rpc-transport/socket.so > #12 0x00007f95a13e64c3 in event_dispatch_epoll_worker () from > /usr/lib64/libglusterfs.so.0 > #13 0x00007f95a08da559 in start_thread () from /lib64/libpthread.so.0 > #14 0x00007f95a061181f in clone () from /lib64/libc.so.6 > (gdb) thr 14 > > Thread 14 (Thread 0x7f9592ec7700 (LWP 6572)): > #0 0x00007f95a08e3c4d in __lll_lock_wait () from /lib64/libpthread.so.0 > No symbol table info available. > #1 0x00007f95a08e68b7 in __lll_lock_elision () from /lib64/libpthread.so.0 > No symbol table info available. > #2 0x00007f95994f0c9d in afr_frame_return () from > /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so > No symbol table info available. > #3 0x00007f9599503ba1 in afr_lookup_cbk () from > /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so > No symbol table info available. > #4 0x00007f9599788f3f in client4_0_lookup_cbk () from > /usr/lib64/glusterfs/5.3/xlator/protocol/client.so > No symbol table info available. > #5 0x00007f95a1153820 in rpc_clnt_handle_reply () from > /usr/lib64/libgfrpc.so.0 > No symbol table info available. > #6 0x00007f95a1153b6f in rpc_clnt_notify () from /usr/lib64/libgfrpc.so.0 > No symbol table info available. > #7 0x00007f95a1150063 in rpc_transport_notify () from > /usr/lib64/libgfrpc.so.0 > No symbol table info available. > #8 0x00007f959aea00b2 in socket_event_handler () from > /usr/lib64/glusterfs/5.3/rpc-transport/socket.so > No symbol table info available. > #9 0x00007f95a13e64c3 in event_dispatch_epoll_worker () from > /usr/lib64/libglusterfs.so.0 > No symbol table info available. > #10 0x00007f95a08da559 in start_thread () from /lib64/libpthread.so.0 > No symbol table info available. > #11 0x00007f95a061181f in clone () from /lib64/libc.so.6 > No symbol table info available. 
> > > Version-Release number of selected component (if applicable): > 5.5 (and also 5.3, not seen in 3.x) > > How reproducible: > 100% I didn't find any steps to recreate this issue on the mail thread. I also ran some workloads on replica 4 and didn't find this issue. Do you know what steps lead to this crash? > > > Additional info: > Please refer to > https://lists.gluster.org/pipermail/gluster-users/2019-March/036048.html & > https://lists.gluster.org/pipermail/gluster-users/2019-February/035871.html -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 09:23:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 09:23:34 +0000 Subject: [Bugs] [Bug 1690769] GlusterFS 5.5 crashes in 1x4 replicate setup. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690769 --- Comment #2 from Pranith Kumar K --- > > > > How reproducible: > > 100% > > I didn't find any steps to recreate this issue on the mail thread. I also > ran some workloads on replica 4 and didn't find this issue. Do you know what > steps lead to this crash? Do we have symbols for this core maybe? > > > > > > > Additional info: > > Please refer to > > https://lists.gluster.org/pipermail/gluster-users/2019-March/036048.html & > > https://lists.gluster.org/pipermail/gluster-users/2019-February/035871.html -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 11:09:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 11:09:20 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22384 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 11:09:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 11:09:21 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #590 from Worker Ant --- REVIEW: https://review.gluster.org/22384 (server.c: fix Coverity CID 1399758) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 11:34:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 11:34:26 +0000 Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to activate host in DC with glusterfs 3.12 domains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672318 --- Comment #22 from Netbulae --- (In reply to Amar Tumballi from comment #16) > Hi Netbulae, How can we proceed on this? Is there a possibility we can do > some live debug of the situation? I am 'amarts' on IRC, and we can catch up > there to discuss further. > > From latest comments I understand adding 'insecure' options (as per > comment#5) didn't help later. Yes adding insecure options and restarting the volumes didn't help -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 12:42:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 12:42:48 +0000 Subject: [Bugs] [Bug 1689920] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689920 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22385 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 12:44:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 12:44:57 +0000 Subject: [Bugs] [Bug 1689920] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689920 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22386 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 12:44:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 12:44:58 +0000 Subject: [Bugs] [Bug 1689920] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689920 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22386 (cluster-syncop: avoid duplicate unlock of inodelk/entrylk) posted (#1) for review on release-5 by Kinglong Mee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 13:12:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 13:12:13 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #591 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22384 (server.c: fix Coverity CID 1399758) posted (#2) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 13:12:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 13:12:15 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22384 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 20 13:12:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 13:12:19 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22384 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 13:25:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 13:25:27 +0000 Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal deamon (shd) when a volume is stopped unless it is the last volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659708 --- Comment #12 from Worker Ant --- REVIEW: https://review.gluster.org/22266 (rpc/transport: Missing a ref on dict while creating transport object) merged (#8) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 20 13:12:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 13:12:20 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1598 from Worker Ant --- REVIEW: https://review.gluster.org/22384 (server.c: fix Coverity CID 1399758) posted (#2) for review on master by MOHIT AGRAWAL -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 14:04:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 14:04:36 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #28 from Amgad --- Is the issue addressed by the following fixes in R5.5? #1684385: [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing #1684569: Upgrade from 4.1 and 5 is broken Regards, Amgad -- You are receiving this mail because: You are on the CC list for the bug. 
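Regarding bug 1684385 cited in comment #28 above (shard on-disk xattrs disappearing), a quick way to check whether the shard metadata is still present; note this must be run against the brick path, not the fuse mount, and the brick path and file name here are placeholders:

    getfattr -d -m . -e hex /bricks/brick1/vmstore/disk-image.img | grep shard
    # healthy sharded files carry trusted.glusterfs.shard.block-size and
    # trusted.glusterfs.shard.file-size among their xattrs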
From bugzilla at redhat.com Wed Mar 20 14:28:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 14:28:01 +0000 Subject: [Bugs] [Bug 1690950] New: lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690950 Bug ID: 1690950 Summary: lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator Product: GlusterFS Version: 6 Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: kinglongmee at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community This bug was initially created as a copy of Bug #1689920 I am copying this bug because: Description of problem: When using ec, there are many messages at brick log as, E [inodelk.c:514:__inode_unlock_lock] 0-test-locks: Matching lock not found for unlock 0-9223372036854775807, lo=68e040a84b7f0000 on 0x7f208c006f78 E [MSGID: 115053] [server-rpc-fops_v2.c:280:server4_inodelk_cbk] 0-test-server: 2557439: INODELK (df4e41be-723f-4289-b7af-b4272b3e880c), client: CTX_ID:67d4a7f3-605a-4965-89a5-31309d62d1fa-GRAPH_ID:0-PID:1659-HOST:openfs-node2-PC_NAME:test-client-1-RECON_NO:-28, error-xlator: test-locks [Invalid argument] Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 14:30:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 14:30:44 +0000 Subject: [Bugs] [Bug 1689920] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689920 --- Comment #5 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22385 (cluster-syncop: avoid duplicate unlock of inodelk/entrylk) posted (#2) for review on release-6 by Kinglong Mee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 14:30:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 14:30:45 +0000 Subject: [Bugs] [Bug 1689920] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689920 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22385 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 14:30:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 14:30:46 +0000 Subject: [Bugs] [Bug 1690950] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690950 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22385 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
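For anyone checking whether their bricks log the same disperse (ec) message, a small hedged sketch; the server names, brick paths, and default brick log location are assumptions:

    # count occurrences per brick log on each server
    grep -c "Matching lock not found for unlock" /var/log/glusterfs/bricks/*.log

    # a minimal ec volume (2 data + 1 redundancy) to test against
    gluster volume create ectest disperse 3 redundancy 1 \
        server1:/bricks/ec1 server2:/bricks/ec1 server3:/bricks/ec1
    gluster volume start ectest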
From bugzilla at redhat.com Wed Mar 20 14:30:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 14:30:47 +0000 Subject: [Bugs] [Bug 1690950] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690950 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22385 (cluster-syncop: avoid duplicate unlock of inodelk/entrylk) posted (#2) for review on release-6 by Kinglong Mee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 14:33:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 14:33:36 +0000 Subject: [Bugs] [Bug 1690952] New: lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690952 Bug ID: 1690952 Summary: lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator Product: GlusterFS Version: 5 Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: kinglongmee at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community This bug was initially created as a copy of Bug #1689920 I am copying this bug because: Description of problem: When using ec, there are many messages at brick log as, E [inodelk.c:514:__inode_unlock_lock] 0-test-locks: Matching lock not found for unlock 0-9223372036854775807, lo=68e040a84b7f0000 on 0x7f208c006f78 E [MSGID: 115053] [server-rpc-fops_v2.c:280:server4_inodelk_cbk] 0-test-server: 2557439: INODELK (df4e41be-723f-4289-b7af-b4272b3e880c), client: CTX_ID:67d4a7f3-605a-4965-89a5-31309d62d1fa-GRAPH_ID:0-PID:1659-HOST:openfs-node2-PC_NAME:test-client-1-RECON_NO:-28, error-xlator: test-locks [Invalid argument] Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 14:30:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 14:30:45 +0000 Subject: [Bugs] [Bug 1689920] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689920 --- Comment #6 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22386 (cluster-syncop: avoid duplicate unlock of inodelk/entrylk) posted (#2) for review on release-5 by Kinglong Mee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 14:35:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 14:35:35 +0000 Subject: [Bugs] [Bug 1689920] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689920 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22386 | -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 14:35:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 14:35:37 +0000 Subject: [Bugs] [Bug 1690952] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690952 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22386 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 14:35:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 14:35:38 +0000 Subject: [Bugs] [Bug 1690952] lots of "Matching lock not found for unlock xxx" when using disperse (ec) xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690952 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22386 (cluster-syncop: avoid duplicate unlock of inodelk/entrylk) posted (#2) for review on release-5 by Kinglong Mee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 14:54:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 14:54:35 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(amgad.saleh at nokia | |.com) --- Comment #29 from Sanju --- Amgad, Yes, there won't be a 5.4 as we hit upgrade blocker https://bugzilla.redhat.com/show_bug.cgi?id=1684029 The issue you are facing not same as https://bugzilla.redhat.com/show_bug.cgi?id=1684029 or https://bugzilla.redhat.com/show_bug.cgi?id=1684569. And I don't think you are hitting https://bugzilla.redhat.com/show_bug.cgi?id=1684385 as that issue is seen while upgrade from 3.12 to 5. I suspect your issue is same as https://bugzilla.redhat.com/show_bug.cgi?id=1676812. Please, let me know whether it is same or not. Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 20 15:15:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 15:15:25 +0000 Subject: [Bugs] [Bug 1685576] DNS delegation record for rhhi-dev.gluster.org In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685576 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-20 15:15:25 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 20 16:12:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 16:12:58 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22387 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 16:12:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 16:12:59 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #592 from Worker Ant --- REVIEW: https://review.gluster.org/22387 (changelog: remove unused code.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 20 19:04:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 19:04:07 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Amgad changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(amgad.saleh at nokia | |.com) | --- Comment #30 from Amgad --- Thanks Sanju: I'm trying to locally build 5.5 RPMs now to test with. BTW, do you know when the Centos 5.5 RPMs will be available? Regards, Amgad -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 20 19:04:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 19:04:37 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #31 from Amgad --- mainly OS release 7 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 20 23:37:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 20 Mar 2019 23:37:05 +0000 Subject: [Bugs] [Bug 1684496] compiler errors building qemu against glusterfs-6.0-0.1.rc0.fc30 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684496 Bug 1684496 depends on bug 1684500, which changed state. Bug 1684500 Summary: compiler errors building qemu against glusterfs-6.0-0.1.rc0.fc30 https://bugzilla.redhat.com/show_bug.cgi?id=1684500 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |DUPLICATE -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. 
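On the local 5.5 RPM build mentioned in comment #30 of bug 1687051 above, one rough way to do it on an EL7 box; the tarball URL and version are assumptions, and the BuildRequires listed in the shipped glusterfs.spec still need to be installed first:

    yum install -y rpm-build
    curl -LO https://download.gluster.org/pub/gluster/glusterfs/5/5.5/glusterfs-5.5.tar.gz
    rpmbuild -ta glusterfs-5.5.tar.gz    # -ta builds RPMs from a tarball that carries its spec
    ls ~/rpmbuild/RPMS/x86_64/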
From bugzilla at redhat.com Thu Mar 21 02:20:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 02:20:27 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #32 from Sanju --- Amgad, I'm not sure but you can always write to users/devel mailing lists so that appropriate people can respond. Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 02:50:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 02:50:08 +0000 Subject: [Bugs] [Bug 1691164] New: glusterd leaking memory when issued gluster vol status all tasks continuosly Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691164 Bug ID: 1691164 Summary: glusterd leaking memory when issued gluster vol status all tasks continuosly Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Severity: high Priority: medium Assignee: bugs at gluster.org Reporter: amukherj at redhat.com CC: amukherj at redhat.com, bmekala at redhat.com, bugs at gluster.org, nchilaka at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, srakonde at redhat.com, storage-qa-internal at redhat.com, vbellur at redhat.com Blocks: 1686255 Target Milestone: --- Classification: Community Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686255 [Bug 1686255] glusterd leaking memory when issued gluster vol status all tasks continuosly -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 02:50:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 02:50:33 +0000 Subject: [Bugs] [Bug 1691164] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691164 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Comment #0 is|1 |0 private| | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 02:50:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 02:50:59 +0000 Subject: [Bugs] [Bug 1691164] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691164 --- Comment #1 from Atin Mukherjee --- Description of problem: glusterd is leaking memory when issused "gluster vol status tasks" continuosly for 12 hours. The memory increase is from 250MB to 1.1GB. The increase have been 750 MB. Version-Release number of selected component (if applicable): glusterfs-3.12.2-45.el7rhgs.x86_64 How reproducible: 1/1 Steps to Reproduce: 1. On a six node cluster with brick-multiplexing enabled 2. Created 150 disperse volumes and 250 replica volumes and started them 3. Taken memory footprint from all the nodes 4. 
Issued "while true; do gluster volume status all tasks; sleep 2; done" with a time gap of 2 seconds Actual results: Seen a memory increase of glusterd on Node N1 from 260MB to 1.1GB Expected results: glusterd memory shouldn't leak -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 02:53:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 02:53:45 +0000 Subject: [Bugs] [Bug 1691164] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691164 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22388 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 02:53:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 02:53:46 +0000 Subject: [Bugs] [Bug 1691164] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691164 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22388 (glusterd: fix txn-id mem leak) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 04:30:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 04:30:08 +0000 Subject: [Bugs] [Bug 1580315] gluster volume status inode getting timed out after 30 minutes with no output/error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1580315 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22389 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 04:30:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 04:30:09 +0000 Subject: [Bugs] [Bug 1580315] gluster volume status inode getting timed out after 30 minutes with no output/error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1580315 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22389 (inode: fix unused vars) posted (#1) for review on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Mar 21 04:39:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 04:39:21 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1599 from Worker Ant --- REVIEW: https://review.gluster.org/22384 (server.c: fix Coverity CID 1399758) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 04:55:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 04:55:56 +0000 Subject: [Bugs] [Bug 1691187] New: fix Coverity CID 1399758 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691187 Bug ID: 1691187 Summary: fix Coverity CID 1399758 Product: GlusterFS Version: 6 Status: NEW Component: core Assignee: bugs at gluster.org Reporter: moagrawa at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: In the commit c7a582818db71d50548a2cfce72ce9402ef5599a Coverity 1399758 introduced Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 04:56:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 04:56:12 +0000 Subject: [Bugs] [Bug 1691187] fix Coverity CID 1399758 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691187 Mohit Agrawal changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |moagrawa at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 04:58:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 04:58:12 +0000 Subject: [Bugs] [Bug 1691187] fix Coverity CID 1399758 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691187 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22390 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 04:58:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 04:58:13 +0000 Subject: [Bugs] [Bug 1691187] fix Coverity CID 1399758 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691187 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22390 (server.c: fix Coverity CID 1399758) posted (#1) for review on release-6 by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Mar 21 05:33:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 05:33:11 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22391 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 05:33:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 05:33:12 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #593 from Worker Ant --- REVIEW: https://review.gluster.org/22391 (build: link libgfrpc with MATH_LIB (libm, -lm)) posted (#1) for review on master by Kaleb KEITHLEY -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 09:08:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 09:08:29 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1600 from Worker Ant --- REVIEW: https://review.gluster.org/22382 (fuse : fix high sev coverity issue) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 10:35:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 10:35:23 +0000 Subject: [Bugs] [Bug 1672249] quorum count value not updated in nfs-server vol file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672249 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22337 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 10:35:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 10:35:24 +0000 Subject: [Bugs] [Bug 1672249] quorum count value not updated in nfs-server vol file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672249 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22337 (Revert \"libglusterfs/common-utils.c: Fix buffer size for checksum computation\") merged (#3) on release-4.1 by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
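For bug 1672249 above (quorum count not updated in the nfs-server vol file), a quick hedged check that the regenerated gnfs volfile actually picked up the option; the volume name is a placeholder and the path assumes the default glusterd working directory:

    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 2
    grep quorum /var/lib/glusterd/nfs/nfs-server.vol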
From bugzilla at redhat.com Thu Mar 21 10:37:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 10:37:27 +0000 Subject: [Bugs] [Bug 1687746] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687746 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22340 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 10:37:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 10:37:30 +0000 Subject: [Bugs] [Bug 1687746] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687746 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-21 10:37:30 --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22340 (cluster/afr: Send truncate on arbiter brick from SHD) merged (#2) on release-4.1 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 10:39:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 10:39:08 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(srangana at redhat.c | |om) --- Comment #33 from Sanju --- (In reply to Amgad from comment #30) > Thanks Sanju: > I'm trying to locally build 5.5 RPMs now to test with. BTW, do you know when > the Centos 5.5 RPMs will be available? @Shyam, can you please answer this? > > Regards, > Amgad -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 10:53:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 10:53:57 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(srangana at redhat.c | |om) | --- Comment #34 from Shyamsundar --- (In reply to Sanju from comment #33) > (In reply to Amgad from comment #30) > > Thanks Sanju: > > I'm trying to locally build 5.5 RPMs now to test with. BTW, do you know when > > the Centos 5.5 RPMs will be available? > > @Shyam, can you please answer this? > > > > Regards, > > Amgad 5.5 CentOS storage SIG packages have landed on the test repository as of a day or 2 back, and I am smoke testing the same now. Test packages can be found and installed like so, # yum install centos-release-gluster # yum install --enablerepo=centos-gluster5-test glusterfs-server If my "smoke" testing does not break anything, then packages would be forthcoming later this week or by Monday next week. 
-- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 11:04:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 11:04:58 +0000 Subject: [Bugs] [Bug 1691292] New: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691292 Bug ID: 1691292 Summary: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' Product: GlusterFS Version: 4.1 Hardware: x86_64 OS: Linux Status: NEW Component: write-behind Severity: urgent Assignee: bugs at gluster.org Reporter: rgowdapp at redhat.com CC: bugs at gluster.org, guillaume.pavese at interact-iv.com, sabose at redhat.com Depends On: 1671556, 1674406 Blocks: 1677319 (Gluster_5_Affecting_oVirt_4.3), 1678570, 1667103 (glusterfs-5.4), 1676356 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1674406 +++ +++ This bug was initially created as a clone of Bug #1671556 +++ This is a re-post of my FUSE crash report from BZ1651246. That issue is for a crash in the FUSE client. Mine is too, but I was asked in that bug to open a new issue, so here you go. :) My servers (two, in a 'replica 2' setup) publish two volumes. One is Web site content, about 110GB; the other is Web config files, only a few megabytes. (Wasn't worth building extra servers for that second volume.) FUSE clients have been crashing on the larger volume every three or four days. I can't reproduce this on-demand, unfortunately, but I've got several cores from previous crashes that may be of value to you. I'm using Gluster 5.3 from the RPMs provided by the CentOS Storage SIG, on a Red Hat Enterprise Linux 7.x system. 
The client's logs show many hundreds of instances of this (I don't know if it's related): [2019-01-29 08:14:16.542674] W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7384) [0x7fa171ead384] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xae3e) [0x7fa1720bee3e] -->/lib64/libglusterfs.so.0(dict_ref+0x5d) [0x7fa1809cc2ad] ) 0-dict: dict is NULL [Invalid argument] Then, when the client's glusterfs process crashes, this is logged: The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler" repeated 871 times between [2019-01-29 08:12:48.390535] and [2019-01-29 08:14:17.100279] pending frames: frame : type(1) op(LOOKUP) frame : type(1) op(LOOKUP) frame : type(0) op(0) frame : type(0) op(0) patchset: git://git.gluster.org/glusterfs.git signal received: 11 time of crash: 2019-01-29 08:14:17 configuration details: argp 1 backtrace 1 dlfcn 1 libpthread 1 llistxattr 1 setfsid 1 spinlock 1 epoll.h 1 xattr.h 1 st_atim.tv_nsec 1 package-string: glusterfs 5.3 /lib64/libglusterfs.so.0(+0x26610)[0x7fa1809d8610] /lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7fa1809e2b84] /lib64/libc.so.6(+0x36280)[0x7fa17f03c280] /lib64/libglusterfs.so.0(+0x3586d)[0x7fa1809e786d] /lib64/libglusterfs.so.0(+0x370a2)[0x7fa1809e90a2] /lib64/libglusterfs.so.0(inode_forget_with_unref+0x46)[0x7fa1809e9f96] /usr/lib64/glusterfs/5.3/xlator/mount/fuse.so(+0x85bd)[0x7fa177dae5bd] /usr/lib64/glusterfs/5.3/xlator/mount/fuse.so(+0x1fd7a)[0x7fa177dc5d7a] /lib64/libpthread.so.0(+0x7dd5)[0x7fa17f83bdd5] /lib64/libc.so.6(clone+0x6d)[0x7fa17f103ead] --------- Info on the volumes themselves, gathered from one of my servers: [davidsmith at wuit-s-10889 ~]$ sudo gluster volume info all Volume Name: web-config Type: Replicate Volume ID: 6c5dce6e-e64e-4a6d-82b3-f526744b463d Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: 172.23.128.26:/data/web-config Brick2: 172.23.128.27:/data/web-config Options Reconfigured: performance.client-io-threads: off nfs.disable: on transport.address-family: inet server.event-threads: 4 client.event-threads: 4 cluster.min-free-disk: 1 cluster.quorum-count: 2 cluster.quorum-type: fixed network.ping-timeout: 10 auth.allow: * performance.readdir-ahead: on Volume Name: web-content Type: Replicate Volume ID: fcabc15f-0cec-498f-93c4-2d75ad915730 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 2 = 2 Transport-type: tcp Bricks: Brick1: 172.23.128.26:/data/web-content Brick2: 172.23.128.27:/data/web-content Options Reconfigured: network.ping-timeout: 10 cluster.quorum-type: fixed cluster.quorum-count: 2 performance.readdir-ahead: on auth.allow: * cluster.min-free-disk: 1 client.event-threads: 4 server.event-threads: 4 transport.address-family: inet nfs.disable: on performance.client-io-threads: off performance.cache-size: 4GB gluster> volume status all detail Status of volume: web-config ------------------------------------------------------------------------------ Brick : Brick 172.23.128.26:/data/web-config TCP Port : 49152 RDMA Port : 0 Online : Y Pid : 5612 File System : ext3 Device : /dev/sdb1 Mount Options : rw,seclabel,relatime,data=ordered Inode Size : 256 Disk Space Free : 135.9GB Total Disk Space : 246.0GB Inode Count : 16384000 Free Inodes : 14962279 ------------------------------------------------------------------------------ Brick : Brick 172.23.128.27:/data/web-config TCP Port : 49152 RDMA Port : 0 Online : Y Pid : 5540 File System : ext3 
Device : /dev/sdb1 Mount Options : rw,seclabel,relatime,data=ordered Inode Size : 256 Disk Space Free : 135.9GB Total Disk Space : 246.0GB Inode Count : 16384000 Free Inodes : 14962277 Status of volume: web-content ------------------------------------------------------------------------------ Brick : Brick 172.23.128.26:/data/web-content TCP Port : 49153 RDMA Port : 0 Online : Y Pid : 5649 File System : ext3 Device : /dev/sdb1 Mount Options : rw,seclabel,relatime,data=ordered Inode Size : 256 Disk Space Free : 135.9GB Total Disk Space : 246.0GB Inode Count : 16384000 Free Inodes : 14962279 ------------------------------------------------------------------------------ Brick : Brick 172.23.128.27:/data/web-content TCP Port : 49153 RDMA Port : 0 Online : Y Pid : 5567 File System : ext3 Device : /dev/sdb1 Mount Options : rw,seclabel,relatime,data=ordered Inode Size : 256 Disk Space Free : 135.9GB Total Disk Space : 246.0GB Inode Count : 16384000 Free Inodes : 14962277 I'll attach a couple of the core files generated by the crashing glusterfs instances, size limits permitting (they range from 3 to 8 GB). If I can't attach them, I'll find somewhere to host them. --- Additional comment from Artem Russakovskii on 2019-01-31 22:26:25 UTC --- Also reposting my comment from https://bugzilla.redhat.com/show_bug.cgi?id=1651246. I wish I saw this bug report before I updated from rock solid 4.1 to 5.3. Less than 24 hours after upgrading, I already got a crash and had to unmount, kill gluster, and remount: [2019-01-31 09:38:04.317604] W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fcccafcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fcccb1deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fccd705b218] ) 2-dict: dict is NULL [Invalid argument] [2019-01-31 09:38:04.319308] W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fcccafcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fcccb1deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fccd705b218] ) 2-dict: dict is NULL [Invalid argument] [2019-01-31 09:38:04.320047] W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fcccafcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fcccb1deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fccd705b218] ) 2-dict: dict is NULL [Invalid argument] [2019-01-31 09:38:04.320677] W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fcccafcd329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fcccb1deaf5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fccd705b218] ) 2-dict: dict is NULL [Invalid argument] The message "I [MSGID: 108031] [afr-common.c:2543:afr_local_discovery_cbk] 2-SITE_data1-replicate-0: selecting local read_child SITE_data1-client-3" repeated 5 times between [2019-01-31 09:37:54.751905] and [2019-01-31 09:38:03.958061] The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 2-epoll: Failed to dispatch handler" repeated 72 times between [2019-01-31 09:37:53.746741] and [2019-01-31 09:38:04.696993] pending frames: frame : type(1) op(READ) frame : type(1) op(OPEN) frame : type(0) op(0) patchset: git://git.gluster.org/glusterfs.git signal received: 6 time of crash: 2019-01-31 09:38:04 configuration details: argp 1 backtrace 1 dlfcn 1 libpthread 1 llistxattr 1 
setfsid 1 spinlock 1 epoll.h 1 xattr.h 1 st_atim.tv_nsec 1 package-string: glusterfs 5.3 /usr/lib64/libglusterfs.so.0(+0x2764c)[0x7fccd706664c] /usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7fccd7070cb6] /lib64/libc.so.6(+0x36160)[0x7fccd622d160] /lib64/libc.so.6(gsignal+0x110)[0x7fccd622d0e0] /lib64/libc.so.6(abort+0x151)[0x7fccd622e6c1] /lib64/libc.so.6(+0x2e6fa)[0x7fccd62256fa] /lib64/libc.so.6(+0x2e772)[0x7fccd6225772] /lib64/libpthread.so.0(pthread_mutex_lock+0x228)[0x7fccd65bb0b8] /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x32c4d)[0x7fcccbb01c4d] /usr/lib64/glusterfs/5.3/xlator/protocol/client.so(+0x65778)[0x7fcccbdd1778] /usr/lib64/libgfrpc.so.0(+0xe820)[0x7fccd6e31820] /usr/lib64/libgfrpc.so.0(+0xeb6f)[0x7fccd6e31b6f] /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fccd6e2e063] /usr/lib64/glusterfs/5.3/rpc-transport/socket.so(+0xa0b2)[0x7fccd0b7e0b2] /usr/lib64/libglusterfs.so.0(+0x854c3)[0x7fccd70c44c3] /lib64/libpthread.so.0(+0x7559)[0x7fccd65b8559] /lib64/libc.so.6(clone+0x3f)[0x7fccd62ef81f] --------- Do the pending patches fix the crash or only the repeated warnings? I'm running glusterfs on OpenSUSE 15.0 installed via http://download.opensuse.org/repositories/home:/glusterfs:/Leap15-5/openSUSE_Leap_15.0/, not too sure how to make it core dump. If it's not fixed by the patches above, has anyone already opened a ticket for the crashes that I can join and monitor? This is going to create a massive problem for us since production systems are crashing. Thanks. --- Additional comment from David E. Smith on 2019-01-31 22:31:47 UTC --- Actually, I ran the cores through strings and grepped for a few things like passwords -- as you'd expect from a memory dump from a Web server, there's a log of sensitive information in there. Is there a safe/acceptable way to send the cores only to developers that can use them, or otherwise not have to make them publicly available while still letting the Gluster devs benefit from analyzing them? --- Additional comment from Ravishankar N on 2019-02-01 05:51:19 UTC --- (In reply to David E. Smith from comment #2) > Actually, I ran the cores through strings and grepped for a few things like > passwords -- as you'd expect from a memory dump from a Web server, there's a > log of sensitive information in there. Is there a safe/acceptable way to > send the cores only to developers that can use them, or otherwise not have > to make them publicly available while still letting the Gluster devs benefit > from analyzing them? Perhaps you could upload it to a shared Dropbox folder with view/download access to the red hat email IDs I've CC'ed to this email (including me) to begin with. Note: I upgraded a 1x2 replica volume with 1 fuse client from v4.1.7 to v5.3 and did some basic I/O (kernel untar and iozone) and did not observe any crashes, so maybe this this something that is hit under extreme I/O or memory pressure. :-( --- Additional comment from Artem Russakovskii on 2019-02-02 20:17:15 UTC --- The fuse crash happened again yesterday, to another volume. Are there any mount options that could help mitigate this? In the meantime, I set up a monit (https://mmonit.com/monit/) task to watch and restart the mount, which works and recovers the mount point within a minute. Not ideal, but a temporary workaround. By the way, the way to reproduce this "Transport endpoint is not connected" condition for testing purposes is to kill -9 the right "glusterfs --process-name fuse" process. 
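(A minimal sketch of the reproduce-and-recover cycle described above, assuming the /mnt/glusterfs_data1 mount point from the monit config below; the pgrep pattern is only illustrative and should be narrowed to the mount you actually intend to kill.)

  # find the FUSE client process serving this mount point
  pgrep -af 'glusterfs.*process-name fuse.*glusterfs_data1'

  # simulate the failure: killing the client leaves the mount point
  # in the "Transport endpoint is not connected" state
  kill -9 <pid-from-above>

  # recover: lazy-unmount the dead mount and remount it from fstab
  umount -l /mnt/glusterfs_data1
  mount /mnt/glusterfs_data1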
monit check: check filesystem glusterfs_data1 with path /mnt/glusterfs_data1 start program = "/bin/mount /mnt/glusterfs_data1" stop program = "/bin/umount /mnt/glusterfs_data1" if space usage > 90% for 5 times within 15 cycles then alert else if succeeded for 10 cycles then alert stack trace: [2019-02-01 23:22:00.312894] W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fa0249e4329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fa024bf5af5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fa02cf5b218] ) 0-dict: dict is NULL [Invalid argument] [2019-02-01 23:22:00.314051] W [dict.c:761:dict_ref] (-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7329) [0x7fa0249e4329] -->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaaf5) [0x7fa024bf5af5] -->/usr/lib64/libglusterfs.so.0(dict_ref+0x58) [0x7fa02cf5b218] ) 0-dict: dict is NULL [Invalid argument] The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch handler" repeated 26 times between [2019-02-01 23:21:20.857333] and [2019-02-01 23:21:56.164427] The message "I [MSGID: 108031] [afr-common.c:2543:afr_local_discovery_cbk] 0-SITE_data3-replicate-0: selecting local read_child SITE_data3-client-3" repeated 27 times between [2019-02-01 23:21:11.142467] and [2019-02-01 23:22:03.474036] pending frames: frame : type(1) op(LOOKUP) frame : type(0) op(0) patchset: git://git.gluster.org/glusterfs.git signal received: 6 time of crash: 2019-02-01 23:22:03 configuration details: argp 1 backtrace 1 dlfcn 1 libpthread 1 llistxattr 1 setfsid 1 spinlock 1 epoll.h 1 xattr.h 1 st_atim.tv_nsec 1 package-string: glusterfs 5.3 /usr/lib64/libglusterfs.so.0(+0x2764c)[0x7fa02cf6664c] /usr/lib64/libglusterfs.so.0(gf_print_trace+0x306)[0x7fa02cf70cb6] /lib64/libc.so.6(+0x36160)[0x7fa02c12d160] /lib64/libc.so.6(gsignal+0x110)[0x7fa02c12d0e0] /lib64/libc.so.6(abort+0x151)[0x7fa02c12e6c1] /lib64/libc.so.6(+0x2e6fa)[0x7fa02c1256fa] /lib64/libc.so.6(+0x2e772)[0x7fa02c125772] /lib64/libpthread.so.0(pthread_mutex_lock+0x228)[0x7fa02c4bb0b8] /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x5dc9d)[0x7fa025543c9d] /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x70ba1)[0x7fa025556ba1] /usr/lib64/glusterfs/5.3/xlator/protocol/client.so(+0x58f3f)[0x7fa0257dbf3f] /usr/lib64/libgfrpc.so.0(+0xe820)[0x7fa02cd31820] /usr/lib64/libgfrpc.so.0(+0xeb6f)[0x7fa02cd31b6f] /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fa02cd2e063] /usr/lib64/glusterfs/5.3/rpc-transport/socket.so(+0xa0b2)[0x7fa02694e0b2] /usr/lib64/libglusterfs.so.0(+0x854c3)[0x7fa02cfc44c3] /lib64/libpthread.so.0(+0x7559)[0x7fa02c4b8559] /lib64/libc.so.6(clone+0x3f)[0x7fa02c1ef81f] --- Additional comment from David E. Smith on 2019-02-05 02:59:24 UTC --- I've added the five of you to our org's Box account; all of you should have invitations to a shared folder, and I'm uploading a few of the cores now. I hope they're of value to you. The binaries are all from the CentOS Storage SIG repo at https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-5/ . They're all current as of a few days ago: [davidsmith at wuit-s-10882 ~]$ rpm -qa | grep gluster glusterfs-5.3-1.el7.x86_64 glusterfs-client-xlators-5.3-1.el7.x86_64 glusterfs-fuse-5.3-1.el7.x86_64 glusterfs-libs-5.3-1.el7.x86_64 --- Additional comment from Nithya Balachandran on 2019-02-05 11:00:04 UTC --- (In reply to David E. 
Smith from comment #5) > I've added the five of you to our org's Box account; all of you should have > invitations to a shared folder, and I'm uploading a few of the cores now. I > hope they're of value to you. > > The binaries are all from the CentOS Storage SIG repo at > https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-5/ . They're > all current as of a few days ago: > > [davidsmith at wuit-s-10882 ~]$ rpm -qa | grep gluster > glusterfs-5.3-1.el7.x86_64 > glusterfs-client-xlators-5.3-1.el7.x86_64 > glusterfs-fuse-5.3-1.el7.x86_64 > glusterfs-libs-5.3-1.el7.x86_64 Thanks. We will take a look and get back to you. --- Additional comment from Nithya Balachandran on 2019-02-05 16:43:45 UTC --- David, Can you try mounting the volume with the option lru-limit=0 and let us know if you still see the crashes? Regards, Nithya --- Additional comment from Nithya Balachandran on 2019-02-06 07:23:49 UTC --- Initial analysis of one of the cores: [root at rhgs313-7 gluster-5.3]# gdb -c core.6014 /usr/sbin/glusterfs [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Core was generated by `/usr/sbin/glusterfs --direct-io-mode=disable --fuse-mountopts=noatime,context="'. Program terminated with signal 11, Segmentation fault. #0 __inode_ctx_free (inode=inode at entry=0x7fa0d0349af8) at inode.c:410 410 if (!xl->call_cleanup && xl->cbks->forget) (gdb) bt #0 __inode_ctx_free (inode=inode at entry=0x7fa0d0349af8) at inode.c:410 #1 0x00007fa1809e90a2 in __inode_destroy (inode=0x7fa0d0349af8) at inode.c:432 #2 inode_table_prune (table=table at entry=0x7fa15800c3c0) at inode.c:1696 #3 0x00007fa1809e9f96 in inode_forget_with_unref (inode=0x7fa0d0349af8, nlookup=128) at inode.c:1273 #4 0x00007fa177dae4e1 in do_forget (this=, unique=, nodeid=, nlookup=) at fuse-bridge.c:726 #5 0x00007fa177dae5bd in fuse_forget (this=, finh=0x7fa0a41da500, msg=, iobuf=) at fuse-bridge.c:741 #6 0x00007fa177dc5d7a in fuse_thread_proc (data=0x557a0e8ffe20) at fuse-bridge.c:5125 #7 0x00007fa17f83bdd5 in start_thread () from /lib64/libpthread.so.0 #8 0x00007fa17f103ead in msync () from /lib64/libc.so.6 #9 0x0000000000000000 in ?? () (gdb) f 0 #0 __inode_ctx_free (inode=inode at entry=0x7fa0d0349af8) at inode.c:410 410 if (!xl->call_cleanup && xl->cbks->forget) (gdb) l 405 for (index = 0; index < inode->table->xl->graph->xl_count; index++) { 406 if (inode->_ctx[index].value1 || inode->_ctx[index].value2) { 407 xl = (xlator_t *)(long)inode->_ctx[index].xl_key; 408 old_THIS = THIS; 409 THIS = xl; 410 if (!xl->call_cleanup && xl->cbks->forget) 411 xl->cbks->forget(xl, inode); 412 THIS = old_THIS; 413 } 414 } (gdb) p *xl Cannot access memory at address 0x0 (gdb) p index $1 = 6 (gdb) p inode->table->xl->graph->xl_count $3 = 13 (gdb) p inode->_ctx[index].value1 $4 = 0 (gdb) p inode->_ctx[index].value2 $5 = 140327960119304 (gdb) p/x inode->_ctx[index].value2 $6 = 0x7fa0a6370808 Based on the graph, the xlator with index = 6 is (gdb) p ((xlator_t*) inode->table->xl->graph->top)->next->next->next->next->next->next->next->name $31 = 0x7fa16c0122e0 "web-content-read-ahead" (gdb) p ((xlator_t*) inode->table->xl->graph->top)->next->next->next->next->next->next->next->xl_id $32 = 6 But read-ahead does not update the inode_ctx at all. There seems to be some sort of memory corruption happening here but that needs further analysis. --- Additional comment from David E. Smith on 2019-02-07 17:41:17 UTC --- As of this morning, I've added the lru-limit mount option to /etc/fstab on my servers. 
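(For reference, the same option can also be applied with a one-off manual mount rather than through fstab; a sketch, assuming the placeholder server and volume names from the fstab lines quoted below, with the SELinux context option left out for brevity.)

  # mount the volume once with the inode LRU limit disabled,
  # mirroring the lru-limit=0 fstab option below
  mount -t glusterfs \
    -o noatime,backupvolfile-server=gluster-server-2,direct-io-mode=disable,lru-limit=0 \
    gluster-server-1:/web-content /var/www/html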
Was on vacation, didn't see the request until this morning. For the sake of reference, here's the full fstab lines, edited only to remove hostnames and add placeholders. (Note that I've never had a problem with the 'web-config' volume, which is very low-traffic and only a few megabytes in size; the problems always are the much more heavily-used 'web-content' volume.) gluster-server-1:/web-config /etc/httpd/conf.d glusterfs defaults,_netdev,noatime,context=unconfined_u:object_r:httpd_config_t:s0,backupvolfile-server=gluster-server-2,direct-io-mode=disable,lru-limit=0 0 0 gluster-server-1:/web-content /var/www/html glusterfs defaults,_netdev,noatime,context=unconfined_u:object_r:httpd_sys_rw_content_t:s0,backupvolfile-server=gluster-server-2,direct-io-mode=disable,lru-limit=0 0 0 --- Additional comment from David E. Smith on 2019-02-07 17:58:26 UTC --- Ran a couple of the glusterfs logs through the print-backtrace script. They all start with what you'd normally expect (clone, start_thread) and all end with (_gf_msg_backtrace_nomem) but they're all doing different things in the middle. It looks sorta like a memory leak or other memory corruption. Since it started happening on both of my servers after upgrading to 5.2 (and continued with 5.3), I really doubt it's a hardware issue -- the FUSE clients are both VMs, on hosts a few miles apart, so the odds of host RAM going wonky in both places at exactly that same time are ridiculous. Bit of a stretch, but do you think there would be value in my rebuilding the RPMs locally, to try to rule out anything on CentOS' end? /lib64/libglusterfs.so.0(+0x26610)[0x7fa1809d8610] _gf_msg_backtrace_nomem ??:0 /lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7fa1809e2b84] gf_print_trace ??:0 /lib64/libc.so.6(+0x36280)[0x7fa17f03c280] __restore_rt ??:0 /lib64/libglusterfs.so.0(+0x3586d)[0x7fa1809e786d] __inode_ctx_free ??:0 /lib64/libglusterfs.so.0(+0x370a2)[0x7fa1809e90a2] inode_table_prune ??:0 /lib64/libglusterfs.so.0(inode_forget_with_unref+0x46)[0x7fa1809e9f96] inode_forget_with_unref ??:0 /usr/lib64/glusterfs/5.3/xlator/mount/fuse.so(+0x85bd)[0x7fa177dae5bd] fuse_forget ??:0 /usr/lib64/glusterfs/5.3/xlator/mount/fuse.so(+0x1fd7a)[0x7fa177dc5d7a] fuse_thread_proc ??:0 /lib64/libpthread.so.0(+0x7dd5)[0x7fa17f83bdd5] start_thread ??:0 /lib64/libc.so.6(clone+0x6d)[0x7fa17f103ead] __clone ??:0 /lib64/libglusterfs.so.0(+0x26610)[0x7f36aff72610] _gf_msg_backtrace_nomem ??:0 /lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7f36aff7cb84] gf_print_trace ??:0 /lib64/libc.so.6(+0x36280)[0x7f36ae5d6280] __restore_rt ??:0 /lib64/libglusterfs.so.0(+0x36779)[0x7f36aff82779] __inode_unref ??:0 /lib64/libglusterfs.so.0(inode_unref+0x23)[0x7f36aff83203] inode_unref ??:0 /lib64/libglusterfs.so.0(gf_dirent_entry_free+0x2b)[0x7f36aff9ec4b] gf_dirent_entry_free ??:0 /lib64/libglusterfs.so.0(gf_dirent_free+0x2b)[0x7f36aff9ecab] gf_dirent_free ??:0 /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x7480)[0x7f36a215b480] afr_readdir_cbk ??:0 /usr/lib64/glusterfs/5.3/xlator/protocol/client.so(+0x60bca)[0x7f36a244dbca] client4_0_readdirp_cbk ??:0 /lib64/libgfrpc.so.0(+0xec70)[0x7f36afd3ec70] rpc_clnt_handle_reply ??:0 /lib64/libgfrpc.so.0(+0xf043)[0x7f36afd3f043] rpc_clnt_notify ??:0 /lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f36afd3af23] rpc_transport_notify ??:0 /usr/lib64/glusterfs/5.3/rpc-transport/socket.so(+0xa37b)[0x7f36a492737b] socket_event_handler ??:0 /lib64/libglusterfs.so.0(+0x8aa49)[0x7f36affd6a49] event_dispatch_epoll_worker ??:0 
/lib64/libpthread.so.0(+0x7dd5)[0x7f36aedd5dd5] start_thread ??:0 /lib64/libc.so.6(clone+0x6d)[0x7f36ae69dead] __clone ??:0 /lib64/libglusterfs.so.0(+0x26610)[0x7f7e13de0610] _gf_msg_backtrace_nomem ??:0 /lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7f7e13deab84] gf_print_trace ??:0 /lib64/libc.so.6(+0x36280)[0x7f7e12444280] __restore_rt ??:0 /lib64/libpthread.so.0(pthread_mutex_lock+0x0)[0x7f7e12c45c30] pthread_mutex_lock ??:0 /lib64/libglusterfs.so.0(__gf_free+0x12c)[0x7f7e13e0bc3c] __gf_free ??:0 /lib64/libglusterfs.so.0(+0x368ed)[0x7f7e13df08ed] __dentry_unset ??:0 /lib64/libglusterfs.so.0(+0x36b2b)[0x7f7e13df0b2b] __inode_retire ??:0 /lib64/libglusterfs.so.0(+0x36885)[0x7f7e13df0885] __inode_unref ??:0 /lib64/libglusterfs.so.0(inode_forget_with_unref+0x36)[0x7f7e13df1f86] inode_forget_with_unref ??:0 /usr/lib64/glusterfs/5.3/xlator/mount/fuse.so(+0x857a)[0x7f7e0b1b657a] fuse_batch_forget ??:0 /usr/lib64/glusterfs/5.3/xlator/mount/fuse.so(+0x1fd7a)[0x7f7e0b1cdd7a] fuse_thread_proc ??:0 /lib64/libpthread.so.0(+0x7dd5)[0x7f7e12c43dd5] start_thread ??:0 /lib64/libc.so.6(clone+0x6d)[0x7f7e1250bead] __clone ??:0 --- Additional comment from Nithya Balachandran on 2019-02-08 03:03:20 UTC --- (In reply to David E. Smith from comment #10) > Ran a couple of the glusterfs logs through the print-backtrace script. They > all start with what you'd normally expect (clone, start_thread) and all end > with (_gf_msg_backtrace_nomem) but they're all doing different things in the > middle. It looks sorta like a memory leak or other memory corruption. Since > it started happening on both of my servers after upgrading to 5.2 (and > continued with 5.3), I really doubt it's a hardware issue -- the FUSE > clients are both VMs, on hosts a few miles apart, so the odds of host RAM > going wonky in both places at exactly that same time are ridiculous. > > Bit of a stretch, but do you think there would be value in my rebuilding the > RPMs locally, to try to rule out anything on CentOS' end? I don't think so. My guess is there is an error somewhere in the client code when handling inodes. It was never hit earlier because we never freed the inodes before 5.3. With the new inode invalidation feature, we appear to be accessing inodes that were already freed. Did you see the same crashes in 5.2? If yes, something else might be going wrong. I had a look at the coredumps you sent - most don't have any symbols (strangely). Of the ones that do, it looks like memory corruption and accessing already freed inodes. There are a few people looking at it but this going to take a while to figure out. In the meantime, let me know if you still see crashes with the lru-limit option. --- Additional comment from Nithya Balachandran on 2019-02-08 03:18:00 UTC --- Another user has just reported that the lru-limit did not help with the crashes - let me know if that is your experience as well. --- Additional comment from Nithya Balachandran on 2019-02-08 12:57:50 UTC --- We have found the cause of one crash but that has a different backtrace. Unfortunately we have not managed to reproduce the one you reported so we don't know if it is the same cause. Can you disable write-behind on the volume and let us know if it solves the problem? If yes, it is likely to be the same issue. --- Additional comment from David E. Smith on 2019-02-09 16:07:08 UTC --- I did have some crashes with 5.2. (I went from 3.something straight to 5.2, so I'm not going to be too helpful in terms of narrowing down exactly when this issue first appeared, sorry.) 
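(Since the comment above notes that most of the uploaded cores resolve no symbols, a hedged sketch of checking one locally before uploading; it assumes the matching debuginfo packages are available for the installed 5.3 build, and the core file name is simply the one from the earlier gdb session.)

  # pull debug symbols matching the installed packages (CentOS 7, yum-utils)
  debuginfo-install -y glusterfs glusterfs-fuse glusterfs-libs

  # print full backtraces from a core non-interactively
  gdb -batch -ex 'bt full' -ex 'thread apply all bt' /usr/sbin/glusterfs core.6014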
I'll see if I still have any of those cores; they all were from several weeks ago, so I may have already cleaned them up. This morning, one of my clients core dumped with the lru-limit option. It looks like it might be a different crash (in particular, this morning's crash was a SIGABRT, whereas previous crashes were SIGSEGV). I've uploaded that core to the same Box folder, in case it's useful. I'll paste its backtrace in below. For the write-behind request, do you want me to set 'performance.flush-behind off' or so you mean something else? --- Additional comment from David E. Smith on 2019-02-09 16:07:49 UTC --- Backtrace for 2/9/19 crash (as promised above, put it in a separate comment for clarity): /lib64/libglusterfs.so.0(+0x26610)[0x7f3b31456610] _gf_msg_backtrace_nomem ??:0 /lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7f3b31460b84] gf_print_trace ??:0 /lib64/libc.so.6(+0x36280)[0x7f3b2faba280] __restore_rt ??:0 /lib64/libc.so.6(gsignal+0x37)[0x7f3b2faba207] raise ??:0 /lib64/libc.so.6(abort+0x148)[0x7f3b2fabb8f8] abort ??:0 /lib64/libc.so.6(+0x78d27)[0x7f3b2fafcd27] __libc_message ??:0 /lib64/libc.so.6(+0x81489)[0x7f3b2fb05489] _int_free ??:0 /lib64/libglusterfs.so.0(+0x1a6e9)[0x7f3b3144a6e9] dict_destroy ??:0 /usr/lib64/glusterfs/5.3/xlator/cluster/distribute.so(+0x8cf9)[0x7f3b23388cf9] dht_local_wipe ??:0 /usr/lib64/glusterfs/5.3/xlator/cluster/distribute.so(+0x4ab90)[0x7f3b233cab90] dht_revalidate_cbk ??:0 /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x709e5)[0x7f3b236a89e5] afr_lookup_done ??:0 /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x71198)[0x7f3b236a9198] afr_lookup_metadata_heal_check ??:0 /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x71cbb)[0x7f3b236a9cbb] afr_lookup_entry_heal ??:0 /usr/lib64/glusterfs/5.3/xlator/cluster/replicate.so(+0x71f99)[0x7f3b236a9f99] afr_lookup_cbk ??:0 /usr/lib64/glusterfs/5.3/xlator/protocol/client.so(+0x616d2)[0x7f3b239326d2] client4_0_lookup_cbk ??:0 /lib64/libgfrpc.so.0(+0xec70)[0x7f3b31222c70] rpc_clnt_handle_reply ??:0 /lib64/libgfrpc.so.0(+0xf043)[0x7f3b31223043] rpc_clnt_notify ??:0 /lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f3b3121ef23] rpc_transport_notify ??:0 /usr/lib64/glusterfs/5.3/rpc-transport/socket.so(+0xa37b)[0x7f3b25e0b37b] socket_event_handler ??:0 /lib64/libglusterfs.so.0(+0x8aa49)[0x7f3b314baa49] event_dispatch_epoll_worker ??:0 /lib64/libpthread.so.0(+0x7dd5)[0x7f3b302b9dd5] start_thread ??:0 /lib64/libc.so.6(clone+0x6d)[0x7f3b2fb81ead] __clone ??:0 [d --- Additional comment from Raghavendra G on 2019-02-09 17:15:55 UTC --- (In reply to David E. Smith from comment #14) > I did have some crashes with 5.2. (I went from 3.something straight to 5.2, > so I'm not going to be too helpful in terms of narrowing down exactly when > this issue first appeared, sorry.) I'll see if I still have any of those > cores; they all were from several weeks ago, so I may have already cleaned > them up. > > This morning, one of my clients core dumped with the lru-limit option. It > looks like it might be a different crash (in particular, this morning's > crash was a SIGABRT, whereas previous crashes were SIGSEGV). I've uploaded > that core to the same Box folder, in case it's useful. I'll paste its > backtrace in below. > > For the write-behind request, do you want me to set > 'performance.flush-behind off' or so you mean something else? gluster volume set performance.write-behind off --- Additional comment from Nithya Balachandran on 2019-02-11 04:44:08 UTC --- Thanks David. 
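(A sketch of the write-behind toggle suggested above, with a volume name filled in since the gluster CLI expects one; web-content is the volume from this report, so adjust to whichever volume carries the crashing mount.)

  # disable write-behind on the affected volume
  gluster volume set web-content performance.write-behind off

  # confirm the option took effect
  gluster volume get web-content performance.write-behind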
I'm going to hold off on looking at the coredump until we hear back from you on whether disabling performance.write-behind works. The different backtraces could be symptoms of the same underlying issue where gluster tries to access already freed memory. --- Additional comment from Worker Ant on 2019-02-11 09:53:16 UTC --- REVIEW: https://review.gluster.org/22189 (performance/write-behind: fix use-after-free in readdirp) posted (#1) for review on master by Raghavendra G --- Additional comment from Worker Ant on 2019-02-19 02:40:41 UTC --- REVIEW: https://review.gluster.org/22227 (performance/write-behind: handle call-stub leaks) posted (#1) for review on master by Raghavendra G --- Additional comment from Worker Ant on 2019-02-19 05:53:46 UTC --- REVIEW: https://review.gluster.org/22189 (performance/write-behind: fix use-after-free in readdirp) merged (#10) on master by Raghavendra G --- Additional comment from Worker Ant on 2019-02-19 05:54:08 UTC --- REVIEW: https://review.gluster.org/22227 (performance/write-behind: handle call-stub leaks) merged (#2) on master by Raghavendra G Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1667103 [Bug 1667103] GlusterFS 5.4 tracker https://bugzilla.redhat.com/show_bug.cgi?id=1671556 [Bug 1671556] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1674406 [Bug 1674406] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1676356 [Bug 1676356] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1677319 [Bug 1677319] [Tracker] Gluster 5 issues affecting oVirt 4.3 https://bugzilla.redhat.com/show_bug.cgi?id=1678570 [Bug 1678570] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 11:04:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 11:04:58 +0000 Subject: [Bugs] [Bug 1671556] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1671556 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1691292 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1691292 [Bug 1691292] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 11:04:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 11:04:58 +0000 Subject: [Bugs] [Bug 1674406] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674406 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1691292 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1691292 [Bug 1691292] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 21 11:04:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 11:04:58 +0000 Subject: [Bugs] [Bug 1678570] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1678570 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1691292 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1691292 [Bug 1691292] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 11:04:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 11:04:58 +0000 Subject: [Bugs] [Bug 1667103] GlusterFS 5.4 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667103 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1691292 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1691292 [Bug 1691292] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 11:04:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 11:04:58 +0000 Subject: [Bugs] [Bug 1676356] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676356 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1691292 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1691292 [Bug 1691292] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 11:11:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 11:11:30 +0000 Subject: [Bugs] [Bug 1691292] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691292 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22393 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 21 11:11:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 11:11:31 +0000 Subject: [Bugs] [Bug 1691292] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691292 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22393 (performance/write-behind: fix use after free in readdirp_cbk) posted (#1) for review on release-4.1 by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 11:33:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 11:33:19 +0000 Subject: [Bugs] [Bug 1686568] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686568 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |ASSIGNED CC| |sheggodu at redhat.com Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #5 from Sunil Kumar Acharya --- Issue is not fixed yet, moving the bug to assigned state. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 11:33:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 11:33:21 +0000 Subject: [Bugs] [Bug 1687672] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687672 Bug 1687672 depends on bug 1686568, which changed state. Bug 1686568 Summary: [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter https://bugzilla.redhat.com/show_bug.cgi?id=1686568 What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |ASSIGNED Resolution|NEXTRELEASE |--- -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 11:33:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 11:33:22 +0000 Subject: [Bugs] [Bug 1687687] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687687 Bug 1687687 depends on bug 1686568, which changed state. Bug 1686568 Summary: [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter https://bugzilla.redhat.com/show_bug.cgi?id=1686568 What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |ASSIGNED Resolution|NEXTRELEASE |--- -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 11:33:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 11:33:23 +0000 Subject: [Bugs] [Bug 1687746] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687746 Bug 1687746 depends on bug 1686568, which changed state. 
Bug 1686568 Summary: [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter
https://bugzilla.redhat.com/show_bug.cgi?id=1686568

What |Removed |Added
----------------------------------------------------------------------------
Status|CLOSED |ASSIGNED
Resolution|NEXTRELEASE |---

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Thu Mar 21 13:24:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 21 Mar 2019 13:24:20 +0000
Subject: [Bugs] [Bug 1691357] New: core archive link from regression jobs throw not found error
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1691357

Bug ID: 1691357
Summary: core archive link from regression jobs throw not found error
Product: GlusterFS
Version: mainline
Status: NEW
Component: project-infrastructure
Assignee: bugs at gluster.org
Reporter: amukherj at redhat.com
CC: bugs at gluster.org, gluster-infra at gluster.org
Target Milestone: ---
Classification: Community

Description of problem:
If I try to download the core files from
https://build.gluster.org/job/centos7-regression/5193/console, which points me
to https://logs.aws.gluster.org/centos7-regression-5193.tgz, such a link
doesn't exist.

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Thu Mar 21 13:24:33 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 21 Mar 2019 13:24:33 +0000
Subject: [Bugs] [Bug 1691357] core archive link from regression jobs throw not found error
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1691357

Atin Mukherjee changed:

What |Removed |Added
----------------------------------------------------------------------------
Severity|unspecified |urgent

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Thu Mar 21 13:59:38 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 21 Mar 2019 13:59:38 +0000
Subject: [Bugs] [Bug 1691357] core archive link from regression jobs throw not found error
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1691357

M. Scherer changed:

What |Removed |Added
----------------------------------------------------------------------------
CC| |mscherer at redhat.com

--- Comment #1 from M. Scherer ---
I see a tar.gz on https://build.gluster.org/job/centos7-regression/5193/, and
there is a 450 MB archive there, so where does it point you to logs.aws?

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Thu Mar 21 15:17:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Thu, 21 Mar 2019 15:17:43 +0000
Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15
In-Reply-To:
References:
Message-ID:

https://bugzilla.redhat.com/show_bug.cgi?id=1687051

--- Comment #35 from Amgad ---
Thanks Sanju and Shyam.

I went ahead and built the 5.5 RPMS and re-did the online upgrade/rollback
tests from 3.12.15 to 5.5, and back. I got the same issue with online rollback.
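(A hedged aside before the data below: when the heal launch itself fails with commit errors, the per-volume "heal info" output can still be collected with the same loop used throughout this report; the volume names are the ones from this cluster.)

  # list entries still pending heal on each volume
  for i in glustervol1 glustervol2 glustervol3; do
      gluster volume heal $i info
  done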
Here is the data (logs are attached as well): Case 1) online upgrade from 3.12.15 to 5.5 - upgrades stared right after: Thu Mar 21 14:01:06 UTC 2019 ========================================== A) I have same cluster of 3 replicas: gfs-1 (10.76.153.206), gfs-2 (10.76.153.213), and gfs-3new (10.76.153.207), running 3.12.15. When online upgraded gfs-1 from 3.12.15 to 5.5, all bricks were online and heal succeeded. Continuing with gfs-2, then gfs-3new, online upgrade, heal succeeded as well. 1) Here's the output after gfs-1 was online upgraded from 3.12.15 to 5.5: Logs uploaded are: gfs-1_gfs1_upg_log.tgz, gfs-2_gfs1_upg_log.tgz, and gfs-3new_gfs1_upg_log.tgz. All volumes/bricks are online and heal succeeded. [root at gfs-1 ansible2]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49155 0 Y 19559 Brick 10.76.153.213:/mnt/data1/1 49152 0 Y 11171 Brick 10.76.153.207:/mnt/data1/1 49152 0 Y 25740 Self-heal Daemon on localhost N/A N/A Y 19587 Self-heal Daemon on 10.76.153.213 N/A N/A Y 11161 Self-heal Daemon on 10.76.153.207 N/A N/A Y 25730 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49156 0 Y 19568 Brick 10.76.153.213:/mnt/data2/2 49153 0 Y 11180 Brick 10.76.153.207:/mnt/data2/2 49153 0 Y 25749 Self-heal Daemon on localhost N/A N/A Y 19587 Self-heal Daemon on 10.76.153.213 N/A N/A Y 11161 Self-heal Daemon on 10.76.153.207 N/A N/A Y 25730 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49157 0 Y 19578 Brick 10.76.153.213:/mnt/data3/3 49154 0 Y 11189 Brick 10.76.153.207:/mnt/data3/3 49154 0 Y 25758 Self-heal Daemon on localhost N/A N/A Y 19587 Self-heal Daemon on 10.76.153.207 N/A N/A Y 25730 Self-heal Daemon on 10.76.153.213 N/A N/A Y 11161 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-1 ansible2]# for i in glustervol1 glustervol2 glustervol3; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been successful Use heal info commands to check status. Launching heal operation to perform index self heal on volume glustervol2 has been successful Use heal info commands to check status. Launching heal operation to perform index self heal on volume glustervol3 has been successful Use heal info commands to check status. Case 2) online rollback from 5.5 to 3.12.15 - upgrades stared right after: Thu Mar 21 14:20:01 UTC 2019 =========================================== A) Here're the outputs after gfs-1 was online rolled back from 5.5 to 3.12.15 - rollback succeeded. 
All bricks were online, but "gluster volume heal" was unsuccessful: Logs uploaded are: gfs-1_gfs1_rollbk_log.tgz, gfs-2_gfs1_rollbk_log.tgz, and gfs-3new_gfs1_rollbk_log.tgz [root at gfs-1 glusterfs]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49152 0 Y 21586 Brick 10.76.153.213:/mnt/data1/1 49155 0 Y 9772 Brick 10.76.153.207:/mnt/data1/1 49155 0 Y 12139 Self-heal Daemon on localhost N/A N/A Y 21576 Self-heal Daemon on 10.76.153.213 N/A N/A Y 9799 Self-heal Daemon on 10.76.153.207 N/A N/A Y 12166 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49153 0 Y 21595 Brick 10.76.153.213:/mnt/data2/2 49156 0 Y 9781 Brick 10.76.153.207:/mnt/data2/2 49156 0 Y 12148 Self-heal Daemon on localhost N/A N/A Y 21576 Self-heal Daemon on 10.76.153.213 N/A N/A Y 9799 Self-heal Daemon on 10.76.153.207 N/A N/A Y 12166 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49154 0 Y 21604 Brick 10.76.153.213:/mnt/data3/3 49157 0 Y 9790 Brick 10.76.153.207:/mnt/data3/3 49157 0 Y 12157 Self-heal Daemon on localhost N/A N/A Y 21576 Self-heal Daemon on 10.76.153.213 N/A N/A Y 9799 Self-heal Daemon on 10.76.153.207 N/A N/A Y 12166 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-1 glusterfs]# for i in glustervol1 glustervol2 glustervol3; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Commit failed on 10.76.153.213. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol2 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Commit failed on 10.76.153.213. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol3 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Commit failed on 10.76.153.213. Please check log file for details. 
[root at gfs-1 glusterfs]# B) Same "heal" failure after rolling back gfs-2 from 5.5 to 3.12.15 =================================================================== [root at gfs-2 glusterfs]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49152 0 Y 21586 Brick 10.76.153.213:/mnt/data1/1 49152 0 Y 11313 Brick 10.76.153.207:/mnt/data1/1 49155 0 Y 12139 Self-heal Daemon on localhost N/A N/A Y 11303 Self-heal Daemon on 10.76.153.206 N/A N/A Y 21576 Self-heal Daemon on 10.76.153.207 N/A N/A Y 12166 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49153 0 Y 21595 Brick 10.76.153.213:/mnt/data2/2 49153 0 Y 11322 Brick 10.76.153.207:/mnt/data2/2 49156 0 Y 12148 Self-heal Daemon on localhost N/A N/A Y 11303 Self-heal Daemon on 10.76.153.206 N/A N/A Y 21576 Self-heal Daemon on 10.76.153.207 N/A N/A Y 12166 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49154 0 Y 21604 Brick 10.76.153.213:/mnt/data3/3 49154 0 Y 11331 Brick 10.76.153.207:/mnt/data3/3 49157 0 Y 12157 Self-heal Daemon on localhost N/A N/A Y 11303 Self-heal Daemon on 10.76.153.206 N/A N/A Y 21576 Self-heal Daemon on 10.76.153.207 N/A N/A Y 12166 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-2 glusterfs]# for i in glustervol1 glustervol2 glustervol3; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol2 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol3 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. 
[root at gfs-2 glusterfs]# C) After rolling back gfs-3new from 5.5 to 3.12.15 (all are on 3.12.15 now) heal succeeded Logs uploaded are: gfs-1_all_rollbk_log.tgz, gfs-2_all_rollbk_log.tgz, and gfs-3new_all_rollbk_log.tgz [root at gfs-3new glusterfs]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49152 0 Y 21586 Brick 10.76.153.213:/mnt/data1/1 49152 0 Y 11313 Brick 10.76.153.207:/mnt/data1/1 49152 0 Y 13724 Self-heal Daemon on localhost N/A N/A Y 13714 Self-heal Daemon on 10.76.153.206 N/A N/A Y 21576 Self-heal Daemon on 10.76.153.213 N/A N/A Y 11303 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49153 0 Y 21595 Brick 10.76.153.213:/mnt/data2/2 49153 0 Y 11322 Brick 10.76.153.207:/mnt/data2/2 49153 0 Y 13733 Self-heal Daemon on localhost N/A N/A Y 13714 Self-heal Daemon on 10.76.153.206 N/A N/A Y 21576 Self-heal Daemon on 10.76.153.213 N/A N/A Y 11303 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49154 0 Y 21604 Brick 10.76.153.213:/mnt/data3/3 49154 0 Y 11331 Brick 10.76.153.207:/mnt/data3/3 49154 0 Y 13742 Self-heal Daemon on localhost N/A N/A Y 13714 Self-heal Daemon on 10.76.153.213 N/A N/A Y 11303 Self-heal Daemon on 10.76.153.206 N/A N/A Y 21576 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-3new glusterfs]# for i in glustervol1 glustervol2 glustervol3; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been successful Use heal info commands to check status. Launching heal operation to perform index self heal on volume glustervol2 has been successful Use heal info commands to check status. Launching heal operation to perform index self heal on volume glustervol3 has been successful Use heal info commands to check status. [root at gfs-3new glusterfs]# Regards, Amgad -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 15:20:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 15:20:15 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #36 from Amgad --- Thanks Sanju and Shyam. I went ahead and built the 5.5 RPMS and re-did the online upgrade/rollback tests from 3.12.15 to 5.5, and back. I got the same issue with online rollback. 
Here is the data (logs are attached as well): Case 1) online upgrade from 3.12.15 to 5.5 - upgrades stared right after: Thu Mar 21 14:01:06 UTC 2019 ========================================== A) I have same cluster of 3 replicas: gfs-1 (10.76.153.206), gfs-2 (10.76.153.213), and gfs-3new (10.76.153.207), running 3.12.15. When online upgraded gfs-1 from 3.12.15 to 5.5, all bricks were online and heal succeeded. Continuing with gfs-2, then gfs-3new, online upgrade, heal succeeded as well. 1) Here's the output after gfs-1 was online upgraded from 3.12.15 to 5.5: Logs uploaded are: gfs-1_gfs1_upg_log.tgz, gfs-2_gfs1_upg_log.tgz, and gfs-3new_gfs1_upg_log.tgz. All volumes/bricks are online and heal succeeded. [root at gfs-1 ansible2]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49155 0 Y 19559 Brick 10.76.153.213:/mnt/data1/1 49152 0 Y 11171 Brick 10.76.153.207:/mnt/data1/1 49152 0 Y 25740 Self-heal Daemon on localhost N/A N/A Y 19587 Self-heal Daemon on 10.76.153.213 N/A N/A Y 11161 Self-heal Daemon on 10.76.153.207 N/A N/A Y 25730 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49156 0 Y 19568 Brick 10.76.153.213:/mnt/data2/2 49153 0 Y 11180 Brick 10.76.153.207:/mnt/data2/2 49153 0 Y 25749 Self-heal Daemon on localhost N/A N/A Y 19587 Self-heal Daemon on 10.76.153.213 N/A N/A Y 11161 Self-heal Daemon on 10.76.153.207 N/A N/A Y 25730 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49157 0 Y 19578 Brick 10.76.153.213:/mnt/data3/3 49154 0 Y 11189 Brick 10.76.153.207:/mnt/data3/3 49154 0 Y 25758 Self-heal Daemon on localhost N/A N/A Y 19587 Self-heal Daemon on 10.76.153.207 N/A N/A Y 25730 Self-heal Daemon on 10.76.153.213 N/A N/A Y 11161 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-1 ansible2]# for i in glustervol1 glustervol2 glustervol3; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been successful Use heal info commands to check status. Launching heal operation to perform index self heal on volume glustervol2 has been successful Use heal info commands to check status. Launching heal operation to perform index self heal on volume glustervol3 has been successful Use heal info commands to check status. Case 2) online rollback from 5.5 to 3.12.15 - upgrades stared right after: Thu Mar 21 14:20:01 UTC 2019 =========================================== A) Here're the outputs after gfs-1 was online rolled back from 5.5 to 3.12.15 - rollback succeeded. 
All bricks were online, but "gluster volume heal" was unsuccessful: Logs uploaded are: gfs-1_gfs1_rollbk_log.tgz, gfs-2_gfs1_rollbk_log.tgz, and gfs-3new_gfs1_rollbk_log.tgz [root at gfs-1 glusterfs]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49152 0 Y 21586 Brick 10.76.153.213:/mnt/data1/1 49155 0 Y 9772 Brick 10.76.153.207:/mnt/data1/1 49155 0 Y 12139 Self-heal Daemon on localhost N/A N/A Y 21576 Self-heal Daemon on 10.76.153.213 N/A N/A Y 9799 Self-heal Daemon on 10.76.153.207 N/A N/A Y 12166 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49153 0 Y 21595 Brick 10.76.153.213:/mnt/data2/2 49156 0 Y 9781 Brick 10.76.153.207:/mnt/data2/2 49156 0 Y 12148 Self-heal Daemon on localhost N/A N/A Y 21576 Self-heal Daemon on 10.76.153.213 N/A N/A Y 9799 Self-heal Daemon on 10.76.153.207 N/A N/A Y 12166 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49154 0 Y 21604 Brick 10.76.153.213:/mnt/data3/3 49157 0 Y 9790 Brick 10.76.153.207:/mnt/data3/3 49157 0 Y 12157 Self-heal Daemon on localhost N/A N/A Y 21576 Self-heal Daemon on 10.76.153.213 N/A N/A Y 9799 Self-heal Daemon on 10.76.153.207 N/A N/A Y 12166 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-1 glusterfs]# for i in glustervol1 glustervol2 glustervol3; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Commit failed on 10.76.153.213. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol2 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Commit failed on 10.76.153.213. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol3 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Commit failed on 10.76.153.213. Please check log file for details. 
[root at gfs-1 glusterfs]# B) Same "heal" failure after rolling back gfs-2 from 5.5 to 3.12.15 =================================================================== [root at gfs-2 glusterfs]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49152 0 Y 21586 Brick 10.76.153.213:/mnt/data1/1 49152 0 Y 11313 Brick 10.76.153.207:/mnt/data1/1 49155 0 Y 12139 Self-heal Daemon on localhost N/A N/A Y 11303 Self-heal Daemon on 10.76.153.206 N/A N/A Y 21576 Self-heal Daemon on 10.76.153.207 N/A N/A Y 12166 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49153 0 Y 21595 Brick 10.76.153.213:/mnt/data2/2 49153 0 Y 11322 Brick 10.76.153.207:/mnt/data2/2 49156 0 Y 12148 Self-heal Daemon on localhost N/A N/A Y 11303 Self-heal Daemon on 10.76.153.206 N/A N/A Y 21576 Self-heal Daemon on 10.76.153.207 N/A N/A Y 12166 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49154 0 Y 21604 Brick 10.76.153.213:/mnt/data3/3 49154 0 Y 11331 Brick 10.76.153.207:/mnt/data3/3 49157 0 Y 12157 Self-heal Daemon on localhost N/A N/A Y 11303 Self-heal Daemon on 10.76.153.206 N/A N/A Y 21576 Self-heal Daemon on 10.76.153.207 N/A N/A Y 12166 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-2 glusterfs]# for i in glustervol1 glustervol2 glustervol3; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol2 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol3 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. 
[root at gfs-2 glusterfs]# C) After rolling back gfs-3new from 5.5 to 3.12.15 (all are on 3.12.15 now) heal succeeded Logs uploaded are: gfs-1_all_rollbk_log.tgz, gfs-2_all_rollbk_log.tgz, and gfs-3new_all_rollbk_log.tgz [root at gfs-3new glusterfs]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49152 0 Y 21586 Brick 10.76.153.213:/mnt/data1/1 49152 0 Y 11313 Brick 10.76.153.207:/mnt/data1/1 49152 0 Y 13724 Self-heal Daemon on localhost N/A N/A Y 13714 Self-heal Daemon on 10.76.153.206 N/A N/A Y 21576 Self-heal Daemon on 10.76.153.213 N/A N/A Y 11303 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49153 0 Y 21595 Brick 10.76.153.213:/mnt/data2/2 49153 0 Y 11322 Brick 10.76.153.207:/mnt/data2/2 49153 0 Y 13733 Self-heal Daemon on localhost N/A N/A Y 13714 Self-heal Daemon on 10.76.153.206 N/A N/A Y 21576 Self-heal Daemon on 10.76.153.213 N/A N/A Y 11303 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49154 0 Y 21604 Brick 10.76.153.213:/mnt/data3/3 49154 0 Y 11331 Brick 10.76.153.207:/mnt/data3/3 49154 0 Y 13742 Self-heal Daemon on localhost N/A N/A Y 13714 Self-heal Daemon on 10.76.153.213 N/A N/A Y 11303 Self-heal Daemon on 10.76.153.206 N/A N/A Y 21576 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-3new glusterfs]# for i in glustervol1 glustervol2 glustervol3; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been successful Use heal info commands to check status. Launching heal operation to perform index self heal on volume glustervol2 has been successful Use heal info commands to check status. Launching heal operation to perform index self heal on volume glustervol3 has been successful Use heal info commands to check status. [root at gfs-3new glusterfs]# Regards, Amgad -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 15:22:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 15:22:27 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #37 from Amgad --- (In reply to Amgad from comment #36) > Thanks Sanju and Shyam. > > I went ahead and built the 5.5 RPMS and re-did the online upgrade/rollback > tests from 3.12.15 to 5.5, and back. I got the same issue with online > rollback. 
> Here is the data (logs are attached as well): > > Case 1) online upgrade from 3.12.15 to 5.5 - upgrades stared right after: > Thu Mar 21 14:01:06 UTC 2019 > ========================================== > A) I have same cluster of 3 replicas: gfs-1 (10.76.153.206), gfs-2 > (10.76.153.213), and gfs-3new (10.76.153.207), running 3.12.15. > When online upgraded gfs-1 from 3.12.15 to 5.5, all bricks were online and > heal succeeded. Continuing with gfs-2, then gfs-3new, online upgrade, heal > succeeded as well. > > 1) Here's the output after gfs-1 was online upgraded from 3.12.15 to 5.5: > Logs uploaded are: gfs-1_gfs1_upg_log.tgz, gfs-2_gfs1_upg_log.tgz, and > gfs-3new_gfs1_upg_log.tgz. > > All volumes/bricks are online and heal succeeded. > > [root at gfs-1 ansible2]# gluster volume status > Status of volume: glustervol1 > Gluster process TCP Port RDMA Port Online Pid > ----------------------------------------------------------------------------- > - > Brick 10.76.153.206:/mnt/data1/1 49155 0 Y > 19559 > Brick 10.76.153.213:/mnt/data1/1 49152 0 Y > 11171 > Brick 10.76.153.207:/mnt/data1/1 49152 0 Y > 25740 > Self-heal Daemon on localhost N/A N/A Y > 19587 > Self-heal Daemon on 10.76.153.213 N/A N/A Y > 11161 > Self-heal Daemon on 10.76.153.207 N/A N/A Y > 25730 > > Task Status of Volume glustervol1 > ----------------------------------------------------------------------------- > - > There are no active volume tasks > > Status of volume: glustervol2 > Gluster process TCP Port RDMA Port Online Pid > ----------------------------------------------------------------------------- > - > Brick 10.76.153.206:/mnt/data2/2 49156 0 Y > 19568 > Brick 10.76.153.213:/mnt/data2/2 49153 0 Y > 11180 > Brick 10.76.153.207:/mnt/data2/2 49153 0 Y > 25749 > Self-heal Daemon on localhost N/A N/A Y > 19587 > Self-heal Daemon on 10.76.153.213 N/A N/A Y > 11161 > Self-heal Daemon on 10.76.153.207 N/A N/A Y > 25730 > > Task Status of Volume glustervol2 > ----------------------------------------------------------------------------- > - > There are no active volume tasks > > Status of volume: glustervol3 > Gluster process TCP Port RDMA Port Online Pid > ----------------------------------------------------------------------------- > - > Brick 10.76.153.206:/mnt/data3/3 49157 0 Y > 19578 > Brick 10.76.153.213:/mnt/data3/3 49154 0 Y > 11189 > Brick 10.76.153.207:/mnt/data3/3 49154 0 Y > 25758 > Self-heal Daemon on localhost N/A N/A Y > 19587 > Self-heal Daemon on 10.76.153.207 N/A N/A Y > 25730 > Self-heal Daemon on 10.76.153.213 N/A N/A Y > 11161 > > Task Status of Volume glustervol3 > ----------------------------------------------------------------------------- > - > There are no active volume tasks > > [root at gfs-1 ansible2]# for i in glustervol1 glustervol2 glustervol3; do > gluster volume heal $i; done > Launching heal operation to perform index self heal on volume glustervol1 > has been successful > Use heal info commands to check status. > Launching heal operation to perform index self heal on volume glustervol2 > has been successful > Use heal info commands to check status. > Launching heal operation to perform index self heal on volume glustervol3 > has been successful > Use heal info commands to check status. > > Case 2) online rollback from 5.5 to 3.12.15 - upgrades stared right after: > Thu Mar 21 14:20:01 UTC 2019 > =========================================== > A) Here're the outputs after gfs-1 was online rolled back from 5.5 to > 3.12.15 - rollback succeeded. 
All bricks were online, but "gluster volume > heal" was unsuccessful: > Logs uploaded are: gfs-1_gfs1_rollbk_log.tgz, gfs-2_gfs1_rollbk_log.tgz, and > gfs-3new_gfs1_rollbk_log.tgz > > > [root at gfs-1 glusterfs]# gluster volume status > Status of volume: glustervol1 > Gluster process TCP Port RDMA Port Online Pid > ----------------------------------------------------------------------------- > - > Brick 10.76.153.206:/mnt/data1/1 49152 0 Y > 21586 > Brick 10.76.153.213:/mnt/data1/1 49155 0 Y > 9772 > Brick 10.76.153.207:/mnt/data1/1 49155 0 Y > 12139 > Self-heal Daemon on localhost N/A N/A Y > 21576 > Self-heal Daemon on 10.76.153.213 N/A N/A Y > 9799 > Self-heal Daemon on 10.76.153.207 N/A N/A Y > 12166 > > Task Status of Volume glustervol1 > ----------------------------------------------------------------------------- > - > There are no active volume tasks > > Status of volume: glustervol2 > Gluster process TCP Port RDMA Port Online Pid > ----------------------------------------------------------------------------- > - > Brick 10.76.153.206:/mnt/data2/2 49153 0 Y > 21595 > Brick 10.76.153.213:/mnt/data2/2 49156 0 Y > 9781 > Brick 10.76.153.207:/mnt/data2/2 49156 0 Y > 12148 > Self-heal Daemon on localhost N/A N/A Y > 21576 > Self-heal Daemon on 10.76.153.213 N/A N/A Y > 9799 > Self-heal Daemon on 10.76.153.207 N/A N/A Y > 12166 > > Task Status of Volume glustervol2 > ----------------------------------------------------------------------------- > - > There are no active volume tasks > > Status of volume: glustervol3 > Gluster process TCP Port RDMA Port Online Pid > ----------------------------------------------------------------------------- > - > Brick 10.76.153.206:/mnt/data3/3 49154 0 Y > 21604 > Brick 10.76.153.213:/mnt/data3/3 49157 0 Y > 9790 > Brick 10.76.153.207:/mnt/data3/3 49157 0 Y > 12157 > Self-heal Daemon on localhost N/A N/A Y > 21576 > Self-heal Daemon on 10.76.153.213 N/A N/A Y > 9799 > Self-heal Daemon on 10.76.153.207 N/A N/A Y > 12166 > > Task Status of Volume glustervol3 > ----------------------------------------------------------------------------- > - > There are no active volume tasks > > [root at gfs-1 glusterfs]# for i in glustervol1 glustervol2 glustervol3; do > gluster volume heal $i; done > Launching heal operation to perform index self heal on volume glustervol1 > has been unsuccessful: > Commit failed on 10.76.153.207. Please check log file for details. > Commit failed on 10.76.153.213. Please check log file for details. > Launching heal operation to perform index self heal on volume glustervol2 > has been unsuccessful: > Commit failed on 10.76.153.207. Please check log file for details. > Commit failed on 10.76.153.213. Please check log file for details. > Launching heal operation to perform index self heal on volume glustervol3 > has been unsuccessful: > Commit failed on 10.76.153.207. Please check log file for details. > Commit failed on 10.76.153.213. Please check log file for details. 
> [root at gfs-1 glusterfs]# > > B) Same "heal" failure after rolling back gfs-2 from 5.5 to 3.12.15 > =================================================================== > > [root at gfs-2 glusterfs]# gluster volume status > Status of volume: glustervol1 > Gluster process TCP Port RDMA Port Online Pid > ----------------------------------------------------------------------------- > - > Brick 10.76.153.206:/mnt/data1/1 49152 0 Y > 21586 > Brick 10.76.153.213:/mnt/data1/1 49152 0 Y > 11313 > Brick 10.76.153.207:/mnt/data1/1 49155 0 Y > 12139 > Self-heal Daemon on localhost N/A N/A Y > 11303 > Self-heal Daemon on 10.76.153.206 N/A N/A Y > 21576 > Self-heal Daemon on 10.76.153.207 N/A N/A Y > 12166 > > Task Status of Volume glustervol1 > ----------------------------------------------------------------------------- > - > There are no active volume tasks > > Status of volume: glustervol2 > Gluster process TCP Port RDMA Port Online Pid > ----------------------------------------------------------------------------- > - > Brick 10.76.153.206:/mnt/data2/2 49153 0 Y > 21595 > Brick 10.76.153.213:/mnt/data2/2 49153 0 Y > 11322 > Brick 10.76.153.207:/mnt/data2/2 49156 0 Y > 12148 > Self-heal Daemon on localhost N/A N/A Y > 11303 > Self-heal Daemon on 10.76.153.206 N/A N/A Y > 21576 > Self-heal Daemon on 10.76.153.207 N/A N/A Y > 12166 > > Task Status of Volume glustervol2 > ----------------------------------------------------------------------------- > - > There are no active volume tasks > > Status of volume: glustervol3 > Gluster process TCP Port RDMA Port Online Pid > ----------------------------------------------------------------------------- > - > Brick 10.76.153.206:/mnt/data3/3 49154 0 Y > 21604 > Brick 10.76.153.213:/mnt/data3/3 49154 0 Y > 11331 > Brick 10.76.153.207:/mnt/data3/3 49157 0 Y > 12157 > Self-heal Daemon on localhost N/A N/A Y > 11303 > Self-heal Daemon on 10.76.153.206 N/A N/A Y > 21576 > Self-heal Daemon on 10.76.153.207 N/A N/A Y > 12166 > > Task Status of Volume glustervol3 > ----------------------------------------------------------------------------- > - > There are no active volume tasks > > [root at gfs-2 glusterfs]# for i in glustervol1 glustervol2 glustervol3; do > gluster volume heal $i; done > Launching heal operation to perform index self heal on volume glustervol1 > has been unsuccessful: > Commit failed on 10.76.153.207. Please check log file for details. > Launching heal operation to perform index self heal on volume glustervol2 > has been unsuccessful: > Commit failed on 10.76.153.207. Please check log file for details. > Launching heal operation to perform index self heal on volume glustervol3 > has been unsuccessful: > Commit failed on 10.76.153.207. Please check log file for details. 
> [root at gfs-2 glusterfs]# > > C) After rolling back gfs-3new from 5.5 to 3.12.15 (all are on 3.12.15 now) > heal succeeded > Logs uploaded are: gfs-1_all_rollbk_log.tgz, gfs-2_all_rollbk_log.tgz, and > gfs-3new_all_rollbk_log.tgz > > [root at gfs-3new glusterfs]# gluster volume status > Status of volume: glustervol1 > Gluster process TCP Port RDMA Port Online Pid > ----------------------------------------------------------------------------- > - > Brick 10.76.153.206:/mnt/data1/1 49152 0 Y > 21586 > Brick 10.76.153.213:/mnt/data1/1 49152 0 Y > 11313 > Brick 10.76.153.207:/mnt/data1/1 49152 0 Y > 13724 > Self-heal Daemon on localhost N/A N/A Y > 13714 > Self-heal Daemon on 10.76.153.206 N/A N/A Y > 21576 > Self-heal Daemon on 10.76.153.213 N/A N/A Y > 11303 > > Task Status of Volume glustervol1 > ----------------------------------------------------------------------------- > - > There are no active volume tasks > > Status of volume: glustervol2 > Gluster process TCP Port RDMA Port Online Pid > ----------------------------------------------------------------------------- > - > Brick 10.76.153.206:/mnt/data2/2 49153 0 Y > 21595 > Brick 10.76.153.213:/mnt/data2/2 49153 0 Y > 11322 > Brick 10.76.153.207:/mnt/data2/2 49153 0 Y > 13733 > Self-heal Daemon on localhost N/A N/A Y > 13714 > Self-heal Daemon on 10.76.153.206 N/A N/A Y > 21576 > Self-heal Daemon on 10.76.153.213 N/A N/A Y > 11303 > > Task Status of Volume glustervol2 > ----------------------------------------------------------------------------- > - > There are no active volume tasks > > Status of volume: glustervol3 > Gluster process TCP Port RDMA Port Online Pid > ----------------------------------------------------------------------------- > - > Brick 10.76.153.206:/mnt/data3/3 49154 0 Y > 21604 > Brick 10.76.153.213:/mnt/data3/3 49154 0 Y > 11331 > Brick 10.76.153.207:/mnt/data3/3 49154 0 Y > 13742 > Self-heal Daemon on localhost N/A N/A Y > 13714 > Self-heal Daemon on 10.76.153.213 N/A N/A Y > 11303 > Self-heal Daemon on 10.76.153.206 N/A N/A Y > 21576 > > Task Status of Volume glustervol3 > ----------------------------------------------------------------------------- > - > There are no active volume tasks > > [root at gfs-3new glusterfs]# for i in glustervol1 glustervol2 glustervol3; do > gluster volume heal $i; done > Launching heal operation to perform index self heal on volume glustervol1 > has been successful > Use heal info commands to check status. > Launching heal operation to perform index self heal on volume glustervol2 > has been successful > Use heal info commands to check status. > Launching heal operation to perform index self heal on volume glustervol3 > has been successful > Use heal info commands to check status. > [root at gfs-3new glusterfs]# > > Regards, > Amgad comment seems to be duplicated -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 15:24:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 15:24:03 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #38 from Amgad --- Created attachment 1546575 --> https://bugzilla.redhat.com/attachment.cgi?id=1546575&action=edit gfs-1 logs when gfs-1 online upgraded from 3.12.15 to 5.5 -- You are receiving this mail because: You are on the CC list for the bug. 
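For context, the per-node rolling upgrade/rollback step exercised in the comments above boils down to roughly the following sequence, run on one server at a time while the other two stay up. This is only a sketch: the exact repositories, package versions and the explicit killing of brick/self-heal processes are assumptions on my part, not commands taken from the report.

    # on the node being moved between 3.12.15 and 5.5 (gfs-1, then gfs-2, then gfs-3new)
    systemctl stop glusterd                 # stop the management daemon
    pkill glusterfs; pkill glusterfsd       # assumption: also stop client/self-heal and brick processes
    yum -y update glusterfs-server          # upgrade from the 5.5 repo, or ...
    # yum -y downgrade glusterfs-server     # ... downgrade back to 3.12.15 for the rollback
    systemctl start glusterd                # bricks restart and the node rejoins the cluster

    # then verify bricks are online and trigger/inspect heal, as in the outputs above
    gluster volume status
    for i in glustervol1 glustervol2 glustervol3; do
        gluster volume heal $i
        gluster volume heal $i info
    done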
From bugzilla at redhat.com Thu Mar 21 15:25:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 15:25:02 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #39 from Amgad --- Created attachment 1546576 --> https://bugzilla.redhat.com/attachment.cgi?id=1546576&action=edit gfs-2 logs when gfs-1 online upgraded from 3.12.15 to 5.5 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 15:25:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 15:25:55 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #40 from Amgad --- Created attachment 1546577 --> https://bugzilla.redhat.com/attachment.cgi?id=1546577&action=edit gfs-3new logs when gfs-1 online upgraded from 3.12.15 to 5.5 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 15:26:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 15:26:55 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #41 from Amgad --- Created attachment 1546578 --> https://bugzilla.redhat.com/attachment.cgi?id=1546578&action=edit gfs-1 logs when gfs-1 online rolled-back from 5.5 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 15:28:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 15:28:22 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #42 from Amgad --- Created attachment 1546579 --> https://bugzilla.redhat.com/attachment.cgi?id=1546579&action=edit gfs-2 logs when gfs-1 online rolled-back from 5.5 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 15:29:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 15:29:21 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #43 from Amgad --- Created attachment 1546580 --> https://bugzilla.redhat.com/attachment.cgi?id=1546580&action=edit gfs-3new logs when gfs-1 online rolled-back from 5.5 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Mar 21 15:30:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 15:30:45 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #44 from Amgad --- Created attachment 1546588 --> https://bugzilla.redhat.com/attachment.cgi?id=1546588&action=edit gfs-1 logs when all servers online rolled-back from 5.5 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 15:31:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 15:31:22 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #45 from Amgad --- Created attachment 1546589 --> https://bugzilla.redhat.com/attachment.cgi?id=1546589&action=edit gfs-2 logs when all servers online rolled-back from 5.5 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 15:32:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 15:32:03 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #46 from Amgad --- Created attachment 1546591 --> https://bugzilla.redhat.com/attachment.cgi?id=1546591&action=edit gfs-3new logs when all servers online rolled-back from 5.5 to 3.12.15 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 21 17:53:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 17:53:54 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22394 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 17:53:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 17:53:55 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #594 from Worker Ant --- REVIEW: https://review.gluster.org/22394 (mem-pool: remove dead code.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 21 21:30:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 21:30:54 +0000 Subject: [Bugs] [Bug 1351139] The volume option "nfs.enable-ino32" should be a per volume one, not global In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1351139 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|hlalwani at redhat.com |bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 21:30:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 21:30:59 +0000 Subject: [Bugs] [Bug 1535495] Add option -h and --help to gluster cli In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1535495 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|hlalwani at redhat.com |bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 21:31:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 21:31:02 +0000 Subject: [Bugs] [Bug 1643349] [OpenSSL] : auth.ssl-allow has no option description. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1643349 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|hlalwani at redhat.com |bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 21:31:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 21:31:03 +0000 Subject: [Bugs] [Bug 1653565] tests/geo-rep: Add arbiter volume test case In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1653565 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|hlalwani at redhat.com |bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 21 21:31:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 21:31:04 +0000 Subject: [Bugs] [Bug 1654187] [geo-rep]: RFE - Make slave volume read-only while setting up geo-rep (by default) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654187 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|hlalwani at redhat.com |bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 21 21:31:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 21 Mar 2019 21:31:05 +0000 Subject: [Bugs] [Bug 1664335] [geo-rep]: Transport endpoint not connected with arbiter volumes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1664335 PnT Account Manager changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|hlalwani at redhat.com |bugs at gluster.org -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 22 02:39:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 02:39:17 +0000 Subject: [Bugs] [Bug 1506487] glusterfs / glusterfsd processes may not start properly upon reboot/restart In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1506487 jack.wong at laserfiche.com changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |jack.wong at laserfiche.com --- Comment #26 from jack.wong at laserfiche.com --- (In reply to Milind Changire from comment #16) > Sylvain, > Could you check if adding/changing "option transport.listen-backlog 100" in > /etc/glusterfs/glusterd.vol helps any bit ? We have run into this issue too. Thank you for your suggestion. It was quite helpful. Tweaking "transport.listen-backlog" fixed the problem for us. One thing I want to note is that 100 may be too low. We are running about 40 Gluster volumes on a single server. We had to set "transport.listen-backlog" higher than 1024 before all our glusterfsd processes consistently started up. Because 1024 is higher than the default net.core.somaxconn kernel configuration value of 128, we also had to increase net.core.somaxconn to make that take effect. Otherwise, the kernel would silently truncate the listen backlog to SOMAXCONN (https://manpages.debian.org/stretch/manpages-dev/listen.2.en.html#NOTES). We only had to edit /etc/glusterfs/glusterd.vol. We did not have to set the transport.listen-backlog on any of our Gluster volumes. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 22 05:09:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:09:53 +0000 Subject: [Bugs] [Bug 1691616] New: client log flooding with intentional socket shutdown message when a brick is down Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691616 Bug ID: 1691616 Summary: client log flooding with intentional socket shutdown message when a brick is down Product: GlusterFS Version: mainline Status: NEW Component: core Assignee: bugs at gluster.org Reporter: rgowdapp at redhat.com CC: amukherj at redhat.com, bugs at gluster.org, mchangir at redhat.com, pasik at iki.fi Depends On: 1679904 Blocks: 1672818 (glusterfs-6.0) Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1679904 +++ Description of problem: client log flooding with intentional socket shutdown message when a brick is down [2019-02-22 08:24:42.472457] I [socket.c:811:__socket_shutdown] 0-test-vol-client-0: intentional socket shutdown(5) Version-Release number of selected component (if applicable): glusterfs-6 How reproducible: Always Steps to Reproduce: 1. 1 X 3 volume created and started over a 3 node cluster 2. mount a fuse client 3. 
kill a brick 4. Observe that fuse client log is flooded with the intentional socket shutdown message after every 3 seconds. Actual results: Expected results: Additional info: Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker https://bugzilla.redhat.com/show_bug.cgi?id=1679904 [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 22 05:09:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:09:53 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1691616 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1691616 [Bug 1691616] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 05:09:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:09:53 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1691616 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1691616 [Bug 1691616] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 22 05:14:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:14:19 +0000 Subject: [Bugs] [Bug 1691616] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691616 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22395 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 22 05:14:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:14:20 +0000 Subject: [Bugs] [Bug 1691616] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691616 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22395 (transport/socket: move shutdown msg to DEBUG loglevel) posted (#1) for review on master by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
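For reference, the transport.listen-backlog tuning described in Bug 1506487 comment #26 above amounts to a change like the one sketched below. The concrete numbers are illustrative assumptions (the comment only says that more than 1024 was needed for roughly 40 volumes); pick values that fit your deployment.

    # /etc/glusterfs/glusterd.vol -- add or raise the option inside the existing management volume block
    volume management
        type mgmt/glusterd
        # ... existing options left as they are ...
        option transport.listen-backlog 2048
    end-volume

    # raise the kernel cap as well, otherwise listen() is silently truncated to SOMAXCONN
    sysctl -w net.core.somaxconn=2048
    echo 'net.core.somaxconn = 2048' > /etc/sysctl.d/90-gluster.conf   # file name is an assumption

    # restart glusterd so the .vol change takes effect
    systemctl restart glusterd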
From bugzilla at redhat.com Fri Mar 22 05:15:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:15:42 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22396 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 05:15:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:15:43 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22396 (transport/socket: move shutdown msg to DEBUG loglevel) posted (#1) for review on release-6 by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 05:37:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:37:05 +0000 Subject: [Bugs] [Bug 1691617] New: clang-scan tests are failing nightly. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691617 Bug ID: 1691617 Summary: clang-scan tests are failing nightly. Product: GlusterFS Version: 4.1 Status: NEW Component: project-infrastructure Severity: high Priority: high Assignee: bugs at gluster.org Reporter: atumball at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: https://build.gluster.org/job/clang-scan/641/console seems to be failing since last 20 days. Version-Release number of selected component (if applicable): master -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Mar 22 05:53:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:53:15 +0000 Subject: [Bugs] [Bug 1691620] New: client log flooding with intentional socket shutdown message when a brick is down Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691620 Bug ID: 1691620 Summary: client log flooding with intentional socket shutdown message when a brick is down Product: Red Hat Gluster Storage Status: NEW Component: core Assignee: atumball at redhat.com Reporter: amukherj at redhat.com QA Contact: rhinduja at redhat.com CC: bugs at gluster.org, mchangir at redhat.com, pasik at iki.fi, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1679904 Blocks: 1691616, 1672818 (glusterfs-6.0) Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1679904 +++ Description of problem: *Please note that this bug is only applicable in rhgs-3.5.0 development branch* client log flooding with intentional socket shutdown message when a brick is down [2019-02-22 08:24:42.472457] I [socket.c:811:__socket_shutdown] 0-test-vol-client-0: intentional socket shutdown(5) Version-Release number of selected component (if applicable): glusterfs-6 How reproducible: Always Steps to Reproduce: 1. 1 X 3 volume created and started over a 3 node cluster 2. mount a fuse client 3. kill a brick 4. Observe that fuse client log is flooded with the intentional socket shutdown message after every 3 seconds. Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2019-03-22 05:15:43 UTC --- REVIEW: https://review.gluster.org/22396 (transport/socket: move shutdown msg to DEBUG loglevel) posted (#1) for review on release-6 by Raghavendra G Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker https://bugzilla.redhat.com/show_bug.cgi?id=1679904 [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down https://bugzilla.redhat.com/show_bug.cgi?id=1691616 [Bug 1691616] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 05:53:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:53:15 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1691620 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1691620 [Bug 1691620] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. 
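Until a build containing the DEBUG-level change from the reviews above reaches users, one possible stop-gap (my suggestion, not a workaround stated in the bug) is to raise the client-side log level so that INFO messages such as this one are no longer written; note that this also hides every other INFO message from the client.

    # per volume: raise the log level used by clients of test-vol
    gluster volume set test-vol diagnostics.client-log-level WARNING

    # or per mount, for a single fuse client (server address is a placeholder)
    mount -t glusterfs -o log-level=WARNING <server>:/test-vol /mnt/test-vol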
From bugzilla at redhat.com Fri Mar 22 05:53:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:53:15 +0000 Subject: [Bugs] [Bug 1691616] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691616 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1691620 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1691620 [Bug 1691620] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 22 05:53:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:53:15 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1691620 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1691620 [Bug 1691620] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 22 05:54:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:54:37 +0000 Subject: [Bugs] [Bug 1691620] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691620 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1691616, 1672818 | |(glusterfs-6.0) | Depends On|1679904 |1691616 Assignee|atumball at redhat.com |rgowdapp at redhat.com Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 [Bug 1672818] GlusterFS 6.0 tracker https://bugzilla.redhat.com/show_bug.cgi?id=1679904 [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down https://bugzilla.redhat.com/show_bug.cgi?id=1691616 [Bug 1691616] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 05:54:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:54:37 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On|1691620 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1691620 [Bug 1691620] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Mar 22 05:54:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:54:37 +0000 Subject: [Bugs] [Bug 1679904] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679904 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1691620 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1691620 [Bug 1691620] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 05:54:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 05:54:37 +0000 Subject: [Bugs] [Bug 1691616] client log flooding with intentional socket shutdown message when a brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691616 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1691620 Depends On|1691620 | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1691620 [Bug 1691620] client log flooding with intentional socket shutdown message when a brick is down -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 22 06:30:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 06:30:19 +0000 Subject: [Bugs] [Bug 1663780] On docs.gluster.org, we should convert spaces in folder or file names to 301 redirects to hypens In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663780 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |atumball at redhat.com --- Comment #1 from Amar Tumballi --- Team, can we consider picking this up? This change is blocking the merge of the above patch. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 22 09:19:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 09:19:32 +0000 Subject: [Bugs] [Bug 1691617] clang-scan tests are failing nightly. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691617 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #1 from M. Scherer --- Yep, the builder depends on F27, which is now EOL and so was removed from the mock config. One small step is to update it to F29 (so we have a year before it fails like this again), but this will bring new tests and maybe new failures to fix. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
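Concretely, the builder update suggested above would mean pointing the clang-scan job at a supported mock chroot and re-running the analysis there. The sketch below shows one way that could look; the chroot name, package list and paths are assumptions about how the job is wired, not the actual Jenkins configuration.

    # build a Fedora 29 chroot instead of the EOL Fedora 27 one
    mock -r fedora-29-x86_64 --init
    mock -r fedora-29-x86_64 --install clang clang-analyzer automake autoconf libtool make

    # copy a glusterfs checkout in and run scan-build over it (paths are illustrative)
    mock -r fedora-29-x86_64 --copyin ./glusterfs /builddir/glusterfs
    mock -r fedora-29-x86_64 --chroot "cd /builddir/glusterfs && ./autogen.sh && scan-build ./configure && scan-build -o /builddir/clang-scan-report make -j4"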
From bugzilla at redhat.com Fri Mar 22 09:26:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 09:26:25 +0000 Subject: [Bugs] [Bug 1667168] Thin Arbiter documentation refers commands don't exist "glustercli' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667168 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|ravishankar at redhat.com |vpandey at redhat.com --- Comment #2 from Ravishankar N --- Changing the assignee to Vishal who has agreed to help out with the cli/glusterd (GD1) related changes needed for this. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 09:46:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 09:46:06 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(amgad.saleh at nokia | |.com) --- Comment #47 from Sanju --- (In reply to Amgad from comment #36) Amgad, Did you check whether you are hitting https://bugzilla.redhat.com/show_bug.cgi?id=1676812? I believe that you are facing the same issue. Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 12:46:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 12:46:14 +0000 Subject: [Bugs] [Bug 1580315] gluster volume status inode getting timed out after 30 minutes with no output/error In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1580315 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed|2019-03-20 07:16:15 |2019-03-22 12:46:14 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22389 (inode: fix unused vars) merged (#2) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 14:15:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 14:15:06 +0000 Subject: [Bugs] [Bug 1691789] New: rpc-statd service stops on AWS builders Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691789 Bug ID: 1691789 Summary: rpc-statd service stops on AWS builders Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: dkhandel at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: On AWS builders, rpc-statd service stops abruptly causing the job to fail. Though there's a workaround for this but needs more investigation on it. One such example: https://build.gluster.org/job/centos7-regression/5208/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
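Bug 1691789 above mentions a workaround without spelling it out, so purely as an illustration (an assumption on my part, not the project's documented fix): one way to keep rpc-statd from staying down on a builder is a systemd drop-in that restarts the service whenever it dies.

    # /etc/systemd/system/rpc-statd.service.d/restart.conf   (drop-in path and name are assumptions)
    [Service]
    Restart=on-failure
    RestartSec=5

    # apply the drop-in
    systemctl daemon-reload
    systemctl restart rpc-statd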
From bugzilla at redhat.com Fri Mar 22 15:57:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 15:57:15 +0000 Subject: [Bugs] [Bug 1691833] New: Client sends 128KByte network packet for 0 length file copy Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691833 Bug ID: 1691833 Summary: Client sends 128KByte network packet for 0 length file copy Product: GlusterFS Version: 5 Hardware: x86_64 OS: Linux Status: NEW Component: core Severity: medium Assignee: bugs at gluster.org Reporter: otto.jonyer at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Created attachment 1546951 --> https://bugzilla.redhat.com/attachment.cgi?id=1546951&action=edit Wireshark report of the network overhead Description of problem: Installed Ubuntu Server 18.10 from official release. Installed debian packages from ppa:gluster/glusterfs-5 (download.gluster.org) The problem was originally reproduced on CentOs based on with custom compilation of glusterfs-5.1 Created the simplest replica filesystem on 2 nodes and mounted on one of the nodes but it is also reproducible when mounted from dedicated client as well. Created a 0 length file with "touch /tmp/0length" Copied the file to gluster mount. Network capture shows that 128KByte packet is sent to both replica which is a huge network overhead for really small files. the commands I've used: mkdir /mnt/brick gluster peer probe 192.168.56.100 gluster volume create gv0 replica 2 192.168.56.100:/mnt/brick 192.168.56.101:/mnt/brick force gluster volume start gv0 mkdir /mnt/gv0 mount -t glusterfs 192.168.56.100:/gv0 /mnt/gv0 touch /tmp/0length cp /tmp/0length /mnt/gv0 Version-Release number of selected component (if applicable): 5.5-ubuntu1~cosmic1: glusterfs-client glusterfs-common glusterfs-server How reproducible: Steps to Reproduce: 1) on both nodes: mkdir /mnt/brick 2) on 192.168.56.101: gluster peer probe 192.168.56.100 3) gluster volume create gv0 replica 2 192.168.56.100:/mnt/brick 192.168.56.101:/mnt/brick force 4) gluster volume start gv0 5) mkdir /mnt/gv0 6) mount -t glusterfs 192.168.56.100:/gv0 /mnt/gv0 7) touch /tmp/0length 8) tcpdump -s 0 -w /tmp/glusterfs.pkt 'host 192.168.56.100' 9) cp /tmp/0length /mnt/gv0 10) stop network capture and load into wireshark Actual results: The network capture is 256KByte length and I see 1-1 'proc-27' calls to both glusterfs nodes with 128KB sizes. Expected results: The network capture is a few kilobytes and low network overhead for small files. Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 22 16:18:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 16:18:05 +0000 Subject: [Bugs] [Bug 1691833] Client sends 128KByte network packet for 0 length file copy In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691833 --- Comment #1 from Otto Jonyer --- Very strange for larger sizes: If I use 7 KByte file I get 2*128 KB packets. If I use 15, 31 KByte file I get 2*128 KB + 1*32KB packets. If I use 63, 127 or 129 KByte file I get 6*128KB packets. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
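To put numbers on the overhead Otto describes, the capture from the reproduction steps can be summarized per copy. The sketch below reuses the addresses and paths from the report; the truncate sizes mirror the file sizes in comment #1, and the capinfos/tshark summary calls are my assumption about a convenient way to total the bytes on the wire.

    # create test files of the sizes mentioned in the comments
    truncate -s 0    /tmp/0length
    truncate -s 7K   /tmp/7k
    truncate -s 127K /tmp/127k

    # capture while copying a single file onto the mount, then stop the capture
    tcpdump -s 0 -w /tmp/glusterfs.pkt 'host 192.168.56.100' &
    cp /tmp/7k /mnt/gv0
    kill %1

    # total bytes and per-interval summary of the capture
    capinfos -d /tmp/glusterfs.pkt
    tshark -r /tmp/glusterfs.pkt -q -z io,stat,0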
From bugzilla at redhat.com Fri Mar 22 17:01:42 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 17:01:42 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 Amgad changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(amgad.saleh at nokia | |.com) | --- Comment #48 from Amgad --- That's not the case here. In my scenario, heal is performed after the rolback (from 5.5 to 3.12.15) is done on gfs-1 (gfs-2 and gfs-3new are still on 5.5) and all volumes/bricks were up. I actually did another test, during the rollback for gfs-1, a client generated 128 files. All files existed on nodes gfs-2 and gfs-3new, but not on gfs-1. Heal kept failing despite all bricks are online. Here's the outputs: ================== 1) On gfs-1, the one rolled-back to 3.12.15 [root at gfs-1 ansible2]# gluster --version glusterfs 3.12.15 Repository revision: git://git.gluster.org/glusterfs.git Copyright (c) 2006-2016 Red Hat, Inc. GlusterFS comes with ABSOLUTELY NO WARRANTY. It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation. [root at gfs-1 ansible2]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49152 0 Y 10712 Brick 10.76.153.213:/mnt/data1/1 49155 0 Y 20297 Brick 10.76.153.207:/mnt/data1/1 49155 0 Y 21395 Self-heal Daemon on localhost N/A N/A Y 10703 Self-heal Daemon on 10.76.153.213 N/A N/A Y 20336 Self-heal Daemon on 10.76.153.207 N/A N/A Y 21422 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49153 0 Y 10721 Brick 10.76.153.213:/mnt/data2/2 49156 0 Y 20312 Brick 10.76.153.207:/mnt/data2/2 49156 0 Y 21404 Self-heal Daemon on localhost N/A N/A Y 10703 Self-heal Daemon on 10.76.153.213 N/A N/A Y 20336 Self-heal Daemon on 10.76.153.207 N/A N/A Y 21422 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49154 0 Y 10731 Brick 10.76.153.213:/mnt/data3/3 49157 0 Y 20327 Brick 10.76.153.207:/mnt/data3/3 49157 0 Y 21413 Self-heal Daemon on localhost N/A N/A Y 10703 Self-heal Daemon on 10.76.153.207 N/A N/A Y 21422 Self-heal Daemon on 10.76.153.213 N/A N/A Y 20336 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks [root at gfs-1 ansible2]# for i in glustervol1 glustervol2 glustervol3; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been unsuccessful: Commit failed on 
10.76.153.213. Please check log file for details. Commit failed on 10.76.153.207. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol2 has been unsuccessful: Commit failed on 10.76.153.213. Please check log file for details. Commit failed on 10.76.153.207. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol3 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Commit failed on 10.76.153.213. Please check log file for details. [root at gfs-1 ansible2]# [root at gfs-1 ansible2]# gluster volume heal glustervol3 infoBrick 10.76.153.206:/mnt/data3/3 Status: Connected Number of entries: 0 Brick 10.76.153.213:/mnt/data3/3 /test_file.0 / /test_file.1 /test_file.2 /test_file.3 /test_file.4 .. /test_file.125 /test_file.126 /test_file.127 Status: Connected Number of entries: 129 Brick 10.76.153.207:/mnt/data3/3 /test_file.0 / /test_file.1 /test_file.2 /test_file.3 /test_file.4 ... /test_file.125 /test_file.126 /test_file.127 Status: Connected Number of entries: 129 [root at gfs-1 ansible2]# ls -ltr /mnt/data3/3/ ====> None of the test_file.? exists total 8 -rw-------. 2 root root 0 Mar 11 15:52 c2file3 -rw-------. 2 root root 66 Mar 11 16:37 c1file3 -rw-------. 2 root root 91 Mar 22 16:36 c1file2 [root at gfs-1 ansible2]# 2) On gfs-2, on 5.5 [root at gfs-2 ansible2]# gluster --version glusterfs 5.5 Repository revision: git://git.gluster.org/glusterfs.git Copyright (c) 2006-2016 Red Hat, Inc. GlusterFS comes with ABSOLUTELY NO WARRANTY. It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation. 
[root at gfs-2 ansible2]# [root at gfs-2 ansible2]# gluster volume status Status of volume: glustervol1 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data1/1 49152 0 Y 10712 Brick 10.76.153.213:/mnt/data1/1 49155 0 Y 20297 Brick 10.76.153.207:/mnt/data1/1 49155 0 Y 21395 Self-heal Daemon on localhost N/A N/A Y 20336 Self-heal Daemon on 10.76.153.206 N/A N/A Y 10703 Self-heal Daemon on 10.76.153.207 N/A N/A Y 21422 Task Status of Volume glustervol1 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol2 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data2/2 49153 0 Y 10721 Brick 10.76.153.213:/mnt/data2/2 49156 0 Y 20312 Brick 10.76.153.207:/mnt/data2/2 49156 0 Y 21404 Self-heal Daemon on localhost N/A N/A Y 20336 Self-heal Daemon on 10.76.153.206 N/A N/A Y 10703 Self-heal Daemon on 10.76.153.207 N/A N/A Y 21422 Task Status of Volume glustervol2 ------------------------------------------------------------------------------ There are no active volume tasks Status of volume: glustervol3 Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.76.153.206:/mnt/data3/3 49154 0 Y 10731 Brick 10.76.153.213:/mnt/data3/3 49157 0 Y 20327 Brick 10.76.153.207:/mnt/data3/3 49157 0 Y 21413 Self-heal Daemon on localhost N/A N/A Y 20336 Self-heal Daemon on 10.76.153.206 N/A N/A Y 10703 Self-heal Daemon on 10.76.153.207 N/A N/A Y 21422 Task Status of Volume glustervol3 ------------------------------------------------------------------------------ There are no active volume tasks ** gluster volume heal glustervol3 info has the same output as gfs-1 [root at gfs-2 ansible2]# ls -ltr /mnt/data3/3/ =====> all test_file.? are there total 131080 -rw-------. 2 root root 0 Mar 11 15:52 c2file3 -rw-------. 2 root root 66 Mar 11 16:37 c1file3 -rw-------. 2 root root 91 Mar 22 16:36 c1file2 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.0 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.1 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.2 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.3 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.4 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.5 ........ -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.123 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.124 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.125 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.126 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.127 [root at gfs-2 ansible2]# 3) On gfs-3new, same as gfs-2 [root at gfs-3new ansible2]# ls -ltr /mnt/data3/3/ total 131080 -rw-------. 2 root root 0 Mar 11 15:52 c2file3 -rw-------. 2 root root 66 Mar 11 16:37 c1file3 -rw-------. 2 root root 91 Mar 22 16:36 c1file2 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.0 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.1 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.2 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.3 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.4 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.5 ..... -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.122 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.123 -rw-------. 
2 root root 1048576 Mar 22 16:43 test_file.124 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.125 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.126 -rw-------. 2 root root 1048576 Mar 22 16:43 test_file.127 [root at gfs-3new ansible2]# I'm attaching the logs for this case as well Regards, Amgad -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 17:08:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 17:08:56 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #49 from Amgad --- Created attachment 1547013 --> https://bugzilla.redhat.com/attachment.cgi?id=1547013&action=edit gfs-1 logs when gfs-1 online rolled-back from 5.5 to 3.12.15 with 128 files generated -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 17:09:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 17:09:34 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #50 from Amgad --- Created attachment 1547014 --> https://bugzilla.redhat.com/attachment.cgi?id=1547014&action=edit gfs-2 logs when gfs-1 online rolled-back from 5.5 to 3.12.15 with 128 files generated -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 17:10:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 17:10:12 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #51 from Amgad --- Created attachment 1547015 --> https://bugzilla.redhat.com/attachment.cgi?id=1547015&action=edit gfs-3new logs when gfs-1 online rolled-back from 5.5 to 3.12.15 with 128 files generated -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 17:18:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 17:18:26 +0000 Subject: [Bugs] [Bug 1644322] flooding log with "glusterfs-fuse: read from /dev/fuse returned -1 (Operation not permitted)" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644322 --- Comment #2 from Christian Lohmaier --- yes, there is a possibility of it recovering, however never when it manages to fill up the ~60GB of free disk space on var before - which unfortunately is the case more often than not.. - if it fills the disk, then also the other geo-replication sessions go to faulty state. so if it cannot recover within 10-15 minutes, it likely won't (as the disk is filled up with the log spam) - I'd say we have it once a week. Nothing special about system state AFAICT - at least not a ramp-up of resource usage, if there's anything, it comes and goes in a flash. No effect on rest of the system, apart from var being full and other geo-replication sessions suffering from that. 
The geo-replication session where it occurred last time is in history changelog mode, but I am not sure whether that is a coincidence. I think bug#1643716 might be related, as I think it is more likely to trigger after it failed because of that, i.e. when the geo-replication session keeps relaunching a gvfs mount / switches from failed to initializing. But that as well might be a red herring, as the recovery method used so far is to truncate the logs.... At least that was the case the last time, where I didn't throw away the whole log. The usage pattern on the volume that is geo-replicated is as follows: rsnapshot creates backups from other hosts via rsync, then those backups are rotated using hardlinks, in the directories .sync, daily.[0-6], weekly.[0-3] i.e. it rsyncs to .sync, then mv daily.6 _delete.$pid; mv daily.5 daily.6 (...); cp -al .sync daily.0; rm -r _delete.$pid Thus most of the files are hardlinks. Unfortunately I cannot offer a 100% reliable way to trigger the problem, HTH. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 22 17:42:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 17:42:54 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #20 from Poornima G --- Thank you all for the report. We have the RCA and are working on the patch; it will be posted shortly. The issue was with the size of the payload being sent from the client to the server for operations like lookup and readdirp. Hence workloads involving lookup and readdir would consume a lot of bandwidth. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Mar 24 03:55:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 24 Mar 2019 03:55:36 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #52 from Amgad --- Hi Sanju: I did more testing to take a closer look and here's a finer description of the behavior: 0) Starting with a 3-replica setup: gfs-1, gfs-2, and gfs-3new all on 3.12.15 1) Always had successful replication and success of the "gluster volume heal " command during the online upgrade from 3.12.15 to 5.5 on all three nodes in all steps. 2) During rolling back one node (gfs-1) to 3.12.15, I added files (128 files) to one volume; the files were replicated between the gfs-2 and gfs-3new servers. 3) When rollback was complete on gfs-1 to 3.12.15 (while gfs-2 and gfs-3new were still on 5.5), files didn't replicate to gfs-1 and the "gluster volume heal " command failed (NO bricks were offline). "gluster volume heal info" showed "Number of entries: 129" (128 files and a directory) on the bricks on gfs-2 and gfs-3new. ** Heal never succeeded even after gfs-1 was rebooted. [root at gfs-1 ~]# gluster volume heal glustervol3 info Brick 10.76.153.206:/mnt/data3/3 ==> gfs-1 Status: Connected Number of entries: 0 Brick 10.76.153.213:/mnt/data3/3 ==> gfs-2 /test_file.0 / /test_file.1 /test_file.2 ....... /test_file.124 /test_file.125 /test_file.126 /test_file.127 Status: Connected Number of entries: 129 Brick 10.76.153.207:/mnt/data3/3 ==> gfs-3new /test_file.0 / /test_file.1 /test_file.2 /test_file.3 /test_file.4 .....
/test_file.125 /test_file.126 /test_file.127 Status: Connected Number of entries: 129 [root at gfs-1 ~]# 4) When rolled-back gfs-2 to 3.12.15 (now gfs-1 is on 3.12.15 and gfs-3new is on 5.5), the moment "glusterd" started on gfs-2, replication and heal started and the "Number of entries:" started to go down till "0" within "8" seconds. Brick 10.76.153.206:/mnt/data3/3 Status: Connected Number of entries: 0 Brick 10.76.153.213:/mnt/data3/3 /test_file.0 / - Possibly undergoing heal /test_file.1 /test_file.2 /test_file.3 .. /test_file.124 /test_file.125 /test_file.126 /test_file.127 Status: Connected Number of entries: 129 Brick 10.76.153.207:/mnt/data3/3 /test_file.0 /test_file.4 /test_file.5 /test_file.6 /test_file.7 /test_file.8 .. /test_file.124 /test_file.125 /test_file.126 /test_file.127 Status: Connected Number of entries: 125 ============== Brick 10.76.153.206:/mnt/data3/3 Status: Connected Number of entries: 0 Brick 10.76.153.213:/mnt/data3/3 /test_file.0 /test_file.68 /test_file.69 .. /test_file.124 /test_file.125 /test_file.126 /test_file.127 Status: Connected Number of entries: 61 Brick 10.76.153.207:/mnt/data3/3 /test_file.0 /test_file.76 /test_file.77 /test_file.78 .. /test_file.122 /test_file.123 /test_file.124 /test_file.125 /test_file.126 /test_file.127 Status: Connected Number of entries: 53 ============== Brick 10.76.153.206:/mnt/data3/3 Status: Connected Number of entries: 0 Brick 10.76.153.213:/mnt/data3/3 /test_file.0 Status: Connected Number of entries: 1 Brick 10.76.153.207:/mnt/data3/3 /test_file.0 Status: Connected Number of entries: 1 ================ Brick 10.76.153.206:/mnt/data3/3 Status: Connected Number of entries: 0 Brick 10.76.153.213:/mnt/data3/3 Status: Connected Number of entries: 0 Brick 10.76.153.207:/mnt/data3/3 Status: Connected Number of entries: 0 5) Despite heal started when gfs-2 was rolled-back to 3.12.15 (2-nodes now are on 3.12.15), the command "gluster volume heal " was continuously unsuccessful. No bricks were offline. [root at gfs-1 ~]# for i in glustervol1 glustervol2 glustervol3; do gluster volume heal $i; done Launching heal operation to perform index self heal on volume glustervol1 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol2 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. Launching heal operation to perform index self heal on volume glustervol3 has been unsuccessful: Commit failed on 10.76.153.207. Please check log file for details. You have new mail in /var/spool/mail/root [root at gfs-1 ~]# 6) When the gfs-3new was rolled back (all three servers are on 3.12.15), the command "gluster volume heal " was successful. Conclusions: - "Heal" is not successful with one server is rolled-back to 3.12.15 while the other two are on 5.5. The command "gluster volume heal " is not successful as well - Heal starts once two servers are rolled-back to 3.12.15. - The command "gluster volume heal " is not successful till all servers are rolled-back to 3.12.15. -- You are receiving this mail because: You are on the CC list for the bug. 
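For reference, the check that comment #52 walks through by hand can be scripted along the following lines. This is a minimal sketch, not part of the original report: the volume names glustervol1/2/3 and the "Number of entries" lines come from the heal-info output shown above, while the polling loop and the 5-second interval are illustrative.

# trigger an index self-heal on each volume, the same call that returned
# "Launching heal operation ... has been unsuccessful" in step 5 of comment #52
for vol in glustervol1 glustervol2 glustervol3; do
    gluster volume heal "$vol"
done

# watch the pending-entry counters reported by "heal info" until all bricks show 0
while :; do
    pending=$(gluster volume heal glustervol3 info | awk '/Number of entries:/ {sum += $NF} END {print sum + 0}')
    echo "$(date '+%H:%M:%S') glustervol3 pending heal entries: $pending"
    [ "$pending" -eq 0 ] && break
    sleep 5
done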
From bugzilla at redhat.com Sun Mar 24 09:07:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 24 Mar 2019 09:07:39 +0000 Subject: [Bugs] [Bug 1692093] New: Network throughput usage increased x5 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 Bug ID: 1692093 Summary: Network throughput usage increased x5 Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: core Severity: high Priority: high Assignee: bugs at gluster.org Reporter: pgurusid at redhat.com CC: amukherj at redhat.com, bengoa at gmail.com, bugs at gluster.org, info at netbulae.com, jsecchiero at enter.eu, nbalacha at redhat.com, pgurusid at redhat.com, revirii at googlemail.com, rob.dewit at coosto.com Depends On: 1673058 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1673058 +++ Description of problem: Client network throughput in OUT direction usage increased x5 after an upgrade from 3.11, 3.12 to 5.3 of the server. Now i have ~110Mbps of traffic in OUT direction for each client and on the server i have a total of ~1450Mbps for each gluster server. Watch the attachment for graph before/after upgrade network throughput. Version-Release number of selected component (if applicable): 5.3 How reproducible: upgrade from 3.11, 3.12 to 5.3 Steps to Reproduce: 1. https://docs.gluster.org/en/v3/Upgrade-Guide/upgrade_to_3.12/ 2. https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_5/ Actual results: Network throughput usage increased x5 Expected results: Just the features and the bugfix of the 5.3 release Cluster Information: 2 nodes with 1 volume with 2 distributed brick for each node Number of Peers: 1 Hostname: 10.2.0.180 Uuid: 368055db-9e90-433f-9a56-bfc1507a25c5 State: Peer in Cluster (Connected) Volume Information: Volume Name: storage_other Type: Distributed-Replicate Volume ID: 6857bf2b-c97d-4505-896e-8fbc24bd16e8 Status: Started Snapshot Count: 0 Number of Bricks: 2 x 2 = 4 Transport-type: tcp Bricks: Brick1: 10.2.0.181:/mnt/storage-brick1/data Brick2: 10.2.0.180:/mnt/storage-brick1/data Brick3: 10.2.0.181:/mnt/storage-brick2/data Brick4: 10.2.0.180:/mnt/storage-brick2/data Options Reconfigured: nfs.disable: on transport.address-family: inet storage.fips-mode-rchecksum: on Status of volume: storage_other Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.2.0.181:/mnt/storage-brick1/data 49152 0 Y 1165 Brick 10.2.0.180:/mnt/storage-brick1/data 49152 0 Y 1149 Brick 10.2.0.181:/mnt/storage-brick2/data 49153 0 Y 1166 Brick 10.2.0.180:/mnt/storage-brick2/data 49153 0 Y 1156 Self-heal Daemon on localhost N/A N/A Y 1183 Self-heal Daemon on 10.2.0.180 N/A N/A Y 1166 Task Status of Volume storage_other ------------------------------------------------------------------------------ There are no active volume tasks --- Additional comment from Nithya Balachandran on 2019-02-21 07:53:44 UTC --- Is this high throughput consistent? Please provide a tcpdump of the client process for about 30s to 1 min during the high throughput to see what packets gluster is sending: In a terminal to the client machine: tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22 Wait for 30s-1min and stop the capture. Send us the pcap file. Another user reported that turning off readdir-ahead worked for him. Please try that after capturing the statedump and see if it helps you. 
--- Additional comment from Alberto Bengoa on 2019-02-21 11:17:22 UTC --- (In reply to Nithya Balachandran from comment #1) > Is this high throughput consistent? > Please provide a tcpdump of the client process for about 30s to 1 min during > the high throughput to see what packets gluster is sending: > > In a terminal to the client machine: > tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22 > > Wait for 30s-1min and stop the capture. Send us the pcap file. > > Another user reported that turning off readdir-ahead worked for him. Please > try that after capturing the statedump and see if it helps you. I'm the another user and I can confirm the same behaviour here. On our tests we did: - Mounted the new cluster servers (running 5.3 version) using client 5.3 - Started a find . -type d on a directory with lots of directories. - It generated an outgoing traffic (on the client) of around 90mbps (so, inbound traffic on gluster server). We repeated the same test using 3.8 client (on 5.3 cluster) and the outgoing traffic on the client was just around 1.3 mbps. I can provide pcaps if needed. Cheers, Alberto Bengoa --- Additional comment from Nithya Balachandran on 2019-02-22 04:09:41 UTC --- Assigning this to Amar to be reassigned appropriately. --- Additional comment from Jacob on 2019-02-25 13:42:45 UTC --- i'm not able to upload in the bugzilla portal due to the size of the pcap. You can download from here: https://mega.nz/#!FNY3CS6A!70RpciIzDgNWGwbvEwH-_b88t9e1QVOXyLoN09CG418 --- Additional comment from Poornima G on 2019-03-04 15:23:14 UTC --- Disabling readdir-ahead fixed the issue? --- Additional comment from Hubert on 2019-03-04 15:32:17 UTC --- We seem to have the same problem with a fresh install of glusterfs 5.3 on a debian stretch. We migrated from an existing setup (version 4.1.6, distribute-replicate) to a new setup (version 5.3, replicate), and traffic on clients went up significantly, maybe causing massive iowait on the clients during high-traffic times. Here are some munin graphs: network traffic on high iowait client: https://abload.de/img/client-eth1-traffic76j4i.jpg network traffic on old servers: https://abload.de/img/oldservers-eth1nejzt.jpg network traffic on new servers: https://abload.de/img/newservers-eth17ojkf.jpg performance.readdir-ahead is on by default. I could deactivate it tomorrow morning (07:00 CEST), and provide tcpdump data if necessary. Regards, Hubert --- Additional comment from Hubert on 2019-03-05 12:03:11 UTC --- i set performance.readdir-ahead to off and watched network traffic for about 2 hours now, but traffic is still as high. 5-8 times higher than it was with old 4.1.x volumes. just curious: i see hundreds of thousands of these messages: [2019-03-05 12:02:38.423299] W [dict.c:761:dict_ref] (-->/usr/lib/x86_64-linux-gnu/glusterfs/5.3/xlator/performance/quick-read.so(+0x6df4) [0x7f0db452edf4] -->/usr/lib/x86_64-linux-gnu/glusterfs/5.3/xlator/performance/io-cache.so(+0xa39d) [0x7f0db474039d] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_ref+0x58) [0x7f0dbb7e4a38] ) 5-dict: dict is NULL [Invalid argument] see https://bugzilla.redhat.com/show_bug.cgi?id=1674225 - could this be related? --- Additional comment from Jacob on 2019-03-06 09:54:26 UTC --- Disabling readdir-ahead doesn't change the througput. --- Additional comment from Alberto Bengoa on 2019-03-06 10:07:59 UTC --- Neither to me. BTW, read-ahead/readdir-ahead shouldn't generate traffic in the opposite direction? 
( Server -> Client) --- Additional comment from Nithya Balachandran on 2019-03-06 11:40:49 UTC --- (In reply to Jacob from comment #4) > i'm not able to upload in the bugzilla portal due to the size of the pcap. > You can download from here: > > https://mega.nz/#!FNY3CS6A!70RpciIzDgNWGwbvEwH-_b88t9e1QVOXyLoN09CG418 @Poornima, the following are the calls and instances from the above: 104 proc-1 (stat) 8259 proc-11 (open) 46 proc-14 (statfs) 8239 proc-15 (flush) 8 proc-18 (getxattr) 68 proc-2 (readlink) 5576 proc-27 (lookup) 8388 proc-41 (forget) Not sure if it helps. --- Additional comment from Hubert on 2019-03-07 08:34:21 UTC --- i made a tcpdump as well: tcpdump -i eth1 -s 0 -w /tmp/dirls.pcap tcp and not port 2222 tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes 259699 packets captured 259800 packets received by filter 29 packets dropped by kernel The file is 1.1G big; gzipped and uploaded it: https://ufile.io/5h6i2 Hope this helps. --- Additional comment from Hubert on 2019-03-07 09:00:12 UTC --- Maybe i should add that the relevant IP addresses of the gluster servers are: 192.168.0.50, 192.168.0.51, 192.168.0.52 --- Additional comment from Hubert on 2019-03-18 13:45:51 UTC --- fyi: on a test setup (debian stretch, after upgrade 5.3 -> 5.5) i did a little test: - copied 11GB of data - via rsync: rsync --bwlimit=10000 --inplace --- bandwith limit of max. 10000 KB/s - rsync pulled data over interface eth0 - rsync stats: sent 1,484,200 bytes received 11,402,695,074 bytes 5,166,106.13 bytes/sec - so external traffic average was about 5 MByte/s - result was an internal traffic up to 350 MBit/s (> 40 MByte/s) on eth1 (LAN interface) - graphic of internal traffic: https://abload.de/img/if_eth1-internal-trafdlkcy.png - graphic of external traffic: https://abload.de/img/if_eth0-external-trafrejub.png --- Additional comment from Poornima G on 2019-03-19 06:15:50 UTC --- Apologies for the delay, there have been some changes done to quick-read feature, which deals with reading the content of a file in lookup fop, if the file is smaller than 64KB. I m suspecting that with 5.3 the increase in bandwidth may be due to more number of reads of small file(generated by quick-read). Please try the following: gluster vol set quick-read off gluster vol set read-ahead off gluster vol set io-cache off And let us know if the network bandwidth consumption decreases, meanwhile i will try to reproduce the same locally. --- Additional comment from Hubert on 2019-03-19 08:12:04 UTC --- I deactivated the 3 params and did the same test again. - same rsync params: rsync --bwlimit=10000 --inplace - rsync stats: sent 1,491,733 bytes received 11,444,330,300 bytes 6,703,263.27 bytes/sec - so ~6,7 MByte/s or ~54 MBit/s in average (peak of 60 MBit/s) over external network interface - traffic graphic of the server with rsync command: https://abload.de/img/if_eth1-internal-traf4zjow.png - so server is sending with an average of ~110 MBit/s and with peak at ~125 MBit/s over LAN interface - traffic graphic of one of the replica servers (disregard first curve: is the delete of the old data): https://abload.de/img/if_enp5s0-internal-trn5k9v.png - so one of the replicas receices data with ~55 MBit/s average and peak ~62 MBit/s - as a comparison - traffic before and after changing the 3 params (rsync server, highest curve is relevant): - https://abload.de/img/if_eth1-traffic-befortvkib.png So it looks like the traffic was reduced to about a third. Is it this what you expected? 
If so: traffic would be still a bit higher when i compare 4.1.6 and 5.3 - here's a graphic of one client in our live system after switching from 4.1.6 (~20 MBit/s) to 5.3. (~100 MBit/s in march): https://abload.de/img/if_eth1-comparison-gly8kyx.png So if this traffic gets reduced to 1/3: traffic would be ~33 MBit/s then. Way better, i think. And could be "normal"? Thx so far :-) --- Additional comment from Poornima G on 2019-03-19 09:23:48 UTC --- Awesome thank you for trying it out, i was able to reproduce this issue locally, one of the major culprit was the quick-read. The other two options had no effect in reducing the bandwidth consumption. So for now as a workaround, can disable quick-read: # gluster vol set quick-read off Quick-read alone reduced the bandwidth consumption by 70% for me. Debugging the rest 30% increase. Meanwhile, planning to make this bug a blocker for our next gulster-6 release. Will keep the bug updated with the progress. --- Additional comment from Hubert on 2019-03-19 10:07:35 UTC --- i'm running another test, just alongside... simply deleting and copying data, no big effort. Just curious :-) 2 little questions: - does disabling quick-read have any performance issues for certain setups/scenarios? - bug only blocker for v6 release? update for v5 planned? --- Additional comment from Poornima G on 2019-03-19 10:36:20 UTC --- (In reply to Hubert from comment #17) > i'm running another test, just alongside... simply deleting and copying > data, no big effort. Just curious :-) I think if the volume hosts small files, then any kind of operation around these files will see increased bandwidth usage. > > 2 little questions: > > - does disabling quick-read have any performance issues for certain > setups/scenarios? Small file reads(files with size <= 64kb) will see reduced performance. Eg: web server use case. > - bug only blocker for v6 release? update for v5 planned? Yes there will be updated for v5, not sure when. The updates for major releases are made once in every 3 or 4 weeks not sure. For critical bugs the release will be made earlier. --- Additional comment from Alberto Bengoa on 2019-03-19 11:54:58 UTC --- Hello guys, Thanks for your update Poornima. I was already running quick-read off here so, on my case, I noticed the traffic growing consistently after enabling it. I've made some tests on my scenario, and I wasn't able to reproduce your 70% reduction results. To me, it's near 46% of traffic reduction (from around 103 Mbps to around 55 Mbps, graph attached here: https://pasteboard.co/I68s9qE.png ) What I'm doing is just running a find . type -d on a directory with loads of directories/files. Poornima, if you don't mind to answer a question, why are we seem this traffic on the inbound of gluster servers (outbound of clients)? On my particular case, the traffic should be basically on the opposite direction I think, and I'm very curious about that. Thank you, Alberto --- Additional comment from Poornima G on 2019-03-22 17:42:54 UTC --- Thank You all for the report. We have the RCA, working on the patch will be posting it shortly. The issue was with the size of the payload being sent from the client to server for operations like lookup and readdirp. Hence worakload involving lookup and readdir would consume a lot of bandwidth. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 [Bug 1673058] Network throughput usage increased x5 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
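The workaround that comments #14 and #16 above converge on can be applied per volume as sketched below. This is only a sketch: the volume name storage_other is the one from this report, the fully qualified option names (performance.*) are the canonical forms of the short names used in the comments, and the final "volume get" call is just one way to confirm the change took effect.

# disable quick-read, the main contributor to the extra client-to-server traffic (comment #16)
gluster volume set storage_other performance.quick-read off

# the two additional options suggested in comment #14; they had little effect per comment #16
gluster volume set storage_other performance.read-ahead off
gluster volume set storage_other performance.io-cache off

# verify the current value of the option
gluster volume get storage_other performance.quick-read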
From bugzilla at redhat.com Sun Mar 24 09:07:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 24 Mar 2019 09:07:39 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1692093 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 [Bug 1692093] Network throughput usage increased x5 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Mar 24 09:31:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 24 Mar 2019 09:31:51 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22402 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 24 09:31:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 24 Mar 2019 09:31:53 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22402 (client-rpc: Fix the payload being sent on the wire) posted (#1) for review on master by Poornima G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 24 10:30:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 24 Mar 2019 10:30:11 +0000 Subject: [Bugs] [Bug 1692101] New: Network throughput usage increased x5 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 Bug ID: 1692101 Summary: Network throughput usage increased x5 Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: core Severity: high Priority: high Assignee: bugs at gluster.org Reporter: pgurusid at redhat.com CC: amukherj at redhat.com, bengoa at gmail.com, bugs at gluster.org, info at netbulae.com, jsecchiero at enter.eu, nbalacha at redhat.com, pgurusid at redhat.com, revirii at googlemail.com, rob.dewit at coosto.com Depends On: 1673058 Blocks: 1692093 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1673058 +++ Description of problem: Client network throughput in OUT direction usage increased x5 after an upgrade from 3.11, 3.12 to 5.3 of the server. Now i have ~110Mbps of traffic in OUT direction for each client and on the server i have a total of ~1450Mbps for each gluster server. Watch the attachment for graph before/after upgrade network throughput. Version-Release number of selected component (if applicable): 5.3 How reproducible: upgrade from 3.11, 3.12 to 5.3 Steps to Reproduce: 1. https://docs.gluster.org/en/v3/Upgrade-Guide/upgrade_to_3.12/ 2. 
https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_5/ Actual results: Network throughput usage increased x5 Expected results: Just the features and the bugfix of the 5.3 release Cluster Information: 2 nodes with 1 volume with 2 distributed brick for each node Number of Peers: 1 Hostname: 10.2.0.180 Uuid: 368055db-9e90-433f-9a56-bfc1507a25c5 State: Peer in Cluster (Connected) Volume Information: Volume Name: storage_other Type: Distributed-Replicate Volume ID: 6857bf2b-c97d-4505-896e-8fbc24bd16e8 Status: Started Snapshot Count: 0 Number of Bricks: 2 x 2 = 4 Transport-type: tcp Bricks: Brick1: 10.2.0.181:/mnt/storage-brick1/data Brick2: 10.2.0.180:/mnt/storage-brick1/data Brick3: 10.2.0.181:/mnt/storage-brick2/data Brick4: 10.2.0.180:/mnt/storage-brick2/data Options Reconfigured: nfs.disable: on transport.address-family: inet storage.fips-mode-rchecksum: on Status of volume: storage_other Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.2.0.181:/mnt/storage-brick1/data 49152 0 Y 1165 Brick 10.2.0.180:/mnt/storage-brick1/data 49152 0 Y 1149 Brick 10.2.0.181:/mnt/storage-brick2/data 49153 0 Y 1166 Brick 10.2.0.180:/mnt/storage-brick2/data 49153 0 Y 1156 Self-heal Daemon on localhost N/A N/A Y 1183 Self-heal Daemon on 10.2.0.180 N/A N/A Y 1166 Task Status of Volume storage_other ------------------------------------------------------------------------------ There are no active volume tasks --- Additional comment from Nithya Balachandran on 2019-02-21 07:53:44 UTC --- Is this high throughput consistent? Please provide a tcpdump of the client process for about 30s to 1 min during the high throughput to see what packets gluster is sending: In a terminal to the client machine: tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22 Wait for 30s-1min and stop the capture. Send us the pcap file. Another user reported that turning off readdir-ahead worked for him. Please try that after capturing the statedump and see if it helps you. --- Additional comment from Alberto Bengoa on 2019-02-21 11:17:22 UTC --- (In reply to Nithya Balachandran from comment #1) > Is this high throughput consistent? > Please provide a tcpdump of the client process for about 30s to 1 min during > the high throughput to see what packets gluster is sending: > > In a terminal to the client machine: > tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22 > > Wait for 30s-1min and stop the capture. Send us the pcap file. > > Another user reported that turning off readdir-ahead worked for him. Please > try that after capturing the statedump and see if it helps you. I'm the another user and I can confirm the same behaviour here. On our tests we did: - Mounted the new cluster servers (running 5.3 version) using client 5.3 - Started a find . -type d on a directory with lots of directories. - It generated an outgoing traffic (on the client) of around 90mbps (so, inbound traffic on gluster server). We repeated the same test using 3.8 client (on 5.3 cluster) and the outgoing traffic on the client was just around 1.3 mbps. I can provide pcaps if needed. Cheers, Alberto Bengoa --- Additional comment from Nithya Balachandran on 2019-02-22 04:09:41 UTC --- Assigning this to Amar to be reassigned appropriately. --- Additional comment from Jacob on 2019-02-25 13:42:45 UTC --- i'm not able to upload in the bugzilla portal due to the size of the pcap. 
You can download from here: https://mega.nz/#!FNY3CS6A!70RpciIzDgNWGwbvEwH-_b88t9e1QVOXyLoN09CG418 --- Additional comment from Poornima G on 2019-03-04 15:23:14 UTC --- Disabling readdir-ahead fixed the issue? --- Additional comment from Hubert on 2019-03-04 15:32:17 UTC --- We seem to have the same problem with a fresh install of glusterfs 5.3 on a debian stretch. We migrated from an existing setup (version 4.1.6, distribute-replicate) to a new setup (version 5.3, replicate), and traffic on clients went up significantly, maybe causing massive iowait on the clients during high-traffic times. Here are some munin graphs: network traffic on high iowait client: https://abload.de/img/client-eth1-traffic76j4i.jpg network traffic on old servers: https://abload.de/img/oldservers-eth1nejzt.jpg network traffic on new servers: https://abload.de/img/newservers-eth17ojkf.jpg performance.readdir-ahead is on by default. I could deactivate it tomorrow morning (07:00 CEST), and provide tcpdump data if necessary. Regards, Hubert --- Additional comment from Hubert on 2019-03-05 12:03:11 UTC --- i set performance.readdir-ahead to off and watched network traffic for about 2 hours now, but traffic is still as high. 5-8 times higher than it was with old 4.1.x volumes. just curious: i see hundreds of thousands of these messages: [2019-03-05 12:02:38.423299] W [dict.c:761:dict_ref] (-->/usr/lib/x86_64-linux-gnu/glusterfs/5.3/xlator/performance/quick-read.so(+0x6df4) [0x7f0db452edf4] -->/usr/lib/x86_64-linux-gnu/glusterfs/5.3/xlator/performance/io-cache.so(+0xa39d) [0x7f0db474039d] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_ref+0x58) [0x7f0dbb7e4a38] ) 5-dict: dict is NULL [Invalid argument] see https://bugzilla.redhat.com/show_bug.cgi?id=1674225 - could this be related? --- Additional comment from Jacob on 2019-03-06 09:54:26 UTC --- Disabling readdir-ahead doesn't change the througput. --- Additional comment from Alberto Bengoa on 2019-03-06 10:07:59 UTC --- Neither to me. BTW, read-ahead/readdir-ahead shouldn't generate traffic in the opposite direction? ( Server -> Client) --- Additional comment from Nithya Balachandran on 2019-03-06 11:40:49 UTC --- (In reply to Jacob from comment #4) > i'm not able to upload in the bugzilla portal due to the size of the pcap. > You can download from here: > > https://mega.nz/#!FNY3CS6A!70RpciIzDgNWGwbvEwH-_b88t9e1QVOXyLoN09CG418 @Poornima, the following are the calls and instances from the above: 104 proc-1 (stat) 8259 proc-11 (open) 46 proc-14 (statfs) 8239 proc-15 (flush) 8 proc-18 (getxattr) 68 proc-2 (readlink) 5576 proc-27 (lookup) 8388 proc-41 (forget) Not sure if it helps. --- Additional comment from Hubert on 2019-03-07 08:34:21 UTC --- i made a tcpdump as well: tcpdump -i eth1 -s 0 -w /tmp/dirls.pcap tcp and not port 2222 tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes 259699 packets captured 259800 packets received by filter 29 packets dropped by kernel The file is 1.1G big; gzipped and uploaded it: https://ufile.io/5h6i2 Hope this helps. --- Additional comment from Hubert on 2019-03-07 09:00:12 UTC --- Maybe i should add that the relevant IP addresses of the gluster servers are: 192.168.0.50, 192.168.0.51, 192.168.0.52 --- Additional comment from Hubert on 2019-03-18 13:45:51 UTC --- fyi: on a test setup (debian stretch, after upgrade 5.3 -> 5.5) i did a little test: - copied 11GB of data - via rsync: rsync --bwlimit=10000 --inplace --- bandwith limit of max. 
10000 KB/s - rsync pulled data over interface eth0 - rsync stats: sent 1,484,200 bytes received 11,402,695,074 bytes 5,166,106.13 bytes/sec - so external traffic average was about 5 MByte/s - result was an internal traffic up to 350 MBit/s (> 40 MByte/s) on eth1 (LAN interface) - graphic of internal traffic: https://abload.de/img/if_eth1-internal-trafdlkcy.png - graphic of external traffic: https://abload.de/img/if_eth0-external-trafrejub.png --- Additional comment from Poornima G on 2019-03-19 06:15:50 UTC --- Apologies for the delay, there have been some changes done to quick-read feature, which deals with reading the content of a file in lookup fop, if the file is smaller than 64KB. I m suspecting that with 5.3 the increase in bandwidth may be due to more number of reads of small file(generated by quick-read). Please try the following: gluster vol set quick-read off gluster vol set read-ahead off gluster vol set io-cache off And let us know if the network bandwidth consumption decreases, meanwhile i will try to reproduce the same locally. --- Additional comment from Hubert on 2019-03-19 08:12:04 UTC --- I deactivated the 3 params and did the same test again. - same rsync params: rsync --bwlimit=10000 --inplace - rsync stats: sent 1,491,733 bytes received 11,444,330,300 bytes 6,703,263.27 bytes/sec - so ~6,7 MByte/s or ~54 MBit/s in average (peak of 60 MBit/s) over external network interface - traffic graphic of the server with rsync command: https://abload.de/img/if_eth1-internal-traf4zjow.png - so server is sending with an average of ~110 MBit/s and with peak at ~125 MBit/s over LAN interface - traffic graphic of one of the replica servers (disregard first curve: is the delete of the old data): https://abload.de/img/if_enp5s0-internal-trn5k9v.png - so one of the replicas receices data with ~55 MBit/s average and peak ~62 MBit/s - as a comparison - traffic before and after changing the 3 params (rsync server, highest curve is relevant): - https://abload.de/img/if_eth1-traffic-befortvkib.png So it looks like the traffic was reduced to about a third. Is it this what you expected? If so: traffic would be still a bit higher when i compare 4.1.6 and 5.3 - here's a graphic of one client in our live system after switching from 4.1.6 (~20 MBit/s) to 5.3. (~100 MBit/s in march): https://abload.de/img/if_eth1-comparison-gly8kyx.png So if this traffic gets reduced to 1/3: traffic would be ~33 MBit/s then. Way better, i think. And could be "normal"? Thx so far :-) --- Additional comment from Poornima G on 2019-03-19 09:23:48 UTC --- Awesome thank you for trying it out, i was able to reproduce this issue locally, one of the major culprit was the quick-read. The other two options had no effect in reducing the bandwidth consumption. So for now as a workaround, can disable quick-read: # gluster vol set quick-read off Quick-read alone reduced the bandwidth consumption by 70% for me. Debugging the rest 30% increase. Meanwhile, planning to make this bug a blocker for our next gulster-6 release. Will keep the bug updated with the progress. --- Additional comment from Hubert on 2019-03-19 10:07:35 UTC --- i'm running another test, just alongside... simply deleting and copying data, no big effort. Just curious :-) 2 little questions: - does disabling quick-read have any performance issues for certain setups/scenarios? - bug only blocker for v6 release? update for v5 planned? 
--- Additional comment from Poornima G on 2019-03-19 10:36:20 UTC --- (In reply to Hubert from comment #17) > i'm running another test, just alongside... simply deleting and copying > data, no big effort. Just curious :-) I think if the volume hosts small files, then any kind of operation around these files will see increased bandwidth usage. > > 2 little questions: > > - does disabling quick-read have any performance issues for certain > setups/scenarios? Small file reads(files with size <= 64kb) will see reduced performance. Eg: web server use case. > - bug only blocker for v6 release? update for v5 planned? Yes there will be updated for v5, not sure when. The updates for major releases are made once in every 3 or 4 weeks not sure. For critical bugs the release will be made earlier. --- Additional comment from Alberto Bengoa on 2019-03-19 11:54:58 UTC --- Hello guys, Thanks for your update Poornima. I was already running quick-read off here so, on my case, I noticed the traffic growing consistently after enabling it. I've made some tests on my scenario, and I wasn't able to reproduce your 70% reduction results. To me, it's near 46% of traffic reduction (from around 103 Mbps to around 55 Mbps, graph attached here: https://pasteboard.co/I68s9qE.png ) What I'm doing is just running a find . type -d on a directory with loads of directories/files. Poornima, if you don't mind to answer a question, why are we seem this traffic on the inbound of gluster servers (outbound of clients)? On my particular case, the traffic should be basically on the opposite direction I think, and I'm very curious about that. Thank you, Alberto --- Additional comment from Poornima G on 2019-03-22 17:42:54 UTC --- Thank You all for the report. We have the RCA, working on the patch will be posting it shortly. The issue was with the size of the payload being sent from the client to server for operations like lookup and readdirp. Hence worakload involving lookup and readdir would consume a lot of bandwidth. Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 [Bug 1673058] Network throughput usage increased x5 https://bugzilla.redhat.com/show_bug.cgi?id=1692093 [Bug 1692093] Network throughput usage increased x5 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 24 10:30:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 24 Mar 2019 10:30:11 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1692101 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 [Bug 1692101] Network throughput usage increased x5 -- You are receiving this mail because: You are on the CC list for the bug. 
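The reproduction and measurement procedure referenced throughout the two clone reports above amounts to capturing client-side traffic while running a directory walk. A rough sketch, assuming a FUSE mount at /mnt/glustervol (hypothetical) and the capture filter requested in comment #1; capinfos comes from the Wireshark CLI tools and is only one way to summarise the resulting capture.

# start the capture requested in comment #1 (client side, everything except ssh)
tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22 &
capture_pid=$!

# the workload from comment #19 that triggers the heavy lookup/readdirp traffic
( cd /mnt/glustervol && find . -type d > /dev/null )

sleep 30
kill "$capture_pid"

# summarise capture size and data rate
capinfos /var/tmp/dirls.pcap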
From bugzilla at redhat.com Sun Mar 24 10:30:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 24 Mar 2019 10:30:11 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1692101 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 [Bug 1692101] Network throughput usage increased x5 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 24 10:32:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 24 Mar 2019 10:32:06 +0000 Subject: [Bugs] [Bug 1692101] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22403 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 24 10:32:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 24 Mar 2019 10:32:07 +0000 Subject: [Bugs] [Bug 1692101] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22403 (client-rpc: Fix the payload being sent on the wire) posted (#1) for review on release-6 by Poornima G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 24 10:42:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 24 Mar 2019 10:42:31 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22404 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Mar 24 10:42:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 24 Mar 2019 10:42:32 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #21 from Worker Ant --- REVIEW: https://review.gluster.org/22404 (client-rpc: Fix the payload being sent on the wire) posted (#1) for review on release-5 by Poornima G -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 05:35:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 05:35:39 +0000 Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported) components in the build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635688 --- Comment #22 from Worker Ant --- REVIEW: https://review.gluster.org/22381 (Multiple files: remove HAVE_BD_XLATOR related code.) merged (#3) on master by Atin Mukherjee -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 08:24:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 08:24:28 +0000 Subject: [Bugs] [Bug 1676812] Manual Index heal throws error which is misguiding when heal is triggered to heal a brick if another brick is down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676812 Bug 1676812 depends on bug 1603082, which changed state. Bug 1603082 Summary: Manual Index heal throws error which is misguiding when heal is triggered to heal a brick if another brick is down https://bugzilla.redhat.com/show_bug.cgi?id=1603082 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |WONTFIX -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 08:38:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 08:38:35 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22406 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 08:38:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 08:38:36 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #595 from Worker Ant --- REVIEW: https://review.gluster.org/22406 (cluster/afr: Remove un-used variables related to pump) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 10:29:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 10:29:49 +0000 Subject: [Bugs] [Bug 1590385] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22407 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 10:29:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 10:29:50 +0000 Subject: [Bugs] [Bug 1590385] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 --- Comment #11 from Worker Ant --- REVIEW: https://review.gluster.org/22407 (cluster/dht: refactor dht lookup functions) posted (#1) for review on master by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 12:32:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 12:32:27 +0000 Subject: [Bugs] [Bug 1692349] New: gluster-csi-containers job is failing Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692349 Bug ID: 1692349 Summary: gluster-csi-containers job is failing Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: dkhandel at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: gluster-csi-containers nightly jenkins job is failing from so long because of no space left on device. This job is aimed to build gluster-csi containers and push it to dockerhub. https://build.gluster.org/job/gluster-csi-containers/200/console Do we need this job anymore or we can delete it? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 13:58:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 13:58:17 +0000 Subject: [Bugs] [Bug 1692394] New: GlusterFS 6.1 tracker Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Bug ID: 1692394 Summary: GlusterFS 6.1 tracker Product: GlusterFS Version: 6 Status: NEW Component: core Keywords: Tracking, Triaged Assignee: bugs at gluster.org Reporter: srangana at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Tracker for the release 6.1 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 14:29:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 14:29:04 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #53 from Amgad --- Hi Sanju: I just saw the 5.5 CentOS RPMs posted this morning! Any change, if not would you kindly update the status for the rollback issue here. Regards, Amgad -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 15:40:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 15:40:38 +0000 Subject: [Bugs] [Bug 1684496] compiler errors building qemu against glusterfs-6.0-0.1.rc0.fc30 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684496 Bug 1684496 depends on bug 1684298, which changed state. 
Bug 1684298 Summary: glusterfs 6 changed API, qemu needs adjustments https://bugzilla.redhat.com/show_bug.cgi?id=1684298 What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED Resolution|--- |RAWHIDE -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 15:48:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 15:48:25 +0000 Subject: [Bugs] [Bug 1692441] New: [GSS] Problems using ls or find on volumes using RDMA transport Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692441 Bug ID: 1692441 Summary: [GSS] Problems using ls or find on volumes using RDMA transport Product: Red Hat Gluster Storage Version: 3.4 Hardware: x86_64 OS: Linux Status: NEW Component: rdma Keywords: Triaged Severity: high Assignee: rkavunga at redhat.com Reporter: ccalhoun at redhat.com QA Contact: rhinduja at redhat.com CC: bugs at gluster.org, iheim at redhat.com, jkinney at emory.edu, rgowdapp at redhat.com, rhs-bugs at redhat.com, rkavunga at redhat.com, sankarshan at redhat.com, shane at axiomalaska.com Depends On: 1532842 Target Milestone: --- Classification: Red Hat Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1532842 [Bug 1532842] Large directories in disperse volumes with rdma transport can't be accessed with ls -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 15:48:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 15:48:25 +0000 Subject: [Bugs] [Bug 1532842] Large directories in disperse volumes with rdma transport can't be accessed with ls In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1532842 Cal Calhoun changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1692441 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692441 [Bug 1692441] [GSS] Problems using ls or find on volumes using RDMA transport -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 15:48:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 15:48:50 +0000 Subject: [Bugs] [Bug 1692441] [GSS] Problems using ls or find on volumes using RDMA transport In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692441 Cal Calhoun changed: What |Removed |Added ---------------------------------------------------------------------------- Priority|unspecified |high -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:11 +0000 Subject: [Bugs] [Bug 1138841] allow the use of the CIDR format with auth.allow In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1138841 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed|2018-11-20 09:08:53 |2019-03-25 16:30:11 --- Comment #8 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. 
In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:11 +0000 Subject: [Bugs] [Bug 1236272] socket: Use newer system calls that provide better interface/performance on Linux/*BSD when available In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1236272 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:11 --- Comment #14 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:11 +0000 Subject: [Bugs] [Bug 1243991] "gluster volume set group " is not in the help text In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1243991 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:30:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:11 +0000 Subject: [Bugs] [Bug 1285126] RFE: GlusterFS NFS does not implement an all_squash volume setting In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1285126 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:11 --- Comment #6 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:11 +0000 Subject: [Bugs] [Bug 1343926] port-map: let brick choose its own port In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1343926 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:11 --- Comment #10 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:11 +0000 Subject: [Bugs] [Bug 1364707] Remove deprecated stripe xlator In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1364707 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:11 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:15 +0000 Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported) components in the build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635688 Bug 1635688 depends on bug 1364707, which changed state. Bug 1364707 Summary: Remove deprecated stripe xlator https://bugzilla.redhat.com/show_bug.cgi?id=1364707 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:11 +0000 Subject: [Bugs] [Bug 1427397] script to strace processes consuming high CPU In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1427397 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:11 --- Comment #6 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:11 +0000 Subject: [Bugs] [Bug 1467614] Gluster read/write performance improvements on NVMe backend In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1467614 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version|glusterfs-4.0.0 |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed|2018-03-15 11:17:12 |2019-03-25 16:30:11 --- Comment #77 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:30:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:16 +0000 Subject: [Bugs] [Bug 1495397] Make event-history feature configurable and have it disabled by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1495397 Bug 1495397 depends on bug 1467614, which changed state. Bug 1467614 Summary: Gluster read/write performance improvements on NVMe backend https://bugzilla.redhat.com/show_bug.cgi?id=1467614 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:17 +0000 Subject: [Bugs] [Bug 1495430] Make event-history feature configurable and have it disabled by default In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1495430 Bug 1495430 depends on bug 1467614, which changed state. Bug 1467614 Summary: Gluster read/write performance improvements on NVMe backend https://bugzilla.redhat.com/show_bug.cgi?id=1467614 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:19 +0000 Subject: [Bugs] [Bug 1486532] need a script to resolve backtraces In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1486532 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:19 --- Comment #7 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:19 +0000 Subject: [Bugs] [Bug 1511339] In Replica volume 2*2 when quorum is set, after glusterd restart nfs server is coming up instead of self-heal daemon In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1511339 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version|glusterfs-4.0.0 |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed|2018-03-15 11:20:54 |2019-03-25 16:30:19 --- Comment #7 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. 
In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:22 +0000 Subject: [Bugs] [Bug 1511768] In Replica volume 2*2 when quorum is set, after glusterd restart nfs server is coming up instead of self-heal daemon In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1511768 Bug 1511768 depends on bug 1511339, which changed state. Bug 1511339 Summary: In Replica volume 2*2 when quorum is set, after glusterd restart nfs server is coming up instead of self-heal daemon https://bugzilla.redhat.com/show_bug.cgi?id=1511339 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:22 +0000 Subject: [Bugs] [Bug 1511782] In Replica volume 2*2 when quorum is set, after glusterd restart nfs server is coming up instead of self-heal daemon In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1511782 Bug 1511782 depends on bug 1511339, which changed state. Bug 1511339 Summary: In Replica volume 2*2 when quorum is set, after glusterd restart nfs server is coming up instead of self-heal daemon https://bugzilla.redhat.com/show_bug.cgi?id=1511339 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:19 +0000 Subject: [Bugs] [Bug 1535495] Add option -h and --help to gluster cli In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1535495 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:19 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
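Bug 1535495 above tracks adding `-h` and `--help` to the gluster CLI. A small sketch that fetches the help text, written defensively on the assumption that older builds only understand the bare `gluster help` sub-command:

    import subprocess

    def cli_help_text():
        # Try "gluster --help" first (the new form from bug 1535495), then fall
        # back to "gluster help" for builds that predate it.
        for args in (["gluster", "--help"], ["gluster", "help"]):
            result = subprocess.run(args, capture_output=True, text=True)
            if result.returncode == 0 and result.stdout:
                return result.stdout
        raise RuntimeError("gluster CLI did not print any help text")

    if __name__ == "__main__":
        print(cli_help_text())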
From bugzilla at redhat.com Mon Mar 25 16:30:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:19 +0000 Subject: [Bugs] [Bug 1535528] Gluster cli show no help message in prompt In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1535528 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:19 +0000 Subject: [Bugs] [Bug 1560561] systemd service file enhancements In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1560561 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:19 --- Comment #7 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:19 +0000 Subject: [Bugs] [Bug 1560969] Garbage collect inactive inodes in fuse-bridge In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1560969 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:19 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:19 +0000 Subject: [Bugs] [Bug 1564149] Agree upon a coding standard, and automate check for this in smoke In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1564149 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version|glusterfs-5.0 |glusterfs-6.0 --- Comment #45 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:19 +0000 Subject: [Bugs] [Bug 1564890] mount.glusterfs: can't shift that many In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1564890 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:19 --- Comment #7 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:27 +0000 Subject: [Bugs] [Bug 1575836] logic in S30samba-start.sh hook script needs tweaking In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1575836 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:27 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. 
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:27 +0000 Subject: [Bugs] [Bug 1579788] Thin-arbiter: Have the state of volume in memory In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1579788 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version|glusterfs-5.0 |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed|2018-10-23 15:09:12 |2019-03-25 16:30:27 --- Comment #17 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:29 +0000 Subject: [Bugs] [Bug 1648205] Thin-arbiter: Have the state of volume in memory and use it for shd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648205 Bug 1648205 depends on bug 1579788, which changed state. Bug 1579788 Summary: Thin-arbiter: Have the state of volume in memory https://bugzilla.redhat.com/show_bug.cgi?id=1579788 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:27 +0000 Subject: [Bugs] [Bug 1582516] libgfapi: glfs init fails on afr volume with ctime feature enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1582516 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version|glusterfs-5.0 |glusterfs-6.0 --- Comment #6 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:27 +0000 Subject: [Bugs] [Bug 1590385] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version|glusterfs-5.0 |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed|2018-10-23 15:11:13 |2019-03-25 16:30:27 --- Comment #12 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:27 +0000 Subject: [Bugs] [Bug 1593538] ctime: Access time is different with in same replica/EC volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593538 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:27 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:30 +0000 Subject: [Bugs] [Bug 1633015] ctime: Access time is different with in same replica/EC volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1633015 Bug 1633015 depends on bug 1593538, which changed state. Bug 1593538 Summary: ctime: Access time is different with in same replica/EC volume https://bugzilla.redhat.com/show_bug.cgi?id=1593538 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. 
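Bugs 1593538 and 1633015 above deal with access-time consistency under the ctime feature, which is toggled per volume. A sketch of turning it on, assuming the volume option is named features.ctime (as in recent releases) and using a hypothetical volume "myvol":

    import subprocess

    def set_volume_option(volume, option, value):
        # Generic wrapper around "gluster volume set <VOLNAME> <OPTION> <VALUE>";
        # check=True raises if glusterd rejects the option.
        subprocess.run(
            ["gluster", "volume", "set", volume, option, value],
            check=True,
        )

    if __name__ == "__main__":
        # Enable the consistent-time (ctime) feature discussed in the bugs above.
        set_volume_option("myvol", "features.ctime", "on")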
From bugzilla at redhat.com Mon Mar 25 16:30:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:27 +0000 Subject: [Bugs] [Bug 1596787] glusterfs rpc-clnt.c: error returned while attempting to connect to host: (null), port 0 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1596787 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:27 --- Comment #9 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:27 +0000 Subject: [Bugs] [Bug 1598345] gluster get-state command is crashing glusterd process when geo-replication is configured In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1598345 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version|glusterfs-5.0 |glusterfs-6.0 --- Comment #7 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:27 +0000 Subject: [Bugs] [Bug 1600145] [geo-rep]: Worker still ACTIVE after killing bricks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1600145 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:27 --- Comment #7 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:33 +0000 Subject: [Bugs] [Bug 1605056] [RHHi] Mount hung and not accessible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1605056 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version|glusterfs-5.0 |glusterfs-6.0 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:33 +0000 Subject: [Bugs] [Bug 1605077] If a node disconnects during volume delete, it assumes deleted volume as a freshly created volume when it is back online In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1605077 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version|glusterfs-5.0 |glusterfs-6.0 --- Comment #6 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:33 +0000 Subject: [Bugs] [Bug 1608512] cluster.server-quorum-type help text is missing possible settings In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1608512 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:33 --- Comment #6 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. 
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:33 +0000 Subject: [Bugs] [Bug 1624006] /var/run/gluster/metrics/ wasn't created automatically In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624006 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #8 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:33 +0000 Subject: [Bugs] [Bug 1624332] [Thin-arbiter]: Add tests for thin arbiter feature In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624332 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:33 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
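Bug 1608512 above is about the help text for cluster.server-quorum-type not listing its possible settings. The option takes either "none" or "server"; a sketch that validates and sets it, with "myvol" as a hypothetical volume name:

    import subprocess

    VALID_SERVER_QUORUM_TYPES = ("none", "server")  # the two documented settings

    def set_server_quorum(volume, quorum_type):
        if quorum_type not in VALID_SERVER_QUORUM_TYPES:
            raise ValueError(f"unknown server quorum type: {quorum_type}")
        subprocess.run(
            ["gluster", "volume", "set", volume,
             "cluster.server-quorum-type", quorum_type],
            check=True,
        )

    if __name__ == "__main__":
        set_server_quorum("myvol", "server")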
From bugzilla at redhat.com Mon Mar 25 16:30:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:33 +0000 Subject: [Bugs] [Bug 1624724] ctime: Enable ctime feature by default and also improve usability by providing single option to enable In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624724 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed|2018-11-28 04:35:18 |2019-03-25 16:30:33 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:33 +0000 Subject: [Bugs] [Bug 1624796] mkdir -p fails with "No data available" when root-squash is enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1624796 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:33 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:33 +0000 Subject: [Bugs] [Bug 1625850] tests: fixes to bug-1015990-rep.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1625850 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:33 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:38 +0000 Subject: [Bugs] [Bug 1625961] Writes taking very long time leading to system hogging In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1625961 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:38 +0000 Subject: [Bugs] [Bug 1626313] fix glfs_fini related problems In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1626313 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:38 --- Comment #17 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:38 +0000 Subject: [Bugs] [Bug 1626610] [USS]: Change gf_log to gf_msg In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1626610 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. 
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:38 +0000 Subject: [Bugs] [Bug 1626994] split-brain observed on parent dir In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1626994 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:38 +0000 Subject: [Bugs] [Bug 1627610] glusterd crash in regression build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1627610 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:38 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:40 +0000 Subject: [Bugs] [Bug 1631418] glusterd crash in regression build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1631418 Bug 1631418 depends on bug 1627610, which changed state. Bug 1627610 Summary: glusterd crash in regression build https://bugzilla.redhat.com/show_bug.cgi?id=1627610 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. 
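Bug 1626994 above reports split-brain observed on a parent directory. Split-brain entries are inspected and resolved through the heal sub-commands; a sketch follows, with "myvol" and the file path as hypothetical placeholders and "latest-mtime" as one of the CLI's resolution policies:

    import subprocess

    def list_split_brain(volume):
        # "gluster volume heal <VOLNAME> info split-brain" lists entries the
        # self-heal daemon could not reconcile automatically.
        out = subprocess.run(
            ["gluster", "volume", "heal", volume, "info", "split-brain"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    def resolve_by_latest_mtime(volume, path):
        # Pick the replica copy with the newest modification time as the source.
        subprocess.run(
            ["gluster", "volume", "heal", volume,
             "split-brain", "latest-mtime", path],
            check=True,
        )

    if __name__ == "__main__":
        print(list_split_brain("myvol"))
        # resolve_by_latest_mtime("myvol", "/path/inside/volume")  # hypothetical path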
From bugzilla at redhat.com Mon Mar 25 16:30:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:41 +0000 Subject: [Bugs] [Bug 1633552] glusterd crash in regression build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1633552 Bug 1633552 depends on bug 1627610, which changed state. Bug 1627610 Summary: glusterd crash in regression build https://bugzilla.redhat.com/show_bug.cgi?id=1627610 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:38 +0000 Subject: [Bugs] [Bug 1627620] SAS job aborts complaining about file doesn't exist In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1627620 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version|glusterfs-5.0 |glusterfs-6.0 --- Comment #8 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:38 +0000 Subject: [Bugs] [Bug 1628194] tests/dht: Additional tests for dht operations In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1628194 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:38 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:30:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:38 +0000 Subject: [Bugs] [Bug 1628605] One client hangs when another client loses communication with bricks during intensive write I/O In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1628605 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:38 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:43 +0000 Subject: [Bugs] [Bug 1628664] Update op-version from 4.2 to 5.0 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1628664 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:43 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:45 +0000 Subject: [Bugs] [Bug 1628668] Update op-version from 4.2 to 5.0 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1628668 Bug 1628668 depends on bug 1628664, which changed state. Bug 1628664 Summary: Update op-version from 4.2 to 5.0 https://bugzilla.redhat.com/show_bug.cgi?id=1628664 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. 
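Bugs 1628664 and 1628668 above track updating the cluster op-version from 4.2 to 5.0. Operationally, the op-version is raised cluster-wide only after every peer runs the newer binaries; a sketch follows, assuming 50000 is the op-version number corresponding to the 5.0 release (confirm the exact value against the release notes of the installed version):

    import subprocess

    def current_op_version():
        # "gluster volume get all cluster.op-version" reports the active value.
        out = subprocess.run(
            ["gluster", "volume", "get", "all", "cluster.op-version"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    def bump_op_version(target):
        # Raise the cluster op-version once all peers run the newer binaries.
        subprocess.run(
            ["gluster", "volume", "set", "all", "cluster.op-version", str(target)],
            check=True,
        )

    if __name__ == "__main__":
        print(current_op_version())
        bump_op_version(50000)  # assumed mapping for the 5.0 op-version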
From bugzilla at redhat.com Mon Mar 25 16:30:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:43 +0000 Subject: [Bugs] [Bug 1629561] geo-rep: geo-rep config set fails to set rsync-options In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1629561 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:43 +0000 Subject: [Bugs] [Bug 1630368] Low Random write IOPS in VM workloads In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1630368 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed|2018-11-20 09:12:46 |2019-03-25 16:30:43 --- Comment #13 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:48 +0000 Subject: [Bugs] [Bug 1635972] Low Random write IOPS in VM workloads In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635972 Bug 1635972 depends on bug 1630368, which changed state. Bug 1630368 Summary: Low Random write IOPS in VM workloads https://bugzilla.redhat.com/show_bug.cgi?id=1630368 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:49 +0000 Subject: [Bugs] [Bug 1635976] Low Random write IOPS in VM workloads In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635976 Bug 1635976 depends on bug 1630368, which changed state. 
Bug 1630368 Summary: Low Random write IOPS in VM workloads https://bugzilla.redhat.com/show_bug.cgi?id=1630368 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:50 +0000 Subject: [Bugs] [Bug 1635980] Low Random write IOPS in VM workloads In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635980 Bug 1635980 depends on bug 1630368, which changed state. Bug 1630368 Summary: Low Random write IOPS in VM workloads https://bugzilla.redhat.com/show_bug.cgi?id=1630368 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:43 +0000 Subject: [Bugs] [Bug 1630798] Add performance options to virt profile In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1630798 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:30:43 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:30:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:30:43 +0000 Subject: [Bugs] [Bug 1630804] libgfapi-python: test_listdir_with_stat and test_scandir failure on release 5 branch In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1630804 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version|glusterfs-5.0 |glusterfs-6.0 --- Comment #32 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. 
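Bug 1630804 above references the libgfapi-python tests test_listdir_with_stat and test_scandir. A sketch of exercising those two calls through the Python binding; the import path, Volume constructor, and mount/umount calls are assumptions about the binding's API, while the listdir_with_stat and scandir names come from the test names in the bug title:

    # Assumes the libgfapi-python binding is installed and importable as
    # gluster.gfapi; adjust the import and constructor to the installed package.
    from gluster import gfapi

    def list_root(host, volname):
        vol = gfapi.Volume(host, volname)   # assumed constructor signature
        vol.mount()
        try:
            # Directory listing with stat information, then a scandir-style walk.
            print(vol.listdir_with_stat("/"))
            print(list(vol.scandir("/")))
        finally:
            vol.umount()

    if __name__ == "__main__":
        list_root("localhost", "myvol")  # hypothetical host and volume name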
From bugzilla at redhat.com Mon Mar 25 16:30:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:30:43 +0000
Subject: [Bugs] [Bug 1630922] glusterd crashed and core generated at gd_mgmt_v3_unlock_timer_cbk after huge number of volumes were created
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1630922

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:30:43

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:30:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:30:43 +0000
Subject: [Bugs] [Bug 1631128] rpc marks brick disconnected from glusterd & volume stop transaction gets timed out
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1631128

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:30:43

--- Comment #7 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:30:43 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:30:43 +0000
Subject: [Bugs] [Bug 1631357] glusterfsd keeping fd open in index xlator after stop the volume
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1631357

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:30:43

--- Comment #5 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:30:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:30:59 +0000
Subject: [Bugs] [Bug 1631886] Update database profile settings for gluster
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1631886

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:30:59

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:00 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:00 +0000
Subject: [Bugs] [Bug 1644120] Update database profile settings for gluster
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1644120

Bug 1644120 depends on bug 1631886, which changed state.

Bug 1631886 Summary: Update database profile settings for gluster
https://bugzilla.redhat.com/show_bug.cgi?id=1631886

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:30:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:30:59 +0000
Subject: [Bugs] [Bug 1632161] [Disperse] : Set others.eager-lock on for ec-1468261.t test to pass
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1632161

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Fixed In Version| |glusterfs-6.0
Resolution|NEXTRELEASE |CURRENTRELEASE

--- Comment #4 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:30:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:30:59 +0000
Subject: [Bugs] [Bug 1632236] Provide indication at the console or in the logs about the progress being made with changelog processing.
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1632236

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:30:59

--- Comment #4 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:30:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:30:59 +0000
Subject: [Bugs] [Bug 1632503] FUSE client segfaults when performance.md-cache-statfs is enabled for a volume
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1632503

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:30:59

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Mar 25 16:30:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:30:59 +0000
Subject: [Bugs] [Bug 1632717] EC crashes when running on non 64-bit architectures
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1632717

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:30:59

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:05 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:05 +0000
Subject: [Bugs] [Bug 1633242] 'df' shows half as much space on volume after upgrade to RHGS 3.4
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1633242

Bug 1633242 depends on bug 1632889, which changed state.

Bug 1632889 Summary: 'df' shows half as much space on volume after upgrade to RHGS 3.4
https://bugzilla.redhat.com/show_bug.cgi?id=1632889

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:06 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:06 +0000
Subject: [Bugs] [Bug 1633479] 'df' shows half as much space on volume after upgrade to RHGS 3.4
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1633479

Bug 1633479 depends on bug 1632889, which changed state.

Bug 1632889 Summary: 'df' shows half as much space on volume after upgrade to RHGS 3.4
https://bugzilla.redhat.com/show_bug.cgi?id=1632889

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:30:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:30:59 +0000
Subject: [Bugs] [Bug 1633926] Script to collect system-stats
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1633926

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:30:59

--- Comment #2 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue.
In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:30:59 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:30:59 +0000
Subject: [Bugs] [Bug 1633930] ASan (address sanitizer) fixes - Blanket bug
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1633930

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:30:59

--- Comment #63 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:08 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:08 +0000
Subject: [Bugs] [Bug 1635373] ASan (address sanitizer) fixes - Blanket bug
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1635373

Bug 1635373 depends on bug 1633930, which changed state.

Bug 1633930 Summary: ASan (address sanitizer) fixes - Blanket bug
https://bugzilla.redhat.com/show_bug.cgi?id=1633930

What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:10 +0000
Subject: [Bugs] [Bug 1634102] MAINTAINERS: Add sunny kumar as a peer for snapshot component
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1634102

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:10

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future.
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:10 +0000
Subject: [Bugs] [Bug 1634220] md-cache: some problems of cache virtual glusterfs ACLs for ganesha
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1634220

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:10

--- Comment #22 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:10 +0000
Subject: [Bugs] [Bug 1635050] [SNAPSHOT]: with brick multiplexing, snapshot restore will make glusterd send wrong volfile
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1635050

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Fixed In Version| |glusterfs-6.0
Resolution|NEXTRELEASE |CURRENTRELEASE

--- Comment #5 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Mar 25 16:31:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:10 +0000
Subject: [Bugs] [Bug 1635480] Correction for glusterd memory leak because use "gluster volume status volume_name --detail" continuesly (cli)
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1635480

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:10

--- Comment #4 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:10 +0000
Subject: [Bugs] [Bug 1635593] glusterd crashed in cleanup_and_exit when glusterd comes up with upgrade mode.
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1635593

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:10

--- Comment #4 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:10 +0000
Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported) components in the build
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1635688

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:10

--- Comment #23 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future.
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:10 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:10 +0000
Subject: [Bugs] [Bug 1635820] Seeing defunt translator and discrepancy in volume info when issued from node which doesn't host bricks in that volume
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1635820

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:10

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:14 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:14 +0000
Subject: [Bugs] [Bug 1643052] Seeing defunt translator and discrepancy in volume info when issued from node which doesn't host bricks in that volume
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1643052

Bug 1643052 depends on bug 1635820, which changed state.

Bug 1635820 Summary: Seeing defunt translator and discrepancy in volume info when issued from node which doesn't host bricks in that volume
https://bugzilla.redhat.com/show_bug.cgi?id=1635820

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:15 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:15 +0000
Subject: [Bugs] [Bug 1647968] Seeing defunt translator and discrepancy in volume info when issued from node which doesn't host bricks in that volume
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1647968

Bug 1647968 depends on bug 1635820, which changed state.

Bug 1635820 Summary: Seeing defunt translator and discrepancy in volume info when issued from node which doesn't host bricks in that volume
https://bugzilla.redhat.com/show_bug.cgi?id=1635820

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Mar 25 16:31:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:17 +0000
Subject: [Bugs] [Bug 1635863] Gluster peer probe doesn't work for IPv6
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1635863

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Fixed In Version| |glusterfs-6.0
Resolution|NEXTRELEASE |CURRENTRELEASE

--- Comment #4 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:17 +0000
Subject: [Bugs] [Bug 1636570] Cores due to SIGILL during multiplex regression tests
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1636570

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:17

--- Comment #4 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:17 +0000
Subject: [Bugs] [Bug 1636631] Issuing a "heal ... full" on a disperse volume causes permanent high CPU utilization.
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1636631

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:17

--- Comment #5 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future.
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:19 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:19 +0000
Subject: [Bugs] [Bug 1644681] Issuing a "heal ... full" on a disperse volume causes permanent high CPU utilization.
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1644681

Bug 1644681 depends on bug 1636631, which changed state.

Bug 1636631 Summary: Issuing a "heal ... full" on a disperse volume causes permanent high CPU utilization.
https://bugzilla.redhat.com/show_bug.cgi?id=1636631

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:20 +0000
Subject: [Bugs] [Bug 1651525] Issuing a "heal ... full" on a disperse volume causes permanent high CPU utilization.
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1651525

Bug 1651525 depends on bug 1636631, which changed state.

Bug 1636631 Summary: Issuing a "heal ... full" on a disperse volume causes permanent high CPU utilization.
https://bugzilla.redhat.com/show_bug.cgi?id=1636631

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:17 +0000
Subject: [Bugs] [Bug 1637196] Disperse volume 'df' usage is extremely incorrect after replace-brick.
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1637196

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:17

--- Comment #10 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:20 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:20 +0000
Subject: [Bugs] [Bug 1644279] Disperse volume 'df' usage is extremely incorrect after replace-brick.
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1644279

Bug 1644279 depends on bug 1637196, which changed state.
Bug 1637196 Summary: Disperse volume 'df' usage is extremely incorrect after replace-brick.
https://bugzilla.redhat.com/show_bug.cgi?id=1637196

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:17 +0000
Subject: [Bugs] [Bug 1637249] gfid heal does not happen when there is no source brick
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1637249

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:17

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:22 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:22 +0000
Subject: [Bugs] [Bug 1655545] gfid heal does not happen when there is no source brick
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1655545

Bug 1655545 depends on bug 1637249, which changed state.

Bug 1637249 Summary: gfid heal does not happen when there is no source brick
https://bugzilla.redhat.com/show_bug.cgi?id=1637249

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:22 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:22 +0000
Subject: [Bugs] [Bug 1655561] gfid heal does not happen when there is no source brick
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1655561

Bug 1655561 depends on bug 1637249, which changed state.

Bug 1637249 Summary: gfid heal does not happen when there is no source brick
https://bugzilla.redhat.com/show_bug.cgi?id=1637249

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:17 +0000
Subject: [Bugs] [Bug 1637802] data-self-heal in arbiter volume results in stale locks.
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1637802

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Fixed In Version| |glusterfs-6.0

--- Comment #4 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:17 +0000
Subject: [Bugs] [Bug 1637934] glusterfsd is keeping fd open in index xlator
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1637934

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:17

--- Comment #6 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:17 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:17 +0000
Subject: [Bugs] [Bug 1638453] Gfid mismatch seen on shards when lookup and mknod are in progress at the same time
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1638453

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:17

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Mar 25 16:31:23 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:23 +0000
Subject: [Bugs] [Bug 1641429] Gfid mismatch seen on shards when lookup and mknod are in progress at the same time
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1641429

Bug 1641429 depends on bug 1638453, which changed state.

Bug 1638453 Summary: Gfid mismatch seen on shards when lookup and mknod are in progress at the same time
https://bugzilla.redhat.com/show_bug.cgi?id=1638453

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:24 +0000
Subject: [Bugs] [Bug 1639599] Improve support-ability of glusterfs
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1639599

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|POST |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:24

--- Comment #5 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:24 +0000
Subject: [Bugs] [Bug 1640026] improper checking to avoid identical mounts
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1640026

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:24

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Mar 25 16:31:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:24 +0000
Subject: [Bugs] [Bug 1640066] [Stress] : Mismatching iatt in glustershd logs during MTSH and continous IO from Ganesha mounts
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1640066

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Fixed In Version| |glusterfs-6.0

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:24 +0000
Subject: [Bugs] [Bug 1640165] io-stats: garbage characters in the filenames generated
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1640165

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:24

--- Comment #4 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:27 +0000
Subject: [Bugs] [Bug 1640392] io-stats: garbage characters in the filenames generated
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1640392

Bug 1640392 depends on bug 1640165, which changed state.

Bug 1640165 Summary: io-stats: garbage characters in the filenames generated
https://bugzilla.redhat.com/show_bug.cgi?id=1640165

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Mar 25 16:31:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:24 +0000
Subject: [Bugs] [Bug 1640489] Invalid memory read after freed in dht_rmdir_readdirp_cbk
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1640489

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:24

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:27 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:27 +0000
Subject: [Bugs] [Bug 1654103] Invalid memory read after freed in dht_rmdir_readdirp_cbk
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1654103

Bug 1654103 depends on bug 1640489, which changed state.

Bug 1640489 Summary: Invalid memory read after freed in dht_rmdir_readdirp_cbk
https://bugzilla.redhat.com/show_bug.cgi?id=1640489

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:24 +0000
Subject: [Bugs] [Bug 1640581] [AFR] : Start crawling indices and healing only if both data bricks are UP in replica 2 (thin-arbiter)
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1640581

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Fixed In Version| |glusterfs-6.0

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.
From bugzilla at redhat.com Mon Mar 25 16:31:24 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:24 +0000
Subject: [Bugs] [Bug 1641344] Spurious failures in bug-1637802-arbiter-stale-data-heal-lock.t
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1641344

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:24

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:28 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:28 +0000
Subject: [Bugs] [Bug 1641761] Spurious failures in bug-1637802-arbiter-stale-data-heal-lock.t
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1641761

Bug 1641761 depends on bug 1641344, which changed state.

Bug 1641344 Summary: Spurious failures in bug-1637802-arbiter-stale-data-heal-lock.t
https://bugzilla.redhat.com/show_bug.cgi?id=1641344

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:29 +0000
Subject: [Bugs] [Bug 1641762] Spurious failures in bug-1637802-arbiter-stale-data-heal-lock.t
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1641762

Bug 1641762 depends on bug 1641344, which changed state.

Bug 1641344 Summary: Spurious failures in bug-1637802-arbiter-stale-data-heal-lock.t
https://bugzilla.redhat.com/show_bug.cgi?id=1641344

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:29 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:29 +0000
Subject: [Bugs] [Bug 1641872] Spurious failures in bug-1637802-arbiter-stale-data-heal-lock.t
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1641872

Bug 1641872 depends on bug 1641344, which changed state.

Bug 1641344 Summary: Spurious failures in bug-1637802-arbiter-stale-data-heal-lock.t
https://bugzilla.redhat.com/show_bug.cgi?id=1641344

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:31 +0000
Subject: [Bugs] [Bug 1642448] EC volume getting created without any redundant brick
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1642448

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Fixed In Version| |glusterfs-6.0
Resolution|NEXTRELEASE |CURRENTRELEASE

--- Comment #3 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:31 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:31 +0000
Subject: [Bugs] [Bug 1642597] tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t failing
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1642597

Shyamsundar changed:

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Fixed In Version| |glusterfs-6.0
Resolution|--- |CURRENTRELEASE
Last Closed| |2019-03-25 16:31:31

--- Comment #4 from Shyamsundar ---
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

--
You are receiving this mail because:
You are on the CC list for the bug.

From bugzilla at redhat.com Mon Mar 25 16:31:32 2019
From: bugzilla at redhat.com (bugzilla at redhat.com)
Date: Mon, 25 Mar 2019 16:31:32 +0000
Subject: [Bugs] [Bug 1643075] tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t failing
In-Reply-To: 
References: 
Message-ID: 

https://bugzilla.redhat.com/show_bug.cgi?id=1643075

Bug 1643075 depends on bug 1642597, which changed state.

Bug 1642597 Summary: tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t failing
https://bugzilla.redhat.com/show_bug.cgi?id=1642597

What |Removed |Added
----------------------------------------------------------------------------
Status|MODIFIED |CLOSED
Resolution|--- |CURRENTRELEASE

--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
From bugzilla at redhat.com Mon Mar 25 16:31:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:32 +0000 Subject: [Bugs] [Bug 1643078] tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1643078 Bug 1643078 depends on bug 1642597, which changed state. Bug 1642597 Summary: tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t failing https://bugzilla.redhat.com/show_bug.cgi?id=1642597 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:31 +0000 Subject: [Bugs] [Bug 1642800] socket: log voluntary socket close/shutdown and EOF on socket at INFO log-level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642800 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:31 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:31 +0000 Subject: [Bugs] [Bug 1642807] remove 'tier' translator from build and code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642807 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:31 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
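Bug 1642800 above changes the severity of certain socket close/EOF messages in the code itself. Independently of that fix, operators who only want quieter logs can tune the standard per-volume log-level options; a minimal sketch with a placeholder volume name (this shows general option usage, not the change made by the patch):

  gluster volume set myvol diagnostics.client-log-level WARNING
  gluster volume set myvol diagnostics.brick-log-level WARNING
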
From bugzilla at redhat.com Mon Mar 25 16:31:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:33 +0000 Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported) components in the build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635688 Bug 1635688 depends on bug 1642807, which changed state. Bug 1642807 Summary: remove 'tier' translator from build and code https://bugzilla.redhat.com/show_bug.cgi?id=1642807 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:31 +0000 Subject: [Bugs] [Bug 1642810] remove glupy from code and build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642810 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:31 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:33 +0000 Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported) components in the build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635688 Bug 1635688 depends on bug 1642810, which changed state. Bug 1642810 Summary: remove glupy from code and build https://bugzilla.redhat.com/show_bug.cgi?id=1642810 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:34 +0000 Subject: [Bugs] [Bug 1680585] remove glupy from code and build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1680585 Bug 1680585 depends on bug 1642810, which changed state. Bug 1642810 Summary: remove glupy from code and build https://bugzilla.redhat.com/show_bug.cgi?id=1642810 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:31:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:31 +0000 Subject: [Bugs] [Bug 1642850] glusterd: raise default transport.listen-backlog to 1024 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642850 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:31 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:34 +0000 Subject: [Bugs] [Bug 1642854] glusterd: raise default transport.listen-backlog to 1024 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1642854 Bug 1642854 depends on bug 1642850, which changed state. Bug 1642850 Summary: glusterd: raise default transport.listen-backlog to 1024 https://bugzilla.redhat.com/show_bug.cgi?id=1642850 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:31 +0000 Subject: [Bugs] [Bug 1643349] [OpenSSL] : auth.ssl-allow has no option description. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1643349 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:31 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
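Two of the bugs closed above are about tunables rather than code paths: bug 1642850 raises the default transport.listen-backlog, and bug 1643349 adds an option description for auth.ssl-allow. A minimal sketch of setting such options through the usual volume-set interface, with placeholder volume and client names (the listen-backlog line assumes the option is exposed under the name used in the bug summary):

  # larger accept queue for bursts of incoming connections
  gluster volume set myvol transport.listen-backlog 1024
  # allow only the named certificate Common Names over SSL/TLS
  gluster volume set myvol auth.ssl-allow 'client1.example.com,client2.example.com'
  # option descriptions (including the one added by bug 1643349) are listed by:
  gluster volume set help
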
From bugzilla at redhat.com Mon Mar 25 16:31:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:39 +0000 Subject: [Bugs] [Bug 1643519] Provide an option to silence glfsheal logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1643519 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:39 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:40 +0000 Subject: [Bugs] [Bug 1654236] Provide an option to silence glfsheal logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654236 Bug 1654236 depends on bug 1643519, which changed state. Bug 1643519 Summary: Provide an option to silence glfsheal logs https://bugzilla.redhat.com/show_bug.cgi?id=1643519 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:41 +0000 Subject: [Bugs] [Bug 1654229] Provide an option to silence glfsheal logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654229 Bug 1654229 depends on bug 1643519, which changed state. Bug 1643519 Summary: Provide an option to silence glfsheal logs https://bugzilla.redhat.com/show_bug.cgi?id=1643519 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:39 +0000 Subject: [Bugs] [Bug 1643929] geo-rep: gluster-mountbroker status crashes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1643929 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version|glusterfs-4.1.6 |glusterfs-6.0 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:39 +0000 Subject: [Bugs] [Bug 1643932] geo-rep: On gluster command failure on slave, worker crashes with python3 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1643932 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version|v5.1 |glusterfs-6.0 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:39 +0000 Subject: [Bugs] [Bug 1643935] cliutils: geo-rep cliutils' usage of Popen is not python3 compatible In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1643935 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version|v5.1 |glusterfs-6.0 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:39 +0000 Subject: [Bugs] [Bug 1644129] Excessive logging in posix_update_utime_in_mdata In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644129 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version|v5.1 |glusterfs-6.0 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:39 +0000 Subject: [Bugs] [Bug 1644164] Use GF_ATOMIC ops to update inode->nlookup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644164 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:39 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:39 +0000 Subject: [Bugs] [Bug 1644629] [rpcsvc] Single request Queue for all event threads is a performance bottleneck In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644629 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:39 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:44 +0000 Subject: [Bugs] [Bug 1644755] CVE-2018-14651 glusterfs: glusterfs server exploitable via symlinks to relative paths [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644755 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. 
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:44 +0000 Subject: [Bugs] [Bug 1644756] CVE-2018-14653 glusterfs: Heap-based buffer overflow via "gf_getspec_req" RPC message [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644756 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #9 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:44 +0000 Subject: [Bugs] [Bug 1644757] CVE-2018-14659 glusterfs: Unlimited file creation via "GF_XATTR_IOSTATS_DUMP_KEY" xattr allows for denial of service [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644757 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:31:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:44 +0000 Subject: [Bugs] [Bug 1644758] CVE-2018-14660 glusterfs: Repeat use of "GF_META_LOCK_KEY" xattr allows for memory exhaustion [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644758 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:44 +0000 Subject: [Bugs] [Bug 1644760] CVE-2018-14654 glusterfs: "features/index" translator can create arbitrary, empty files [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644760 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:44 +0000 Subject: [Bugs] [Bug 1644763] CVE-2018-14661 glusterfs: features/locks translator passes an user-controlled string to snprintf without a proper format string resulting in a denial of service [fedora-all] In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644763 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:44 +0000 Subject: [Bugs] [Bug 1645986] tests/bugs/glusterd/optimized-basic-testcases-in-cluster.t failing in distributed regression In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1645986 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:44 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:44 +0000 Subject: [Bugs] [Bug 1646104] [Geo-rep]: Faulty geo-rep sessions due to link ownership on slave volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1646104 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:51 +0000 Subject: [Bugs] [Bug 1646728] [snapview-server]:forget glfs handles during inode forget In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1646728 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:51 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. 
In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:51 +0000 Subject: [Bugs] [Bug 1646869] gNFS crashed when processing "gluster v status [vol] nfs clients" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1646869 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:51 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:51 +0000 Subject: [Bugs] [Bug 1646892] Portmap entries showing stale brick entries when bricks are down In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1646892 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:51 --- Comment #7 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
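For context on bug 1646869 above, the query that crashed the gNFS process is the client-listing form of volume status; written out in full with a placeholder volume name:

  # list the NFS clients connected to a volume ("gluster v status [vol] nfs clients" in the summary)
  gluster volume status myvol nfs clients
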
From bugzilla at redhat.com Mon Mar 25 16:31:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:51 +0000 Subject: [Bugs] [Bug 1647029] can't enable shared-storage In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1647029 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:51 --- Comment #2 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:53 +0000 Subject: [Bugs] [Bug 1647801] can't enable shared-storage In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1647801 Bug 1647801 depends on bug 1647029, which changed state. Bug 1647029 Summary: can't enable shared-storage https://bugzilla.redhat.com/show_bug.cgi?id=1647029 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:51 +0000 Subject: [Bugs] [Bug 1647074] when peer detach is issued, throw a warning to remount volumes using other cluster IPs before proceeding In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1647074 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:51 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
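Bug 1647029 above concerns the shared-storage toggle. For reference, shared storage is switched on and off cluster-wide with a single global option; a minimal sketch:

  # creates the gluster_shared_storage volume and mounts it on the cluster nodes
  gluster volume set all cluster.enable-shared-storage enable
  # and to undo it:
  # gluster volume set all cluster.enable-shared-storage disable
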
From bugzilla at redhat.com Mon Mar 25 16:31:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:51 +0000 Subject: [Bugs] [Bug 1647651] gfapi: fix bad dict setting of lease-id In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1647651 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:51 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:51 +0000 Subject: [Bugs] [Bug 1648237] Bumping up of op-version times out on a scaled system with ~1200 volumes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648237 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:51 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:51 +0000 Subject: [Bugs] [Bug 1648298] dht_revalidate may not heal attrs on the brick root In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648298 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:51 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:55 +0000 Subject: [Bugs] [Bug 1660736] dht_revalidate may not heal attrs on the brick root In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660736 Bug 1660736 depends on bug 1648298, which changed state. Bug 1648298 Summary: dht_revalidate may not heal attrs on the brick root https://bugzilla.redhat.com/show_bug.cgi?id=1648298 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:57 +0000 Subject: [Bugs] [Bug 1648687] Incorrect usage of local->fd in afr_open_ftruncate_cbk In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648687 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:57 +0000 Subject: [Bugs] [Bug 1648768] Tracker bug for all leases related issues In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648768 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:57 --- Comment #22 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
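Bug 1648237, closed a little further up in this digest, is about the cluster op-version bump timing out at scale. The bump itself is done with the global op-version options; a minimal sketch (the numeric value is an assumption for glusterfs-6.0 -- use whatever cluster.max-op-version reports on your cluster):

  # highest op-version supported by every peer
  gluster volume get all cluster.max-op-version
  # raise the running op-version to that value
  gluster volume set all cluster.op-version 60000
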
From bugzilla at redhat.com Mon Mar 25 16:31:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:59 +0000 Subject: [Bugs] [Bug 1651323] Tracker bug for all leases related issues In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651323 Bug 1651323 depends on bug 1648768, which changed state. Bug 1648768 Summary: Tracker bug for all leases related issues https://bugzilla.redhat.com/show_bug.cgi?id=1648768 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:59 +0000 Subject: [Bugs] [Bug 1655532] Tracker bug for all leases related issues In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1655532 Bug 1655532 depends on bug 1648768, which changed state. Bug 1648768 Summary: Tracker bug for all leases related issues https://bugzilla.redhat.com/show_bug.cgi?id=1648768 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:57 +0000 Subject: [Bugs] [Bug 1649709] profile info doesn't work when decompounder xlator is not in graph In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1649709 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:57 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:57 +0000 Subject: [Bugs] [Bug 1650115] glusterd requests are timing out in a brick multiplex setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1650115 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:57 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. 
In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:57 +0000 Subject: [Bugs] [Bug 1650389] rpc: log flooding with ENODATA errors In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1650389 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:57 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:57 +0000 Subject: [Bugs] [Bug 1650893] fails to sync non-ascii (utf8) file and directory names, causes permanently faulty geo-replication state In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1650893 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:57 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:31:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:31:57 +0000 Subject: [Bugs] [Bug 1651059] [OpenSSL] : Retrieving the value of "client.ssl" option, before SSL is set up, fails . 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651059 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:31:57 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:04 +0000 Subject: [Bugs] [Bug 1651165] Race in per-thread mem-pool when a thread is terminated In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651165 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:04 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:04 +0000 Subject: [Bugs] [Bug 1651431] Resolve memory leak at the time of graph init In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651431 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:04 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
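Bug 1651059, a little above, reports that reading client.ssl before SSL has been configured fails. The read and write involved are the ordinary get/set interface; a minimal sketch with a placeholder volume name:

  # on affected versions this query failed when SSL had never been set up
  gluster volume get myvol client.ssl
  # enabling TLS on the client and server I/O paths
  gluster volume set myvol client.ssl on
  gluster volume set myvol server.ssl on
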
From bugzilla at redhat.com Mon Mar 25 16:32:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:04 +0000 Subject: [Bugs] [Bug 1651439] gluster-NFS crash while expanding volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651439 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:04 --- Comment #6 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:06 +0000 Subject: [Bugs] [Bug 1679275] dht: fix double extra unref of inode at heal path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679275 Bug 1679275 depends on bug 1651439, which changed state. Bug 1651439 Summary: gluster-NFS crash while expanding volume https://bugzilla.redhat.com/show_bug.cgi?id=1651439 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:04 +0000 Subject: [Bugs] [Bug 1651463] glusterd can't regenerate volfiles in container storage upgrade workflow In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651463 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:04 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
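Bug 1651439 above was hit while expanding a volume, i.e. the usual add-brick plus rebalance sequence. A minimal sketch with placeholder volume, host and brick path (brick counts must match the volume's replica/disperse layout):

  gluster volume add-brick myvol host4:/bricks/b4
  gluster volume rebalance myvol start
  gluster volume rebalance myvol status
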
From bugzilla at redhat.com Mon Mar 25 16:32:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:04 +0000 Subject: [Bugs] [Bug 1651498] [geo-rep]: Failover / Failback shows fault status in a non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651498 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:04 +0000 Subject: [Bugs] [Bug 1651584] [geo-rep]: validate the config checkpoint date and fail if it is not is exact format hh:mm:ss In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651584 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:04 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:04 +0000 Subject: [Bugs] [Bug 1652118] default cluster.max-bricks-per-process to 250 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1652118 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:04 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:09 +0000 Subject: [Bugs] [Bug 1653073] default cluster.max-bricks-per-process to 250 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1653073 Bug 1653073 depends on bug 1652118, which changed state. Bug 1652118 Summary: default cluster.max-bricks-per-process to 250 https://bugzilla.redhat.com/show_bug.cgi?id=1652118 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:10 +0000 Subject: [Bugs] [Bug 1653136] default cluster.max-bricks-per-process to 250 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1653136 Bug 1653136 depends on bug 1652118, which changed state. Bug 1652118 Summary: default cluster.max-bricks-per-process to 250 https://bugzilla.redhat.com/show_bug.cgi?id=1652118 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:04 +0000 Subject: [Bugs] [Bug 1652430] glusterd fails to start, when glusterd is restarted in a loop for every 45 seconds while volume creation is in-progress In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1652430 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:04 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:32:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:14 +0000 Subject: [Bugs] [Bug 1652852] "gluster volume get" doesn't show real default value for server.tcp-user-timeout In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1652852 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:14 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:14 +0000 Subject: [Bugs] [Bug 1652887] Geo-rep help looks to have a typo. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1652887 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:14 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:14 +0000 Subject: [Bugs] [Bug 1652911] Add no-verify and ssh-port n options for create command in man page In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1652911 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:14 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:14 +0000 Subject: [Bugs] [Bug 1653277] bump up default value of server.event-threads In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1653277 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:14 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:14 +0000 Subject: [Bugs] [Bug 1653359] Self-heal:Improve heal performance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1653359 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:14 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:14 +0000 Subject: [Bugs] [Bug 1653565] tests/geo-rep: Add arbiter volume test case In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1653565 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:14 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. 
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:14 +0000 Subject: [Bugs] [Bug 1654138] Optimize for virt store fails with distribute volume type In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654138 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:14 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:14 +0000 Subject: [Bugs] [Bug 1654181] glusterd segmentation fault: glusterd_op_ac_brick_op_failed (event=0x7f44e0e63f40, ctx=0x0) at glusterd-op-sm.c:5606 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654181 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:14 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:32:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:19 +0000 Subject: [Bugs] [Bug 1654187] [geo-rep]: RFE - Make slave volume read-only while setting up geo-rep (by default) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654187 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:19 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:19 +0000 Subject: [Bugs] [Bug 1654270] glusterd crashed with seg fault possibly during node reboot while volume creates and deletes were happening In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654270 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #6 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:19 +0000 Subject: [Bugs] [Bug 1654521] io-stats outputs json numbers as strings In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654521 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:19 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:19 +0000 Subject: [Bugs] [Bug 1654805] Bitrot: Scrub status say file is corrupted even it was just created AND 'path' in the output is broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654805 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:19 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:19 +0000 Subject: [Bugs] [Bug 1654917] cleanup resources in server_init in case of failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654917 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:19 --- Comment #7 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:32:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:19 +0000 Subject: [Bugs] [Bug 1655050] automatic split resolution with size as policy should not work on a directory which is in metadata splitbrain In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1655050 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:19 +0000 Subject: [Bugs] [Bug 1655052] Automatic Splitbrain with size as policy must not resolve splitbrains when both the copies are of same size In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1655052 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:19 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:19 +0000 Subject: [Bugs] [Bug 1655827] [Glusterd]: Glusterd crash while expanding volumes using heketi In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1655827 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:19 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:33 +0000 Subject: [Bugs] [Bug 1655854] Converting distribute to replica-3/arbiter volume fails In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1655854 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:33 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:33 +0000 Subject: [Bugs] [Bug 1656100] configure.ac does not enforce automake --foreign In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1656100 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:33 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:33 +0000 Subject: [Bugs] [Bug 1656264] Fix tests/bugs/shard/zero-flag.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1656264 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:33 --- Comment #8 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. 
In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:35 +0000 Subject: [Bugs] [Bug 1660932] Fix tests/bugs/shard/zero-flag.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660932 Bug 1660932 depends on bug 1656264, which changed state. Bug 1656264 Summary: Fix tests/bugs/shard/zero-flag.t https://bugzilla.redhat.com/show_bug.cgi?id=1656264 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:35 +0000 Subject: [Bugs] [Bug 1662635] Fix tests/bugs/shard/zero-flag.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1662635 Bug 1662635 depends on bug 1656264, which changed state. Bug 1656264 Summary: Fix tests/bugs/shard/zero-flag.t https://bugzilla.redhat.com/show_bug.cgi?id=1656264 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:33 +0000 Subject: [Bugs] [Bug 1656348] Commit c9bde3021202f1d5c5a2d19ac05a510fc1f788ac causes ls slowdown In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1656348 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:32:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:33 +0000 Subject: [Bugs] [Bug 1656517] [GSS] Gluster client logs filling with 0-glusterfs-socket: invalid port messages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1656517 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:33 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:33 +0000 Subject: [Bugs] [Bug 1656682] brick memory consumed by volume is not getting released even after delete In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1656682 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #6 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:33 +0000 Subject: [Bugs] [Bug 1656771] [Samba-Enhancement] Need for a single group command for setting up volume options for samba In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1656771 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:33 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:33 +0000 Subject: [Bugs] [Bug 1656951] cluster.max-bricks-per-process 250 not working as expected In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1656951 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:33 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:41 +0000 Subject: [Bugs] [Bug 1657607] Convert nr_files to gf_atomic in posix_private structure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657607 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:41 +0000 Subject: [Bugs] [Bug 1657744] quorum count not updated in nfs-server vol file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657744 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 --- Comment #7 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. 
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:41 +0000 Subject: [Bugs] [Bug 1657783] Rename of a file leading to stale reads In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657783 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:41 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:41 +0000 Subject: [Bugs] [Bug 1658045] Resolve memory leak in mgmt_pmap_signout_cbk In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1658045 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:41 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:32:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:41 +0000 Subject: [Bugs] [Bug 1658116] python2 to python3 compatibilty issues In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1658116 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:41 +0000 Subject: [Bugs] [Bug 1659327] 43% regression in small-file sequential read performance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659327 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:41 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:41 +0000 Subject: [Bugs] [Bug 1659432] Memory leak: dict_t leak in rda_opendir In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659432 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:41 --- Comment #9 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:44 +0000 Subject: [Bugs] [Bug 1659439] Memory leak: dict_t leak in rda_opendir In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659439 Bug 1659439 depends on bug 1659432, which changed state. Bug 1659432 Summary: Memory leak: dict_t leak in rda_opendir https://bugzilla.redhat.com/show_bug.cgi?id=1659432 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:45 +0000 Subject: [Bugs] [Bug 1659676] Memory leak: dict_t leak in rda_opendir In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659676 Bug 1659676 depends on bug 1659432, which changed state. Bug 1659432 Summary: Memory leak: dict_t leak in rda_opendir https://bugzilla.redhat.com/show_bug.cgi?id=1659432 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:41 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:41 +0000 Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal deamon (shd) when a volume is stopped unless it is the last volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659708 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:41 --- Comment #13 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:32:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:47 +0000 Subject: [Bugs] [Bug 1659857] change max-port value in glusterd vol file to 60999 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659857 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:47 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:47 +0000 Subject: [Bugs] [Bug 1659868] glusterd : features.selinux was missing in glusterd-volume-set file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659868 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:47 +0000 Subject: [Bugs] [Bug 1659869] improvements to io-cache In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659869 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:47 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:47 +0000 Subject: [Bugs] [Bug 1659971] Setting slave volume read-only option by default results in failure In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659971 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:47 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:47 +0000 Subject: [Bugs] [Bug 1660577] [Ganesha] Ganesha failed on one node while exporting volumes in loop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660577 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:47 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:50 +0000 Subject: [Bugs] [Bug 1663131] [Ganesha] Ganesha failed on one node while exporting volumes in loop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663131 Bug 1663131 depends on bug 1660577, which changed state. Bug 1660577 Summary: [Ganesha] Ganesha failed on one node while exporting volumes in loop https://bugzilla.redhat.com/show_bug.cgi?id=1660577 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:50 +0000 Subject: [Bugs] [Bug 1663132] [Ganesha] Ganesha failed on one node while exporting volumes in loop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663132 Bug 1663132 depends on bug 1660577, which changed state. Bug 1660577 Summary: [Ganesha] Ganesha failed on one node while exporting volumes in loop https://bugzilla.redhat.com/show_bug.cgi?id=1660577 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:47 +0000 Subject: [Bugs] [Bug 1660701] Use adaptive mutex in rpcsvc_program_register to improve performance In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660701 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:47 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:47 +0000 Subject: [Bugs] [Bug 1661214] Brick is getting OOM for tests/bugs/core/bug-1432542-mpx-restart-crash.t In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1661214 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:47 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:32:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:47 +0000 Subject: [Bugs] [Bug 1662089] NL cache: fix typos In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1662089 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:47 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:51 +0000 Subject: [Bugs] [Bug 1662200] NL cache: fix typos In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1662200 Bug 1662200 depends on bug 1662089, which changed state. Bug 1662089 Summary: NL cache: fix typos https://bugzilla.redhat.com/show_bug.cgi?id=1662089 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:53 +0000 Subject: [Bugs] [Bug 1662264] thin-arbiter: Check with thin-arbiter file before marking new entry change log In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1662264 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:53 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:55 +0000 Subject: [Bugs] [Bug 1672314] thin-arbiter: Check with thin-arbiter file before marking new entry change log In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672314 Bug 1672314 depends on bug 1662264, which changed state. 
Bug 1662264 Summary: thin-arbiter: Check with thin-arbiter file before marking new entry change log https://bugzilla.redhat.com/show_bug.cgi?id=1662264 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:53 +0000 Subject: [Bugs] [Bug 1662368] [ovirt-gluster] Fuse mount crashed while deleting a 1 TB image file from ovirt In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1662368 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:53 --- Comment #7 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:56 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:56 +0000 Subject: [Bugs] [Bug 1665803] [ovirt-gluster] Fuse mount crashed while deleting a 1 TB image file from ovirt In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665803 Bug 1665803 depends on bug 1662368, which changed state. Bug 1662368 Summary: [ovirt-gluster] Fuse mount crashed while deleting a 1 TB image file from ovirt https://bugzilla.redhat.com/show_bug.cgi?id=1662368 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:53 +0000 Subject: [Bugs] [Bug 1662679] Log connection_id in statedump for posix-locks as well for better debugging experience In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1662679 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:53 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:53 +0000 Subject: [Bugs] [Bug 1662906] Longevity: glusterfsd(brick process) crashed when we do volume creates and deletes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1662906 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:53 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:53 +0000 Subject: [Bugs] [Bug 1663077] memory leak in mgmt handshake In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663077 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:53 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:53 +0000 Subject: [Bugs] [Bug 1663102] Change default value for client side heal to off for replicate volumes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663102 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. 
In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:53 +0000 Subject: [Bugs] [Bug 1663223] profile info command is not displaying information of bricks which are hosted on peers In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663223 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:53 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:57 +0000 Subject: [Bugs] [Bug 1663232] profile info command is not displaying information of bricks which are hosted on peers In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663232 Bug 1663232 depends on bug 1663223, which changed state. Bug 1663223 Summary: profile info command is not displaying information of bricks which are hosted on peers https://bugzilla.redhat.com/show_bug.cgi?id=1663223 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:32:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:32:53 +0000 Subject: [Bugs] [Bug 1663243] rebalance status does not display localhost statistics when op-version is not bumped up In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1663243 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:32:53 --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. 
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:00 +0000 Subject: [Bugs] [Bug 1664122] do not send bit-rot virtual xattrs in lookup response In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1664122 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:00 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:00 +0000 Subject: [Bugs] [Bug 1664124] Improve information dumped from io-threads in statedump In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1664124 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:00 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:33:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:00 +0000 Subject: [Bugs] [Bug 1664551] Wrong description of localtime-logging in manpages In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1664551 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:00 --- Comment #2 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:00 +0000 Subject: [Bugs] [Bug 1664647] dht: Add NULL check for stbuf in dht_rmdir_lookup_cbk In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1664647 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:00 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:00 +0000 Subject: [Bugs] [Bug 1664934] glusterfs-fuse client not benefiting from page cache on read after write In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1664934 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:00 --- Comment #17 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:04 +0000 Subject: [Bugs] [Bug 1674364] glusterfs-fuse client not benefiting from page cache on read after write In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674364 Bug 1674364 depends on bug 1664934, which changed state. Bug 1664934 Summary: glusterfs-fuse client not benefiting from page cache on read after write https://bugzilla.redhat.com/show_bug.cgi?id=1664934 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:00 +0000 Subject: [Bugs] [Bug 1665038] glusterd crashed while running "gluster get-state glusterd odir /get-state" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665038 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:00 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:00 +0000 Subject: [Bugs] [Bug 1665332] Wrong offset is used in offset for zerofill fop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665332 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:00 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:00 +0000 Subject: [Bugs] [Bug 1665358] allow regression to not run tests with nfs, if nfs is disabled. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665358 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:07 +0000 Subject: [Bugs] [Bug 1665363] Fix incorrect definition in index-mem-types.h In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665363 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:07 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:07 +0000 Subject: [Bugs] [Bug 1665656] testcaes glusterd/add-brick-and-validate-replicated-volume-options.t is crash while brick_mux is enable In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665656 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. 
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:07 +0000 Subject: [Bugs] [Bug 1665826] [geo-rep]: Directory renames not synced to slave in Hybrid Crawl In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665826 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:07 +0000 Subject: [Bugs] [Bug 1666143] Several fixes on socket pollin and pollout return value In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1666143 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:07 --- Comment #9 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:08 +0000 Subject: [Bugs] [Bug 1671207] Several fixes on socket pollin and pollout return value In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1671207 Bug 1671207 depends on bug 1666143, which changed state. 
Bug 1666143 Summary: Several fixes on socket pollin and pollout return value https://bugzilla.redhat.com/show_bug.cgi?id=1666143 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:07 +0000 Subject: [Bugs] [Bug 1666833] move few recurring logs to DEBUG level. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1666833 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:07 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:07 +0000 Subject: [Bugs] [Bug 1667779] glusterd leaks about 1GB memory per day on single machine of storage pool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667779 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:07 +0000 Subject: [Bugs] [Bug 1667804] Unable to delete directories that contain linkto files that point to itself. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667804 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. 
In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:07 +0000 Subject: [Bugs] [Bug 1667905] dict_leak in __glusterd_handle_cli_uuid_get function In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667905 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #2 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:11 +0000 Subject: [Bugs] [Bug 1668190] Block hosting volume deletion via heketi-cli failed with error "target is busy" but deleted from gluster backend In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1668190 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:33:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:11 +0000 Subject: [Bugs] [Bug 1668268] Unable to mount gluster volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1668268 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:11 +0000 Subject: [Bugs] [Bug 1669077] [ovirt-gluster] Fuse mount crashed while creating the preallocated image In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1669077 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #2 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:11 +0000 Subject: [Bugs] [Bug 1669937] Rebalance : While rebalance is in progress , SGID and sticky bit which is set on the files while file migration is in progress is seen on the mount point In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1669937 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:11 +0000 Subject: [Bugs] [Bug 1670031] performance regression seen with smallfile workload tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670031 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:11 --- Comment #17 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:11 +0000 Subject: [Bugs] [Bug 1670253] Writes on Gluster 5 volumes fail with EIO when "cluster.consistent-metadata" is set In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670253 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:11 +0000 Subject: [Bugs] [Bug 1670259] New GFID file recreated in a replica set after a GFID mismatch resolution In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670259 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. 
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:11 +0000 Subject: [Bugs] [Bug 1671213] core: move "dict is NULL" logs to DEBUG log level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1671213 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:15 +0000 Subject: [Bugs] [Bug 1671637] geo-rep: Issue with configparser import In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1671637 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:15 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:15 +0000 Subject: [Bugs] [Bug 1672205] 'gluster get-state' command fails if volume brick doesn't exist. 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672205 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:15 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:15 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #22 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:15 +0000 Subject: [Bugs] [Bug 1673267] Fix timeouts so the tests pass on AWS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673267 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:33:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:15 +0000 Subject: [Bugs] [Bug 1673972] insufficient logging in glusterd_resolve_all_bricks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673972 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:15 +0000 Subject: [Bugs] [Bug 1674364] glusterfs-fuse client not benefiting from page cache on read after write In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674364 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #6 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:15 +0000 Subject: [Bugs] [Bug 1676429] distribute: Perf regression in mkdir path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676429 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #4 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:15 +0000 Subject: [Bugs] [Bug 1677260] rm -rf fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1677260 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:20 +0000 Subject: [Bugs] [Bug 1678570] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1678570 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed|2019-02-22 03:32:38 |2019-03-25 16:33:20 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:22 +0000 Subject: [Bugs] [Bug 1667103] GlusterFS 5.4 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667103 Bug 1667103 depends on bug 1678570, which changed state. Bug 1678570 Summary: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1678570 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:33:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:22 +0000 Subject: [Bugs] [Bug 1676356] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676356 Bug 1676356 depends on bug 1678570, which changed state. Bug 1678570 Summary: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1678570 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:20 +0000 Subject: [Bugs] [Bug 1679004] With parallel-readdir enabled, deleting a directory containing stale linkto files fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679004 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:20 +0000 Subject: [Bugs] [Bug 1679275] dht: fix double extra unref of inode at heal path In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679275 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-6.0 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-25 16:33:20 --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:33:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:23 +0000 Subject: [Bugs] [Bug 1672818] GlusterFS 6.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672818 Bug 1672818 depends on bug 1679275, which changed state. Bug 1679275 Summary: dht: fix double extra unref of inode at heal path https://bugzilla.redhat.com/show_bug.cgi?id=1679275 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:20 +0000 Subject: [Bugs] [Bug 1679965] Upgrade from glusterfs 3.12 to gluster 4/5 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679965 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:20 +0000 Subject: [Bugs] [Bug 1680020] Integer Overflow possible in md-cache.c due to data type inconsistency In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1680020 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:33:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:20 +0000 Subject: [Bugs] [Bug 1680585] remove glupy from code and build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1680585 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:20 +0000 Subject: [Bugs] [Bug 1680586] Building RPM packages with _for_fedora_koji_builds enabled fails on el6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1680586 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:26 +0000 Subject: [Bugs] [Bug 1683506] remove experimental xlators informations from glusterd-volume-set.c In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683506 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:26 +0000 Subject: [Bugs] [Bug 1683716] glusterfind: revert shebangs to #!/usr/bin/python3 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683716 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:26 +0000 Subject: [Bugs] [Bug 1683880] Multiple shd processes are running on brick_mux environmet In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683880 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:26 +0000 Subject: [Bugs] [Bug 1683900] Failed to dispatch handler In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1683900 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. 
Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:26 +0000 Subject: [Bugs] [Bug 1684029] upgrade from 3.12, 4.1 and 5 to 6 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684029 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #6 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:26 +0000 Subject: [Bugs] [Bug 1684777] gNFS crashed when processing "gluster v profile [vol] info nfs" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684777 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:26 +0000 Subject: [Bugs] [Bug 1685771] glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685771 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. 
glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:26 +0000 Subject: [Bugs] [Bug 1686364] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686364 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:31 +0000 Subject: [Bugs] [Bug 1686399] listing a file while writing to it causes deadlock In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686399 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Mar 25 16:33:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:31 +0000 Subject: [Bugs] [Bug 1686875] packaging: rdma on s390x, unnecessary ldconfig scriptlets In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686875 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:31 +0000 Subject: [Bugs] [Bug 1687672] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687672 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:33:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:33:31 +0000 Subject: [Bugs] [Bug 1688218] Brick process has coredumped, when starting glusterd In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688218 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-6.0 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 16:42:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 16:42:22 +0000 Subject: [Bugs] [Bug 1590385] Refactor dht lookup code In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1590385 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|CURRENTRELEASE |--- -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Mon Mar 25 17:10:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 17:10:16 +0000 Subject: [Bugs] [Bug 1058300] VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1058300 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |atumball at redhat.com Fixed In Version| |glusterfs-5.5 Resolution|--- |WORKSFORME Last Closed| |2019-03-25 17:10:16 --- Comment #52 from Amar Tumballi --- Looks like latest releases of glusterfs releases work totally fine with oVirt releases. Please reopen if these issues are seen when used with glusterfs-5.5 or 6.0 versions. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 17:10:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 17:10:37 +0000 Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported) components in the build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635688 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22415 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 17:10:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 17:10:38 +0000 Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported) components in the build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635688 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|CURRENTRELEASE |--- Keywords| |Reopened --- Comment #24 from Worker Ant --- REVIEW: https://review.gluster.org/22415 ([WIP] graph.c: remove extra gettimeofday() - reuse the graph dob.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Mon Mar 25 20:22:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 20:22:16 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22416 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 20:22:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 20:22:18 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #596 from Worker Ant --- REVIEW: https://review.gluster.org/22416 ([RFC][WIP] dict: add function to retrieve a key based on its hash.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Mon Mar 25 20:39:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Mon, 25 Mar 2019 20:39:52 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #54 from Amgad --- Downloaded the 5.5 CentOS RPMs -- same behavior, except that "gluster volume heal info" is slower compared to my private build from GitHub. "gluster volume heal info" is taking 10 sec to respond: [root at gfs-1 ansible1]# time gluster volume heal glustervol3 info Brick 10.75.147.39:/mnt/data3/3 Status: Connected Number of entries: 0 Brick 10.75.147.46:/mnt/data3/3 Status: Connected Number of entries: 0 Brick 10.75.147.41:/mnt/data3/3 Status: Connected Number of entries: 0 real 0m10.548s user 0m0.031s sys 0m0.028s [root at gfs-1 ansible1]# -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 02:11:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 02:11:50 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #55 from Sanju --- Amgad, Allow me some time, I will get back to you soon. Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 03:09:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 03:09:44 +0000 Subject: [Bugs] [Bug 1692612] New: Locking issue when restarting bricks Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692612 Bug ID: 1692612 Summary: Locking issue when restarting bricks Product: GlusterFS Version: mainline Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: zhhuan at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Found a potential locking issue when reading the code. There are two cases that restart bricks: one is when glusterd starts or quorum is met, the other is when new peers join and quorum changes.
In the latter case, sync_lock is not taken, which may cause lock corruption. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 03:13:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 03:13:09 +0000 Subject: [Bugs] [Bug 1692612] Locking issue when restarting bricks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692612 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22417 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 03:13:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 03:13:10 +0000 Subject: [Bugs] [Bug 1692612] Locking issue when restarting bricks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692612 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22417 (glusterd: fix potential locking issue on peer probe) posted (#1) for review on master by Zhang Huan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 03:21:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 03:21:18 +0000 Subject: [Bugs] [Bug 1692441] [GSS] Problems using ls or find on volumes using RDMA transport In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692441 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |bkunal at redhat.com Flags| |needinfo?(bkunal at redhat.com | |) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 03:21:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 03:21:35 +0000 Subject: [Bugs] [Bug 1692441] [GSS] Problems using ls or find on volumes using RDMA transport In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692441 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(ccalhoun at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 03:21:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 03:21:53 +0000 Subject: [Bugs] [Bug 1692441] [GSS] Problems using ls or find on volumes using RDMA transport In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692441 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(sankarshan at redhat | |.com) -- You are receiving this mail because: You are on the CC list for the bug.
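The report for bug 1692612 above describes two code paths that both restart bricks, one taken when glusterd starts or quorum is met and one taken when a peer probe changes quorum, with only the first holding the synchronization lock. The sketch below is only an illustration of that pattern, not glusterd source: a plain pthread mutex stands in for glusterd's sync lock and every name is hypothetical.

```
/* Illustrative sketch only -- not glusterd code. A pthread mutex stands in
 * for glusterd's sync lock, and all names here are hypothetical. The point
 * is that every path that restarts bricks must serialize on the same lock. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sync_lock = PTHREAD_MUTEX_INITIALIZER;
static int brick_generation;   /* state shared by both restart paths */

static void restart_bricks(const char *caller)
{
    /* The reported issue is one caller skipping this lock; the fix is to
     * make every caller go through the same locked section. */
    pthread_mutex_lock(&sync_lock);
    brick_generation++;
    printf("%s restarted bricks (generation %d)\n", caller, brick_generation);
    pthread_mutex_unlock(&sync_lock);
}

static void *daemon_start_or_quorum_met(void *arg)
{
    (void)arg;
    restart_bricks("daemon start / quorum met");
    return NULL;
}

static void *peer_probe_quorum_change(void *arg)
{
    (void)arg;
    restart_bricks("peer probe / quorum change");
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, daemon_start_or_quorum_met, NULL);
    pthread_create(&b, NULL, peer_probe_quorum_change, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;   /* build with: cc sketch.c -lpthread */
}
```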
From bugzilla at redhat.com Tue Mar 26 03:46:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 03:46:04 +0000 Subject: [Bugs] [Bug 1628194] tests/dht: Additional tests for dht operations In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1628194 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|CURRENTRELEASE |--- Keywords| |Reopened -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 03:52:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 03:52:40 +0000 Subject: [Bugs] [Bug 1692441] [GSS] Problems using ls or find on volumes using RDMA transport In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692441 sankarshan changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(bkunal at redhat.com | |) | |needinfo?(ccalhoun at redhat.c | |om) | |needinfo?(sankarshan at redhat | |.com) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 04:22:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 04:22:44 +0000 Subject: [Bugs] [Bug 1691164] glusterd leaking memory when issued gluster vol status all tasks continuosly In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691164 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-26 04:22:44 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22388 (glusterd: fix txn-id mem leak) merged (#5) on master by Sanju Rakonde -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 05:31:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 05:31:00 +0000 Subject: [Bugs] [Bug 1692441] [GSS] Problems using ls or find on volumes using RDMA transport In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692441 Bipin Kunal changed: What |Removed |Added ---------------------------------------------------------------------------- Group| |redhat -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 05:54:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 05:54:07 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #597 from Worker Ant --- REVIEW: https://review.gluster.org/22406 (cluster/afr: Remove un-used variables related to pump) merged (#2) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Tue Mar 26 06:22:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 06:22:32 +0000 Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported) components in the build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635688 --- Comment #25 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22415 ([WIP] graph.c: remove extra gettimeofday() - reuse the graph dob.) posted (#2) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 06:22:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 06:22:33 +0000 Subject: [Bugs] [Bug 1635688] Keep only the valid (maintained/supported) components in the build In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635688 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22415 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 06:22:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 06:22:35 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22415 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 06:22:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 06:22:36 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #598 from Worker Ant --- REVIEW: https://review.gluster.org/22415 ([WIP] graph.c: remove extra gettimeofday() - reuse the graph dob.) posted (#2) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 06:22:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 06:22:36 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #599 from Worker Ant --- REVIEW: https://review.gluster.org/22394 (mem-pool: remove dead code.) merged (#3) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 06:47:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 06:47:39 +0000 Subject: [Bugs] [Bug 1654549] merge ssl infra with epoll infra In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654549 Milind Changire changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Rebase -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Tue Mar 26 07:56:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 07:56:36 +0000 Subject: [Bugs] [Bug 1692666] New: ssh-port config set is failing Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692666 Bug ID: 1692666 Summary: ssh-port config set is failing Product: GlusterFS Version: mainline Status: NEW Component: geo-replication Assignee: bugs at gluster.org Reporter: avishwan at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: If non-standard ssh-port is used, Geo-rep can be configured to use that ssh port by configuring as below ``` gluster volume geo-replication :: config ssh-port 2222 ``` But this command is failing even if a valid value is passed. ``` $ gluster v geo gv1 centos.sonne::gv2 config ssh-port 2222 geo-replication config-set failed for gv1 centos.sonne::gv2 geo-replication command failed ``` -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 07:56:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 07:56:49 +0000 Subject: [Bugs] [Bug 1692666] ssh-port config set is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692666 Aravinda VK changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |avishwan at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 08:00:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 08:00:04 +0000 Subject: [Bugs] [Bug 1692666] ssh-port config set is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692666 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22418 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 08:00:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 08:00:05 +0000 Subject: [Bugs] [Bug 1692666] ssh-port config set is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692666 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22418 (geo-rep: fix integer config validation) posted (#1) for review on master by Aravinda VK -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 11:32:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 11:32:29 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #600 from Worker Ant --- REVIEW: https://review.gluster.org/22391 (build: link libgfrpc with MATH_LIB (libm, -lm)) merged (#3) on master by Niels de Vos -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
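Bug 1692666 above is about integer option validation in geo-replication: a valid value such as ssh-port 2222 is rejected by the config-set path. The real configuration code is part of the Python geo-replication tooling and is not reproduced here; the C sketch below only illustrates what a correct check for such an option looks like, accepting a clean integer in the TCP port range and rejecting everything else, with all names invented for the example.

```
/* Generic illustration of integer config validation for a setting like
 * "config ssh-port 2222". Not the project's actual validation code. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int valid_ssh_port(const char *value)
{
    char *end = NULL;
    errno = 0;
    long port = strtol(value, &end, 10);

    if (errno != 0 || end == value || *end != '\0')
        return 0;               /* not a clean integer */
    if (port < 1 || port > 65535)
        return 0;               /* outside the TCP port range */
    return 1;
}

int main(int argc, char **argv)
{
    const char *value = (argc > 1) ? argv[1] : "2222";
    printf("ssh-port %s -> %s\n", value,
           valid_ssh_port(value) ? "accepted" : "rejected");
    return 0;
}
```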
From bugzilla at redhat.com Tue Mar 26 12:54:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 12:54:09 +0000 Subject: [Bugs] [Bug 1671556] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1671556 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1677319 | |(Gluster_5_Affecting_oVirt_ | |4.3) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1677319 [Bug 1677319] [Tracker] Gluster 5 issues affecting oVirt 4.3 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 12:54:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 12:54:09 +0000 Subject: [Bugs] [Bug 1691292] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691292 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks|1677319 | |(Gluster_5_Affecting_oVirt_ | |4.3) | Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1677319 [Bug 1677319] [Tracker] Gluster 5 issues affecting oVirt 4.3 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 12:59:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 12:59:37 +0000 Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to activate host in DC with glusterfs 3.12 domains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672318 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(amukherj at redhat.c | |om) --- Comment #23 from Sahina Bose --- Is there anything in Comment 21 that could help narrow this? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 13:01:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 13:01:28 +0000 Subject: [Bugs] [Bug 1665216] Databases crashes on Gluster 5 with the option performance.write-behind enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665216 --- Comment #9 from gabisoft at freesurf.ch --- Could not reproduce this issue anymore with Gluster 5.5 and Etcd, Cassandra and PostgreSQL. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 13:24:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 13:24:04 +0000 Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to activate host in DC with glusterfs 3.12 domains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672318 Atin Mukherjee changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(sabose at redhat.com | |) --- Comment #24 from Atin Mukherjee --- [2019-03-18 11:29:01.000279] I [glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt: disconnected from remote-host: *.*.*.14 Why did we get a disconnect. Was glusterd service at *.14 not running? 
[2019-03-18 11:29:01.000330] I [glusterfsd-mgmt.c:2464:mgmt_rpc_notify] 0-glusterfsd-mgmt: connecting to next volfile server *.*.*.15 [2019-03-18 11:29:01.002495] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fb4beddbfbb] (--> /lib64/libgfrpc.so.0(+0xce11)[0x7fb4beba4e11] (--> /lib64/libgfrpc.so.0(+0xcf2e)[0x7fb4beba4f2e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7fb4beba6531] (--> /lib64/libgfrpc.so.0(+0xf0d8)[0x7fb4beba70d8] ))))) 0-glusterfs: forced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2019-03-18 11:13:29.445101 (xid=0x2) The above log seems to be the culprit here. [2019-03-18 11:29:01.002517] E [glusterfsd-mgmt.c:2136:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/ssd9) And the above log is the after effect. I have few questions: 1. Does the mount fail everytime? 2. Do you see any change in the behaviour when the primary volfile server is changed? 3. What are the gluster version in the individual peers? (Keeping the needinfo intact for now, but request Sahina to get us these details to work on). -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 14:17:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 14:17:28 +0000 Subject: [Bugs] [Bug 1665216] Databases crashes on Gluster 5 with the option performance.write-behind enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665216 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(gabisoft at freesurf | |.ch) --- Comment #10 from Raghavendra G --- (In reply to gabisoft from comment #9) > Could not reproduce this issue anymore with Gluster 5.5 and Etcd, Cassandra > and PostgreSQL. Its likely that fixes to bz 1512691 have helped. Can you please close the bug? -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 14:23:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 14:23:09 +0000 Subject: [Bugs] [Bug 1665216] Databases crashes on Gluster 5 with the option performance.write-behind enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665216 gabisoft at freesurf.ch changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |CLOSED Resolution|--- |WORKSFORME Flags|needinfo?(gabisoft at freesurf | |.ch) | Last Closed| |2019-03-26 14:23:09 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 15:25:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 15:25:27 +0000 Subject: [Bugs] [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED CC| |skoduri at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. 
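The analysis of bug 1672318 above revolves around the client losing its primary volfile server and moving to the next one ("connecting to next volfile server") before the volfile fetch fails. A gfapi client builds that fallback list by calling glfs_set_volfile_server() once per host, as in the sketch below; the hostnames and log path are placeholders, and only the volume name ssd9 is taken from the log. For fuse mounts, the backup-volfile-servers mount option serves the same purpose.

```
/* Minimal libgfapi client that registers more than one volfile server so a
 * volfile fetch can fall back if the primary is unreachable. Hostnames are
 * placeholders; "ssd9" matches the volume key seen in the log above. */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("ssd9");
    if (!fs)
        return 1;

    /* Each call appends another server to try when fetching the volfile. */
    glfs_set_volfile_server(fs, "tcp", "server14.example.com", 24007);
    glfs_set_volfile_server(fs, "tcp", "server15.example.com", 24007);
    glfs_set_volfile_server(fs, "tcp", "server16.example.com", 24007);

    glfs_set_logging(fs, "/tmp/ssd9-gfapi.log", 7);

    if (glfs_init(fs) != 0) {
        fprintf(stderr, "volfile fetch failed on all servers\n");
        glfs_fini(fs);
        return 1;
    }

    printf("volume ssd9 initialised\n");
    glfs_fini(fs);
    return 0;  /* build with: cc client.c $(pkg-config --cflags --libs glusterfs-api) */
}
```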
From bugzilla at redhat.com Tue Mar 26 15:25:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 15:25:51 +0000 Subject: [Bugs] [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |POST --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22368 (gfapi: add function to set client-pid) merged (#6) on master by soumya k -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 15:28:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 15:28:08 +0000 Subject: [Bugs] [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 15:36:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 15:36:51 +0000 Subject: [Bugs] [Bug 1687051] gluster volume heal failed when online upgrading from 3.12 to 5.x and when rolling back online upgrade from 4.1.4 to 3.12.15 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687051 --- Comment #56 from Amgad --- Thanks for your support! -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 15:51:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 15:51:46 +0000 Subject: [Bugs] [Bug 1692879] New: Wrong Youtube link in website Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692879 Bug ID: 1692879 Summary: Wrong Youtube link in website Product: GlusterFS Version: mainline Status: NEW Component: website Assignee: bugs at gluster.org Reporter: avishwan at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: Gluster website(https://www.gluster.org/) links youtube channel as https://www.youtube.com/channel/UC8OSwywy18VtzRXm036j5qA but all our podcasts are published under https://www.youtube.com/user/GlusterCommunity Please change the link in the website to https://www.youtube.com/user/GlusterCommunity -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 16:42:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 16:42:58 +0000 Subject: [Bugs] [Bug 1677160] Gluster 5 client can't access Gluster 3.12 servers In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1677160 Darrell changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |budic at onholyground.com --- Comment #12 from Darrell --- I encountered this with 5.3 and 5.5 clients connecting to gluster 3.12.15 servers. Might be multiple problems. At first, I encountered https://bugzilla.redhat.com/show_bug.cgi?id=1651246 with 5.3 clients, and 5.5 resolved that problem. I've hit a new one though, so adding my details. 
Initially, a new 5.5 mount to a 3.12.15 cluster of 3 servers succeeds and everything works well. If you reboot one of the servers, however, all clients no longer connect to it and the other servers are forced to heal everything to the 3rd server. Restarting the clients (new mounts) will cause them to reconnect until you restart a server again. Affects both fuse and glfapi clients. Server brick example from rebooted server (lots of these repeating): [2019-03-25 17:45:37.588519] I [socket.c:3679:socket_submit_reply] 0-socket.mana gement: not connected (priv->connected = -1) [2019-03-25 17:45:37.588571] E [rpcsvc.c:1364:rpcsvc_submit_generic] 0-rpc-servi ce: failed to submit message (XID: 0x542ab, Program: GF-DUMP, ProgVers: 1, Proc: 2) to rpc-transport (socket.management) [2019-03-25 17:48:25.944496] I [socket.c:3679:socket_submit_reply] 0-socket.mana gement: not connected (priv->connected = -1) [2019-03-25 17:48:25.944547] E [rpcsvc.c:1364:rpcsvc_submit_generic] 0-rpc-servi ce: failed to submit message (XID: 0x38036, Program: GF-DUMP, ProgVers: 1, Proc: 2) to rpc-transport (socket.management) [2019-03-25 17:50:34.306141] I [socket.c:3679:socket_submit_reply] 0-socket.mana gement: not connected (priv->connected = -1) [2019-03-25 17:50:34.306206] E [rpcsvc.c:1364:rpcsvc_submit_generic] 0-rpc-servi ce: failed to submit message (XID: 0x1e050e, Program: GF-DUMP, ProgVers: 1, Proc : 2) to rpc-transport (socket.management) [2019-03-25 17:51:58.082944] I [socket.c:3679:socket_submit_reply] 0-socket.mana gement: not connected (priv->connected = -1) [2019-03-25 17:51:58.082999] E [rpcsvc.c:1364:rpcsvc_submit_generic] 0-rpc-servi ce: failed to submit message (XID: 0x1ec5, Program: GF-DUMP, ProgVers: 1, Proc: 2) to rpc-transport (socket.management) Client brick example (also lots repeating): [2019-03-26 14:55:50.582757] W [rpc-clnt-ping.c:215:rpc_clnt_ping_cbk] 0-gv1-client-1: socket disconnected [2019-03-26 14:55:54.582490] I [rpc-clnt.c:2042:rpc_clnt_reconfig] 0-gv1-client-1: changing port to 50155 (from 0) [2019-03-26 14:55:54.585627] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a5164efbb] (--> /lib64/libgfrpc.so.0(+0xce11)[0x7f4a51417e11] (--> /lib64/libgfrpc.so.0(+0xcf2e)[0x7f4a51417f2e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7f4a51419531] (--> /lib64/libgfrpc.so.0(+0xf0d8)[0x7f4a5141a0d8] ))))) 0-gv1-client-1: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2019-03-26 14:55:54.585283 (xid=0x3ef42) [2019-03-26 14:55:54.585644] W [rpc-clnt-ping.c:215:rpc_clnt_ping_cbk] 0-gv1-client-1: socket disconnected [2019-03-26 14:55:58.585636] I [rpc-clnt.c:2042:rpc_clnt_reconfig] 0-gv1-client-1: changing port to 50155 (from 0) [2019-03-26 14:55:58.588760] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a5164efbb] (--> /lib64/libgfrpc.so.0(+0xce11)[0x7f4a51417e11] (--> /lib64/libgfrpc.so.0(+0xcf2e)[0x7f4a51417f2e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7f4a51419531] (--> /lib64/libgfrpc.so.0(+0xf0d8)[0x7f4a5141a0d8] ))))) 0-gv1-client-1: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2019-03-26 14:55:58.588478 (xid=0x3ef47) [2019-03-26 14:55:58.588779] W [rpc-clnt-ping.c:215:rpc_clnt_ping_cbk] 0-gv1-client-1: socket disconnected [2019-03-26 14:56:02.589009] I [rpc-clnt.c:2042:rpc_clnt_reconfig] 0-gv1-client-1: changing port to 50155 (from 0) [2019-03-26 14:56:02.592150] E [rpc-clnt.c:346:saved_frames_unwind] (--> 
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a5164efbb] (--> /lib64/libgfrpc.so.0(+0xce11)[0x7f4a51417e11] (--> /lib64/libgfrpc.so.0(+0xcf2e)[0x7f4a51417f2e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7f4a51419531] (--> /lib64/libgfrpc.so.0(+0xf0d8)[0x7f4a5141a0d8] ))))) 0-gv1-client-1: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2019-03-26 14:56:02.591818 (xid=0x3ef4c) [2019-03-26 14:56:02.592166] W [rpc-clnt-ping.c:215:rpc_clnt_ping_cbk] 0-gv1-client-1: socket disconnected [2019-03-26 14:56:06.592208] I [rpc-clnt.c:2042:rpc_clnt_reconfig] 0-gv1-client-1: changing port to 50155 (from 0) [2019-03-26 14:56:06.595306] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a5164efbb] (--> /lib64/libgfrpc.so.0(+0xce11)[0x7f4a51417e11] (--> /lib64/libgfrpc.so.0(+0xcf2e)[0x7f4a51417f2e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7f4a51419531] (--> /lib64/libgfrpc.so.0(+0xf0d8)[0x7f4a5141a0d8] ))))) 0-gv1-client-1: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2019-03-26 14:56:06.594965 (xid=0x3ef51) [2019-03-26 14:56:06.595343] W [rpc-clnt-ping.c:215:rpc_clnt_ping_cbk] 0-gv1-client-1: socket disconnected [2019-03-26 14:56:10.594781] I [rpc-clnt.c:2042:rpc_clnt_reconfig] 0-gv1-client-1: changing port to 50155 (from 0) [2019-03-26 14:56:10.597780] E [rpc-clnt.c:346:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f4a5164efbb] (--> /lib64/libgfrpc.so.0(+0xce11)[0x7f4a51417e11] (--> /lib64/libgfrpc.so.0(+0xcf2e)[0x7f4a51417f2e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7f4a51419531] (--> /lib64/libgfrpc.so.0(+0xf0d8)[0x7f4a5141a0d8] ))))) 0-gv1-client-1: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2019-03-26 14:56:10.597488 (xid=0x3ef56) [2019-03-26 14:56:10.597796] W [rpc-clnt-ping.c:215:rpc_clnt_ping_cbk] 0-gv1-client-1: socket disconnected [2019-03-26 14:56:14.597866] I [rpc-clnt.c:2042:rpc_clnt_reconfig] 0-gv1-client-1: changing port to 50155 (from 0) Bricks didn't crash, just the clients wouldn't talk to them. Upgrading the currently affected server to 5.5 and rebooting it caused the clients to reconnect to normally. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 17:11:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 17:11:59 +0000 Subject: [Bugs] [Bug 1692441] [GSS] Problems using ls or find on volumes using RDMA transport In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692441 Cal Calhoun changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(rkavunga at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. 
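A note on Darrell's comment #12 for bug 1677160 above: a rough way to line the two logs up from the shell. The volume name "gv1" and port 50155 are taken from the pasted client log; the fuse log filename below is only a guess based on the usual /var/log/glusterfs/<mount-point-with-dashes>.log naming, so adjust both to the actual setup.

# On any server still serving the volume: which brick is supposed to own the
# port the client keeps retrying?
gluster volume status gv1 | grep 50155

# On an affected client: count reconnect attempts per client translator and port
# since the server reboot.
grep 'changing port to' /var/log/glusterfs/mnt-gv1.log \
    | awk '{print $5, $9}' | sort | uniq -c

If the brick on the rebooted server is reported online on that port while the counter above keeps growing, that lines up with the "not connected (priv->connected = -1)" entries on the server side and is worth attaching to the bug together with the full client and brick logs.
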
From bugzilla at redhat.com Tue Mar 26 18:06:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 18:06:13 +0000 Subject: [Bugs] [Bug 1692957] New: build: link libgfrpc with MATH_LIB (libm, -lm) Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692957 Bug ID: 1692957 Summary: build: link libgfrpc with MATH_LIB (libm, -lm) Product: GlusterFS Version: 6 Status: NEW Component: build Assignee: bugs at gluster.org Reporter: kkeithle at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: tl;dnr: libgfrpc.so calls log2(3) from libm; it should be explicitly linked with -lm the autoconf/automake/libtool stack is more or less forgiving on different distributions. On forgiving systems libtool will semi- magically link with implicit dependencies. But on Ubuntu, which seems to be tending toward being less forgiving, the link of libgfrpc will fail with an unresolved referencee to log2(3). Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 18:08:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 18:08:06 +0000 Subject: [Bugs] [Bug 1692959] New: build: link libgfrpc with MATH_LIB (libm, -lm) Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692959 Bug ID: 1692959 Summary: build: link libgfrpc with MATH_LIB (libm, -lm) Product: GlusterFS Version: 5 Status: NEW Component: build Assignee: bugs at gluster.org Reporter: kkeithle at redhat.com CC: bugs at gluster.org Depends On: 1692957 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1692957 +++ Description of problem: tl;dnr: libgfrpc.so calls log2(3) from libm; it should be explicitly linked with -lm the autoconf/automake/libtool stack is more or less forgiving on different distributions. On forgiving systems libtool will semi- magically link with implicit dependencies. But on Ubuntu, which seems to be tending toward being less forgiving, the link of libgfrpc will fail with an unresolved referencee to log2(3). Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692957 [Bug 1692957] build: link libgfrpc with MATH_LIB (libm, -lm) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 18:08:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 18:08:06 +0000 Subject: [Bugs] [Bug 1692957] build: link libgfrpc with MATH_LIB (libm, -lm) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692957 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1692959 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692959 [Bug 1692959] build: link libgfrpc with MATH_LIB (libm, -lm) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
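For the libm issue described in bugs 1692957 and 1692959 above, here is a minimal sketch of the linker behaviour outside the gluster build; log2_demo.c is a throwaway test file, not part of the tree.

# log2(3) lives in libm; toolchains that no longer add implicit dependencies
# (the "less forgiving" Ubuntu case above) need an explicit -lm at link time.
cat > log2_demo.c <<'EOF'
#include <math.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    (void)argv;
    /* non-constant argument so the compiler cannot fold the call away */
    printf("log2(%d) = %f\n", argc + 7, log2((double)(argc + 7)));
    return 0;
}
EOF

gcc -o log2_demo log2_demo.c       # typically fails: undefined reference to `log2'
gcc -o log2_demo log2_demo.c -lm   # explicit libm, links everywhere
./log2_demo

The patches on Gerrit take the same approach inside the build system: link libgfrpc against MATH_LIB (-lm) explicitly instead of relying on libtool to pull it in implicitly.
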
From bugzilla at redhat.com Tue Mar 26 18:11:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 18:11:24 +0000 Subject: [Bugs] [Bug 1692957] build: link libgfrpc with MATH_LIB (libm, -lm) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692957 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22421 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 18:13:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 18:13:39 +0000 Subject: [Bugs] [Bug 1692959] build: link libgfrpc with MATH_LIB (libm, -lm) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692959 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22422 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 18:13:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 18:13:40 +0000 Subject: [Bugs] [Bug 1692959] build: link libgfrpc with MATH_LIB (libm, -lm) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692959 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22422 (build: link libgfrpc with MATH_LIB (libm, -lm)) posted (#1) for review on release-5 by Kaleb KEITHLEY -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 18:13:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 18:13:45 +0000 Subject: [Bugs] [Bug 1692394] GlusterFS 6.1 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1692957 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692957 [Bug 1692957] build: link libgfrpc with MATH_LIB (libm, -lm) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Tue Mar 26 18:13:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 18:13:45 +0000 Subject: [Bugs] [Bug 1692957] build: link libgfrpc with MATH_LIB (libm, -lm) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692957 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1692394 (glusterfs-6.1) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692394 [Bug 1692394] GlusterFS 6.1 tracker -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 00:07:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 00:07:32 +0000 Subject: [Bugs] [Bug 1668989] Unable to delete directories that contain linkto files that point to itself. 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1668989 errata-xmlrpc changed: What |Removed |Added ---------------------------------------------------------------------------- Status|VERIFIED |RELEASE_PENDING -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 00:13:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 00:13:03 +0000 Subject: [Bugs] [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |POST --- Comment #4 from Ravishankar N --- AFR patch is yet to be merged, moving back to POST. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 00:48:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 00:48:29 +0000 Subject: [Bugs] [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22369 (afr: add client-pid to all gf_event() calls) merged (#6) on master by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 03:43:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 03:43:43 +0000 Subject: [Bugs] [Bug 1651584] [geo-rep]: validate the config checkpoint date and fail if it is not is exact format hh:mm:ss In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651584 Bug 1651584 depends on bug 1429190, which changed state. Bug 1429190 Summary: [geo-rep]: validate the config checkpoint date and fail if it is not is exact format hh:mm:ss https://bugzilla.redhat.com/show_bug.cgi?id=1429190 What |Removed |Added ---------------------------------------------------------------------------- Status|RELEASE_PENDING |CLOSED Resolution|--- |ERRATA -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 03:43:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 03:43:49 +0000 Subject: [Bugs] [Bug 1654117] [geo-rep]: Failover / Failback shows fault status in a non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654117 Bug 1654117 depends on bug 1510752, which changed state. Bug 1510752 Summary: [geo-rep]: Failover / Failback shows fault status in a non-root setup https://bugzilla.redhat.com/show_bug.cgi?id=1510752 What |Removed |Added ---------------------------------------------------------------------------- Status|RELEASE_PENDING |CLOSED Resolution|--- |ERRATA -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 03:43:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 03:43:51 +0000 Subject: [Bugs] [Bug 1651498] [geo-rep]: Failover / Failback shows fault status in a non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651498 Bug 1651498 depends on bug 1510752, which changed state. 
Bug 1510752 Summary: [geo-rep]: Failover / Failback shows fault status in a non-root setup https://bugzilla.redhat.com/show_bug.cgi?id=1510752 What |Removed |Added ---------------------------------------------------------------------------- Status|RELEASE_PENDING |CLOSED Resolution|--- |ERRATA -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 03:43:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 03:43:51 +0000 Subject: [Bugs] [Bug 1654118] [geo-rep]: Failover / Failback shows fault status in a non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654118 Bug 1654118 depends on bug 1510752, which changed state. Bug 1510752 Summary: [geo-rep]: Failover / Failback shows fault status in a non-root setup https://bugzilla.redhat.com/show_bug.cgi?id=1510752 What |Removed |Added ---------------------------------------------------------------------------- Status|RELEASE_PENDING |CLOSED Resolution|--- |ERRATA -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 03:43:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 03:43:55 +0000 Subject: [Bugs] [Bug 1560969] Garbage collect inactive inodes in fuse-bridge In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1560969 Bug 1560969 depends on bug 1511779, which changed state. Bug 1511779 Summary: Garbage collect inactive inodes in fuse-bridge https://bugzilla.redhat.com/show_bug.cgi?id=1511779 What |Removed |Added ---------------------------------------------------------------------------- Status|RELEASE_PENDING |CLOSED Resolution|--- |ERRATA -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 03:43:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 03:43:40 +0000 Subject: [Bugs] [Bug 1668989] Unable to delete directories that contain linkto files that point to itself. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1668989 errata-xmlrpc changed: What |Removed |Added ---------------------------------------------------------------------------- Status|RELEASE_PENDING |CLOSED Resolution|--- |ERRATA Last Closed| |2019-03-27 03:43:40 --- Comment #10 from errata-xmlrpc --- Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0658 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 03:44:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 03:44:46 +0000 Subject: [Bugs] [Bug 1685414] glusterd memory usage grows at 98 MB/h while running "gluster v profile" in a loop In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685414 Bug 1685414 depends on bug 1684648, which changed state. 
Bug 1684648 Summary: glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA https://bugzilla.redhat.com/show_bug.cgi?id=1684648 What |Removed |Added ---------------------------------------------------------------------------- Status|RELEASE_PENDING |CLOSED Resolution|--- |ERRATA -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 03:44:48 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 03:44:48 +0000 Subject: [Bugs] [Bug 1685771] glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1685771 Bug 1685771 depends on bug 1684648, which changed state. Bug 1684648 Summary: glusterd memory usage grows at 98 MB/h while being monitored by RHGSWA https://bugzilla.redhat.com/show_bug.cgi?id=1684648 What |Removed |Added ---------------------------------------------------------------------------- Status|RELEASE_PENDING |CLOSED Resolution|--- |ERRATA -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 03:44:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 03:44:49 +0000 Subject: [Bugs] [Bug 1668989] Unable to delete directories that contain linkto files that point to itself. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1668989 errata-xmlrpc changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Red Hat Product Errata | |RHBA-2019:0658 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 04:01:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 04:01:47 +0000 Subject: [Bugs] [Bug 1693057] New: dht_revalidate may not heal attrs on the brick root Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693057 Bug ID: 1693057 Summary: dht_revalidate may not heal attrs on the brick root Product: GlusterFS Version: 4.1 Status: NEW Component: distribute Keywords: ZStream Severity: high Priority: high Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org, rhs-bugs at redhat.com, sankarshan at redhat.com, storage-qa-internal at redhat.com, tdesala at redhat.com Depends On: 1648298 Blocks: 1648296, 1660736 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1648298 +++ +++ This bug was initially created as a clone of Bug #1648296 +++ Description of problem: In dht_revalidate_cbk, the stbuf returned by the brick is merged into local->stbuf only if the dir contains a layout. local->stbuf is also used to compare the stbuf returned in responses to check if an attr heal is needed. If the newly added brick (which does not contain a layout) is the first one to respond, its stbuf is not merged into local->stbuf. Subsequent responses from existing bricks never see a mismatch in the stbuf and self heal is not triggered. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. 
Actual results: Expected results: Additional info: --- Additional comment from Red Hat Bugzilla Rules Engine on 2018-11-09 05:58:05 EST --- This bug is automatically being proposed for a Z-stream release of Red Hat Gluster Storage 3 under active development and open for bug fixes, by setting the release flag 'rhgs?3.4.z' to '?'. If this bug should be proposed for a different release, please manually change the proposed release flag. --- Additional comment from Worker Ant on 2018-11-09 11:59:55 UTC --- REVIEW: https://review.gluster.org/21611 (cluster/dht: sync brick root perms on add brick) posted (#1) for review on master by N Balachandran --- Additional comment from Worker Ant on 2018-11-19 05:45:13 UTC --- REVIEW: https://review.gluster.org/21611 (cluster/dht: sync brick root perms on add brick) posted (#5) for review on master by N Balachandran --- Additional comment from Shyamsundar on 2019-03-25 16:31:51 UTC --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1648296 [Bug 1648296] dht_revalidate may not heal attrs on the brick root https://bugzilla.redhat.com/show_bug.cgi?id=1648298 [Bug 1648298] dht_revalidate may not heal attrs on the brick root https://bugzilla.redhat.com/show_bug.cgi?id=1660736 [Bug 1660736] dht_revalidate may not heal attrs on the brick root -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 04:01:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 04:01:47 +0000 Subject: [Bugs] [Bug 1648298] dht_revalidate may not heal attrs on the brick root In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1648298 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1693057 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693057 [Bug 1693057] dht_revalidate may not heal attrs on the brick root -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 04:01:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 04:01:47 +0000 Subject: [Bugs] [Bug 1660736] dht_revalidate may not heal attrs on the brick root In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660736 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1693057 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693057 [Bug 1693057] dht_revalidate may not heal attrs on the brick root -- You are receiving this mail because: You are on the CC list for the bug. 
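For the dht_revalidate problem cloned into bug 1693057 above, a quick way to see the symptom from the shell. The brick paths (/bricks/b1..b3) and mount point (/mnt/vol) are made up for illustration; the point is only that every brick root should carry the same mode and ownership as the volume root seen through the mount.

for b in /bricks/b1 /bricks/b2 /bricks/b3; do
    printf '%s: ' "$b"; stat -c '%a %U:%G' "$b"
done
printf '/mnt/vol: '; stat -c '%a %U:%G' /mnt/vol

# Trigger a revalidate lookup on the root. With the bug, a newly added brick that
# responds first can keep stale root perms; with the fix the lookup notices the
# mismatch and heals it.
stat /mnt/vol > /dev/null

If the newly added brick still shows the packaging default (say 755 root:root) while the older bricks and the mount show the permissions the admin actually set, that is the un-healed state the "sync brick root perms on add brick" patch addresses.
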
From bugzilla at redhat.com Wed Mar 27 04:41:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 04:41:45 +0000 Subject: [Bugs] [Bug 1693057] dht_revalidate may not heal attrs on the brick root In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693057 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22423 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 04:41:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 04:41:46 +0000 Subject: [Bugs] [Bug 1693057] dht_revalidate may not heal attrs on the brick root In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693057 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22423 (cluster/dht: sync brick root perms on add brick) posted (#1) for review on release-4.1 by N Balachandran -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 04:42:22 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 04:42:22 +0000 Subject: [Bugs] [Bug 1693057] dht_revalidate may not heal attrs on the brick root In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693057 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Assignee|bugs at gluster.org |nbalacha at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 04:56:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 04:56:58 +0000 Subject: [Bugs] [Bug 1672869] With parallel-readdir enabled, deleting a directory containing stale linkto files fails with "Directory not empty" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672869 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Rebase -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 04:58:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 04:58:39 +0000 Subject: [Bugs] [Bug 1668995] DHT: Provide a virtual xattr to get the hash subvol for a file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1668995 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Rebase -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Mar 27 05:36:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 05:36:54 +0000 Subject: [Bugs] [Bug 1426044] read-ahead not working if open-behind is turned on In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1426044 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Rebase -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 06:07:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 06:07:03 +0000 Subject: [Bugs] [Bug 1671556] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1671556 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE Last Closed|2019-02-22 14:49:43 |2019-03-27 06:07:03 --- Comment #32 from Nithya Balachandran --- Artem, I am closing this BZ as the crash the others reported (the one fixed by turning off write-behind) has been fixed. We will continue to track the crash you are seeing in https://bugzilla.redhat.com/show_bug.cgi?id=1690769 as that looks like a different problem. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 06:07:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 06:07:04 +0000 Subject: [Bugs] [Bug 1691292] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691292 Bug 1691292 depends on bug 1671556, which changed state. Bug 1671556 Summary: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1671556 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 06:07:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 06:07:05 +0000 Subject: [Bugs] [Bug 1667103] GlusterFS 5.4 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667103 Bug 1667103 depends on bug 1671556, which changed state. Bug 1671556 Summary: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1671556 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 06:07:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 06:07:05 +0000 Subject: [Bugs] [Bug 1674406] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1674406 Bug 1674406 depends on bug 1671556, which changed state. 
Bug 1671556 Summary: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1671556 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 06:07:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 06:07:06 +0000 Subject: [Bugs] [Bug 1676356] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676356 Bug 1676356 depends on bug 1671556, which changed state. Bug 1671556 Summary: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1671556 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 06:07:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 06:07:06 +0000 Subject: [Bugs] [Bug 1678570] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1678570 Bug 1678570 depends on bug 1671556, which changed state. Bug 1671556 Summary: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1671556 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 06:08:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 06:08:05 +0000 Subject: [Bugs] [Bug 1692441] [GSS] Problems using ls or find on volumes using RDMA transport In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692441 Mohammed Rafi KC changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(rkavunga at redhat.c | |om) | -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 06:21:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 06:21:08 +0000 Subject: [Bugs] [Bug 1657163] Stack overflow in readdirp with parallel-readdir enabled In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1657163 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Rebase -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Mar 27 06:27:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 06:27:37 +0000 Subject: [Bugs] [Bug 1682925] Gluster volumes never heal during oVirt 4.2->4.3 upgrade In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1682925 Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |ravishankar at redhat.com Flags| |needinfo?(ravishankar at redha | |t.com) --- Comment #6 from Sahina Bose --- Ravi, can you or someone on the team take a look? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 08:36:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 08:36:24 +0000 Subject: [Bugs] [Bug 1682925] Gluster volumes never heal during oVirt 4.2->4.3 upgrade In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1682925 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(ravishankar at redha |needinfo?(jthomasp at gmualumn |t.com) |i.org) --- Comment #7 from Ravishankar N --- Hi Jason, this is probably a little late but what is the state now? For debugging incomplete heals, we would need the list of files (`gluster vol heal $volname info`) and the `getfattr -d -m. -e hex /path/to/brick/file-in-question` outputs of the files from all 3 the bricks of the replica along with the glustershd.log from all 3 nodes. Please also provide the output of` gluster volume info $volname` -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 08:59:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 08:59:59 +0000 Subject: [Bugs] [Bug 1677160] Gluster 5 client can't access Gluster 3.12 servers In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1677160 Sanju changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(srakonde at redhat.c |needinfo?(budic at onholygroun |om) |d.com) --- Comment #13 from Sanju --- Darrel, Did you collect any logs? If so, please provide us with all the log files from /var/log/glusterfs (for both glusterfs-server and client from all the machines). That helps us in debugging this issue further. Thanks, Sanju -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 09:06:14 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 09:06:14 +0000 Subject: [Bugs] [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |MODIFIED -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 09:08:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 09:08:05 +0000 Subject: [Bugs] [Bug 1693155] New: Excessive AFR messages from gluster showing in RHGSWA. 
Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693155 Bug ID: 1693155 Summary: Excessive AFR messages from gluster showing in RHGSWA. Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: replicate Keywords: Triaged Severity: medium Priority: medium Assignee: bugs at gluster.org Reporter: ravishankar at redhat.com CC: bugs at gluster.org, skoduri at redhat.com Depends On: 1676495, 1689250 Blocks: 1666386 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1689250 +++ +++ This bug was initially created as a clone of Bug #1676495 +++ +++ This bug was initially created as a clone of Bug #1666386 +++ Description of problem: See https://lists.gluster.org/pipermail/gluster-devel/2019-March/055925.html --- Additional comment from Worker Ant on 2019-03-15 14:16:14 UTC --- REVIEW: https://review.gluster.org/22368 (gfapi: add function to set client-pid) posted (#1) for review on master by Ravishankar N --- Additional comment from Worker Ant on 2019-03-15 14:17:20 UTC --- REVIEW: https://review.gluster.org/22369 (afr: add client-id to all gf_event() calls) posted (#1) for review on master by Ravishankar N --- Additional comment from Worker Ant on 2019-03-26 15:25:51 UTC --- REVIEW: https://review.gluster.org/22368 (gfapi: add function to set client-pid) merged (#6) on master by soumya k --- Additional comment from Ravishankar N on 2019-03-27 00:13:03 UTC --- AFR patch is yet to be merged, moving back to POST. --- Additional comment from Worker Ant on 2019-03-27 00:48:29 UTC --- REVIEW: https://review.gluster.org/22369 (afr: add client-pid to all gf_event() calls) merged (#6) on master by Ravishankar N Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1666386 [Bug 1666386] [GSS] Excessive AFR messages from gluster showing in RHGSWA. https://bugzilla.redhat.com/show_bug.cgi?id=1676495 [Bug 1676495] Excessive AFR messages from gluster showing in RHGSWA. https://bugzilla.redhat.com/show_bug.cgi?id=1689250 [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 09:08:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 09:08:05 +0000 Subject: [Bugs] [Bug 1689250] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689250 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1693155 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693155 [Bug 1693155] Excessive AFR messages from gluster showing in RHGSWA. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 09:08:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 09:08:23 +0000 Subject: [Bugs] [Bug 1693155] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693155 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 27 09:10:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 09:10:19 +0000 Subject: [Bugs] [Bug 1693155] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693155 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22424 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 09:10:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 09:10:20 +0000 Subject: [Bugs] [Bug 1693155] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693155 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22424 (gfapi: add function to set client-pid) posted (#1) for review on release-6 by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 09:11:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 09:11:29 +0000 Subject: [Bugs] [Bug 1693155] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693155 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22425 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 09:11:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 09:11:30 +0000 Subject: [Bugs] [Bug 1693155] Excessive AFR messages from gluster showing in RHGSWA. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693155 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22425 (afr: add client-pid to all gf_event() calls) posted (#1) for review on release-6 by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 10:01:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 10:01:49 +0000 Subject: [Bugs] [Bug 1693184] New: A brick process(glusterfsd) died with 'memory violation' Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693184 Bug ID: 1693184 Summary: A brick process(glusterfsd) died with 'memory violation' Product: GlusterFS Version: experimental Hardware: x86_64 OS: Linux Status: NEW Component: replicate Assignee: bugs at gluster.org Reporter: knjeong at growthsoft.co.kr CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: I'm using a volume with two replicas of the 3.6.9 version of GlusterFS. The volume on which the issue occurs is not very active and at one point a process dies suddenly. 
This issue has also caused core dumps, and what we found at the time of the problem is as follows: - /var/log/messages (Brick log is also the same) Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: pending frames: Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: pending frames: Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: patchset: git://git.gluster.com/glusterfs.git Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: signal received: 6 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: time of crash: Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: 2019-03-24 09:15:40 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: configuration details: Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: argp 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: backtrace 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: dlfcn 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: libpthread 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: llistxattr 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: setfsid 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: spinlock 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: epoll.h 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: xattr.h 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: st_atim.tv_nsec 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: package-string: glusterfs 3.6.9 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: --------- Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: patchset: git://git.gluster.com/glusterfs.git Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: signal received: 6 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: time of crash: Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: 2019-03-24 09:15:40 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: configuration details: Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: argp 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: backtrace 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: dlfcn 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: libpthread 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: llistxattr 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: setfsid 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: spinlock 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: epoll.h 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: xattr.h 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: st_atim.tv_nsec 1 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: package-string: glusterfs 3.6.9 Mar 24 18:15:40 P-NAS8 var-lib-glusterFS8-8[119226]: --------- Mar 24 18:15:40 P-NAS8 kernel: audit_printk_skb: 57 callbacks suppressed Mar 24 18:15:40 P-NAS8 kernel: type=1701 audit(1553418940.165:27816716): auid=1002 uid=0 gid=0 ses=3174727 pid=127312 comm="glusterfsd" reason="memory violation" sig=6 Mar 24 18:15:40 P-NAS8 systemd-logind: Removed session 3174727. Mar 24 18:15:40 P-NAS8 kernel: audit_printk_skb: 57 callbacks suppressed Mar 24 18:15:40 P-NAS8 kernel: type=1701 audit(1553418940.165:27816716): auid=1002 uid=0 gid=0 ses=3174727 pid=127312 comm="glusterfsd" reason="memory violation" sig=6 - CoreDump [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Core was generated by `/usr/sbin/glusterfsd -s p-tview-nas8 --volfile-id repl_dist_vol'. Program terminated with signal 6, Aborted. 
#0 0x00007fb6da9895f7 in raise () from /lib64/libc.so.6 Missing separate debuginfos, use: debuginfo-install glusterfs-3.6.9-1.el7.x86_64 (gdb) bt #0 0x00007fb6da9895f7 in raise () from /lib64/libc.so.6 #1 0x00007fb6da98ace8 in abort () from /lib64/libc.so.6 #2 0x00007fb6da9c9317 in __libc_message () from /lib64/libc.so.6 #3 0x00007fb6da9d1023 in _int_free () from /lib64/libc.so.6 #4 0x00007fb6db968d29 in dict_destroy () from /lib64/libglusterfs.so.0 #5 0x00007fb6db99776d in call_stub_destroy () from /lib64/libglusterfs.so.0 #6 0x00007fb6ca286333 in iot_worker () from /usr/lib64/glusterfs/3.6.9/xlator/performance/io-threads.so #7 0x00007fb6db103dc5 in start_thread () from /lib64/libpthread.so.0 #8 0x00007fb6daa4a28d in clone () from /lib64/libc.so.6 - free total used free shared buff/cache available Mem: 31G 21G 1.1G 4.2G 9.0G 5.1G Swap: 15G 173M 15G Unfortunately, I didn't know the exact cause here. Is there any other good way to determine the cause? I look forward to your help. Version-Release number of selected component (if applicable): glusterfs-3.6.9 (community version) How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 10:34:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 10:34:31 +0000 Subject: [Bugs] [Bug 1693201] New: core: move "dict is NULL" logs to DEBUG log level Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693201 Bug ID: 1693201 Summary: core: move "dict is NULL" logs to DEBUG log level Product: GlusterFS Version: 4.1 Status: NEW Component: core Assignee: bugs at gluster.org Reporter: rgowdapp at redhat.com CC: atumball at redhat.com, bugs at gluster.org, mchangir at redhat.com Depends On: 1671213 Blocks: 1671217 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1671213 +++ Description of problem: too many "dict is NULL" get printed if dict_ref() and dict_unref() are passed a NULL pointer --- Additional comment from Worker Ant on 2019-01-31 06:02:34 UTC --- REVIEW: https://review.gluster.org/22128 (core: move \"dict is NULL\" logs to DEBUG log level) posted (#1) for review on master by Milind Changire --- Additional comment from Amar Tumballi on 2019-01-31 10:15:48 UTC --- Can you post some logs? Ideally, if dict is NULL during a 'ref()/unref()', it is a debug hint for developer during development. Surely should be a DEBUG log in release branch. --- Additional comment from Worker Ant on 2019-02-01 03:29:51 UTC --- REVIEW: https://review.gluster.org/22128 (core: move \"dict is NULL\" logs to DEBUG log level) merged (#2) on master by Amar Tumballi --- Additional comment from Shyamsundar on 2019-03-25 16:33:11 UTC --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report. glusterfs-6.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html [2] https://www.gluster.org/pipermail/gluster-users/ Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1671213 [Bug 1671213] core: move "dict is NULL" logs to DEBUG log level https://bugzilla.redhat.com/show_bug.cgi?id=1671217 [Bug 1671217] core: move "dict is NULL" logs to DEBUG log level -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 10:34:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 10:34:31 +0000 Subject: [Bugs] [Bug 1671213] core: move "dict is NULL" logs to DEBUG log level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1671213 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1693201 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693201 [Bug 1693201] core: move "dict is NULL" logs to DEBUG log level -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 10:34:31 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 10:34:31 +0000 Subject: [Bugs] [Bug 1671217] core: move "dict is NULL" logs to DEBUG log level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1671217 Raghavendra G changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1693201 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693201 [Bug 1693201] core: move "dict is NULL" logs to DEBUG log level -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 10:41:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 10:41:53 +0000 Subject: [Bugs] [Bug 1693201] core: move "dict is NULL" logs to DEBUG log level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693201 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22427 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 10:41:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 10:41:54 +0000 Subject: [Bugs] [Bug 1693201] core: move "dict is NULL" logs to DEBUG log level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693201 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22427 (core: move \"dict is NULL\" logs to DEBUG log level) posted (#1) for review on release-4.1 by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 11:15:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 11:15:29 +0000 Subject: [Bugs] [Bug 1593224] [Disperse] : Client side heal is not removing dirty flag for some of the files. 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593224 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/21744 (cluster/ec: Don't enqueue an entry if it is already healing) merged (#11) on master by Xavi Hernandez -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 11:18:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 11:18:12 +0000 Subject: [Bugs] [Bug 1693057] dht_revalidate may not heal attrs on the brick root In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693057 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-27 11:18:12 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22423 (cluster/dht: sync brick root perms on add brick) merged (#2) on release-4.1 by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 11:18:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 11:18:13 +0000 Subject: [Bugs] [Bug 1660736] dht_revalidate may not heal attrs on the brick root In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1660736 Bug 1660736 depends on bug 1693057, which changed state. Bug 1693057 Summary: dht_revalidate may not heal attrs on the brick root https://bugzilla.redhat.com/show_bug.cgi?id=1693057 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Tue Mar 26 12:54:09 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Tue, 26 Mar 2019 12:54:09 +0000 Subject: [Bugs] [Bug 1691292] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691292 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-27 11:18:36 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22393 (performance/write-behind: fix use after free in readdirp_cbk) merged (#3) on release-4.1 by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 11:18:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 11:18:36 +0000 Subject: [Bugs] [Bug 1667103] GlusterFS 5.4 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667103 Bug 1667103 depends on bug 1691292, which changed state. Bug 1691292 Summary: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1691292 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 27 11:18:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 11:18:36 +0000 Subject: [Bugs] [Bug 1676356] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1676356 Bug 1676356 depends on bug 1691292, which changed state. Bug 1691292 Summary: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1691292 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 11:18:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 11:18:37 +0000 Subject: [Bugs] [Bug 1678570] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1678570 Bug 1678570 depends on bug 1691292, which changed state. Bug 1691292 Summary: glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' https://bugzilla.redhat.com/show_bug.cgi?id=1691292 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 11:23:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 11:23:39 +0000 Subject: [Bugs] [Bug 1693223] New: [Disperse] : Client side heal is not removing dirty flag for some of the files. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693223 Bug ID: 1693223 Summary: [Disperse] : Client side heal is not removing dirty flag for some of the files. Product: GlusterFS Version: 6 Status: NEW Component: disperse Assignee: bugs at gluster.org Reporter: aspandey at redhat.com CC: bugs at gluster.org, jahernan at redhat.com, nchilaka at redhat.com, pkarampu at redhat.com, vavuthu at redhat.com Depends On: 1593224 Blocks: 1600918 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1593224 +++ Description of problem: While server side heal is disabled, client side heal is not removing dirty flag for some files. Version-Release number of selected component (if applicable): How reproducible: 50% Steps to Reproduce: 1. Create a 4+2 volume and mount 2. Touch 100 files 3. Kill one brick 4. write some data all files using dd 5. bricng the bricks UP 6. Append some data on all the bricks using dd, this will trigger heal on all the files 7. Read data from all the files using dd command. At the end all files should be healed. However, I have observed that 2-3 files are still showing up in heal info. When I looked for the getxattr, all the xattrs were same and dirty flag was still present for data fop. Actual results: Expected results: Additional info: --- Additional comment from Worker Ant on 2018-11-28 12:51:02 UTC --- REVIEW: https://review.gluster.org/21744 (cluster/ec: Don't enqueue an entry if it is already healing) posted (#2) for review on master by Ashish Pandey --- Additional comment from Ashish Pandey on 2018-12-06 10:23:52 UTC --- There are two things we need to fix. 
First, if there are a number of entries to be healed and a client touches all of them while SHD is off, it will trigger heal for all the files, but not all the files will be placed in the queue. This is because, for a single file, there are a number of fops coming in and all of them place the same file in the queue for healing, which fills the queue quickly so new files will not get a chance. Second, when a file has started healing, sometimes we see that the dirty flag is not removed even though version and size have been healed and are the same. We need to find out why this is happening. If the shd is OFF and a client accesses this file, it will not find any discrepancies, as version and size on all the bricks are the same; hence heal will not be triggered for this file and the dirty flag will remain as it is. --- Additional comment from Ashish Pandey on 2018-12-11 06:51:11 UTC --- While debugging the failure of this patch and thinking of incorporating the comments given by Pranith and Xavi, I found that there are some design constraints to implementing the idea of not enqueuing an entry if it is already healing. Consider a 2+1 config and the following scenario - 1 - Create a volume and disable the self-heal daemon. 2 - Create a file and write some data while all the bricks are UP. 3 - Kill one brick and write some data to the same file. 4 - Bring the brick UP. 5 - Now, to trigger heal, we do "chmod 0666 file". This will do a stat on the file, which will find that the brick is not healthy and trigger the heal. 6 - Now a synctask for the heal will be created and started, which will call ec_heal_do, which in turn calls ec_heal_metadata and ec_heal_data. 7 - A setattr fop will also be called on the file to set the permission. Now, a sequence of steps could be like this - a > Stat - which saw the unhealthy file and triggered heal b > ec_heal_metadata - took the lock, healed metadata and the metadata part of trusted.ec.version, released the lock on the file. [At this point setattr is waiting for the lock] c > setattr takes the lock and finds that the brick is still unhealthy, as the data version is not healed and is mismatching. It marks the dirty flag for the metadata version and unlocks the file. d > ec_heal_data takes the locks and heals the data. Now, if we restrict heal triggering to only one fop, after step d the file will contain a dirty flag and mismatched metadata versions. If we keep all the heal requests from every fop in a queue and after every heal we check whether heal is needed or not, then we will end up triggering heal for every fop, which defeats the purpose of the patch. Xavi, Pranith, please provide your comments. Am I correct in my understanding? --- Additional comment from Ashish Pandey on 2018-12-18 11:23:15 UTC --- I have found a bug which was the actual cause of the dirty flag remaining set even after heal happened and all the versions and sizes are matching. This bug is only visible when we have shd disabled, as shd will clear the dirty flag if it has nothing to heal. 1 - Let's say we have disabled shd 2 - Create a file and then kill a brick 3 - Write data, around 1GB, to the file, which will be healed after the brick comes UP 4 - Bring the brick UP 5 - Do "chmod 0777 file"; this will trigger heal. 6 - Immediately start a write on this file, appending 512 bytes using dd. Now, while data healing was happening, the write from the mount came in (step 6) and took the lock. It saw that the healing flag is set on the version xattr of the file, so it sent the write to all the bricks. Before releasing the lock, it also updated version and size on ALL the bricks, including the brick which is healing.
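To make the dirty-flag accounting in this scenario concrete, below is a minimal standalone counter sketch; the values are hypothetical and this is not the actual EC xlator code or its trusted.ec.dirty xattr handling. The paragraph that follows explains where this accounting happens in the real code.

    /* Sketch of the dirty-counter race described above; all names and
     * numbers are illustrative, not glusterfs code. */
    #include <stdio.h>

    int main(void)
    {
        int dirty = 1; /* set by the write done while the brick was down (step 3) */

        /* Heal starts and remembers how much "dirty" it intends to clear. */
        int dirty_at_heal_start = dirty;

        /* Step 6: a write lands while the heal is still running. It succeeds
         * on all bricks, but its +1 on the dirty counter is never undone
         * because the healing brick is not yet considered good. */
        dirty += 1; /* dirty == 2 */

        /* Heal completes and clears only what it saw when it started. */
        dirty -= dirty_at_heal_start; /* dirty == 1, not 0 */

        printf("dirty count left on the file after heal: %d\n", dirty);
        return 0;
    }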
However, in ec_update_info, we consider lock->good_mask to decide whether we should unset the "dirty" flag that was set by this write fop, even though the fop succeeded on all the bricks. So the +1 on the dirty flag added by this write fop remains as it is. Now, data heal gets the lock again and, after completing the data heal, we unset the dirty flag by decreasing it by the _same_ number that we found at the start of healing. This does not include the increment made by the write fop in step 6. So, after healing, a dirty flag will remain set on the file. This flag will never be unset if shd is not enabled. --- Additional comment from Worker Ant on 2019-03-27 11:15:29 UTC --- REVIEW: https://review.gluster.org/21744 (cluster/ec: Don't enqueue an entry if it is already healing) merged (#11) on master by Xavi Hernandez Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1593224 [Bug 1593224] [Disperse] : Client side heal is not removing dirty flag for some of the files. https://bugzilla.redhat.com/show_bug.cgi?id=1600918 [Bug 1600918] [Disperse] : Client side heal is not removing dirty flag for some of the files. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 11:23:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 11:23:39 +0000 Subject: [Bugs] [Bug 1593224] [Disperse] : Client side heal is not removing dirty flag for some of the files. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1593224 Ashish Pandey changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1693223 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693223 [Bug 1693223] [Disperse] : Client side heal is not removing dirty flag for some of the files. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 11:23:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 11:23:39 +0000 Subject: [Bugs] [Bug 1600918] [Disperse] : Client side heal is not removing dirty flag for some of the files. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1600918 Ashish Pandey changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1693223 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693223 [Bug 1693223] [Disperse] : Client side heal is not removing dirty flag for some of the files. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 11:23:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 11:23:57 +0000 Subject: [Bugs] [Bug 1693223] [Disperse] : Client side heal is not removing dirty flag for some of the files. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693223 Ashish Pandey changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 11:25:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 11:25:39 +0000 Subject: [Bugs] [Bug 1693223] [Disperse] : Client side heal is not removing dirty flag for some of the files.
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693223 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22429 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 11:25:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 11:25:40 +0000 Subject: [Bugs] [Bug 1693223] [Disperse] : Client side heal is not removing dirty flag for some of the files. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693223 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22429 (cluster/ec: Don't enqueue an entry if it is already healing) posted (#1) for review on release-6 by Ashish Pandey -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 11:26:32 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 11:26:32 +0000 Subject: [Bugs] [Bug 1693223] [Disperse] : Client side heal is not removing dirty flag for some of the files. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693223 --- Comment #2 from Ashish Pandey --- patch for rel-6 https://review.gluster.org/#/c/glusterfs/+/22429/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 13:26:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:26:11 +0000 Subject: [Bugs] [Bug 1693295] New: rpc.statd not started on builder204.aws.gluster.org Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693295 Bug ID: 1693295 Summary: rpc.statd not started on builder204.aws.gluster.org Product: GlusterFS Version: 4.1 Status: NEW Component: project-infrastructure Assignee: bugs at gluster.org Reporter: nbalacha at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem https://build.gluster.org/job/centos7-regression/5244/ fails with: 11:59:01 mount.nfs: rpc.statd is not running but is required for remote locking. 11:59:01 mount.nfs: Either use '-o nolock' to keep locks local, or start statd. 11:59:01 mount.nfs: an incorrect mount option was specified Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 13:36:33 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:36:33 +0000 Subject: [Bugs] [Bug 1693300] New: GlusterFS 5.6 tracker Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693300 Bug ID: 1693300 Summary: GlusterFS 5.6 tracker Product: GlusterFS Version: 5 Status: NEW Component: core Keywords: Tracking, Triaged Assignee: bugs at gluster.org Reporter: srangana at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Tracker for the release 5.6 -- You are receiving this mail because: You are on the CC list for the bug. 
You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 13:38:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:38:18 +0000 Subject: [Bugs] [Bug 1628620] GlusterFS 5.0 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1628620 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |CURRENTRELEASE Last Closed|2018-10-23 15:19:00 |2019-03-27 13:38:18 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 13:40:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:40:34 +0000 Subject: [Bugs] [Bug 1636631] Issuing a "heal ... full" on a disperse volume causes permanent high CPU utilization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1636631 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version|glusterfs-6.0 |glusterfs-5.2 --- Comment #6 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.2, please open a new bug report. glusterfs-5.2 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2018-December/000117.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:40:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:40:34 +0000 Subject: [Bugs] [Bug 1651525] Issuing a "heal ... full" on a disperse volume causes permanent high CPU utilization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651525 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-5.2 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-27 13:40:34 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.2, please open a new bug report. glusterfs-5.2 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2018-December/000117.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:40:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:40:36 +0000 Subject: [Bugs] [Bug 1644681] Issuing a "heal ... full" on a disperse volume causes permanent high CPU utilization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644681 Bug 1644681 depends on bug 1651525, which changed state. 
Bug 1651525 Summary: Issuing a "heal ... full" on a disperse volume causes permanent high CPU utilization. https://bugzilla.redhat.com/show_bug.cgi?id=1651525 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:40:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:40:34 +0000 Subject: [Bugs] [Bug 1654115] [Geo-rep]: Faulty geo-rep sessions due to link ownership on slave volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654115 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-5.2 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-27 13:40:34 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.2, please open a new bug report. glusterfs-5.2 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2018-December/000117.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:40:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:40:37 +0000 Subject: [Bugs] [Bug 1646806] [Geo-rep]: Faulty geo-rep sessions due to link ownership on slave volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1646806 Bug 1646806 depends on bug 1654115, which changed state. Bug 1654115 Summary: [Geo-rep]: Faulty geo-rep sessions due to link ownership on slave volume https://bugzilla.redhat.com/show_bug.cgi?id=1654115 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:40:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:40:34 +0000 Subject: [Bugs] [Bug 1654117] [geo-rep]: Failover / Failback shows fault status in a non-root setup In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654117 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-5.2 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-27 13:40:34 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.2, please open a new bug report. glusterfs-5.2 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2018-December/000117.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:40:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:40:34 +0000 Subject: [Bugs] [Bug 1654236] Provide an option to silence glfsheal logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654236 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-5.2 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-27 13:40:34 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.2, please open a new bug report. glusterfs-5.2 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2018-December/000117.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:40:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:40:38 +0000 Subject: [Bugs] [Bug 1654229] Provide an option to silence glfsheal logs In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654229 Bug 1654229 depends on bug 1654236, which changed state. Bug 1654236 Summary: Provide an option to silence glfsheal logs https://bugzilla.redhat.com/show_bug.cgi?id=1654236 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:40:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:40:34 +0000 Subject: [Bugs] [Bug 1654370] Bitrot: Scrub status say file is corrupted even it was just created AND 'path' in the output is broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654370 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-5.2 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-27 13:40:34 --- Comment #8 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.2, please open a new bug report. glusterfs-5.2 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2018-December/000117.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the Docs Contact for the bug. 
From bugzilla at redhat.com Wed Mar 27 13:40:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:40:38 +0000 Subject: [Bugs] [Bug 1654805] Bitrot: Scrub status say file is corrupted even it was just created AND 'path' in the output is broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1654805 Bug 1654805 depends on bug 1654370, which changed state. Bug 1654370 Summary: Bitrot: Scrub status say file is corrupted even it was just created AND 'path' in the output is broken https://bugzilla.redhat.com/show_bug.cgi?id=1654370 What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Resolution|--- |CURRENTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. You are the Docs Contact for the bug. From bugzilla at redhat.com Wed Mar 27 13:40:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:40:34 +0000 Subject: [Bugs] [Bug 1655545] gfid heal does not happen when there is no source brick In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1655545 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|MODIFIED |CLOSED Fixed In Version| |glusterfs-5.2 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-27 13:40:34 --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.2, please open a new bug report. glusterfs-5.2 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2018-December/000117.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:44:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:44:02 +0000 Subject: [Bugs] [Bug 1651246] Failed to dispatch handler In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1651246 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #44 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Wed Mar 27 13:44:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:44:02 +0000 Subject: [Bugs] [Bug 1665145] Writes on Gluster 5 volumes fail with EIO when "cluster.consistent-metadata" is set In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1665145 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:44:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:44:02 +0000 Subject: [Bugs] [Bug 1667103] GlusterFS 5.4 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667103 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 13:44:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:44:02 +0000 Subject: [Bugs] [Bug 1669382] [ovirt-gluster] Fuse mount crashed while creating the preallocated image In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1669382 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 13:44:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:44:02 +0000 Subject: [Bugs] [Bug 1670307] api: bad GFAPI_4.1.6 block In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1670307 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 13:44:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:44:02 +0000 Subject: [Bugs] [Bug 1671217] core: move "dict is NULL" logs to DEBUG log level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1671217 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 13:44:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:44:02 +0000 Subject: [Bugs] [Bug 1671556] glusterfs FUSE client crashing every few days with 'Failed to dispatch handler' In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1671556 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 --- Comment #33 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. 
glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:44:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:44:07 +0000 Subject: [Bugs] [Bug 1671611] Unable to delete directories that contain linkto files that point to itself. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1671611 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:44:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:44:07 +0000 Subject: [Bugs] [Bug 1672248] quorum count not updated in nfs-server vol file In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672248 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 27 13:44:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:44:07 +0000 Subject: [Bugs] [Bug 1672314] thin-arbiter: Check with thin-arbiter file before marking new entry change log In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672314 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Fixed In Version| |glusterfs-5.5 Resolution|--- |CURRENTRELEASE Last Closed| |2019-03-27 13:44:07 --- Comment #2 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:44:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:44:07 +0000 Subject: [Bugs] [Bug 1673268] Fix timeouts so the tests pass on AWS In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673268 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 13:44:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:44:07 +0000 Subject: [Bugs] [Bug 1678726] Integer Overflow possible in md-cache.c due to data type inconsistency In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1678726 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #7 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. 
[1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 13:44:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:44:07 +0000 Subject: [Bugs] [Bug 1679968] Upgrade from glusterfs 3.12 to gluster 4/5 broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1679968 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 13:46:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:46:38 +0000 Subject: [Bugs] [Bug 1684385] [ovirt-gluster] Rolling gluster upgrade from 3.12.5 to 5.3 led to shard on-disk xattrs disappearing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684385 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #8 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 13:46:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:46:38 +0000 Subject: [Bugs] [Bug 1684569] Upgrade from 4.1 and 5 is broken In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1684569 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #5 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. 
glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:46:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:46:38 +0000 Subject: [Bugs] [Bug 1687687] [geo-rep]: Checksum mismatch when 2x2 vols are converted to arbiter In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1687687 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #2 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 13:46:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 13:46:38 +0000 Subject: [Bugs] [Bug 1689214] GlusterFS 5.5 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1689214 Shyamsundar changed: What |Removed |Added ---------------------------------------------------------------------------- Fixed In Version| |glusterfs-5.5 Resolution|NEXTRELEASE |CURRENTRELEASE --- Comment #3 from Shyamsundar --- This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.5, please open a new bug report. glusterfs-5.5 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2019-March/000119.html [2] https://www.gluster.org/pipermail/gluster-users/ -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 14:31:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 14:31:00 +0000 Subject: [Bugs] [Bug 1682925] Gluster volumes never heal during oVirt 4.2->4.3 upgrade In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1682925 --- Comment #8 from Jason --- I reverted my nodes back to oVirt node 4.2 and they healed up just fine. I do not have the results of the commands you've requested. I plan to spin up a testing cluster, install 4.2 on it, then upgrade to 4.3 to see if there's still problems. We have a lot of new hardware coming in soon, so I'll be light on time to mess with oVirt for a few weeks. 
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 14:35:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 14:35:10 +0000 Subject: [Bugs] [Bug 1692666] ssh-port config set is failing In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692666 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-27 14:35:10 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22418 (geo-rep: fix integer config validation) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Wed Mar 27 14:57:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 14:57:13 +0000 Subject: [Bugs] [Bug 1693300] GlusterFS 5.6 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693300 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Depends On| |1692959 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1692959 [Bug 1692959] build: link libgfrpc with MATH_LIB (libm, -lm) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 14:57:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 14:57:13 +0000 Subject: [Bugs] [Bug 1692959] build: link libgfrpc with MATH_LIB (libm, -lm) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692959 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1693300 (glusterfs-5.6) Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693300 [Bug 1693300] GlusterFS 5.6 tracker -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 16:38:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 16:38:38 +0000 Subject: [Bugs] [Bug 1693295] rpc.statd not started on builder204.aws.gluster.org In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693295 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #1 from M. Scherer --- So, it fails because the network service didn't return correctly, but I can't find why this happens. I may just reboot after the test is finished. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Wed Mar 27 17:18:08 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 17:18:08 +0000 Subject: [Bugs] [Bug 1692612] Locking issue when restarting bricks In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692612 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-27 17:18:08 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22417 (glusterd: fix potential locking issue on peer probe) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 17:19:30 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 17:19:30 +0000 Subject: [Bugs] [Bug 1692879] Wrong Youtube link in website In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692879 Amye Scavarda changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |CLOSED CC| |amye at redhat.com Resolution|--- |UPSTREAM Last Closed| |2019-03-27 17:19:30 --- Comment #2 from Amye Scavarda --- Either way, resolved! -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 17:35:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 17:35:16 +0000 Subject: [Bugs] [Bug 1693385] New: request to change the version of fedora in fedora-smoke-job Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693385 Bug ID: 1693385 Summary: request to change the version of fedora in fedora-smoke-job Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Severity: high Priority: high Assignee: bugs at gluster.org Reporter: atumball at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: There are at least 2 jobs which use 'fedora' while running smoke: https://build.gluster.org/job/devrpm-fedora/ && https://build.gluster.org/job/fedora-smoke/ I guess we are running Fedora 28 in both of these; it would be good to update them to a higher version, say F29 (and soon F30). Version-Release number of selected component (if applicable): master Additional info: It would be good to remove '--enable-debug' from these builds on some jobs (there are 2 smoke and 4 rpm build jobs). We should remove --enable-debug from at least 1 of these, so that our release RPMs, which have no DEBUG defined, can be warning-free. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 17:48:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 17:48:25 +0000 Subject: [Bugs] [Bug 1693385] request to change the version of fedora in fedora-smoke-job In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693385 M. Scherer changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |mscherer at redhat.com --- Comment #1 from M. Scherer --- So, that requires upgrading the builders (or reinstalling them); I think it would be better to wait for F30 to do it only once.
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 18:28:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 18:28:44 +0000 Subject: [Bugs] [Bug 1693401] New: client-log-level and brick-log-level options are not properly handled in latest codebase Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693401 Bug ID: 1693401 Summary: client-log-level and brick-log-level options are not properly handled in latest codebase Product: GlusterFS Version: mainline Status: NEW Component: core Severity: high Priority: high Assignee: bugs at gluster.org Reporter: atumball at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: If we set client-log-level option to TRACE, the behavior was not as expected. Version-Release number of selected component (if applicable): mainline How reproducible: 100% Steps to Reproduce: 1. glusterd; create volume; start volume; mount volume; 2. gluster volume set volume client-log-level TRACE 3. Notice that the client log file is not dumping all the TRACE level logs. 4. gluster volume set volume brick-log-level TRACE 5. Notice that the brick log file is not dumping all the TRACE level logs. Actual results: No proper logging, causing issues with debugging. Expected results: Log files should start showing corresponding level logs immediately. Additional info: Looks like this got introduced by https://review.gluster.org/21470 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 18:32:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 18:32:12 +0000 Subject: [Bugs] [Bug 1693401] client-log-level and brick-log-level options are not properly handled in latest codebase In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693401 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22426 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 18:32:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 18:32:13 +0000 Subject: [Bugs] [Bug 1693401] client-log-level and brick-log-level options are not properly handled in latest codebase In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693401 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22426 (xlators: make the 'handle_default_options()' as dummy function.) posted (#2) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 27 19:30:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 19:30:39 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 21439 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 19:40:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 19:40:21 +0000 Subject: [Bugs] [Bug 1635863] Gluster peer probe doesn't work for IPv6 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1635863 Amgad changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amgad.saleh at nokia.com --- Comment #5 from Amgad --- would the fix be ported to 5.x stream? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 20:07:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 20:07:58 +0000 Subject: [Bugs] [Bug 1693201] core: move "dict is NULL" logs to DEBUG log level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693201 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-27 20:07:58 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22427 (core: move \"dict is NULL\" logs to DEBUG log level) merged (#2) on release-4.1 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 20:07:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 20:07:58 +0000 Subject: [Bugs] [Bug 1671217] core: move "dict is NULL" logs to DEBUG log level In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1671217 Bug 1671217 depends on bug 1693201, which changed state. Bug 1693201 Summary: core: move "dict is NULL" logs to DEBUG log level https://bugzilla.redhat.com/show_bug.cgi?id=1693201 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 20:15:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 20:15:57 +0000 Subject: [Bugs] [Bug 1667099] GlusterFS 4.1.8 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667099 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22432 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Wed Mar 27 20:15:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 20:15:58 +0000 Subject: [Bugs] [Bug 1667099] GlusterFS 4.1.8 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667099 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22432 (doc: Added release notes for 4.1.8) posted (#1) for review on release-4.1 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Wed Mar 27 22:09:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Wed, 27 Mar 2019 22:09:12 +0000 Subject: [Bugs] [Bug 1667099] GlusterFS 4.1.8 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1667099 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-27 22:09:12 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22432 (doc: Added release notes for 4.1.8) merged (#1) on release-4.1 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 05:37:23 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 05:37:23 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22433 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 05:37:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 05:37:24 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #601 from Worker Ant --- REVIEW: https://review.gluster.org/22433 (rpc: Remove duplicate code) posted (#1) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 08:49:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 08:49:01 +0000 Subject: [Bugs] [Bug 1693575] New: gfapi: do not block epoll thread for upcall notifications Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 Bug ID: 1693575 Summary: gfapi: do not block epoll thread for upcall notifications Product: GlusterFS Version: mainline Hardware: All OS: All Status: NEW Component: libgfapi Severity: high Assignee: bugs at gluster.org Reporter: skoduri at redhat.com QA Contact: bugs at gluster.org CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: With https://review.gluster.org/#/c/glusterfs/+/21783/, we have made changes to offload the processing of upcall notifications to a synctask so as not to block epoll threads. However, it seems the purpose wasn't fully achieved.
In "glfs_cbk_upcall_data" -> "synctask_new1" after creating synctask if there is no callback defined, the thread waits on synctask_join till the syncfn is finished. So that way even with those changes, epoll threads are blocked till the upcalls are processed. Hence the right fix now is to define a callback function for that synctask "glfs_cbk_upcall_syncop" so as to unblock epoll/notify threads completely and the upcall processing can happen in parallel by synctask threads. Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 08:49:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 08:49:01 +0000 Subject: [Bugs] [Bug 1693575] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 Soumya Koduri changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |skoduri at redhat.com -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 09:28:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 09:28:58 +0000 Subject: [Bugs] [Bug 1693575] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 --- Comment #1 from Soumya Koduri --- Users have complained about nfs-ganesha process getting stuck here - https://github.com/nfs-ganesha/nfs-ganesha/issues/335 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 28 05:37:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 05:37:24 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #602 from Worker Ant --- REVIEW: https://review.gluster.org/22433 (rpc: Remove duplicate code) merged (#1) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 09:34:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 09:34:10 +0000 Subject: [Bugs] [Bug 1693575] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22436 -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Mar 28 09:34:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 09:34:11 +0000 Subject: [Bugs] [Bug 1693575] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for upcall processing) posted (#1) for review on master by soumya k -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 28 10:36:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 10:36:06 +0000 Subject: [Bugs] [Bug 1659708] Optimize by not stopping (restart) selfheal deamon (shd) when a volume is stopped unless it is the last volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1659708 Mohammed Rafi KC changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|CURRENTRELEASE |--- -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 28 12:03:04 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 12:03:04 +0000 Subject: [Bugs] [Bug 1644681] Issuing a "heal ... full" on a disperse volume causes permanent high CPU utilization. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1644681 Sunil Kumar Acharya changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |Rebase CC| |sheggodu at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 28 12:07:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 12:07:12 +0000 Subject: [Bugs] [Bug 1693648] New: [GSS] Geo replication falling in "cannot allocate memory" and "operation not permitted" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693648 Bug ID: 1693648 Summary: [GSS] Geo replication falling in "cannot allocate memory" and "operation not permitted" Product: GlusterFS Version: mainline Hardware: x86_64 OS: Linux Status: NEW Component: geo-replication Keywords: ZStream Severity: medium Priority: medium Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: abhishku at redhat.com, avishwan at redhat.com, bkunal at redhat.com, csaba at redhat.com, khiremat at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, skandark at redhat.com, smulay at redhat.com, storage-qa-internal at redhat.com, sunkumar at redhat.com Depends On: 1670429 Target Milestone: --- Group: redhat Classification: Community -- You are receiving this mail because: You are the assignee for the bug. 
From bugzilla at redhat.com Thu Mar 28 12:07:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 12:07:51 +0000 Subject: [Bugs] [Bug 1693648] Geo-re: Geo replication falling in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693648 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|[GSS] Geo replication |Geo-re: Geo replication |falling in "cannot allocate |falling in "cannot allocate |memory" and "operation not |memory" |permitted" | -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 12:12:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 12:12:01 +0000 Subject: [Bugs] [Bug 1693648] Geo-re: Geo replication falling in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693648 --- Comment #1 from Kotresh HR --- Description of the Problem: Geo-rep is 'Faulty' and not syncing Slave worker crash: [2019-01-21 14:46:36.338450] I [resource(slave):1422:connect] GLUSTER: Mounting gluster volume locally... [2019-01-21 14:46:47.581492] I [resource(slave):1435:connect] GLUSTER: Mounted gluster volume duration=11.2428 [2019-01-21 14:46:47.582036] I [resource(slave):905:service_loop] GLUSTER: slave listening [2019-01-21 14:47:36.831804] E [repce(slave):117:worker] : call failed: Traceback (most recent call last): File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 113, in worker res = getattr(self.obj, rmeth)(*in_data[2:]) File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 756, in entry_ops [ESTALE, EINVAL, EBUSY]) File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 553, in errno_wrap return call(*arg) File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 79, in lsetxattr cls.raise_oserr() File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 37, in raise_oserr raise OSError(errn, os.strerror(errn)) OSError: [Errno 12] Cannot allocate memory Master worker crash: [2019-01-21 14:46:36.7253] I [resource(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1700:connect_remote] SSH: Initializing SSH connection between master and slave... [2019-01-21 14:46:36.7440] I [changelogagent(/glusterfs/glprd01-vsb-pil-modshape000/brick1):73:__init__] ChangelogAgent: Agent listining... [2019-01-21 14:46:47.585638] I [resource(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1707:connect_remote] SSH: SSH connection between master and slave established. duration=11.5781 [2019-01-21 14:46:47.585905] I [resource(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1422:connect] GLUSTER: Mounting gluster volume locally... [2019-01-21 14:46:48.650470] I [resource(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1435:connect] GLUSTER: Mounted gluster volume duration=1.0644 [2019-01-21 14:46:48.650816] I [gsyncd(/glusterfs/glprd01-vsb-pil-modshape000/brick1):803:main_i] : Worker spawn successful. Acknowledging back to monitor [2019-01-21 14:46:50.675277] I [master(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1583:register] _GMaster: Working dir path=/var/lib/misc/glusterfsd/pil-vbs-modshape/ssh%3A%2F%2Fgeoaccount%40172.21.142. 
33%3Agluster%3A%2F%2F127.0.0.1%3Apil-vbs-modshape/5eaac78a29ba1e2e24b401621c5240c3 [2019-01-21 14:46:50.675633] I [resource(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1582:service_loop] GLUSTER: Register time time=1548082010 [2019-01-21 14:46:50.690826] I [master(/glusterfs/glprd01-vsb-pil-modshape000/brick1):482:mgmt_lock] _GMaster: Didn't get lock Becoming PASSIVE brick=/glusterfs/glprd01-vsb-pil-modshape000/brick1 [2019-01-21 14:46:50.703552] I [gsyncdstatus(/glusterfs/glprd01-vsb-pil-modshape000/brick1):282:set_passive] GeorepStatus: Worker Status Change status=Passive [2019-01-21 14:47:35.797741] I [master(/glusterfs/glprd01-vsb-pil-modshape000/brick1):436:mgmt_lock] _GMaster: Got lock Becoming ACTIVE brick=/glusterfs/glprd01-vsb-pil-modshape000/brick1 [2019-01-21 14:47:35.802330] I [gsyncdstatus(/glusterfs/glprd01-vsb-pil-modshape000/brick1):276:set_active] GeorepStatus: Worker Status Change status=Active [2019-01-21 14:47:35.804092] I [gsyncdstatus(/glusterfs/glprd01-vsb-pil-modshape000/brick1):248:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl [2019-01-21 14:47:35.804485] I [master(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1497:crawl] _GMaster: starting history crawl turns=1 stime=(1548059316, 0) entry_stime=(1548059310, 0) etime=15480 82055 [2019-01-21 14:47:36.808142] I [master(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1526:crawl] _GMaster: slave's time stime=(1548059316, 0) [2019-01-21 14:47:36.833885] E [repce(/glusterfs/glprd01-vsb-pil-modshape000/brick1):209:__call__] RepceClient: call failed call=32116:139676615182144:1548082056.82 method=entry_ops error=OSError [2019-01-21 14:47:36.834212] E [syncdutils(/glusterfs/glprd01-vsb-pil-modshape000/brick1):349:log_raise_exception] : FAIL: Traceback (most recent call last): File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 210, in main main_i() File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 805, in main_i local.service_loop(*[r for r in [remote] if r]) File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1588, in service_loop g3.crawlwrap(oneshot=True) File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 597, in crawlwrap self.crawl() File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1535, in crawl self.changelogs_batch_process(changes) File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1435, in changelogs_batch_process self.process(batch) File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1269, in process self.process_change(change, done, retry) File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1165, in process_change failures = self.slave.server.entry_ops(entries) File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 228, in __call__ return self.ins(self.meth, *a) File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 210, in __call__ raise res OSError: [Errno 12] Cannot allocate memory [2019-01-21 14:47:36.846298] I [syncdutils(/glusterfs/glprd01-vsb-pil-modshape000/brick1):289:finalize] : exiting. [2019-01-21 14:47:36.849236] I [repce(/glusterfs/glprd01-vsb-pil-modshape000/brick1):92:service_loop] RepceServer: terminating -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 12:28:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 12:28:25 +0000 Subject: [Bugs] [Bug 1390914] Glusterfs create a flock lock by anonymous fd, but can't release it forever. 
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1390914 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 15804 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 28 12:28:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 12:28:26 +0000 Subject: [Bugs] [Bug 1390914] Glusterfs create a flock lock by anonymous fd, but can't release it forever. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1390914 --- Comment #9 from Worker Ant --- REVIEW: https://review.gluster.org/15804 (protocol/client: Do not fallback to anon-fd if fd is not open) posted (#3) for review on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Thu Mar 28 12:38:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 12:38:28 +0000 Subject: [Bugs] [Bug 1692959] build: link libgfrpc with MATH_LIB (libm, -lm) In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692959 Kaleb KEITHLEY changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NOTABUG Last Closed| |2019-03-28 12:38:28 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 12:38:28 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 12:38:28 +0000 Subject: [Bugs] [Bug 1693300] GlusterFS 5.6 tracker In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693300 Bug 1693300 depends on bug 1692959, which changed state. Bug 1692959 Summary: build: link libgfrpc with MATH_LIB (libm, -lm) https://bugzilla.redhat.com/show_bug.cgi?id=1692959 What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NOTABUG -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 13:25:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 13:25:15 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #22 from Znamensky Pavel --- Unfortunately, it's a blocker for us too. Like Jacob, we've seen a 4x increase in outgoing traffic on clients. Disabling read-ahead and readdir-ahead didn't help. Disabling quick-read helped a little bit. We look forward to the fix and hope this bug is marked as critical so that a fix for the 5.x branch will be released earlier. -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Thu Mar 28 13:45:07 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 13:45:07 +0000 Subject: [Bugs] [Bug 1693692] New: Increase code coverage from regression tests Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Bug ID: 1693692 Summary: Increase code coverage from regression tests Product: GlusterFS Version: mainline URL: https://build.gluster.org/job/line-coverage/lastCompletedBuild/Line_20Coverage_20Report/ Status: NEW Component: core Severity: urgent Priority: high Assignee: bugs at gluster.org Reporter: atumball at redhat.com CC: bugs at gluster.org, ykaul at redhat.com Target Milestone: --- Classification: Community Description of problem: Currently the overall code coverage is around 60%; it would be good to increase it to 70%. ref: https://build.gluster.org/job/line-coverage/lastCompletedBuild/Line_20Coverage_20Report/ Version-Release number of selected component (if applicable): master How reproducible: 100% Steps to Reproduce: 1. Keep checking the URL above and review the individual components. Expected results: Line coverage should reach 70%. Additional info: Some pointers to look into: * sdfs is at 1% (can be increased to 80%+ with a minor fix). * quiesce is at 20%. Can be increased by a test with a handcrafted volfile. * cloudsync is at 22%; we need a way to increase it. * trace is at <30%; looks like we can do more with a handcrafted volfile. * protocol is missing a major part of the tests, mainly because the 3.x RPC programs are not exercised. We need a mechanism to test the old 3.x protocol as well. Just with the above we should be able to gain up to 3%, maybe. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 13:49:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 13:49:15 +0000 Subject: [Bugs] [Bug 1693693] New: GlusterFS 4.1.9 tracker Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693693 Bug ID: 1693693 Summary: GlusterFS 4.1.9 tracker Product: GlusterFS Version: 4.1 Status: NEW Component: core Keywords: Tracking, Triaged Assignee: bugs at gluster.org Reporter: srangana at redhat.com CC: bugs at gluster.org Target Milestone: --- Deadline: 2019-05-20 Classification: Community Tracker bug for 4.1.9 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 14:20:54 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 14:20:54 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22439 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
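As a rough illustration of the "up to 3%" estimate in bug 1693692 above: the overall figure only moves in proportion to each component's share of the instrumented lines. The Python sketch below uses invented line counts purely to show the arithmetic; the real sizes of sdfs, quiesce, cloudsync and trace have to be read off the line-coverage report linked in the bug.

# Back-of-the-envelope weighted-coverage arithmetic for the pointers above.
# All line counts here are invented for illustration; only the formula matters.
components = {
    # name: (instrumented_lines, coverage_now, coverage_target)
    "sdfs":      (1_000, 0.01, 0.80),
    "quiesce":   (1_500, 0.20, 0.60),
    "cloudsync": (2_000, 0.22, 0.50),
    "trace":     (1_200, 0.30, 0.70),
}
TOTAL_LINES = 500_000        # assumed size of the whole instrumented tree
OVERALL_NOW = 0.60           # ~60% today, per the bug description

gained_lines = sum(lines * (target - now)
                   for lines, now, target in components.values())
overall_after = OVERALL_NOW + gained_lines / TOTAL_LINES
print("overall coverage: %.1f%% -> %.1f%%" % (100 * OVERALL_NOW,
                                              100 * overall_after))
# A handful of small xlators barely moves the total on its own; most of the
# estimated ~3% in the description has to come from large components such
# as protocol.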
From bugzilla at redhat.com Thu Mar 28 14:20:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 14:20:55 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #603 from Worker Ant --- REVIEW: https://review.gluster.org/22439 (rpclib: slow floating point math and libm) posted (#1) for review on master by Kaleb KEITHLEY -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 15:01:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 15:01:17 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- Keywords| |CodeChange, Tracking --- Comment #1 from Yaniv Kaul --- I'd start with the important code, for example, code that is ~50%: https://build.gluster.org/job/line-coverage/Line_20Coverage_20Report/xlators/mgmt/glusterd/src/glusterd-op-sm.c.gcov.html https://build.gluster.org/job/line-coverage/Line_20Coverage_20Report/xlators/mgmt/glusterd/src/glusterd-brick-ops.c.gcov.html https://build.gluster.org/job/line-coverage/Line_20Coverage_20Report/xlators/mgmt/glusterd/src/glusterd-geo-rep.c.gcov.html Critical, core components, or features (geo-rep), which have 50-60%. We can see whole functions not be called. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 15:09:47 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 15:09:47 +0000 Subject: [Bugs] [Bug 1693385] request to change the version of fedora in fedora-smoke-job In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693385 --- Comment #2 from Niels de Vos --- (In reply to Amar Tumballi from comment #0) > Description of problem: ... > Would be good to remove '--enable-debug' in these builds on some jobs (there > are 2 smoke, and 4 rpm build jobs). We should remove --enable-debug in at > least 1 of these, so our release RPMs which has no DEBUG defined, can be > warning free. I do not think these jobs are used for the RPMs that get marked as 'released' and land on download.gluster.org. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Thu Mar 28 15:26:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Thu, 28 Mar 2019 15:26:24 +0000 Subject: [Bugs] [Bug 1693385] request to change the version of fedora in fedora-smoke-job In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693385 --- Comment #3 from Amar Tumballi --- Agree, I was asking for a job without DEBUG mainly because a few times, there may be warning without DEBUG being there during compile (ref: https://review.gluster.org/22347 && https://review.gluster.org/22389 ) As I had --enable-debug while testing locally, never saw the warning, and none of the smoke tests captured the error. If we had a job without --enable-debug, we could have seen the warning while compiling, which would have failed Smoke. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Mar 29 02:21:43 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 02:21:43 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-29 02:21:43 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22402 (client-rpc: Fix the payload being sent on the wire) merged (#3) on master by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 03:16:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 03:16:24 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22441 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 03:16:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 03:16:25 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22441 (tests: add statedump to playground) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 03:17:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 03:17:38 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22442 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 03:17:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 03:17:39 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22442 (tests: add a tests for trace xlator) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Mar 29 03:18:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 03:18:44 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22443 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 03:18:45 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 03:18:45 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22443 (sdfs: enable pass-through) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 03:19:52 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 03:19:52 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22444 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 03:19:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 03:19:53 +0000 Subject: [Bugs] [Bug 1693692] Increase code coverage from regression tests In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693692 --- Comment #5 from Worker Ant --- REVIEW: https://review.gluster.org/22444 (protocol: add an option to force using old-protocol) posted (#1) for review on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 05:02:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 05:02:03 +0000 Subject: [Bugs] [Bug 1693935] New: Network throughput usage increased x5 Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693935 Bug ID: 1693935 Summary: Network throughput usage increased x5 Product: Red Hat Gluster Storage Hardware: x86_64 OS: Linux Status: NEW Component: core Severity: high Priority: high Assignee: atumball at redhat.com Reporter: pgurusid at redhat.com QA Contact: rhinduja at redhat.com CC: amukherj at redhat.com, bengoa at gmail.com, bugs at gluster.org, info at netbulae.com, jsecchiero at enter.eu, nbalacha at redhat.com, pgurusid at redhat.com, revirii at googlemail.com, rhs-bugs at redhat.com, rob.dewit at coosto.com, sankarshan at redhat.com, storage-qa-internal at redhat.com Depends On: 1673058, 1692101, 1692093 Target Milestone: --- Classification: Red Hat +++ This bug was initially created as a clone of Bug #1692093 +++ +++ This bug was initially created as a clone of Bug #1673058 +++ Description of problem: Client network throughput in OUT direction usage increased x5 after an upgrade from 3.11, 3.12 to 5.3 of the server. 
Now i have ~110Mbps of traffic in OUT direction for each client and on the server i have a total of ~1450Mbps for each gluster server. Watch the attachment for graph before/after upgrade network throughput. Version-Release number of selected component (if applicable): 5.3 How reproducible: upgrade from 3.11, 3.12 to 5.3 Steps to Reproduce: 1. https://docs.gluster.org/en/v3/Upgrade-Guide/upgrade_to_3.12/ 2. https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_5/ Actual results: Network throughput usage increased x5 Expected results: Just the features and the bugfix of the 5.3 release Cluster Information: 2 nodes with 1 volume with 2 distributed brick for each node Number of Peers: 1 Hostname: 10.2.0.180 Uuid: 368055db-9e90-433f-9a56-bfc1507a25c5 State: Peer in Cluster (Connected) Volume Information: Volume Name: storage_other Type: Distributed-Replicate Volume ID: 6857bf2b-c97d-4505-896e-8fbc24bd16e8 Status: Started Snapshot Count: 0 Number of Bricks: 2 x 2 = 4 Transport-type: tcp Bricks: Brick1: 10.2.0.181:/mnt/storage-brick1/data Brick2: 10.2.0.180:/mnt/storage-brick1/data Brick3: 10.2.0.181:/mnt/storage-brick2/data Brick4: 10.2.0.180:/mnt/storage-brick2/data Options Reconfigured: nfs.disable: on transport.address-family: inet storage.fips-mode-rchecksum: on Status of volume: storage_other Gluster process TCP Port RDMA Port Online Pid ------------------------------------------------------------------------------ Brick 10.2.0.181:/mnt/storage-brick1/data 49152 0 Y 1165 Brick 10.2.0.180:/mnt/storage-brick1/data 49152 0 Y 1149 Brick 10.2.0.181:/mnt/storage-brick2/data 49153 0 Y 1166 Brick 10.2.0.180:/mnt/storage-brick2/data 49153 0 Y 1156 Self-heal Daemon on localhost N/A N/A Y 1183 Self-heal Daemon on 10.2.0.180 N/A N/A Y 1166 Task Status of Volume storage_other ------------------------------------------------------------------------------ There are no active volume tasks --- Additional comment from Nithya Balachandran on 2019-02-21 07:53:44 UTC --- Is this high throughput consistent? Please provide a tcpdump of the client process for about 30s to 1 min during the high throughput to see what packets gluster is sending: In a terminal to the client machine: tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22 Wait for 30s-1min and stop the capture. Send us the pcap file. Another user reported that turning off readdir-ahead worked for him. Please try that after capturing the statedump and see if it helps you. --- Additional comment from Alberto Bengoa on 2019-02-21 11:17:22 UTC --- (In reply to Nithya Balachandran from comment #1) > Is this high throughput consistent? > Please provide a tcpdump of the client process for about 30s to 1 min during > the high throughput to see what packets gluster is sending: > > In a terminal to the client machine: > tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22 > > Wait for 30s-1min and stop the capture. Send us the pcap file. > > Another user reported that turning off readdir-ahead worked for him. Please > try that after capturing the statedump and see if it helps you. I'm the another user and I can confirm the same behaviour here. On our tests we did: - Mounted the new cluster servers (running 5.3 version) using client 5.3 - Started a find . -type d on a directory with lots of directories. - It generated an outgoing traffic (on the client) of around 90mbps (so, inbound traffic on gluster server). We repeated the same test using 3.8 client (on 5.3 cluster) and the outgoing traffic on the client was just around 1.3 mbps. 
I can provide pcaps if needed. Cheers, Alberto Bengoa --- Additional comment from Nithya Balachandran on 2019-02-22 04:09:41 UTC --- Assigning this to Amar to be reassigned appropriately. --- Additional comment from Jacob on 2019-02-25 13:42:45 UTC --- i'm not able to upload in the bugzilla portal due to the size of the pcap. You can download from here: https://mega.nz/#!FNY3CS6A!70RpciIzDgNWGwbvEwH-_b88t9e1QVOXyLoN09CG418 --- Additional comment from Poornima G on 2019-03-04 15:23:14 UTC --- Disabling readdir-ahead fixed the issue? --- Additional comment from Hubert on 2019-03-04 15:32:17 UTC --- We seem to have the same problem with a fresh install of glusterfs 5.3 on a debian stretch. We migrated from an existing setup (version 4.1.6, distribute-replicate) to a new setup (version 5.3, replicate), and traffic on clients went up significantly, maybe causing massive iowait on the clients during high-traffic times. Here are some munin graphs: network traffic on high iowait client: https://abload.de/img/client-eth1-traffic76j4i.jpg network traffic on old servers: https://abload.de/img/oldservers-eth1nejzt.jpg network traffic on new servers: https://abload.de/img/newservers-eth17ojkf.jpg performance.readdir-ahead is on by default. I could deactivate it tomorrow morning (07:00 CEST), and provide tcpdump data if necessary. Regards, Hubert --- Additional comment from Hubert on 2019-03-05 12:03:11 UTC --- i set performance.readdir-ahead to off and watched network traffic for about 2 hours now, but traffic is still as high. 5-8 times higher than it was with old 4.1.x volumes. just curious: i see hundreds of thousands of these messages: [2019-03-05 12:02:38.423299] W [dict.c:761:dict_ref] (-->/usr/lib/x86_64-linux-gnu/glusterfs/5.3/xlator/performance/quick-read.so(+0x6df4) [0x7f0db452edf4] -->/usr/lib/x86_64-linux-gnu/glusterfs/5.3/xlator/performance/io-cache.so(+0xa39d) [0x7f0db474039d] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_ref+0x58) [0x7f0dbb7e4a38] ) 5-dict: dict is NULL [Invalid argument] see https://bugzilla.redhat.com/show_bug.cgi?id=1674225 - could this be related? --- Additional comment from Jacob on 2019-03-06 09:54:26 UTC --- Disabling readdir-ahead doesn't change the througput. --- Additional comment from Alberto Bengoa on 2019-03-06 10:07:59 UTC --- Neither to me. BTW, read-ahead/readdir-ahead shouldn't generate traffic in the opposite direction? ( Server -> Client) --- Additional comment from Nithya Balachandran on 2019-03-06 11:40:49 UTC --- (In reply to Jacob from comment #4) > i'm not able to upload in the bugzilla portal due to the size of the pcap. > You can download from here: > > https://mega.nz/#!FNY3CS6A!70RpciIzDgNWGwbvEwH-_b88t9e1QVOXyLoN09CG418 @Poornima, the following are the calls and instances from the above: 104 proc-1 (stat) 8259 proc-11 (open) 46 proc-14 (statfs) 8239 proc-15 (flush) 8 proc-18 (getxattr) 68 proc-2 (readlink) 5576 proc-27 (lookup) 8388 proc-41 (forget) Not sure if it helps. --- Additional comment from Hubert on 2019-03-07 08:34:21 UTC --- i made a tcpdump as well: tcpdump -i eth1 -s 0 -w /tmp/dirls.pcap tcp and not port 2222 tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes 259699 packets captured 259800 packets received by filter 29 packets dropped by kernel The file is 1.1G big; gzipped and uploaded it: https://ufile.io/5h6i2 Hope this helps. 
--- Additional comment from Hubert on 2019-03-07 09:00:12 UTC --- Maybe i should add that the relevant IP addresses of the gluster servers are: 192.168.0.50, 192.168.0.51, 192.168.0.52 --- Additional comment from Hubert on 2019-03-18 13:45:51 UTC --- fyi: on a test setup (debian stretch, after upgrade 5.3 -> 5.5) i did a little test: - copied 11GB of data - via rsync: rsync --bwlimit=10000 --inplace --- bandwith limit of max. 10000 KB/s - rsync pulled data over interface eth0 - rsync stats: sent 1,484,200 bytes received 11,402,695,074 bytes 5,166,106.13 bytes/sec - so external traffic average was about 5 MByte/s - result was an internal traffic up to 350 MBit/s (> 40 MByte/s) on eth1 (LAN interface) - graphic of internal traffic: https://abload.de/img/if_eth1-internal-trafdlkcy.png - graphic of external traffic: https://abload.de/img/if_eth0-external-trafrejub.png --- Additional comment from Poornima G on 2019-03-19 06:15:50 UTC --- Apologies for the delay, there have been some changes done to quick-read feature, which deals with reading the content of a file in lookup fop, if the file is smaller than 64KB. I m suspecting that with 5.3 the increase in bandwidth may be due to more number of reads of small file(generated by quick-read). Please try the following: gluster vol set quick-read off gluster vol set read-ahead off gluster vol set io-cache off And let us know if the network bandwidth consumption decreases, meanwhile i will try to reproduce the same locally. --- Additional comment from Hubert on 2019-03-19 08:12:04 UTC --- I deactivated the 3 params and did the same test again. - same rsync params: rsync --bwlimit=10000 --inplace - rsync stats: sent 1,491,733 bytes received 11,444,330,300 bytes 6,703,263.27 bytes/sec - so ~6,7 MByte/s or ~54 MBit/s in average (peak of 60 MBit/s) over external network interface - traffic graphic of the server with rsync command: https://abload.de/img/if_eth1-internal-traf4zjow.png - so server is sending with an average of ~110 MBit/s and with peak at ~125 MBit/s over LAN interface - traffic graphic of one of the replica servers (disregard first curve: is the delete of the old data): https://abload.de/img/if_enp5s0-internal-trn5k9v.png - so one of the replicas receices data with ~55 MBit/s average and peak ~62 MBit/s - as a comparison - traffic before and after changing the 3 params (rsync server, highest curve is relevant): - https://abload.de/img/if_eth1-traffic-befortvkib.png So it looks like the traffic was reduced to about a third. Is it this what you expected? If so: traffic would be still a bit higher when i compare 4.1.6 and 5.3 - here's a graphic of one client in our live system after switching from 4.1.6 (~20 MBit/s) to 5.3. (~100 MBit/s in march): https://abload.de/img/if_eth1-comparison-gly8kyx.png So if this traffic gets reduced to 1/3: traffic would be ~33 MBit/s then. Way better, i think. And could be "normal"? Thx so far :-) --- Additional comment from Poornima G on 2019-03-19 09:23:48 UTC --- Awesome thank you for trying it out, i was able to reproduce this issue locally, one of the major culprit was the quick-read. The other two options had no effect in reducing the bandwidth consumption. So for now as a workaround, can disable quick-read: # gluster vol set quick-read off Quick-read alone reduced the bandwidth consumption by 70% for me. Debugging the rest 30% increase. Meanwhile, planning to make this bug a blocker for our next gulster-6 release. Will keep the bug updated with the progress. 
--- Additional comment from Hubert on 2019-03-19 10:07:35 UTC --- i'm running another test, just alongside... simply deleting and copying data, no big effort. Just curious :-) 2 little questions: - does disabling quick-read have any performance issues for certain setups/scenarios? - bug only blocker for v6 release? update for v5 planned? --- Additional comment from Poornima G on 2019-03-19 10:36:20 UTC --- (In reply to Hubert from comment #17) > i'm running another test, just alongside... simply deleting and copying > data, no big effort. Just curious :-) I think if the volume hosts small files, then any kind of operation around these files will see increased bandwidth usage. > > 2 little questions: > > - does disabling quick-read have any performance issues for certain > setups/scenarios? Small file reads(files with size <= 64kb) will see reduced performance. Eg: web server use case. > - bug only blocker for v6 release? update for v5 planned? Yes there will be updated for v5, not sure when. The updates for major releases are made once in every 3 or 4 weeks not sure. For critical bugs the release will be made earlier. --- Additional comment from Alberto Bengoa on 2019-03-19 11:54:58 UTC --- Hello guys, Thanks for your update Poornima. I was already running quick-read off here so, on my case, I noticed the traffic growing consistently after enabling it. I've made some tests on my scenario, and I wasn't able to reproduce your 70% reduction results. To me, it's near 46% of traffic reduction (from around 103 Mbps to around 55 Mbps, graph attached here: https://pasteboard.co/I68s9qE.png ) What I'm doing is just running a find . type -d on a directory with loads of directories/files. Poornima, if you don't mind to answer a question, why are we seem this traffic on the inbound of gluster servers (outbound of clients)? On my particular case, the traffic should be basically on the opposite direction I think, and I'm very curious about that. Thank you, Alberto --- Additional comment from Poornima G on 2019-03-22 17:42:54 UTC --- Thank You all for the report. We have the RCA, working on the patch will be posting it shortly. The issue was with the size of the payload being sent from the client to server for operations like lookup and readdirp. Hence worakload involving lookup and readdir would consume a lot of bandwidth. --- Additional comment from Worker Ant on 2019-03-24 09:31:53 UTC --- REVIEW: https://review.gluster.org/22402 (client-rpc: Fix the payload being sent on the wire) posted (#1) for review on master by Poornima G --- Additional comment from Worker Ant on 2019-03-29 02:21:43 UTC --- REVIEW: https://review.gluster.org/22402 (client-rpc: Fix the payload being sent on the wire) merged (#3) on master by Raghavendra G Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 [Bug 1673058] Network throughput usage increased x5 https://bugzilla.redhat.com/show_bug.cgi?id=1692093 [Bug 1692093] Network throughput usage increased x5 https://bugzilla.redhat.com/show_bug.cgi?id=1692101 [Bug 1692101] Network throughput usage increased x5 -- You are receiving this mail because: You are on the CC list for the bug. 
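Tying the RCA quoted at the end of the cloned bug above to the traffic numbers reported earlier: when every lookup and readdirp request carries an oversized payload, a metadata-heavy crawl multiplies that waste by the call rate, which is why the growth showed up on the client's outbound side. The figures in the Python sketch below are invented solely to show the arithmetic; the real per-call overhead is whatever https://review.gluster.org/22402 removed.

# Illustrative arithmetic only -- the payload sizes below are assumptions,
# not measurements from the patch. It just shows why extra bytes on every
# lookup/readdirp request inflate client outbound traffic during a crawl.
calls_per_second = 2_000            # lookups + readdirp during a "find ." crawl
old_request_bytes = 6_000           # assumed request size with the bug
new_request_bytes = 1_200           # assumed request size after the fix

def mbps(bytes_per_second):
    return bytes_per_second * 8 / 1_000_000.0

print("before fix: %.0f Mbit/s" % mbps(calls_per_second * old_request_bytes))
print("after fix:  %.0f Mbit/s" % mbps(calls_per_second * new_request_bytes))
# before fix: 96 Mbit/s, after fix: 19 Mbit/s -- roughly the 5x ratio users
# reported, even though the absolute numbers here are made up.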
From bugzilla at redhat.com Fri Mar 29 05:02:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 05:02:03 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1693935 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693935 [Bug 1693935] Network throughput usage increased x5 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 05:02:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 05:02:03 +0000 Subject: [Bugs] [Bug 1692101] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1693935 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693935 [Bug 1693935] Network throughput usage increased x5 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 05:02:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 05:02:03 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1693935 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693935 [Bug 1693935] Network throughput usage increased x5 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 05:06:34 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 05:06:34 +0000 Subject: [Bugs] [Bug 1693935] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693935 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- Version|unspecified |rhgs-3.5 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 05:07:46 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 05:07:46 +0000 Subject: [Bugs] [Bug 1693935] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693935 Poornima G changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #2 from Poornima G --- Posted upstream, will post downstream soon. -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Mar 29 05:08:57 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 05:08:57 +0000 Subject: [Bugs] [Bug 1693935] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693935 --- Comment #3 from Atin Mukherjee --- Upstream patch : https://review.gluster.org/#/c/22402/ -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 05:22:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 05:22:40 +0000 Subject: [Bugs] [Bug 1690769] GlusterFS 5.5 crashes in 1x4 replicate setup. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690769 Nithya Balachandran changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 07:25:10 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 07:25:10 +0000 Subject: [Bugs] [Bug 1693575] gfapi: do not block epoll thread for upcall notifications In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693575 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-29 07:25:10 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22436 (gfapi: Unblock epoll thread for upcall processing) merged (#4) on master by Amar Tumballi -- You are receiving this mail because: You are the QA Contact for the bug. You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 07:25:36 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 07:25:36 +0000 Subject: [Bugs] [Bug 1688068] Proper error message needed for FUSE mount failure when /var is filled. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1688068 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-29 07:25:36 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22346 (mount.glusterfs: change the error message) merged (#2) on master by Amar Tumballi -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 08:36:16 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 08:36:16 +0000 Subject: [Bugs] [Bug 1686398] Thin-arbiter minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686398 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-29 08:36:16 --- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22323 (afr: thin-arbiter read txn fixes) merged (#4) on master by Pranith Kumar Karampuri -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Mar 29 08:47:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 08:47:35 +0000 Subject: [Bugs] [Bug 1693992] New: Thin-arbiter minor fixes Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693992 Bug ID: 1693992 Summary: Thin-arbiter minor fixes Product: GlusterFS Version: 6 Status: NEW Component: replicate Keywords: Triaged Assignee: bugs at gluster.org Reporter: ravishankar at redhat.com CC: bugs at gluster.org Depends On: 1686398 Target Milestone: --- Classification: Community +++ This bug was initially created as a clone of Bug #1686398 +++ Description of problem: Address post-merge review comments for commit 69532c141be160b3fea03c1579ae4ac13018dcdf --- Additional comment from Worker Ant on 2019-03-07 11:39:36 UTC --- REVIEW: https://review.gluster.org/22323 (afr: thin-arbiter read txn minor fixes) posted (#1) for review on master by Ravishankar N --- Additional comment from Worker Ant on 2019-03-29 08:36:16 UTC --- REVIEW: https://review.gluster.org/22323 (afr: thin-arbiter read txn fixes) merged (#4) on master by Pranith Kumar Karampuri Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1686398 [Bug 1686398] Thin-arbiter minor fixes -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 08:47:35 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 08:47:35 +0000 Subject: [Bugs] [Bug 1686398] Thin-arbiter minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1686398 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Blocks| |1693992 Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693992 [Bug 1693992] Thin-arbiter minor fixes -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 08:47:50 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 08:47:50 +0000 Subject: [Bugs] [Bug 1693992] Thin-arbiter minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693992 Ravishankar N changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |ravishankar at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 08:49:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 08:49:19 +0000 Subject: [Bugs] [Bug 1693992] Thin-arbiter minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693992 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22446 -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Fri Mar 29 08:49:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 08:49:20 +0000 Subject: [Bugs] [Bug 1693992] Thin-arbiter minor fixes In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693992 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22446 (afr: thin-arbiter read txn fixes) posted (#1) for review on release-6 by Ravishankar N -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 09:04:51 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 09:04:51 +0000 Subject: [Bugs] [Bug 1694002] New: Geo-re: Geo replication failing in "cannot allocate memory" Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694002 Bug ID: 1694002 Summary: Geo-re: Geo replication failing in "cannot allocate memory" Product: GlusterFS Version: 6 Hardware: x86_64 OS: Linux Status: NEW Component: geo-replication Keywords: ZStream Severity: medium Priority: medium Assignee: bugs at gluster.org Reporter: khiremat at redhat.com CC: abhishku at redhat.com, avishwan at redhat.com, bkunal at redhat.com, bugs at gluster.org, csaba at redhat.com, khiremat at redhat.com, rhinduja at redhat.com, rhs-bugs at redhat.com, sankarshan at redhat.com, skandark at redhat.com, smulay at redhat.com, storage-qa-internal at redhat.com, sunkumar at redhat.com Depends On: 1670429, 1693648 Target Milestone: --- Classification: Community Description of the Problem: Geo-rep is 'Faulty' and not syncing Slave worker crash: [2019-01-21 14:46:36.338450] I [resource(slave):1422:connect] GLUSTER: Mounting gluster volume locally... [2019-01-21 14:46:47.581492] I [resource(slave):1435:connect] GLUSTER: Mounted gluster volume duration=11.2428 [2019-01-21 14:46:47.582036] I [resource(slave):905:service_loop] GLUSTER: slave listening [2019-01-21 14:47:36.831804] E [repce(slave):117:worker] : call failed: Traceback (most recent call last): File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 113, in worker res = getattr(self.obj, rmeth)(*in_data[2:]) File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 756, in entry_ops [ESTALE, EINVAL, EBUSY]) File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 553, in errno_wrap return call(*arg) File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 79, in lsetxattr cls.raise_oserr() File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 37, in raise_oserr raise OSError(errn, os.strerror(errn)) OSError: [Errno 12] Cannot allocate memory Master worker crash: [2019-01-21 14:46:36.7253] I [resource(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1700:connect_remote] SSH: Initializing SSH connection between master and slave... [2019-01-21 14:46:36.7440] I [changelogagent(/glusterfs/glprd01-vsb-pil-modshape000/brick1):73:__init__] ChangelogAgent: Agent listining... [2019-01-21 14:46:47.585638] I [resource(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1707:connect_remote] SSH: SSH connection between master and slave established. duration=11.5781 [2019-01-21 14:46:47.585905] I [resource(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1422:connect] GLUSTER: Mounting gluster volume locally... 
[2019-01-21 14:46:48.650470] I [resource(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1435:connect] GLUSTER: Mounted gluster volume duration=1.0644 [2019-01-21 14:46:48.650816] I [gsyncd(/glusterfs/glprd01-vsb-pil-modshape000/brick1):803:main_i] : Worker spawn successful. Acknowledging back to monitor [2019-01-21 14:46:50.675277] I [master(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1583:register] _GMaster: Working dir path=/var/lib/misc/glusterfsd/pil-vbs-modshape/ssh%3A%2F%2Fgeoaccount%40172.21.142. 33%3Agluster%3A%2F%2F127.0.0.1%3Apil-vbs-modshape/5eaac78a29ba1e2e24b401621c5240c3 [2019-01-21 14:46:50.675633] I [resource(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1582:service_loop] GLUSTER: Register time time=1548082010 [2019-01-21 14:46:50.690826] I [master(/glusterfs/glprd01-vsb-pil-modshape000/brick1):482:mgmt_lock] _GMaster: Didn't get lock Becoming PASSIVE brick=/glusterfs/glprd01-vsb-pil-modshape000/brick1 [2019-01-21 14:46:50.703552] I [gsyncdstatus(/glusterfs/glprd01-vsb-pil-modshape000/brick1):282:set_passive] GeorepStatus: Worker Status Change status=Passive [2019-01-21 14:47:35.797741] I [master(/glusterfs/glprd01-vsb-pil-modshape000/brick1):436:mgmt_lock] _GMaster: Got lock Becoming ACTIVE brick=/glusterfs/glprd01-vsb-pil-modshape000/brick1 [2019-01-21 14:47:35.802330] I [gsyncdstatus(/glusterfs/glprd01-vsb-pil-modshape000/brick1):276:set_active] GeorepStatus: Worker Status Change status=Active [2019-01-21 14:47:35.804092] I [gsyncdstatus(/glusterfs/glprd01-vsb-pil-modshape000/brick1):248:set_worker_crawl_status] GeorepStatus: Crawl Status Change status=History Crawl [2019-01-21 14:47:35.804485] I [master(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1497:crawl] _GMaster: starting history crawl turns=1 stime=(1548059316, 0) entry_stime=(1548059310, 0) etime=15480 82055 [2019-01-21 14:47:36.808142] I [master(/glusterfs/glprd01-vsb-pil-modshape000/brick1):1526:crawl] _GMaster: slave's time stime=(1548059316, 0) [2019-01-21 14:47:36.833885] E [repce(/glusterfs/glprd01-vsb-pil-modshape000/brick1):209:__call__] RepceClient: call failed call=32116:139676615182144:1548082056.82 method=entry_ops error=OSError [2019-01-21 14:47:36.834212] E [syncdutils(/glusterfs/glprd01-vsb-pil-modshape000/brick1):349:log_raise_exception] : FAIL: Traceback (most recent call last): File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 210, in main main_i() File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 805, in main_i local.service_loop(*[r for r in [remote] if r]) File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1588, in service_loop g3.crawlwrap(oneshot=True) File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 597, in crawlwrap self.crawl() File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1535, in crawl self.changelogs_batch_process(changes) File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1435, in changelogs_batch_process self.process(batch) File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1269, in process self.process_change(change, done, retry) File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1165, in process_change failures = self.slave.server.entry_ops(entries) File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 228, in __call__ return self.ins(self.meth, *a) File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 210, in __call__ raise res OSError: [Errno 12] Cannot allocate memory [2019-01-21 14:47:36.846298] I 
[syncdutils(/glusterfs/glprd01-vsb-pil-modshape000/brick1):289:finalize] : exiting. [2019-01-21 14:47:36.849236] I [repce(/glusterfs/glprd01-vsb-pil-modshape000/brick1):92:service_loop] RepceServer: terminating --- Additional comment from Worker Ant on 2019-03-29 07:24:23 UTC --- REVIEW: https://review.gluster.org/22438 (geo-rep: Fix syncing multiple rename of symlink) merged (#2) on master by Amar Tumballi Referenced Bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1693648 [Bug 1693648] Geo-re: Geo replication failing in "cannot allocate memory" -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 09:05:11 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 09:05:11 +0000 Subject: [Bugs] [Bug 1694002] Geo-re: Geo replication failing in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694002 Kotresh HR changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |ASSIGNED Assignee|bugs at gluster.org |khiremat at redhat.com -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 09:07:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 09:07:12 +0000 Subject: [Bugs] [Bug 1694002] Geo-re: Geo replication failing in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694002 Bug 1694002 depends on bug 1693648, which changed state. Bug 1693648 Summary: Geo-re: Geo replication failing in "cannot allocate memory" https://bugzilla.redhat.com/show_bug.cgi?id=1693648 What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 09:08:38 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 09:08:38 +0000 Subject: [Bugs] [Bug 1694002] Geo-re: Geo replication failing in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694002 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22447 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 09:08:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 09:08:39 +0000 Subject: [Bugs] [Bug 1694002] Geo-re: Geo replication failing in "cannot allocate memory" In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694002 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|ASSIGNED |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22447 (geo-rep: Fix syncing multiple rename of symlink) posted (#2) for review on release-6 by Kotresh HR -- You are receiving this mail because: You are on the CC list for the bug. 
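For context on the "Faulty, not syncing" state described in bug 1694002 above: the session state is normally read from a master node with the geo-replication status command, and the two tracebacks quoted in the report come from the logs under the geo-replication log directories. A minimal sketch follows, assuming the volume name pil-vbs-modshape and the slave geoaccount@172.21.142.33 read from the quoted log paths; substitute the names of your own session.

# session state as seen from a master node; a crashing worker shows up as Faulty
gluster volume geo-replication pil-vbs-modshape geoaccount@172.21.142.33::pil-vbs-modshape status detail

# master-side worker logs (source of the gsyncd/master.py traceback quoted above)
ls /var/log/glusterfs/geo-replication/

# slave-side logs (source of the entry_ops lsetxattr ENOMEM traceback quoted above)
ls /var/log/glusterfs/geo-replication-slaves/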
From bugzilla at redhat.com Fri Mar 29 09:10:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 09:10:49 +0000 Subject: [Bugs] [Bug 1640109] Default ACL cannot be removed In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1640109
--- Comment #2 from homma at allworks.co.jp --- The problem seems to be resolved by commit 36e2ec3c88eba7a1bcd8aa6f64e4672349ff1d0c on the master branch, but not on the release-4.1 and release-5 branches. Please consider applying the fix to release 5. -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Fri Mar 29 09:42:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 09:42:53 +0000 Subject: [Bugs] [Bug 1694010] New: peer gets disconnected during a rolling upgrade. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694010
Bug ID: 1694010 Summary: peer gets disconnected during a rolling upgrade. Product: GlusterFS Version: 6 Status: NEW Component: glusterd Severity: low Assignee: bugs at gluster.org Reporter: hgowtham at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community
Description of problem: When we do a rolling upgrade of the cluster from 3.12, 4.1 or 5.5 to 6, the upgraded node goes into a disconnected state.
Version-Release number of selected component (if applicable): 6.0
How reproducible: 100%
Steps to Reproduce: 1. Create a replica 3 cluster 2. Kill the gluster processes on one node 3. Upgrade that node and start glusterd
Actual results: the upgraded node goes into a disconnected state
Expected results: the peer shouldn't get disconnected.
Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Fri Mar 29 09:45:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 09:45:05 +0000 Subject: [Bugs] [Bug 1694010] peer gets disconnected during a rolling upgrade. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694010
--- Comment #1 from hari gowtham --- To overcome this issue, the following steps were performed: Upgrade all the nodes in the cluster one after the other. Once all the nodes are upgraded, kill only the glusterd process and let the other gluster processes keep running. Now run "iptables -F" and then restart glusterd on all the nodes. Run "gluster peer status" after this to check if the nodes are connected. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Fri Mar 29 11:05:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 11:05:15 +0000 Subject: [Bugs] [Bug 1691187] fix Coverity CID 1399758 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1691187
--- Comment #2 from Worker Ant --- REVIEW: https://review.gluster.org/22390 (server.c: fix Coverity CID 1399758) merged (#1) on release-6 by MOHIT AGRAWAL -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Fri Mar 29 11:09:05 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 11:09:05 +0000 Subject: [Bugs] [Bug 1693155] Excessive AFR messages from gluster showing in RHGSWA.
In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693155
--- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22424 (gfapi: add function to set client-pid) merged (#2) on release-6 by Shyamsundar Ranganathan -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Fri Mar 29 11:16:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 11:16:37 +0000 Subject: [Bugs] [Bug 1668286] READDIRP incorrectly updates posix-acl inode ctx In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1668286
--- Comment #11 from homma at allworks.co.jp --- The problem still exists on release 5.5. The cause of the problem may be that posix_acl_readdirp_cbk() updates its ctx without checking that dentries contain valid iatts. If so, please change the component to posix-acl. -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Fri Mar 29 13:46:15 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 13:46:15 +0000 Subject: [Bugs] [Bug 1672318] "failed to fetch volume file" when trying to activate host in DC with glusterfs 3.12 domains In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1672318
Sahina Bose changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(sabose at redhat.com |needinfo?(info at netbulae.com |) |)
--- Comment #25 from Sahina Bose --- Redirecting to reporter. Could you answer questions in Comment 24? -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Fri Mar 29 14:29:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 14:29:58 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058
--- Comment #23 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22404 (client-rpc: Fix the payload being sent on the wire) posted (#2) for review on release-5 by Poornima G -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Fri Mar 29 14:29:59 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 14:29:59 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058
Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22404 | -- You are receiving this mail because: You are on the CC list for the bug.
From bugzilla at redhat.com Fri Mar 29 14:30:01 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 14:30:01 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093
Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22404 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
From bugzilla at redhat.com Fri Mar 29 14:30:02 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 14:30:02 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- Keywords| |Reopened --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22404 (client-rpc: Fix the payload being sent on the wire) posted (#2) for review on release-5 by Poornima G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 14:30:03 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 14:30:03 +0000 Subject: [Bugs] [Bug 1693935] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1693935 Bug 1693935 depends on bug 1692093, which changed state. Bug 1692093 Summary: Network throughput usage increased x5 https://bugzilla.redhat.com/show_bug.cgi?id=1692093 What |Removed |Added ---------------------------------------------------------------------------- Status|CLOSED |POST Resolution|NEXTRELEASE |--- -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 14:32:17 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 14:32:17 +0000 Subject: [Bugs] [Bug 1692101] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 --- Comment #2 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22403 (client-rpc: Fix the payload being sent on the wire) posted (#2) for review on release-6 by Poornima G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 14:32:18 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 14:32:18 +0000 Subject: [Bugs] [Bug 1692101] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22403 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 14:32:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 14:32:20 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22403 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Mar 29 14:32:21 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 14:32:21 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 --- Comment #4 from Worker Ant --- REVIEW: https://review.gluster.org/22403 (client-rpc: Fix the payload being sent on the wire) posted (#2) for review on release-6 by Poornima G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 14:39:53 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 14:39:53 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #24 from Poornima G --- (In reply to Znamensky Pavel from comment #22) > Unfortunately, it's blocker for us too. As Jacob, we've faced with 4x > increasing outgoing traffic on clients. > Disabling read-ahead and readdir-ahead didn't help. Disabling quick-read > helped a little bit. > Look forward to the fix and hope this bug is marked as critical so fix for > the 5x branch will be released earlier. Will try to make a release as soon as the patch is merged. Thanks for your update. Have posted the patch, the link can be found in the previous comment. -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 14:45:44 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 14:45:44 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #25 from Znamensky Pavel --- >Will try to make a release as soon as the patch is merged. Thanks for your update. >Have posted the patch, the link can be found in the previous comment. Thanks for the quick fix! -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 15:23:55 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 15:23:55 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 --- Comment #5 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22404 (client-rpc: Fix the payload being sent on the wire) posted (#3) for review on release-5 by Poornima G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 15:23:58 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 15:23:58 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22404 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
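Comment 22 and comment 24 on bug 1673058 above mention toggling the client-side caching translators while waiting for the payload fix. For reference, a minimal sketch of how those options are switched, assuming a volume named testvol (the volume name is a placeholder, not from the report); the settings are applied from any server node and connected clients pick up the new volume graph automatically.

# the three options named in the comments above
gluster volume set testvol performance.read-ahead off
gluster volume set testvol performance.readdir-ahead off
gluster volume set testvol performance.quick-read off

# re-enable them once the fixed client packages are installed
gluster volume set testvol performance.read-ahead on
gluster volume set testvol performance.readdir-ahead on
gluster volume set testvol performance.quick-read on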
From bugzilla at redhat.com Fri Mar 29 15:24:00 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 15:24:00 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22404 -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 15:24:06 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 15:24:06 +0000 Subject: [Bugs] [Bug 1673058] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1673058 --- Comment #26 from Worker Ant --- REVIEW: https://review.gluster.org/22404 (client-rpc: Fix the payload being sent on the wire) posted (#3) for review on release-5 by Poornima G -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Fri Mar 29 15:25:24 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 15:25:24 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 --- Comment #6 from Worker Ant --- REVISION POSTED: https://review.gluster.org/22403 (client-rpc: Fix the payload being sent on the wire) posted (#3) for review on release-6 by Poornima G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 15:25:25 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 15:25:25 +0000 Subject: [Bugs] [Bug 1692093] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692093 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID|Gluster.org Gerrit 22403 | -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 15:25:26 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 15:25:26 +0000 Subject: [Bugs] [Bug 1692101] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22403 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 29 15:25:27 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 15:25:27 +0000 Subject: [Bugs] [Bug 1692101] Network throughput usage increased x5 In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692101 --- Comment #3 from Worker Ant --- REVIEW: https://review.gluster.org/22403 (client-rpc: Fix the payload being sent on the wire) posted (#3) for review on release-6 by Poornima G -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
From bugzilla at redhat.com Fri Mar 29 15:46:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 15:46:29 +0000 Subject: [Bugs] [Bug 1694139] New: Error waiting for job 'heketi-storage-copy-job' to complete on one-node k3s deployment. Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694139
Bug ID: 1694139 Summary: Error waiting for job 'heketi-storage-copy-job' to complete on one-node k3s deployment. Product: GlusterFS Version: 4.1 Hardware: x86_64 OS: Linux Status: NEW Component: glusterd Assignee: bugs at gluster.org Reporter: it.sergm at gmail.com CC: bugs at gluster.org Target Milestone: --- Classification: Community
Description of problem: Deploying k3s with gluster in a single-node deployment. The same solution worked with Kubernetes, but is not working with k3s (https://github.com/rancher/k3s).
Version-Release number of selected component (if applicable): gluster-kubernetes 1.2.0 (https://github.com/gluster/gluster-kubernetes.git) k3s - tested with v0.2.0 and v0.3.0-rc4 (https://github.com/rancher/k3s) Glusterfs package - tested with 3.8.x, 3.12.x and 3.13.2 OS - tested with Ubuntu 16.04.6 (4.4.0-143-generic) and Ubuntu 18.04.2 (4.15.0-46-generic) with `apt full-upgrade` applied.
Steps to Reproduce:
1. Install and configure k3s. # make sure the hostname is included in /etc/hosts with the relevant ip git clone --depth 1 https://github.com/rancher/k3s.git cd k3s; sh install.sh # Label the node: kubectl label node k3s-gluster node-role.kubernetes.io/master=""
2. Pre-configure gluster. # install packages needed for gluster apt -y install thin-provisioning-tools glusterfs-client # required modules cat << 'EOF' > /etc/modules-load.d/kubernetes-glusterfs.conf # this module is required for glusterfs deployment on kubernetes dm_thin_pool EOF ## load the module modprobe dm_thin_pool # get the gk-deploy code cd $HOME mkdir src cd src git clone https://github.com/gluster/gluster-kubernetes.git cd gluster-kubernetes/deploy # create the topology file. IP 10.0.0.10 was added in a separate deployment as a private address using 'ip addr add dev ens3 10.0.0.10/24' cat << EOF > topology.json { "clusters": [ { "nodes": [ { "node": { "hostnames": { "manage": [ "k3s-gluster" ], "storage": [ "10.0.0.10" ] }, "zone": 1 }, "devices": [ "/dev/vdb" ] } ] } ] } EOF # patch kube-templates/glusterfs-daemonset.yaml as per https://github.com/gluster/gluster-kubernetes/issues/539#issuecomment-454668538
3. Deploy gluster: root at k3s-gluster:~/src/gluster-kubernetes/deploy# ./gk-deploy -n kube-system --single-node -gvy topology.json Using Kubernetes CLI. Checking status of namespace matching 'kube-system': kube-system Active 4m36s Using namespace "kube-system". Checking for pre-existing resources... GlusterFS pods ... Checking status of pods matching '--selector=glusterfs=pod': Timed out waiting for pods matching '--selector=glusterfs=pod'. not found. deploy-heketi pod ... Checking status of pods matching '--selector=deploy-heketi=pod': Timed out waiting for pods matching '--selector=deploy-heketi=pod'. not found. heketi pod ... Checking status of pods matching '--selector=heketi=pod': Timed out waiting for pods matching '--selector=heketi=pod'. not found. gluster-s3 pod ... Checking status of pods matching '--selector=glusterfs=s3-pod': Timed out waiting for pods matching '--selector=glusterfs=s3-pod'. not found. Creating initial resources ... 
/usr/local/bin/kubectl -n kube-system create -f /root/src/gluster-kubernetes/deploy/kube-templates/heketi-service-account.yaml 2>&1 serviceaccount/heketi-service-account created /usr/local/bin/kubectl -n kube-system create clusterrolebinding heketi-sa-view --clusterrole=edit --serviceaccount=kube-system:heketi-service-account 2>&1 clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view created /usr/local/bin/kubectl -n kube-system label --overwrite clusterrolebinding heketi-sa-view glusterfs=heketi-sa-view heketi=sa-view clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view labeled OK Marking 'k3s-gluster' as a GlusterFS node. /usr/local/bin/kubectl -n kube-system label nodes k3s-gluster storagenode=glusterfs --overwrite 2>&1 node/k3s-gluster labeled Deploying GlusterFS pods. sed -e 's/storagenode\: glusterfs/storagenode\: 'glusterfs'/g' /root/src/gluster-kubernetes/deploy/kube-templates/glusterfs-daemonset.yaml | /usr/local/bin/kubectl -n kube-system create -f - 2>&1 daemonset.extensions/glusterfs created Waiting for GlusterFS pods to start ... Checking status of pods matching '--selector=glusterfs=pod': glusterfs-xvkrp 1/1 Running 0 70s OK /usr/local/bin/kubectl -n kube-system create secret generic heketi-config-secret --from-file=private_key=/dev/null --from-file=./heketi.json --from-file=topology.json=topology.json secret/heketi-config-secret created /usr/local/bin/kubectl -n kube-system label --overwrite secret heketi-config-secret glusterfs=heketi-config-secret heketi=config-secret secret/heketi-config-secret labeled sed -e 's/\${HEKETI_EXECUTOR}/kubernetes/' -e 's#\${HEKETI_FSTAB}#/var/lib/heketi/fstab#' -e 's/\${HEKETI_ADMIN_KEY}' -e 's/\${HEKETI_USER_KEY}' /root/src/gluster-kubernetes/deploy/kube-templates/deploy-heketi-deployment.yaml | /usr/local/bin/kubectl -n kube-system create -f - 2>&1 service/deploy-heketi created deployment.extensions/deploy-heketi created Waiting for deploy-heketi pod to start ... Checking status of pods matching '--selector=deploy-heketi=pod': deploy-heketi-5f6c465bb8-zl959 1/1 Running 0 19s OK Determining heketi service URL ... OK /usr/local/bin/kubectl -n kube-system exec -i deploy-heketi-5f6c465bb8-zl959 -- heketi-cli -s http://localhost:8080 --user admin --secret '' topology load --json=/etc/heketi/topology.json 2>&1 Creating cluster ... ID: 949e5d5063a1c1589940b7ff4705dae8 Allowing file volumes on cluster. Allowing block volumes on cluster. Creating node k3s-gluster ... ID: 6f8e3cbc0cbf6d668d718cd9bd6022f5 Adding device /dev/vdb ... OK heketi topology loaded. 
/usr/local/bin/kubectl -n kube-system exec -i deploy-heketi-5f6c465bb8-zl959 -- heketi-cli -s http://localhost:8080 --user admin --secret '' setup-openshift-heketi-storage --help --durability=none >/dev/null 2>&1 /usr/local/bin/kubectl -n kube-system exec -i deploy-heketi-5f6c465bb8-zl959 -- heketi-cli -s http://localhost:8080 --user admin --secret '' setup-openshift-heketi-storage --listfile=/tmp/heketi-storage.json --durability=none 2>&1 Saving /tmp/heketi-storage.json /usr/local/bin/kubectl -n kube-system exec -i deploy-heketi-5f6c465bb8-zl959 -- cat /tmp/heketi-storage.json | /usr/local/bin/kubectl -n kube-system create -f - 2>&1 secret/heketi-storage-secret created endpoints/heketi-storage-endpoints created service/heketi-storage-endpoints created job.batch/heketi-storage-copy-job created Checking status of pods matching '--selector=job-name=heketi-storage-copy-job': heketi-storage-copy-job-xft9f 0/1 ContainerCreating 0 5m16s Timed out waiting for pods matching '--selector=job-name=heketi-storage-copy-job'. Error waiting for job 'heketi-storage-copy-job' to complete. Actual results: root at k3s-gluster:~/src/gluster-kubernetes/deploy# kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-7748f7f6df-cchx7 1/1 Running 0 177m deploy-heketi-5f6c465bb8-5f27p 1/1 Running 0 173m glusterfs-ntmq7 1/1 Running 0 174m heketi-storage-copy-job-qzpr7 0/1 ContainerCreating 0 170m svclb-traefik-957cdf677-c4j76 2/2 Running 1 177m traefik-7b6bd6cbf6-rnrxj 1/1 Running 0 177m root at k3s-gluster:~/src/gluster-kubernetes/deploy# kubectl -n kube-system describe po/heketi-storage-copy-job-qzpr7 Name: heketi-storage-copy-job-qzpr7 Namespace: kube-system Priority: 0 PriorityClassName: Node: k3s-gluster/104.36.17.63 Start Time: Fri, 29 Mar 2019 08:54:08 +0000 Labels: controller-uid=36e114ae-5200-11e9-a826-227e2ba50104 job-name=heketi-storage-copy-job Annotations: Status: Pending IP: Controlled By: Job/heketi-storage-copy-job Containers: heketi: Container ID: Image: heketi/heketi:dev Image ID: Port: Host Port: Command: cp /db/heketi.db /heketi State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Environment: Mounts: /db from heketi-storage-secret (rw) /heketi from heketi-storage (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-98jvk (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: heketi-storage: Type: Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime) EndpointsName: heketi-storage-endpoints Path: heketidbstorage ReadOnly: false heketi-storage-secret: Type: Secret (a volume populated by a Secret) SecretName: heketi-storage-secret Optional: false default-token-98jvk: Type: Secret (a volume populated by a Secret) SecretName: default-token-98jvk Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedMount 3m56s (x74 over 169m) kubelet, k3s-gluster Unable to mount volumes for pod "heketi-storage-copy-job-qzpr7_kube-system(36e1b013-5200-11e9-a826-227e2ba50104)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"heketi-storage-copy-job-qzpr7". list of unmounted volumes=[heketi-storage]. 
list of unattached volumes=[heketi-storage heketi-storage-secret default-token-98jvk] Expected results: all pods running, gk-deploy works with no errors Additional info: Same Gluster procedure works with single-node kubernetes, but won't work with k3s. Firewall is default and only modified with k3s iptables rules. I've been trying different configurations and they don't work: private IP in the topology(also used main public ip) deploying with a clean drive mounting the volume from outside updating the gluster client to v3.12.x on ubuntu16 and 3.13.2 on ubuntu18 Gluster logs, volumes: root at k3s-gluster:~/src/gluster-kubernetes/deploy# kubectl -n kube-system exec -it glusterfs-ntmq7 /bin/bash [root at k3s-gluster /]# cat /var/log/glusterfs/glusterd.log [2019-03-29 08:50:44.968074] I [MSGID: 100030] [glusterfsd.c:2741:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 4.1.7 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO) [2019-03-29 08:50:44.977762] I [MSGID: 106478] [glusterd.c:1423:init] 0-management: Maximum allowed open file descriptors set to 65536 [2019-03-29 08:50:44.977790] I [MSGID: 106479] [glusterd.c:1481:init] 0-management: Using /var/lib/glusterd as working directory [2019-03-29 08:50:44.977797] I [MSGID: 106479] [glusterd.c:1486:init] 0-management: Using /var/run/gluster as pid file working directory [2019-03-29 08:50:45.002831] W [MSGID: 103071] [rdma.c:4629:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed [No such device] [2019-03-29 08:50:45.002862] W [MSGID: 103055] [rdma.c:4938:init] 0-rdma.management: Failed to initialize IB Device [2019-03-29 08:50:45.002873] W [rpc-transport.c:351:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed [2019-03-29 08:50:45.002957] W [rpcsvc.c:1781:rpcsvc_create_listener] 0-rpc-service: cannot create listener, initing the transport failed [2019-03-29 08:50:45.002968] E [MSGID: 106244] [glusterd.c:1764:init] 0-management: creation of 1 listeners failed, continuing with succeeded transport [2019-03-29 08:50:46.040712] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory] [2019-03-29 08:50:46.040765] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory] [2019-03-29 08:50:46.040768] I [MSGID: 106514] [glusterd-store.c:2262:glusterd_restore_op_version] 0-management: Detected new install. Setting op-version to maximum : 40100 [2019-03-29 08:50:46.044266] I [MSGID: 106194] [glusterd-store.c:3850:glusterd_store_retrieve_missed_snaps_list] 0-management: No missed snaps list. 
Final graph: +------------------------------------------------------------------------------+ 1: volume management 2: type mgmt/glusterd 3: option rpc-auth.auth-glusterfs on 4: option rpc-auth.auth-unix on 5: option rpc-auth.auth-null on 6: option rpc-auth-allow-insecure on 7: option transport.listen-backlog 10 8: option event-threads 1 9: option ping-timeout 0 10: option transport.socket.read-fail-log off 11: option transport.socket.keepalive-interval 2 12: option transport.socket.keepalive-time 10 13: option transport-type rdma 14: option working-directory /var/lib/glusterd 15: end-volume 16: +------------------------------------------------------------------------------+ [2019-03-29 08:50:46.044640] I [MSGID: 101190] [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2019-03-29 08:54:07.698610] E [MSGID: 101032] [store.c:441:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/glusterd.info. [No such file or directory] [2019-03-29 08:54:07.698686] I [MSGID: 106477] [glusterd.c:190:glusterd_uuid_generate_save] 0-management: generated UUID: 9dc908c2-0e7d-4b40-a951-095b78dbaeeb [2019-03-29 08:54:07.706214] W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/4.1.7/xlator/nfs/server.so: cannot open shared object file: No such file or directory [2019-03-29 08:54:07.730620] I [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe2c9a) [0x7f7f4e7f1c9a] -->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe2765) [0x7f7f4e7f1765] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f7f5395d0f5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/create/post/S10selinux-label-brick.sh --volname=heketidbstorage [2019-03-29 08:54:07.863432] I [glusterd-utils.c:6090:glusterd_brick_start] 0-management: starting a fresh brick process for brick /var/lib/heketi/mounts/vg_fef96eab984d116ab3815e7479781110/brick_65d5aa6369e265d641f3557e6c9736b7/brick [2019-03-29 08:54:07.989260] I [MSGID: 106142] [glusterd-pmap.c:297:pmap_registry_bind] 0-pmap: adding brick /var/lib/heketi/mounts/vg_fef96eab984d116ab3815e7479781110/brick_65d5aa6369e265d641f3557e6c9736b7/brick on port 49152 [2019-03-29 08:54:07.998472] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2019-03-29 08:54:08.007817] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-snapd: setting frame-timeout to 600 [2019-03-29 08:54:08.008060] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-gfproxyd: setting frame-timeout to 600 [2019-03-29 08:54:08.008256] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-nfs: setting frame-timeout to 600 [2019-03-29 08:54:08.008335] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: nfs already stopped [2019-03-29 08:54:08.008360] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: nfs service is stopped [2019-03-29 08:54:08.008376] I [MSGID: 106599] [glusterd-nfs-svc.c:82:glusterd_nfssvc_manager] 0-management: nfs/server.so xlator is not installed [2019-03-29 08:54:08.008402] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-glustershd: setting frame-timeout to 600 [2019-03-29 08:54:08.008493] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-quotad: setting frame-timeout to 600 [2019-03-29 08:54:08.008656] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-bitd: setting frame-timeout to 600 [2019-03-29 08:54:08.008772] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped [2019-03-29 
08:54:08.008785] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: bitd service is stopped [2019-03-29 08:54:08.008808] I [rpc-clnt.c:1059:rpc_clnt_connection_init] 0-scrub: setting frame-timeout to 600 [2019-03-29 08:54:08.008907] I [MSGID: 106131] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped [2019-03-29 08:54:08.008917] I [MSGID: 106568] [glusterd-svc-mgmt.c:235:glusterd_svc_stop] 0-management: scrub service is stopped [2019-03-29 08:54:08.015319] I [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe2c9a) [0x7f7f4e7f1c9a] -->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe2765) [0x7f7f4e7f1765] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f7f5395d0f5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh --volname=heketidbstorage --first=yes --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd [2019-03-29 08:54:08.025189] E [run.c:241:runner_log] (-->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe2c9a) [0x7f7f4e7f1c9a] -->/usr/lib64/glusterfs/4.1.7/xlator/mgmt/glusterd.so(+0xe26c3) [0x7f7f4e7f16c3] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f7f5395d0f5] ) 0-management: Failed to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh --volname=heketidbstorage --first=yes --version=1 --volume-op=start --gd-workdir=/var/lib/glusterd [root at k3s-gluster /]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop1 7:1 0 87.9M 1 loop vdb 252:16 0 100G 0 disk ??vg_fef96eab984d116ab3815e7479781110-tp_c3a55a7f206b236eba17b954622543b4_tdata 253:1 0 2G 0 lvm ? ??vg_fef96eab984d116ab3815e7479781110-tp_c3a55a7f206b236eba17b954622543b4-tpool 253:2 0 2G 0 lvm ? ??vg_fef96eab984d116ab3815e7479781110-brick_65d5aa6369e265d641f3557e6c9736b7 253:4 0 2G 0 lvm /var/lib/heketi/mounts/vg_fef96eab984d116ab3815e7479781110/brick_65d5aa6369e265d641f3557e6c9736b7 ? ??vg_fef96eab984d116ab3815e7479781110-tp_c3a55a7f206b236eba17b954622543b4 253:3 0 2G 0 lvm ??vg_fef96eab984d116ab3815e7479781110-tp_c3a55a7f206b236eba17b954622543b4_tmeta 253:0 0 12M 0 lvm ??vg_fef96eab984d116ab3815e7479781110-tp_c3a55a7f206b236eba17b954622543b4-tpool 253:2 0 2G 0 lvm ??vg_fef96eab984d116ab3815e7479781110-brick_65d5aa6369e265d641f3557e6c9736b7 253:4 0 2G 0 lvm /var/lib/heketi/mounts/vg_fef96eab984d116ab3815e7479781110/brick_65d5aa6369e265d641f3557e6c9736b7 ??vg_fef96eab984d116ab3815e7479781110-tp_c3a55a7f206b236eba17b954622543b4 253:3 0 2G 0 lvm loop2 7:2 0 91M 1 loop loop0 7:0 0 89.3M 1 loop vda 252:0 0 10G 0 disk ??vda2 252:2 0 10G 0 part /var/lib/misc/glusterfsd ??vda1 252:1 0 1M 0 part [root at k3s-gluster /]# gluster volume info Volume Name: heketidbstorage Type: Distribute Volume ID: 32608bdb-a4a3-494e-9c6e-68d8f780f12c Status: Started Snapshot Count: 0 Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: 10.0.0.10:/var/lib/heketi/mounts/vg_fef96eab984d116ab3815e7479781110/brick_65d5aa6369e265d641f3557e6c9736b7/brick Options Reconfigured: transport.address-family: inet nfs.disable: on [root at k3s-gluster /]# mount -t glusterfs 10.0.0.10:/heketidbstorage /mnt/glustertest WARNING: getfattr not found, certain checks will be skipped.. [root at k3s-gluster /]# mount | grep 10.0.0.10 10.0.0.10:/heketidbstorage on /mnt/glustertest type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
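One way to narrow down the mount timeout reported in bug 1694139 above: the report already shows that a manual FUSE mount of heketidbstorage works on the node, so the remaining gap is between the kubelet's mount attempt and the endpoints it is handed. A short sketch of checks, assuming the object names quoted in the report; the journalctl unit name assumes the stock k3s install.sh systemd service.

# inside the glusterfs pod (as in the report): is the brick up and its port listening?
gluster volume status heketidbstorage

# on the node: which addresses does the copy job's glusterfs volume point at?
kubectl -n kube-system get endpoints heketi-storage-endpoints -o yaml

# on the node: repeat the mount the kubelet would do (shown working in the report), then clean up
mount -t glusterfs 10.0.0.10:/heketidbstorage /mnt/glustertest
umount /mnt/glustertest

# k3s runs the kubelet in-process, so its mount errors land in the k3s service log
journalctl -u k3s --no-pager | grep -iE 'gluster|heketi'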
From bugzilla at redhat.com Fri Mar 29 15:47:29 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 29 Mar 2019 15:47:29 +0000 Subject: [Bugs] [Bug 1692441] [GSS] Problems using ls or find on volumes using RDMA transport In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692441 Cal Calhoun changed: What |Removed |Added ---------------------------------------------------------------------------- Flags| |needinfo?(rkavunga at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sat Mar 30 07:34:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 30 Mar 2019 07:34:37 +0000 Subject: [Bugs] [Bug 1694291] New: Smoke test build artifacts do not contain gluster logs Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694291 Bug ID: 1694291 Summary: Smoke test build artifacts do not contain gluster logs Product: GlusterFS Version: mainline Status: NEW Component: project-infrastructure Severity: medium Assignee: bugs at gluster.org Reporter: ykaul at redhat.com CC: bugs at gluster.org, gluster-infra at gluster.org Target Milestone: --- Classification: Community Description of problem: See for example https://build.gluster.org/job/smoke/48042/ The build artifacts do not contain the Gluster logs (the /var/log/glusterfs/* contents) -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sat Mar 30 10:25:12 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 30 Mar 2019 10:25:12 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22452 -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Sat Mar 30 10:25:13 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sat, 30 Mar 2019 10:25:13 +0000 Subject: [Bugs] [Bug 789278] Issues reported by Coverity static analysis tool In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=789278 --- Comment #1601 from Worker Ant --- REVIEW: https://review.gluster.org/22452 (GlusterD1: Resolves the issue of referencing memory after it has been freed) posted (#1) for review on master by Rishubh Jain -- You are receiving this mail because: You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 31 02:57:49 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 31 Mar 2019 02:57:49 +0000 Subject: [Bugs] [Bug 1390914] Glusterfs create a flock lock by anonymous fd, but can't release it forever. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1390914 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|POST |CLOSED Resolution|--- |NEXTRELEASE Last Closed| |2019-03-31 02:57:49 --- Comment #10 from Worker Ant --- REVIEW: https://review.gluster.org/15804 (protocol/client: Do not fallback to anon-fd if fd is not open) merged (#8) on master by Raghavendra G -- You are receiving this mail because: You are on the CC list for the bug. 
From bugzilla at redhat.com Sun Mar 31 12:00:37 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 31 Mar 2019 12:00:37 +0000 Subject: [Bugs] [Bug 1692441] [GSS] Problems using ls or find on volumes using RDMA transport In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1692441 Yaniv Kaul changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |amukherj at redhat.com Flags| |needinfo?(amukherj at redhat.c | |om) -- You are receiving this mail because: You are on the CC list for the bug. From bugzilla at redhat.com Sun Mar 31 15:19:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 31 Mar 2019 15:19:39 +0000 Subject: [Bugs] [Bug 1694455] New: file level snapshots using reflink in a posix only volume Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694455 Bug ID: 1694455 Summary: file level snapshots using reflink in a posix only volume Product: GlusterFS Version: mainline Status: NEW Component: posix Assignee: bugs at gluster.org Reporter: rabhat at redhat.com CC: bugs at gluster.org Target Milestone: --- Classification: Community Description of problem: Ability to take file level snapshots for a single brick volume. Using setxattr interface, provide a mechanism to take snapshots of file(s). The same setxattr interface can be used to delete the snapshots as well. Ex: setfattr -n glusterfs.snapshot -v setfattr -n glusterfs.removesnap -v The snapshots can be viewed from /.fsnaps/ Version-Release number of selected component (if applicable): How reproducible: Steps to Reproduce: 1. 2. 3. Actual results: Expected results: Additional info: -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 31 20:36:39 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 31 Mar 2019 20:36:39 +0000 Subject: [Bugs] [Bug 1694455] file level snapshots using reflink in a posix only volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694455 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22453 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 31 20:36:40 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 31 Mar 2019 20:36:40 +0000 Subject: [Bugs] [Bug 1694455] file level snapshots using reflink in a posix only volume In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1694455 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |POST --- Comment #1 from Worker Ant --- REVIEW: https://review.gluster.org/22453 ([WIP]: file snapshots using reflink) posted (#1) for review on master by Raghavendra Bhat -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. 
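The archived mail for bug 1694455 above has lost the angle-bracket placeholders from its setfattr examples, so the exact arguments of the proposed interface are not visible. A minimal usage sketch under stated assumptions: the snapshot name snap1, the file path /mnt/posixvol/data.img, and the .fsnaps listing being relative to the file are all guesses based on the description, not confirmed syntax.

# take a reflink-based snapshot of a single file (hypothetical arguments)
setfattr -n glusterfs.snapshot -v snap1 /mnt/posixvol/data.img

# list the file's snapshots (assumed virtual directory from the description)
ls /mnt/posixvol/data.img/.fsnaps/

# delete the snapshot again (hypothetical arguments)
setfattr -n glusterfs.removesnap -v snap1 /mnt/posixvol/data.img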
From bugzilla at redhat.com Sun Mar 31 20:52:19 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 31 Mar 2019 20:52:19 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 Worker Ant changed: What |Removed |Added ---------------------------------------------------------------------------- External Bug ID| |Gluster.org Gerrit 22454 -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Sun Mar 31 20:52:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Sun, 31 Mar 2019 20:52:20 +0000 Subject: [Bugs] [Bug 1193929] GlusterFS can be improved In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1193929 --- Comment #604 from Worker Ant --- REVIEW: https://review.gluster.org/22454 ([WIP][RFC]dict.{c|h}: slightly reduce work under lock.) posted (#1) for review on master by Yaniv Kaul -- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug. From bugzilla at redhat.com Fri Mar 22 17:53:20 2019 From: bugzilla at redhat.com (bugzilla at redhat.com) Date: Fri, 22 Mar 2019 17:53:20 -0000 Subject: [Bugs] [Bug 1690769] GlusterFS 5.5 crashes in 1x4 replicate setup. In-Reply-To: References: Message-ID: https://bugzilla.redhat.com/show_bug.cgi?id=1690769 Amar Tumballi changed: What |Removed |Added ---------------------------------------------------------------------------- Flags|needinfo?(atumball at redhat.c | |om) | --- Comment #3 from Amar Tumballi --- (gdb) p *(afr_private_t*)this->private $5 = {lock = {spinlock = 0, mutex = {_data = {_lock = 0, _count = 0, _owner = 0, _nusers = 0, _kind = 256, _spins = 0, _elision = 0, _list = {_prev = 0x0, _next = 0x0}}, _size = '\000' , "\001", '\000' , __align = 0}}, child_count = 4, arbiter_count = 0, children = 0x7f82ac049eb0, root_inode = 0x7f829c014388, thin_arbiter_count = 0, ta_gfid = '\000' , ta_child_up = 0 '\000', ta_bad_child_index = 0, ta_notify_dom_lock_offset = 0, release_ta_notify_dom_lock = false, ta_in_mem_txn_count = 0, ta_on_wire_txn_count = 0, ta_waitq = {next = 0x0, prev = 0x0}, ta_onwireq = {next = 0x0, prev = 0x0}, child_up = 0x7f82ac049de0 "\001\001\001\001", , child_latency = 0x7f82ac049e40, local = 0x7f82ac049d80 "", pending_key = 0x7f82ac049f20, data_self_heal = 0x7f82abb4c5fd "on", data_self_heal_algorithm = 0x7f82ac015410 "full", data_self_heal_window_size = 1, heal_waiting = {next = 0x7f82ac049b70, prev = 0x7f82ac049b70}, heal_wait_qlen = 128, heal_waiters = 0, healing = {next = 0x7f82ac049b88, prev = 0x7f82ac049b88}, background_self_heal_count = 8, healers = 0, metadata_self_heal = true, entry_self_heal = true, metadata_splitbrain_forced_heal = false, read_child = 3, hash_mode = 1, pending_reads = 0x7f82ac049d10, favorite_child = -1, fav_child_policy = AFR_FAV_CHILD_NONE, wait_count = 1, timer = 0x0, optimistic_change_log = true, eager_lock = true, pre_op_compat = true, post_op_delay_secs = 1, quorum_count = 1, vol_uuid = '\000' , last_event = 0x7f82ac04a210, event_generation = 4, choose_local = true, did_discovery = true, sh_readdir_size = 1024, ensure_durability = true, sh_domain = 0x7f82ac04a190 "androidpolice_data3-replicate-0:self-heal", afr_dirty = 0x7f82abb4c7e4 "trusted.afr.dirty", halo_enabled = false, halo_max_latency_msec = 5, halo_max_replicas = 99999, halo_min_replicas = 2, shd = {iamshd = false, enabled = true, timeout = 600, 
index_healers = 0x7f82ac04a7f0, full_healers = 0x7f82ac04aae0, split_brain = 0x7f82ac04add0, statistics = 0x7f82ac04cf40, max_threads = 1, wait_qlength = 1024, halo_max_latency_msec = 99999}, nfsd = {iamnfsd = false, halo_max_latency_msec = 5}, consistent_metadata = false, spb_choice_timeout = 300, need_heal = false, pump_private = 0x0, use_afr_in_pump = false, locking_scheme = 0x7f82abb488ca "full", full_lock = true, esh_granular = true, consistent_io = false} Thread 12 (Thread 0x7f82a9768700 (LWP 22871)): #0 0x00007f82b6e2c986 in epoll_pwait () from /lib64/libc.so.6 No symbol table info available. #1 0x00007f82b7c0139e in ?? () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #2 0x00007f82b70f5559 in start_thread () from /lib64/libpthread.so.0 No symbol table info available. #3 0x00007f82b6e2c81f in clone () from /lib64/libc.so.6 No symbol table info available. Thread 11 (Thread 0x7f82a9f69700 (LWP 22870)): #0 0x00007f82b6e2c986 in epoll_pwait () from /lib64/libc.so.6 No symbol table info available. #1 0x00007f82b7c0139e in ?? () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #2 0x00007f82b70f5559 in start_thread () from /lib64/libpthread.so.0 No symbol table info available. #3 0x00007f82b6e2c81f in clone () from /lib64/libc.so.6 No symbol table info available. Thread 10 (Thread 0x7f82a37fe700 (LWP 22876)): #0 0x00007f82b70fb89d in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 No symbol table info available. #1 0x00007f82b3fab75b in notify_kernel_loop (data=) at fuse-bridge.c:4037 len = rv = this = priv = 0x556281c00b50 node = tmp = 0x0 pfoh = iov_out = { iov_base = 0x7f82a4029ff0, iov_len = 40 } __FUNCTION__ = "notify_kernel_loop" #2 0x00007f82b70f5559 in start_thread () from /lib64/libpthread.so.0 No symbol table info available. #3 0x00007f82b6e2c81f in clone () from /lib64/libc.so.6 No symbol table info available. Thread 9 (Thread 0x7f82b808d880 (LWP 22862)): #0 0x00007f82b70f691d in pthread_join () from /lib64/libpthread.so.0 No symbol table info available. #1 0x00007f82b7c00a5b in ?? () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #2 0x00005562812f3dfc in main (argc=, argv=) at glusterfsd.c:2762 ctx = 0x556281bb6920 ret = 0 cmdlinestr = "/usr/sbin/glusterfs --lru-limit=0 --process-name fuse --volfile-server=localhost --volfile-id=/androidpolice_data3 /mnt/androidpolice_data3", '\000' cmd = 0x556281bb6920 __FUNCTION__ = "main" Thread 8 (Thread 0x7f82a3fff700 (LWP 22875)): #0 0x00007f82b6e234e4 in readv () from /lib64/libc.so.6 No symbol table info available. #1 0x00007f82b7bceb59 in sys_readv () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #2 0x00007f82b3fc22ad in fuse_thread_proc (data=0x556281bfbc40) at fuse-bridge.c:5031 mount_point = 0x0 this = 0x556281bfbc40 priv = 0x556281c00b50 res = iobuf = 0x7f8294040598 finh = iov_in = {{ iov_base = 0x7f82940fe1b0, iov_len = 80 }, { iov_base = 0x7f82a2b9e000, iov_len = 131072 }} msg = fuse_ops = 0x7f82b41d7d60 pfd = {{ fd = 6, events = 25, revents = 1 }, { fd = 8, events = 25, revents = 1 }} __FUNCTION__ = "fuse_thread_proc" #3 0x00007f82b70f5559 in start_thread () from /lib64/libpthread.so.0 No symbol table info available. #4 0x00007f82b6e2c81f in clone () from /lib64/libc.so.6 No symbol table info available. Thread 7 (Thread 0x7f82b279f700 (LWP 22867)): #0 0x00007f82b70fbc56 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 No symbol table info available. #1 0x00007f82b7bdfd98 in ?? 
() from /usr/lib64/libglusterfs.so.0 No symbol table info available. #2 0x00007f82b7be0a60 in ?? () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #3 0x00007f82b70f5559 in start_thread () from /lib64/libpthread.so.0 No symbol table info available. #4 0x00007f82b6e2c81f in clone () from /lib64/libc.so.6 No symbol table info available. Thread 6 (Thread 0x7f82b08ad700 (LWP 22869)): #0 0x00007f82b70fec4d in __lll_lock_wait () from /lib64/libpthread.so.0 No symbol table info available. #1 0x00007f82b71018b7 in __lll_lock_elision () from /lib64/libpthread.so.0 No symbol table info available. #2 0x00007f82abb2de3d in afr_frame_return (frame=frame at entry=0x7f82942079a8) at afr-common.c:2105 local = 0x7f8294103cb8 call_count = 0 #3 0x00007f82abb40d51 in afr_lookup_cbk (frame=0x7f82942079a8, cookie=, this=0x7f82ac015c20, op_ret=, op_errno=, inode=, buf=0x7f82b08ac820, xdata=0x7f8294002668, postparent=0x7f82b08ac8c0) at afr-common.c:2951 local = 0x7f8294103cb8 call_count = -1 child_index = ret = need_heal = 0 '\000' #4 0x00007f82abdc5e1f in client4_0_lookup_cbk (req=, iov=, count=, myframe=0x7f8294021318) at client-rpc-fops_v2.c:2641 fn = 0x7f82abb40b80 _parent = 0x7f82942079a8 old_THIS = 0x7f82ac00a340 __local = 0x7f82940251d8 rsp = { op_ret = -1, op_errno = 2, xdata = { xdr_size = 28, count = 1, pairs = { pairs_len = 1, pairs_val = 0x7f82ac060500 } }, prestat = { ia_gfid = '\000' , ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_blocks = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, mode = 0 }, poststat = { ia_gfid = '\000' , ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_blocks = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, mode = 0 } } local = frame = 0x7f8294021318 ret = stbuf = { ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, other = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' } } } postparent = { ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, other = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' } } } op_errno = xdata = 0x7f8294002668 inode = 0x7f829404c648 this = 0x7f82ac00a340 __FUNCTION__ = "client4_0_lookup_cbk" #5 0x00007f82b796e820 in ?? 
() from /usr/lib64/libgfrpc.so.0 No symbol table info available. #6 0x00007f82b796eb6f in ?? () from /usr/lib64/libgfrpc.so.0 No symbol table info available. #7 0x00007f82b796b063 in rpc_transport_notify () from /usr/lib64/libgfrpc.so.0 No symbol table info available. #8 0x00007f82b15890ce in socket_event_poll_in (notify_handled=true, this=0x7f82ac05ed00) at socket.c:2506 ret = pollin = 0x7f82ac075ca0 priv = 0x7f82ac05f330 ctx = 0x556281bb6920 ret = pollin = priv = ctx = #9 socket_event_handler (fd=, idx=5, gen=1, data=0x7f82ac05ed00, poll_in=, poll_out=, poll_err=) at socket.c:2907 this = 0x7f82ac05ed00 priv = 0x7f82ac05f330 ret = ctx = socket_closed = false notify_handled = false __FUNCTION__ = "socket_event_handler" #10 0x00007f82b7c01519 in ?? () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #11 0x00007f82b70f5559 in start_thread () from /lib64/libpthread.so.0 No symbol table info available. #12 0x00007f82b6e2c81f in clone () from /lib64/libc.so.6 No symbol table info available. Thread 5 (Thread 0x7f82b37a1700 (LWP 22865)): #0 0x00007f82b70ffddf in do_sigwait () from /lib64/libpthread.so.0 No symbol table info available. #1 0x00007f82b70ffe6d in sigwait () from /lib64/libpthread.so.0 No symbol table info available. #2 0x00005562812f4293 in glusterfs_sigwaiter (arg=) at glusterfsd.c:2306 set = { __val = {18947, 0 } } ret = sig = 0 file = #3 0x00007f82b70f5559 in start_thread () from /lib64/libpthread.so.0 No symbol table info available. #4 0x00007f82b6e2c81f in clone () from /lib64/libc.so.6 No symbol table info available. Thread 4 (Thread 0x7f82b2fa0700 (LWP 22866)): #0 0x00007f82b6dfa040 in nanosleep () from /lib64/libc.so.6 No symbol table info available. #1 0x00007f82b6df9f4a in sleep () from /lib64/libc.so.6 No symbol table info available. #2 0x00007f82b7bccca2 in ?? () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #3 0x00007f82b70f5559 in start_thread () from /lib64/libpthread.so.0 No symbol table info available. #4 0x00007f82b6e2c81f in clone () from /lib64/libc.so.6 No symbol table info available. Thread 3 (Thread 0x7f82b3fa2700 (LWP 22864)): #0 0x00007f82b70ff770 in nanosleep () from /lib64/libpthread.so.0 No symbol table info available. #1 0x00007f82b7bb1d9e in ?? () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #2 0x00007f82b70f5559 in start_thread () from /lib64/libpthread.so.0 No symbol table info available. #3 0x00007f82b6e2c81f in clone () from /lib64/libc.so.6 No symbol table info available. Thread 2 (Thread 0x7f82b1f9e700 (LWP 22868)): #0 0x00007f82b70fbc56 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0 No symbol table info available. #1 0x00007f82b7bdfd98 in ?? () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #2 0x00007f82b7be0a60 in ?? () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #3 0x00007f82b70f5559 in start_thread () from /lib64/libpthread.so.0 No symbol table info available. #4 0x00007f82b6e2c81f in clone () from /lib64/libc.so.6 No symbol table info available. Thread 1 (Thread 0x7f82a8f67700 (LWP 22872)): #0 0x00007f82b6d6a0e0 in raise () from /lib64/libc.so.6 No symbol table info available. #1 0x00007f82b6d6b6c1 in abort () from /lib64/libc.so.6 No symbol table info available. #2 0x00007f82b6d626fa in __assert_fail_base () from /lib64/libc.so.6 No symbol table info available. #3 0x00007f82b6d62772 in __assert_fail () from /lib64/libc.so.6 No symbol table info available. 
#4 0x00007f82b70f80b8 in pthread_mutex_lock () from /lib64/libpthread.so.0 No symbol table info available. #5 0x00007f82abb2de3d in afr_frame_return (frame=frame at entry=0x7f82942079a8) at afr-common.c:2105 local = 0x7f8294103cb8 call_count = 0 #6 0x00007f82abb40d51 in afr_lookup_cbk (frame=0x7f82942079a8, cookie=, this=0x7f82ac015c20, op_ret=, op_errno=, inode=, buf=0x7f82a8f66820, xdata=0x7f829408e248, postparent=0x7f82a8f668c0) at afr-common.c:2951 local = 0x7f8294103cb8 call_count = -1 child_index = ret = need_heal = 0 '\000' #7 0x00007f82abdc5e1f in client4_0_lookup_cbk (req=, iov=, count=, myframe=0x7f82940a8d18) at client-rpc-fops_v2.c:2641 fn = 0x7f82abb40b80 _parent = 0x7f82942079a8 old_THIS = 0x7f82ac010200 __local = 0x7f82940c20e8 rsp = { op_ret = -1, op_errno = 2, xdata = { xdr_size = 28, count = 1, pairs = { pairs_len = 1, pairs_val = 0x7f829c0ab040 } }, prestat = { ia_gfid = '\000' , ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_blocks = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, mode = 0 }, poststat = { ia_gfid = '\000' , ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_blocks = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, mode = 0 } } local = frame = 0x7f82940a8d18 ret = stbuf = { ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, other = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' } } } postparent = { ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, other = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' } } } op_errno = xdata = 0x7f829408e248 inode = 0x7f829404c648 this = 0x7f82ac010200 __FUNCTION__ = "client4_0_lookup_cbk" #8 0x00007f82b796e820 in ?? () from /usr/lib64/libgfrpc.so.0 No symbol table info available. #9 0x00007f82b796eb6f in ?? () from /usr/lib64/libgfrpc.so.0 No symbol table info available. #10 0x00007f82b796b063 in rpc_transport_notify () from /usr/lib64/libgfrpc.so.0 No symbol table info available. 
#11 0x00007f82b15890ce in socket_event_poll_in (notify_handled=true, this=0x7f82ac0561c0) at socket.c:2506 ret = pollin = 0x7f829c0aab80 priv = 0x7f82ac0567f0 ctx = 0x556281bb6920 ret = pollin = priv = ctx = #12 socket_event_handler (fd=, idx=1, gen=4, data=0x7f82ac0561c0, poll_in=, poll_out=, poll_err=) at socket.c:2907 this = 0x7f82ac0561c0 priv = 0x7f82ac0567f0 ret = ctx = socket_closed = false notify_handled = false __FUNCTION__ = "socket_event_handler" #13 0x00007f82b7c01519 in ?? () from /usr/lib64/libglusterfs.so.0 No symbol table info available. #14 0x00007f82b70f5559 in start_thread () from /lib64/libpthread.so.0 No symbol table info available. #15 0x00007f82b6e2c81f in clone () from /lib64/libc.so.6 No symbol table info available. $3 = { op = GF_FOP_LOOKUP, call_count = 2, event_generation = 4, open_fd_count = 0, update_open_fd_count = false, num_inodelks = 0, update_num_inodelks = false, saved_lk_owner = { len = 0, data = '\000' }, op_ret = -1, op_errno = 117, pending = 0x0, dirty = {0, 0, 0}, loc = { path = 0x7f8294054a90 "/uploads/wp-security-audit-log/custom-alerts.php", name = 0x7f8294054aaf "custom-alerts.php", inode = 0x7f829404c648, parent = 0x7f82940707e8, gfid = '\000' , pargfid = "\266?*\340oB\255\202?\177\362\244p\203" }, newloc = { path = 0x0, name = 0x0, inode = 0x0, parent = 0x0, gfid = '\000' , pargfid = '\000' }, fd = 0x0, fd_ctx = 0x0, child_up = 0x7f82940858d0 "\001\001\001\001", , read_attempted = 0x7f829400fc70 "", readfn = 0x0, refreshed = false, inode = 0x7f829404c648, parent = 0x0, parent2 = 0x0, readable = 0x7f82940527d0 "", readable2 = 0x7f8294123940 "", read_subvol = -1, refreshfn = 0x0, refreshinode = 0x0, refreshgfid = '\000' , pre_op_compat = false, xattr_req = 0x7f829405a8e8, internal_lock = { lk_loc = 0x0, lockee_count = 0, lockee = {{ loc = { path = 0x0, name = 0x0, inode = 0x0, parent = 0x0, gfid = '\000' , pargfid = '\000' }, __xpg_basename = 0x0, locked_nodes = 0x0, locked_count = 0 }, { loc = { path = 0x0, name = 0x0, inode = 0x0, parent = 0x0, gfid = '\000' , pargfid = '\000' }, __xpg_basename = 0x0, locked_nodes = 0x0, locked_count = 0 }, { loc = { path = 0x0, name = 0x0, inode = 0x0, parent = 0x0, gfid = '\000' , pargfid = '\000' }, __xpg_basename = 0x0, locked_nodes = 0x0, locked_count = 0 }}, flock = { l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = { len = 0, data = '\000' } }, lk_basename = 0x0, lower_basename = 0x0, higher_basename = 0x0, lower_locked = 0 '\000', higher_locked = 0 '\000', locked_nodes = 0x0, lower_locked_nodes = 0x0, lock_count = 0, entrylk_lock_count = 0, lk_call_count = 0, lk_expected_count = 0, lk_attempted_count = 0, lock_op_ret = 0, lock_op_errno = 0, lock_cbk = 0x0, domain = 0x0 }, dict = 0x0, optimistic_change_log = 0, stable_write = false, append_write = false, cont = { lookup = { needs_fresh_lookup = false, gfid_req = "\210\204\000\204I\371CL\230?\310\035\021Co" }, statfs = { buf_set = 0 '\000', buf = { f_bsize = 0, f_frsize = 0, f_blocks = 0, f_bfree = 0, f_bavail = 0, f_files = 0, f_ffree = 0, f_favail = 0, f_fsid = 0, f_flag = 0, f_namemax = 0, __f_spare = {0, 0, 0, 0, 0, 0} } }, open = { flags = 0, fd = 0x0 }, lk = { cmd = 0, user_flock = { l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = { len = 0, data = '\000' } }, ret_flock = { l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = { len = 0, data = '\000' } }, locked_nodes = 0x0 }, access = { mask = 0, last_index = 0 }, stat = { last_index = 0 }, fstat = { last_index = 0 }, readlink = { size 
= 0, last_index = 0 }, getxattr = { name = 0x0, last_index = 0, xattr_len = 0 }, readv = { size = 0, offset = 0, last_index = 0, flags = 0 }, opendir = { success_count = 0, op_ret = 0, op_errno = 0, checksum = 0x0 }, readdir = { op_ret = 0, op_errno = 0, size = 0, offset = 0, dict = 0x0, failed = false, last_index = 0 }, inode_wfop = { prebuf = { ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, other = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' } } }, postbuf = { ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, other = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' } } } }, writev = { op_ret = 0, vector = 0x0, iobref = 0x0, count = 0, write = 0 '\000', exec = 0 '\000' }, other = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' } } }, preparent = { ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, other = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' } } }, postparent = { ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, other = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' } } }, prenewparent = { ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, other 
= { read = 0 '\000', write = 0 '\000', exec = 0 '\000' } } }, postnewparent = { ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, other = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' } } } }, create = { fd = 0x0, params = 0x0, flags = 0, mode = 0 }, mknod = { dev = 0, mode = 0, params = 0x0 }, mkdir = { mode = 0, params = 0x0 }, rmdir = { flags = 0 }, symlink = { params = 0x0, linkpath = 0x0 }, fallocate = { mode = 0, offset = 0, len = 0 }, discard = { offset = 0, len = 0 }, zerofill = { offset = 0, len = 0, prebuf = { ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, other = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' } } }, postbuf = { ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' , ia_type = IA_INVAL, ia_prot = { suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, group = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' }, other = { read = 0 '\000', write = 0 '\000', exec = 0 '\000' } } } }, inodelk = { volume = 0x0, cmd = 0, in_cmd = 0, in_flock = { l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = { len = 0, data = '\000' } }, flock = { l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = { len = 0, data = '\000' } }, xdata = 0x0 }, entrylk = { volume = 0x0, __xpg_basename = 0x0, in_cmd = ENTRYLK_LOCK, cmd = ENTRYLK_LOCK, type = ENTRYLK_RDLCK, xdata = 0x0 }, seek = { offset = 0, what = GF_SEEK_DATA }, fsync = { datasync = 0 }, lease = { user_lease = { cmd = 0, lease_type = NONE, lease_id = '\000' , lease_flags = 0 }, ret_lease = { cmd = 0, lease_type = NONE, lease_id = '\000' , lease_flags = 0 }, locked_nodes = 0x0 } }, transaction = { start = 0, len = 0, eager_lock_on = false, do_eager_unlock = false, __xpg_basename = 0x0, new_basename = 0x0, parent_loc = { path = 0x0, name = 0x0, inode = 0x0, parent = 0x0, gfid = '\000' , pargfid = '\000' }, new_parent_loc = { path = 0x0, name = 0x0, inode = 0x0, parent = 0x0, gfid = '\000' , pargfid = '\000' }, type = AFR_DATA_TRANSACTION, resume_stub = 0x0, owner_list = { next = 0x0, prev = 0x0 }, wait_list = { next = 0x0, prev = 0x0 }, pre_op = 0x0, changelog_xdata = 0x0, pre_op_sources = 0x0, failed_subvols = 0x0, dirtied = false, inherited = false, no_uninherit = false, uninherit_done = 
false, uninherit_value = false, in_flight_sb = false, in_flight_sb_errno = 0, changelog_resume = 0x0, main_frame = 0x0, frame = 0x0, wind = 0x0, unwind = 0x0 }, barrier = { initialized = true, guard = { __data = { __lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = { __prev = 0x0, __next = 0x0 } }, __size = '\000' , __align = 0 }, cond = { __data = { { __wseq = 0, __wseq32 = { __low = 0, __high = 0 } }, { __g1_start = 0, __g1_start32 = { __low = 0, __high = 0 } }, __g_refs = {0, 0}, __g_size = {0, 0}, __g1_orig_size = 0, __wrefs = 0, __g_signals = {0, 0} }, __size = '\000' , __align = 0 }, waitq = { next = 0x7f8294106340, prev = 0x7f8294106340 }, count = 0, waitfor = 0 }, xdata_req = 0x0, xdata_rsp = 0x0, xattr_rsp = 0x0, umask = 0, xflag = 0, do_discovery = false, replies = 0x7f82940aabe0, healer = { next = 0x7f8294106388, prev = 0x7f8294106388 }, heal_frame = 0x0, need_full_crawl = false, fop_lock_state = AFR_FOP_LOCK_PARALLEL, is_read_txn = false, inode_ctx = 0x0, ta_child_up = 0 '\000', ta_waitq = { next = 0x0, prev = 0x0 }, ta_onwireq = { next = 0x0, prev = 0x0 }, fop_state = TA_WAIT_FOR_NOTIFY_LOCK_REL, ta_failed_subvol = 0, is_new_entry = false }
(gdb) p *frame->root
$3 = {{all_frames = {next = 0x7f82ac004098, prev = 0x556281bf2a30}, {next_call = 0x7f82ac004098, prev_call = 0x556281bf2a30}}, pool = 0x556281bf2a30, stack_lock = {spinlock = 0, mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 256, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' , "\001", '\000' , __align = 0}}, client = 0x0, unique = 11064699, state = 0x7f8294094340, uid = 30, gid = 8, pid = 22120, identifier = '\000' , ngrps = 1, groups_small = {8, 0 }, groups_large = 0x0, groups = 0x7f8294153dbc, lk_owner = {len = 8, data = '\000' }, ctx = 0x556281bb6920, myframes = {next = 0x7f82941b17b8, prev = 0x7f82940d55b8}, op = 27, type = 1 '\001', tv = {tv_sec = 2226858, tv_nsec = 724965078}, err_xl = 0x7f82ac010200, error = 2, flags = 0, ctime = { tv_sec = 0, tv_nsec = 0}, ns_info = {hash = 0, found = false}}
Also note that I tried to check local->replies[0] and it seems that it was NULL (and replies 1, 2 and 3 were all junk).
-- You are receiving this mail because: You are on the CC list for the bug. You are the assignee for the bug.
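
A note for whoever picks up the core analysis: Thread 1 (hitting an assertion inside pthread_mutex_lock) and Thread 6 (blocked in __lll_lock_wait) are both in afr_frame_return() at afr-common.c:2105 on the same frame (0x7f82942079a8) and the same local (0x7f8294103cb8), while the printed afr_local_t reports op = GF_FOP_LOOKUP with call_count = 2 and op_errno = 117. One plausible reading is that the per-frame lock is being taken after the frame has already been returned and torn down by a concurrent reply. The C program below is a minimal, self-contained sketch of that fan-out/fan-in pattern, written purely for illustration: demo_frame, demo_frame_return and demo_lookup_cbk are hypothetical stand-ins, not the actual AFR code, and the race described in its comments is an assumption drawn from this backtrace rather than a confirmed root cause.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins -- this is NOT glusterfs code, only a model of the
 * fan-out/fan-in pattern visible in the backtrace: a lookup is wound to N
 * children, every reply "returns" the frame by decrementing a shared
 * call_count under the frame lock, and the thread that sees the count reach
 * zero unwinds and frees the frame. */
struct demo_frame {
    pthread_mutex_t lock;   /* plays the role of frame->lock                 */
    int call_count;         /* replies still outstanding (local->call_count) */
};

/* Roughly what a frame-return helper has to do: decrement under the lock
 * and report how many replies are still pending. */
static int demo_frame_return(struct demo_frame *frame)
{
    int remaining;

    pthread_mutex_lock(&frame->lock);   /* the equivalent acquisition is
                                         * where Thread 1 asserts in the core */
    remaining = --frame->call_count;
    pthread_mutex_unlock(&frame->lock);

    return remaining;
}

/* Reply callback: whichever thread observes the count hitting zero tears the
 * frame down. If the count were ever decremented once too often, or the
 * teardown raced a reply thread still blocked on the lock, that thread
 * would operate on a destroyed mutex -- the failure mode this core appears
 * to show. */
static void demo_lookup_cbk(struct demo_frame *frame)
{
    if (demo_frame_return(frame) == 0) {
        pthread_mutex_destroy(&frame->lock);
        free(frame);
    }
}

int main(void)
{
    struct demo_frame *frame = calloc(1, sizeof(*frame));

    if (!frame)
        return 1;

    pthread_mutex_init(&frame->lock, NULL);
    frame->call_count = 2;      /* matches call_count = 2 in the $3 dump */

    demo_lookup_cbk(frame);     /* reply from the first child            */
    demo_lookup_cbk(frame);     /* reply from the second child frees it  */

    puts("both replies returned; frame torn down exactly once");
    return 0;
}

If this reading holds, the interesting question for the real code is how call_count could be decremented more times than there are outstanding replies, or how the frame could be freed while a reply thread was still waiting on frame->lock; the sketch only shows why either condition would end in exactly this kind of pthread_mutex_lock failure.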